Column      Type            Range
doi         stringlengths   0–570
pub_date    stringclasses   355 values
sections    listlengths     1–245
abstract    stringlengths   0–5.25k
title       stringlengths   0–228
figures     listlengths     0–130
authors     stringlengths   0–11.9k
references  listlengths     0–835
formulas    listlengths     0–679
2023-05-21
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7", "b0" ], "table_ref": [], "text": "Nowadays, fuel-powered vehicles cause widespread social concerns due to climate change and limited fossil fuel supply [1]- [3]. The electrification of the automobile is a promising solution to overcome these problems. However, the development of electric vehicles encounters technical difficulties [4]. As a compromise, hybrid electric vehicles (HEVs) have emerged as a promising technology for reducing fuel consumption and emissions within the current infrastructure [5], [6], which offer a balance of environmental benefits, fuel economy, and driving performance. For example, in the last ten years, the proportion of the newly registered HEVs in all kinds of vehicles has risen from less than 0.1 % to 31.2 % in the German market [7].\nHEVs generally have multiple sources to power the drivetrain. Therefore some sort of energy management is required to manage the cooperation of the individual power components. Some common strategies include: (1) Charge Depleting (CD) strategy, which uses the electric motor to power the vehicle as much as possible, and only switches to the internal combustion engine (ICE) when the battery's charge is depleted, (2) Charge Sustaining (CS) strategy, which maintains a constant state of charge (SOC) in the battery by adjusting the power split between the electric motor and the ICE, (3) Power-Split strategy, which uses both the electric motor and ICE to power the vehicle simultaneously. The power split between the two is adjusted to achieve optimal fuel efficiency.\nIn this work, we focus on the Power-Split strategy to achieve optimal efficiency for different HEVs, also referred to as the energy management strategy (EMS) in HEVs. Many automobile manufacturers have developed their own specific software for optimizing the EMS for their vehicles, and much of the research lacks a common platform as the baseline. This work aims to implement a framework to optimize the EMS of various HEVs with reinforcement learning (RL) in an opensource vehicle powertrain simulation tool, namely the Future Automotive Systems Technology Simulator (FASTSim) [8].\nCompared to the state-of-the-art (SotA), the contributions of this work are as follows: (1) We provide an open-source solution that leverages RL algorithms to learn optimal EMS in different driving situations. We re-programmed FASTSim, originally designed with a rule-based strategy, to be compatible with RL-based strategies. This is especially useful for researchers in the RL community. (2) Most SotA methods depend on Matlab or proprietary software for building specific vehicle models. In contrast, we offer generalized interfaces for various vehicle models and different driving cycles. (3) Many SotA methods hard-code boundary constraints, such as speed requirements, whereas we encode constraints as parts of the reward function and let the RLagent learn to obey them during exploration." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Energy Management Strategy", "publication_ref": [ "b8", "b9", "b2", "b5", "b10", "b11", "b12", "b14", "b14", "b20", "b5", "b15", "b16", "b5", "b17", "b18", "b5", "b19" ], "table_ref": [], "text": "The EMS in HEVs is often realized using rule-based approaches, which include deterministic rule-based and fuzzy rule-based methods [9], [10]. 
Experienced engineers must carefully design the rules to achieve the desired behavior. When designed correctly, the rule-based approach provides energy management with real-time capabilities and high accountability. Since the rules are hard-coded, the model, however, has limited flexibility [3], [6] and cannot fully exploit the potential fuel savings [11].\nAnother popular approach is the optimization-based method, such as model predictive control [12] or dynamic programming [13]- [15]. In such methods, a mathematical model of the HEV system is used to predict the vehicle's energy needs and determine the optimal power split between the electric motor and the ICE. The optimization algorithm takes into account factors such as the vehicle's current speed, the SOC of the battery, the engine load, and the driver's requested power. The optimization goal is to minimize the Fig. 1. State of the art for the optimization of EMS in HEVs [15], [21].\nfuel consumption of the HEV while meeting the driver's power demand. To some extent, these methods improve the real-time performance and fuel economy of EMS [6], [16], [17]. However, such methods require complex mathematical models and high computational resources. Recently, learning-based methods have been suggested to learn an appropriate EMS automatically. Especially RLbased methods showed promising results, which are more flexible than the rule-based approach and are also real-time capable [6]. Several works have already been conducted to examine the benefits and difficulties of using RL for energy management in HEVs. RL algorithms, including deep deterministic policy gradient (DDPG) [18], [19] and a variety of Q-learning algorithms have been tested to solve different energy management tasks [6], [20]. Figure 1 shows an overview of the three different kinds of methods for optimizing the operational strategy of energy management in HEVs." }, { "figure_ref": [], "heading": "B. Reinforcement Learning", "publication_ref": [ "b21", "b5", "b22", "b25", "b26", "b27", "b28", "b29", "b30" ], "table_ref": [], "text": "RL is a specific machine learning approach where an agent learns to make decisions by interacting with an environment and receiving feedback in the form of rewards [22]. The goal of the agent is to learn a policy, which is a mapping from states of the environment to actions. The goal is to maximizes the cumulative reward over time. RL has been applied to various control problems, such as robotics, gameplaying, and autonomous vehicles. It has also been used to optimize the EMS in HEVs to achieve better fuel efficiency, such as [6], [23]- [26].\nMany different algorithms can be used for RL, such as Qlearning [27], SARSA [28], actor-critic RL [29], and policy gradient [30]. The choice of the algorithm will depend on the specific problem and the type of environment. In this paper, we use DDPG [31], which is based on the actorcritic structure and utilizes deep neural networks (NNs) to generalize for continuous state and action spaces. The RL algorithm aims to learn a policy, which maps the vehicle's states to actions that maximize the cumulative reward over time." }, { "figure_ref": [], "heading": "C. Prioritized Experience Replay", "publication_ref": [ "b31" ], "table_ref": [], "text": "In most model-free and off-policy RL settings, the trajectories experienced by the RL-agent are usually saved in the replay buffer, and trajectories will be uniformly sampled and learned. 
However, some trajectories may have more information than others, and thus they should be more frequently chosen to be learned. The idea is similar to importance sampling. In RL, the importance of different trajectories can be quantified by various metrics, and [32] proposed prioritized experience replay (PER), where the trajectories are sampled based on the temporal difference error (TD-Error)." }, { "figure_ref": [], "heading": "D. Monte Carlo Dropout", "publication_ref": [ "b32", "b34", "b35", "b36" ], "table_ref": [], "text": "Dropout is a technique that has been proposed to improve the generalization and to suppress overfitting of deep NNs [33]- [35] by introducing a form of model uncertainty into the predictions made by deep NNs. By using dropout prior to of each layer, Monte Carlo dropout (MC dropout) [36] proposes to train the deep NN to approximate the underlying Gaussian Process [37]. MC dropout has been proven to improve the generalization and prediction performance further." }, { "figure_ref": [ "fig_2" ], "heading": "E. FASTSim", "publication_ref": [ "b7", "b37", "b38", "b39", "b7" ], "table_ref": [], "text": "Different from specific vehicle models or proprietary software, FASTSim is designed to be open-source, computationally lightweight, accurate, and scalable, offered by the National Renewable Energy Laboratory (NREL), USA [8], [38]. It provides Python implementations and a relatively simple approach to compare different vehicle powertrains on vehicle efficiency, performance, and battery life. Users can either select vehicles already predefined in FASTSim or model various vehicles by different parameters, such as vehicle weights, battery capacity, engine powers, and so on. Additionally, various driving cycles can be imported to test the vehicle model, such as the Urban Dynamometer Driving Schedule (UDDS) [39] or the Worldwide Harmonised Light Vehicle Test Procedure (WLTP) [40]. Therefore, variations of the vehicle or powertrain can be assessed under different driving conditions.\nFASTSim simulates the vehicle and its components through speed-vs-time drive cycles. At each timestep, FAST-Sim accounts for drag, acceleration, ascent, rolling resistance, regenerative braking, each powertrain component's efficiency, and power limits [8]. The vehicle models are simplified to some extent. Therefore, a scalable and generalized simulation of different kinds of vehicles becomes possible. Figure 2 gives an overview of the components in FASTSim." }, { "figure_ref": [], "heading": "III. DEEP RL FRAMEWORK WITH FASTSIM", "publication_ref": [ "b38", "b39", "b38" ], "table_ref": [], "text": "In this work, we propose a framework for training RLagents to learn driving strategies for various HEVs using FASTSim. The proposed framework provides a systematic approach for the training, simulation, and validation of the RL-based driving strategies.\nThe framework consists of four main blocks: input parameters, learning phase, simulation phase, and validation phase. The input parameters allow for the customization of the framework to different driving scenarios and HEVs. In the simulation phase and learning phase blocks, the agent interacts with the simulated environment and the agent's policy is updated based on trials and rewards, respectively. Finally, the validation phase evaluates the performance of the learned strategies on various driving cycles and validates the transferability of the RL-agent. Figure 3 shows the overview of the framework.\nA. 
Input Parameters a) Vehicle Model: The vehicle is represented by a set of parameters in FASTSim. Top-level parameters like frontal area and drag coefficient describe the physical properties of the vehicle as a complete unit. These parameters play a prominent role in road load equations. The road load equations are implemented in FASTSim to estimate the power required for the vehicle to meet the drive cycle. Lowlevel parameters represent the powertrain components of the vehicle, the ICE, the electric motor, fuel storage, and the battery. The parameters of the transmission components are pre-defined values that describe the properties of the component and constrain the behavior of the transmission. Table I lists the main parameters of the electric motor.\nb) Driving Cycle: A driving cycle, also known as a drive cycle or test cycle, is a standardized driving pattern used to evaluate the performance of a vehicle. The driving cycle consists of a series of speed and acceleration commands that simulate a specific driving scenario, such as city or highway driving. In FASTSim, a driving cycle is used to simulate the vehicle's dynamic behavior, fuel economy, and other performance characteristics under different driving conditions. This allows FASTSim to evaluate the performance of various HEVs and the effectiveness of different EMS under the specific driving conditions represented by the driving cycle. Many different driving cycles have been developed for use in vehicle testing, including the UDDS, WLTP, Highway Fuel Economy Test (HWFET) [39], New European Driving Cycle (NEDC) [40], and the US06 Supplemental Federal Test Procedure (US06) [39].\nc) Driving Cycle Generator: An RL-based strategy trained on a specific driving cycle may exhibit overfitting, leading to sub-optimal performance when applied to other driving situations or behaviors. To mitigate this issue, a random driving cycle generator can be implemented to increase the diversity of the training dataset. This can be achieved by incorporating noise, concatenating, or cropping standard driving cycles. The incorporation of a diverse set of driving cycles in the training dataset can lead to a more generalizable RL-based strategy, thus improving its performance in various driving conditions." }, { "figure_ref": [], "heading": "B. Learning Phase and Simulation Phase", "publication_ref": [], "table_ref": [], "text": "In the learning phase, the agent interacts with the simulated environment and the agent's policy is updated based on the observed rewards as feedback. During this phase, the agent learns from its own experiences and improves its decision-making over time. In the simulation phase, FAST-Sim allows the agent to explore different driving scenarios and conditions and learn a driving strategy robust to diverse operating conditions. a) Reward Function: In our approach, the reward function is designed to encourage the agent to learn a driving strategy that maximizes the energy efficiency of the vehicle while meeting the driving constraints, which assigns negative rewards for actions that result in low energy efficiency or violate the driving constraints. 
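To make this penalty-style shaping concrete, the sketch below shows one way such a per-timestep reward could be computed. The function and variable names are our own illustration rather than FASTSim or paper code; the default coefficients follow the values (1.5, 10, 0.1) reported in Sec. IV-A and SOC_ref = 0.65 as assumed in Sec. IV-B, while the β threshold is an assumed placeholder. The α and β symbols anticipate the formal definitions given next.

```python
def shaped_reward(p_achieved, s_cycle, s_achieved, soc, soc_ref=0.65,
                  alpha1=1.5, alpha2=10.0, alpha3=0.1, beta=0.05):
    """Illustrative penalty-style reward: negative terms for delivered power,
    for missing the drive-cycle speed, and for letting the SOC drop too far
    below its reference value (beta is an assumed threshold)."""
    reward = -alpha1 * p_achieved              # penalize the power actually delivered
    if abs(s_cycle - s_achieved) > 0:          # required cycle speed not met exactly
        reward -= alpha2
    if (soc_ref - soc) > beta:                 # battery discharged too far below SOC_ref
        reward -= alpha3
    return reward


# Example: a timestep where the cycle speed is missed slightly and the SOC is healthy.
print(shaped_reward(p_achieved=12.0, s_cycle=13.9, s_achieved=13.6, soc=0.66))
```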
The state, action, and reward are defined as
state = {SOC, s_cycle, a_cycle, s_achieved} ,	(1)
action = p_ICE / p_cycle ,	(2)
reward = -α_1 · p_achieved - α_2 · [|s_cycle - s_achieved| > 0] - α_3 · [(SOC_ref - SOC) > β] ,	(3)
where s_cycle, a_cycle and p_cycle represent the required speed, acceleration, and power of the given driving cycle, respectively; (α_1, α_2, α_3) are the non-negative coefficients for balancing fuel efficiency and the boundary conditions, while β > 0 defines the threshold of the allowed difference between the current SOC and SOC_ref. By using a well-designed reward function, the agent learns to take optimal actions that result in high energy efficiency, while keeping the SOC in a healthy working condition for the battery. We utilize Bayesian optimization to search for the best parameters.
b) Boundary Conditions: In our proposed framework for training RL-based strategies for HEVs utilizing FASTSim, certain boundary conditions have been implemented to ensure the validity and applicability of the obtained results. One important boundary condition is the correspondence between the speed of the given driving cycle and the vehicle model. This boundary condition is essential, as any discrepancy between the speed of the driving cycle s_cycle and the actual speed of the vehicle model s_achieved can lead to inaccuracies in the evaluation of the vehicle's energy consumption. To mitigate this and to encourage the RL-agent to focus on completing the driving cycle correctly, we assign a negative reward -α_2 to the agent as long as |s_cycle - s_achieved| > 0 evaluates as true." }, { "figure_ref": [ "fig_3" ], "heading": "c) Benchmark between Rule-based and RL-based", "publication_ref": [], "table_ref": [], "text": "Strategies: To evaluate and compare the RL-based strategies against the default rule-based strategies provided by FASTSim, we implement the RL algorithm based on the interfaces of FASTSim. Figure 4 shows the comparison between the decision processes of both kinds of EMS in Unified Modeling Language (UML). The default rule-based strategies in FASTSim first calculate the required output power for satisfying the driving cycle and then divide the power requirement between the ICE and the electric motor according to hard-coded rules. In contrast, the RL-based strategy lets the agent decide on the power split itself. After that, the current SOC of the battery and the achieved speed s_achieved are fed back to the RL algorithm, guiding it toward an optimized strategy.
d) Priorities for the Replay Buffer: In PER, transitions are assigned a priority value that reflects their importance or information gain for learning. The priority value can be based on various factors, such as the TD-Error. Transitions with higher priority values are more likely to be replayed during the learning process. Here, we use PER in the replay buffer to further improve the sampling efficiency and stability of the RL algorithm by focusing the learning process on the most informative transitions. This can lead to faster convergence and better performance of the learned policy." }, { "figure_ref": [], "heading": "C. Validation Phase", "publication_ref": [], "table_ref": [], "text": "In this framework, the validation phase plays a crucial role in ensuring the effectiveness and robustness of the learned RL-based strategies. The core component of the validation phase is the transfer test, which involves testing the learned agent on different driving cycles.
For example, we train the RL-agent on WLTP-C3 and evaluate it on the other driving cycles, such as NEDC, UDDS, and HWFET. The transfer test allows evaluating the agent's performance under different driving conditions and assessing its ability to adapt to new scenarios. This is especially important for real-world use, as driving conditions can vary greatly depending on the route, traffic, and weather conditions. To this end, the transfer test provides a robust and reliable evaluation of the learned RL-agent. It further ensures that the agent is generalizable and effective under different scenarios." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we present the experimental results of our proposed framework, which were conducted under different driving cycles and with various HEVs. In the validation phase, we will show the transfer tests of the learned RL-agents on five different driving cycles, as shown in Figure 5. All the strategies are trained on WLTP-C3 and tested on the other cycles. We show the results on the following two HEVs, which both apply a power-split strategy (cf. Sec. I): a) BMW i3 REx, 2016: a series plug-in hybrid vehicle with range extender, where the ICE only works with a generator to recharge the battery and is isolated from the axle. Its lithium-ion battery has a capacity of 94 Ah (33 kWh).
b) Toyota Prius Prime, 2017: a series-parallel plug-in hybrid vehicle that combines the concepts of series and parallel hybrids, in which the ICE not only recharges the battery with a generator but also drives the transaxle together with the electric motor in different modes. It has a smaller lithium-ion battery with a capacity of 8.8 kWh." }, { "figure_ref": [], "heading": "A. Results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In our tests, most of the RL-agents are able to converge within 10 episodes for various HEVs on the WLTP-C3 driving cycle, thanks to the lightweight simulator and the PER buffer in our framework. In order to provide a comprehensive understanding of the learning process, we present the results of the RL-agent trained for 1 episode and for 10 episodes as a comparison. We also present the rule-based strategy's results as a benchmark. In all the experiments, we use NNs with two hidden layers of 100 neurons each for the Actor. For the Critic, we utilize three hidden layers, incorporating 100 neurons each for the first two layers and 50 neurons for the third layer, with MC Dropout in front of each layer. In the reward function (3), we use (1.5, 10, 0.1) for the three coefficients. The reason for the small value α_3 = 0.1 is explained in Sec. IV-B. a) Learning Process: As shown in Fig. 6, the RL-agents can reduce their energy consumption for both HEVs and on all five driving cycles after training for 10 episodes. For example, the BMW i3 REx PHEV consumes 7.65 kWh on the WLTP-C3 driving cycle with the RL-based strategy trained for merely one epoch. When trained only for one episode, both HEVs fail to finish the WLTP-C3 driving cycle and show speed differences between their achieved speeds s_achieved and the required speeds s_cycle (cf. the first column of Fig. 6). Thanks to the added boundary condition in the reward function (3), the RL-agents learn to strictly follow the required speeds during the driving cycle when trained for ten episodes.
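The actor and critic sizes described in this subsection (two hidden layers of 100 units for the actor; 100, 100, and 50 units for the critic with MC dropout in front of each hidden layer) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' code: the class names, dropout rate, activation choices, and the sigmoid output for the power-split action are our own assumptions.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps the state (SOC, cycle speed, cycle acceleration, achieved speed) to a power-split action."""
    def __init__(self, state_dim=4, action_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 100), nn.ReLU(),
            nn.Linear(100, 100), nn.ReLU(),
            nn.Linear(100, action_dim), nn.Sigmoid(),  # assumed: action read as p_ICE / p_cycle in [0, 1]
        )

    def forward(self, state):
        return self.net(state)


class Critic(nn.Module):
    """Q-value estimate; dropout is kept active at inference time to realize MC dropout."""
    def __init__(self, state_dim=4, action_dim=1, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(p_drop), nn.Linear(state_dim + action_dim, 100), nn.ReLU(),
            nn.Dropout(p_drop), nn.Linear(100, 100), nn.ReLU(),
            nn.Dropout(p_drop), nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))
```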
Similar results are observed for many other HEV models, e.g., Chevrolet Volt, Ford C-MAX, and Hyundai Sonata, and prove the effectiveness of the learning process, as shown in Table II.\nb) Transferability: To validate the transferability and generalization of the learned RL-based strategies on different driving conditions, we evaluate the performance of the RLagents on the other four driving cycles. As shown in Figure 6, both agents can ensure no speed difference and correctly follow the cycles on UDDS and HWFET after 10 episodes, similar as on WLTP-C3; however, they fail to satisfy the speed requirements on the NEDC and US06 driving cycles. A possible reason is that US06 contains more challenging driving situations, such as higher average speed and more aggressive acceleration, while NEDC is designed to have more urban driving phases (66 %) compared to WLTP-C3 (52 %). Compared to the rule-based strategies, both of the RL-agents achieve to reach relatively lower or similar total energy consumption after 10 episodes on WLTP-C3, UDDS and HWFET driving cycles, which proves the general applicability of our framework in different conditions." }, { "figure_ref": [], "heading": "B. Limitations", "publication_ref": [], "table_ref": [], "text": "In an effort to integrate RL algorithms with a suitable simulation environment for HEVs, we chose FASTSim as the basis of the framework and implemented several wellknown RL algorithms. However, there are two limitations that we will focus on in future work.\na) Efficiency Map: instead of a complete efficiency map including different efficiency factors based on torque and speed of the engine, FASTSim adopts a simplified efficiency curve, where the efficiency rates depend merely on the output power of the engine. With such simplification, FASTSim provides a fast and lightweight simulation tool. However, it may lead to inaccurate simulation results of the learned RL-based strategies, which are dependent on the efficiency factors of different vehicle models." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENT", "publication_ref": [], "table_ref": [], "text": "This work was partially supported by the Baden-Württemberg Ministry of Economic Affairs, Labor, and Tourism within the KI-Fortschrittszentrum \"Lernende Systeme and Kognitive Robotik\" under Grant No. 036-140100." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "b) Trade-off with Battery Lifetime: in the reward function (3), we include the factor α 3 to balance the overload of the battery with energy efficiency. Theoretically, SOC ref represents the optimal working conditions of the battery. However, such parameters are lacking in FASTSim. In our experiments, we assume 65% as the SOC ref for all HEVs and use a rather small factor α 3 to assign negative rewards if the current SOC relative to SOC ref is smaller than threshold β. A more sophisticated formulation and the fine-tuning of α 3 needs to be considered when more parameters about the batteries of different HEVs are available." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We provide a systematic framework for the training, simulation, and validation of the RL-based driving strategies for various HEVs under different driving conditions. Several boundary conditions and transfer tests are incorporated to ensure the validity and applicability of the learned RL-agents in real-world scenarios. 
The experimental results show the potential of using RL to improve the performance and energy efficiency of HEVs. In future work, we will focus on integrating the proposed framework into a real-world vehicle, evaluating the learned strategies there, and bridging the gap between simulation and reality." } ]
In recent years, the development of Artificial Intelligence (AI) has shown tremendous potential in diverse areas. Among them, reinforcement learning (RL) has proven to be an effective solution for learning intelligent control strategies. As an inevitable trend for mitigating climate change, hybrid electric vehicles (HEVs) rely on efficient energy management strategies (EMS) to minimize energy consumption. Many researchers have employed RL to learn optimal EMS for specific vehicle models. However, most of these models tend to be complex and proprietary, which limits their broad applicability. This paper presents a novel framework in which we implement and integrate RL-based EMS with the open-source vehicle simulation tool FASTSim. The learned RL-based EMS are evaluated on various vehicle models using different test drive cycles and prove to be effective in improving energy efficiency.
Towards Optimal Energy Management Strategy for Hybrid Electric Vehicle with Reinforcement Learning
[ { "figure_caption": "(a) Components of FASTSim.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Example of the simulation result for the driving speed.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. (a) Important components of FASTSim for the simulation. Vehicle parameters and a driving cycle are required as the input, and FASTSim enables further customization of the vehicle models or self-designed driving cycles. The powertrain model in FASTSim will calculate the achieved speeds, accelerations, motors' output powers, and energy consumption of the vehicle for completing the driving cycle. The power-split between ICE and the electric motor is determined by the rule-based strategy, which is designed by FASTSim for an accurate simulation of real-world situations. (b) shows the example of one simulation result on the WLTP-C3 driving cycle of the Toyota Prius Prime.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. The comparison between the default rule-based strategy and the implementation of the RL-based strategy in the form of a UML diagram. Both strategies share low-level interfaces in FASTSim.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "-LEVEL PARAMETERS FOR ELECTRIC MOTORS IN FASTSIM.", "figure_data": "Maximum Power / kWLimit the acceleration performance andTime to Full Power / smaximum speed of the electric motorBass Mass / kgEstimate and scale the mass of the electricSpecific Power / kg/Kwmotor based on powerEfficiency CurveEfficiency at different power output percentages", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "The proposed framework for training RL-based driving strategies for HEVs using FASTSim. The combination of FASTSim in this framework allows for evaluating strategies for different HEVs under a diverse set of driving conditions, resulting in more generalizable driving strategies.tively, s achieved and p achieved are the achieved speed and actual output power of the vehicle model in simulation, respectively. Further, p ICE is the output power of ICE, which refer to the power-splitting in HEVs. SOC ref means the reference SOC, which is a target that represents the desired level of charge for the vehicle's battery. It ensures that the battery is operated within a safe and efficient range.", "figure_data": "Vehicle ModelFASTSimactionMaximum PowerInterfaces for RLRL-based StrategyInput ParametersEfficiency MapRule-Based StrategyActor-Critic StructurestateBattery CapacityrewardLearning Phase...Simulation PhaseDriving CycleBoundary Conditions condition violated ?Prioritized Replay BufferReward FunctionValidation PhaseIn Development(Random) Driving Cycle GeneratorStandard Driving ...) Cylces (WLTP, UDDS,cyclescycles Transfer-Test on different standard drivingpolicyFig. 3.", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "OF THE ENERGY CONSUMPTION(KWH). which is even less than the 3.07 kWh of the rule-based strategy on the same driving cycle. 
Meanwhile, the learned strategy for the Toyota Prius Prime reduces its consumption from 7.23 kWh to 2.79 kWh after 10 episodes, while the rule-based strategy requires 2.94 kWh in total.", "figure_data": "rule-basedRL-basedCyclei3 Prius Volt C-MAX Sonata i3 Prius Volt C-MAX SonataWLTP C3 3.07 2.94 3.043.783.99 2.92 2.79 2.893.473.29UDDS1.17 1.27 1.211.401.46 1.06 1.16 1.091.301.34HWFET2.04 1.92 2.002.412.24 1.98 1.85 1.942.342.17trained for merely one epoch. As a comparison, the totalenergy consumption reduces to 2.92 kWh after training for10 episodes,", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" } ]
Xinyang Wu; Elisabeth Wedernikow; Christof Nitsche; Marco F Huber
[ { "authors": "S Amjad; S Neelakrishnan; R Rudramoorthy", "journal": "Renewable and Sustainable Energy Reviews", "ref_id": "b0", "title": "Review of design considerations and technological challenges for successful development and deployment of plug-in hybrid electric vehicles", "year": "2010" }, { "authors": "M A Hannan; F Azidin; A Mohamed", "journal": "Renewable and Sustainable Energy Reviews", "ref_id": "b1", "title": "Hybrid electric vehicles and their challenges: A review", "year": "2014" }, { "authors": "A M Ali; D Söffker", "journal": "Energies", "ref_id": "b2", "title": "Towards optimal power management of hybrid electric vehicles in real-time: A review on methods, challenges, and state-of-the-art solutions", "year": "2018" }, { "authors": "P Zhang; F Yan; C Du", "journal": "Renewable and Sustainable Energy Reviews", "ref_id": "b3", "title": "A comprehensive analysis of energy management strategies for hybrid electric vehicles based on bibliometrics", "year": "2015" }, { "authors": "C M Martinez; X Hu; D Cao; E Velenis; B Gao; M Wellers", "journal": "IEEE Transactions on Vehicular Technology", "ref_id": "b4", "title": "Energy management in plug-in hybrid electric vehicles: Recent progress and a connected vehicles perspective", "year": "2016" }, { "authors": "R Lian; J Peng; Y Wu; H Tan; H Zhang", "journal": "Energy", "ref_id": "b5", "title": "Rule-interposing deep reinforcement learning based energy management strategy for powersplit hybrid electric vehicle", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b6", "title": "Anteil ausgewählter kraftstoffarten an den neuzulassungen von personenkraftwagen in deutschland von 2012 bis 2022", "year": "2023-01-06" }, { "authors": "A Brooker; J Gonder; L Wang; E Wood; S Lopp; L Ramroth", "journal": "Tech. Rep", "ref_id": "b7", "title": "Fastsim: A model to estimate vehicle efficiency, cost and performance", "year": "2015" }, { "authors": "T Hofman; M Steinbuch; R Van Druten; A Serrarens", "journal": "International Journal of Electric and Hybrid Vehicles", "ref_id": "b8", "title": "Rulebased energy management strategies for hybrid vehicles", "year": "2007" }, { "authors": "S G Li; S M Sharkh; F C Walsh; C.-N Zhang", "journal": "IEEE Transactions on Vehicular Technology", "ref_id": "b9", "title": "Energy and battery management of a plug-in series hybrid electric vehicle using fuzzy logic", "year": "2011" }, { "authors": "J D Gonder", "journal": "Tech. 
Rep", "ref_id": "b10", "title": "Route-based control of hybrid electric vehicles", "year": "2008" }, { "authors": "E F Camacho; C B Alba", "journal": "Springer science & business media", "ref_id": "b11", "title": "Model predictive control", "year": "2013" }, { "authors": "R Bellman", "journal": "Science", "ref_id": "b12", "title": "Dynamic programming", "year": "1966" }, { "authors": "J Peng; H He; R Xiong", "journal": "Applied Energy", "ref_id": "b13", "title": "Rule based energy management strategy for a series-parallel plug-in hybrid electric bus optimized by dynamic programming", "year": "2017" }, { "authors": "A Panday; H O Bansal", "journal": "International Journal of Vehicular Technology", "ref_id": "b14", "title": "A review of optimal energy management strategies for hybrid electric vehicle", "year": "2014" }, { "authors": "S Onori; L Serrao; G Rizzoni", "journal": "", "ref_id": "b15", "title": "Adaptive equivalent consumption minimization strategy for hybrid electric vehicles", "year": "2010" }, { "authors": "G Jinquan; H Hongwen; P Jiankun; Z Nana", "journal": "Energy", "ref_id": "b16", "title": "A novel mpcbased adaptive energy management strategy in plug-in hybrid electric vehicles", "year": "2019" }, { "authors": "H Tan; H Zhang; J Peng; Z Jiang; Y Wu", "journal": "Energy Conversion and Management", "ref_id": "b17", "title": "Energy management of hybrid electric bus based on deep reinforcement learning in continuous state and action space", "year": "2019" }, { "authors": "Y Wu; H Tan; J Peng; H Zhang; H He", "journal": "Applied energy", "ref_id": "b18", "title": "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus", "year": "2019" }, { "authors": "T Liu; Y Zou; D Liu; F Sun", "journal": "IEEE Transactions on Industrial Electronics", "ref_id": "b19", "title": "Reinforcement learning of adaptive energy management with transition probability for a hybrid electric tracked vehicle", "year": "2015" }, { "authors": "T Rudolf; T Schürmann; S Schwab; S Hohmann", "journal": "Proceedings of the IEEE", "ref_id": "b20", "title": "Toward holistic energy management strategies for fuel cell hybrid electric vehicles in heavy-duty applications", "year": "2021" }, { "authors": "R S Sutton; A G Barto", "journal": "MIT press", "ref_id": "b21", "title": "Reinforcement learning: An introduction", "year": "2018" }, { "authors": "Y Zou; T Liu; D Liu; F Sun", "journal": "Applied energy", "ref_id": "b22", "title": "Reinforcement learning-based real-time energy management for a hybrid tracked vehicle", "year": "2016" }, { "authors": "J Wu; H He; J Peng; Y Li; Z Li", "journal": "Applied energy", "ref_id": "b23", "title": "Continuous reinforcement learning of energy management with deep q network for a power split hybrid electric bus", "year": "2018" }, { "authors": "X Qi; Y Luo; G Wu; K Boriboonsomsin; M Barth", "journal": "Transportation Research Part C: Emerging Technologies", "ref_id": "b24", "title": "Deep reinforcement learning enabled self-learning control for energy efficient driving", "year": "2019" }, { "authors": "F Zhang; X Hu; R Langari; D Cao", "journal": "Progress in Energy and Combustion Science", "ref_id": "b25", "title": "Energy management strategies of connected hevs and phevs: Recent progress and outlook", "year": "2019" }, { "authors": "C J Watkins; P Dayan", "journal": "Machine learning", "ref_id": "b26", "title": "Q-learning", "year": "1992" }, { "authors": "G A Rummery; M 
Niranjan", "journal": "", "ref_id": "b27", "title": "On-line Q-learning using connectionist systems", "year": "1994" }, { "authors": "V Konda; J Tsitsiklis", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Actor-critic algorithms", "year": "1999" }, { "authors": "R S Sutton; D Mcallester; S Singh; Y Mansour", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Policy gradient methods for reinforcement learning with function approximation", "year": "1999" }, { "authors": "T P Lillicrap; J J Hunt; A Pritzel; N Heess; T Erez; Y Tassa; D Silver; D Wierstra", "journal": "", "ref_id": "b30", "title": "Continuous control with deep reinforcement learning", "year": "2015" }, { "authors": "T Schaul; J Quan; I Antonoglou; D Silver", "journal": "", "ref_id": "b31", "title": "Prioritized experience replay", "year": "2015" }, { "authors": "G E Hinton; N Srivastava; A Krizhevsky; I Sutskever; R R Salakhutdinov", "journal": "", "ref_id": "b32", "title": "Improving neural networks by preventing coadaptation of feature detectors", "year": "2012" }, { "authors": "N Srivastava; G Hinton; A Krizhevsky; I Sutskever; R Salakhutdinov", "journal": "Journal of Machine Learning Research", "ref_id": "b33", "title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "year": "2014" }, { "authors": "P Baldi; P J Sadowski", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Understanding dropout", "year": "2013" }, { "authors": "Y Gal; Z Ghahramani", "journal": "PMLR", "ref_id": "b35", "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "year": "2016" }, { "authors": "A Damianou; N D Lawrence", "journal": "PMLR", "ref_id": "b36", "title": "Deep gaussian processes", "year": "2013" }, { "authors": "J D Gonder; A D Brooker; E W Wood; M Moniot", "journal": "Tech. Rep", "ref_id": "b37", "title": "Future automotive systems technology simulator (fastsim) validation report", "year": "2018" }, { "authors": "", "journal": "", "ref_id": "b38", "title": "Epa dynamometer drive schedules", "year": "2022-11-15" }, { "authors": "S Tsiakmakis; G Fontaras; C Cubito; J Pavlovic; K Anagnostopoulos; B Ciuffo", "journal": "Publications Office of the European Union", "ref_id": "b39", "title": "From nedc to wltp: effect on the typeapproval co2 emissions of light-duty vehicles", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 350.31, 634.87, 207.69, 24.28 ], "formula_id": "formula_0", "formula_text": "= p ICE /p cycle ,(1) action" }, { "formula_coordinates": [ 3, 347.21, 649.82, 210.79, 60.88 ], "formula_id": "formula_1", "formula_text": "= -α 1 • p achieved -α 2 • [|s cycle -s achieved | > 0] -α 3 • [(SOC ref -SOC) > β] ,(2) reward" } ]
10.1017/CBO9780511815829
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b19", "b63", "b16", "b61", "b65", "b20", "b62", "b5", "b21", "b22", "b3", "b23", "b71", "b72", "b24", "b25", "b39", "b37", "b38", "b66", "b40", "b67", "b68" ], "table_ref": [], "text": "Machine Translation (MT) has an interesting history in computation and research [20] with new paradigms being introduced over decades. MT achieved a watershed moment with the introduction of numerous algorithmic, architectural and training enhancements, such as Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) [64]. SMT is a statistical-based MT paradigm, operating at the granularity of words and phrases, consisting of a translation model, a language model, and a decoder [17,62,66]. Further, the relatively recent success of deep neural networks has given us end-to-end variations of translation models such as recurrent NMT [21,63], attention-based NMT, and selfattention-based Transformer [6].\nThere have been parallel and related developments in language models, such as Bidirectional Encoder Representations from Transformers (BERT) [22] and AL-* For correspondence BERT [23]. Another variant of this, mBART, has provided benchmark solutions in NMT as well [4]. However, training an effective and accurate MT system still requires a large amount of parallel corpus consisting of source and target language pairs. When we talk about low-resource languages, the first problem is to find a fair amount of parallel corpus, sometimes even monolingual corpus, which makes it challenging to create tools and applications for extremely poor resource languages. Creating a large parallel corpus for MT for each language pair that falls into the low resource category is an expensive, time-consuming, and labor-intensive task.\nSo, the solution to improve NMT in a low-resource context is to bootstrap the process by leveraging the morphological, structural, functional, and perhaps deep semantic features of such languages. Fortunately, for similar languages, it also is possible to exploit the similarities for better modeling of closely related languages. We need to focus on features that help the MT system better learn the close relationships between such languages. Conference on Machine Translation (WMT) has also conducted shared tasks for similar language translations from 2019 [24].\nWhen we talk about Indian languages, most languages except Hindi come under extremely low resource categories. Even Hindi is, from some points of view either a low or medium resource language [72,73]. India being a country with rich linguistic diversity, there is a need for MT systems across the Indian (or South Asian) languages. India is also inhabited by a vast population who speak languages belonging to three prominent families, Indo-Aryan (a subfamily of Indo-European), Dravidian, and Tibeto-Burman, but due to very long contact and interactions, they have gone through a process of 'convergence', forming India as a linguistic area [25]. Due to this long term contact, there are more similarities among these languages than we would otherwise expect. 
In addition, significant fractions of their vocabularies, to varying degrees, have words originating in or borrowed from Sanskrit, Persian, Arabic, Turkish and English, among other languages.\nFor some of the major languages, and even for some of the 'regional' or 'minority languages' (since they were widely used for a long duration in the past for literary purposes), there are records available and there is a varying degree of well-developed tradition of at least (spoken) literary usage. However, only some languages, most of which are officially recognized, have some written tradition, particularly for non-literary prose. The rest have very little written data, or even if it is there, it is usually not in a machine-readable format. Therefore, they can be treated as extremely low or zero-resource languages. There is a need for development of MT systems for such languages, and the similarity between these languages helps in developing such MT systems.\nIn this article, we propose an approach based on leveraging the features of similar languages by simply, programmatically 1 , converting them into an intermediate Latin-based multilingual notation. The notation that we use here is the commonly used WX-notation [26], which is often used in NLP tools and systems for Indian languages developed in India. This notation (like many other similar notations) can project all the Indic or Brahmi origin scripts [40], which have -in many cases -different Unicode blocks, into a common character space. Our intuition, is that this should help in capturing phonological, orthographic, and, to some extent, morphosyntactic similarities that will help a neural network-based model in better multilingual learning and translation across this languages [38,39,67]. We do this by using this WX-converted text to learn byte pair encoding-based embeddings. The effect of this is that the similar but different languages are projected onto the same orthographic-phonetic space [41], and hence also in the same common morphological and lexical space, allowing better modeling of multilingual relationships in the context of India as a linguistic area.\nIn addition, using WX has another benefit, even for a single script such as Devanagari. Brahmi-derived scripts have different symbols for dependent vowels (called maatraas) which modify a consonant and independent vowels (written as aksharas) which are pronounced as syllables. WX uses the same symbols for these two variants of the same vowel, while Unicode uses different codes and the scripts themselves use different graphical symbols.\nAfter conversion to WX, we apply some of the stateof-the-art NMT techniques to build our MT systems. These NMT systems, such as the Transformer, should learn better the relationships between languages.\nWe select six pairs of similar languages: Gujarati (GU)↔Hindi (HI), Marathi (MR)↔Hindi (HI), Nepali (NE)↔Hindi (HI), Maithili (MAI)↔Hindi (HI), Punjabi (PA)↔Hindi (HI), and Urdu (UR)↔Hindi (HI). Table 1 contains some of the language features that help in figuring out how selected languages are similar to Hindi. For example, Hindi, Gujarati, Marathi, Nepali, Maithili, Punjabi, and Urdu belong to Indo-Aryan Language families, and all the selected languages except Punjabi and Urdu share a common Devanagari script. The word order of all the selected languages is mostly S ub ject + Ob ject + Verb. 
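To illustrate the kind of projection that WX-notation performs, the toy sketch below maps a small subset of Devanagari symbols onto a common Latin space, with dependent and independent vowels mapped to the same letters as described above. It is only an illustrative fragment: the real WX table is far larger, inherent-vowel handling is omitted, and the actual conversion in this work is done with the wxconv tool.

```python
# Toy fragment of a Devanagari-to-WX-style mapping; not the full WX table and not the wxconv package.
TOY_WX = {
    "अ": "a", "आ": "A", "इ": "i", "ई": "I", "उ": "u", "ऊ": "U",   # independent vowels
    "ा": "A", "ि": "i", "ी": "I", "ु": "u", "ू": "U",              # dependent vowel signs share the same letters
    "क": "k", "ख": "K", "ग": "g", "न": "n", "म": "m",
    "र": "r", "व": "v", "स": "s",
}

def toy_wx(text: str) -> str:
    """Character-by-character projection; characters outside the toy table pass through unchanged."""
    return "".join(TOY_WX.get(ch, ch) for ch in text)

print(toy_wx("रिववार"))  # the word for 'Sunday' used as an example elsewhere in the paper
```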
Apart from this, all these languages share lexical similarities with Hindi in terms of common words derived from Sanskrit and other languages as mentioned earlier. Also, these languages have phonological similarities with Hindi. We also note that though Urdu and Hindi are linguistically almost the same language, yet due to the great divergence in their vocabularies in their written form, they have only a relatively small overlap in their corpus-based vocabularies, albeit this overlap consists mainly of core words which form a major component of the linguistic identity of a language.\nThis papers is the first part of a series of three papers exploring and then extending the idea of using common phonetic-orthographic space for better NMT in the Indian context [68,69]. The contributions of this paper are summarized as follows:\n1. Propose a WX-based machine translation approach that leverages orthographic and phonological similarities between pairs of Indian languages.\n2. Proposed approach achieves an improvement of +0.01 to +10 BLEU points compared to baseline state-of-the-art techniques for similar language pairs in most cases. We also get +1 BLEU points improvement on distant and zero-shot language pairs. The rest of the paper is organized as follows. Section 2 discusses closely related works. Section 3 describes some background and the NMT models that we extend or compare with. Section 4 describes the proposed approach in more detail. Section 5 discusses corpus statistics and experimental settings used to conduct the experiments. Results and ablation studies are reported in Sections 6 and 7, respectively. Finally, the paper is summarized in Section 8 and includes some directions for future work." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b45", "b46", "b47", "b48", "b49", "b50", "b60", "b51", "b52", "b69", "b70" ], "table_ref": [], "text": "This section briefly describes some of the related work (Table 2) on language similarity, morphological richness, statistical and neural models, and language pairs used as discussed below.\nAlthough there had been work in the past, the recent sharper focus on machine translation for similar languages is also due to the shared tasks on this topic organized as part of the WMT conferences from 2019 to 2021. In [46], authors demonstrated that pre-training could help even when the language used for fine-tuning is absent during pre-training. In [47], authors experimented with attention-based recurrent neural network architecture (seq2seq) on HI↔MR and explored the use of different linguistic features like part-of-speech and morphological features, along with back translation for HI→MR and MR→HI machine translation. In [48], authors ensembled two Transformer models to try to allow the NMT system to learn the nuances of translation for low-resource language pairs by taking advantage of the fact that the source and target languages are written using the same script. In [49], authors' work relied on NMT with attention mechanism for the similar language translation in the WMT19 shared task in the context of NE↔HI language pair.\nIn [50], the authors conducted a series of experiments to address the challenges of translation between similar languages. Out of which, the authors developed one phrase-based SMT system and one NMT system using byte-pair embedding for the HI↔MR pair. In [51], authors used a Transformer-based NMT with sentencepiece for subword embedding on HI↔MR language pair [61]. 
In [52], authors used the Transformer-NMT for multilingual model training and evaluated the result on the HI↔MR pair. In [53], authors focused on incorporating monolingual data into NMT models with a back-translation approach. In [70], authors introduced NLP resources for 11 major Indian languages from two major language families. These resources include: large-scale sentence-level monolingual corpora, pre-trained word embeddings, pre-trained language models, and multiple NLU evaluation datasets. In [71], authors presented IndicBART, a multilingual, sequence-to-sequence pre-trained model focusing on 11 Indic languages and English. IndicBART utilized the orthographic similarity between Indic scripts to improve transfer learning between similar Indic languages." }, { "figure_ref": [], "heading": "Shortcomings of existing works", "publication_ref": [ "b50", "b51", "b52" ], "table_ref": [], "text": "In most of the existing work on MT for related languages (e.g., [51], [52], [53]), authors have discussed improving the NMT models using extra monolingual corpora in addition to bi-lingual data. However, the proposed approach improves translation quality using only bilingual corpora with the help of WX-transliteration. The proposed approach reduces language complexity by transliterating the text to Roman script and helps the NMT models to better learn the context information by exploiting language similarities. In this way, where applicable, it can complement the approaches which use extra monolingual data." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "This section provides some background on the most successful recent machine translation techniques. From vanilla NMT to the more robust and advanced BART, a denoising autoencoder for pre-training sequence-to-sequence models, remarkable advances in NMT techniques have been made in a relatively short time." }, { "figure_ref": [ "fig_0" ], "heading": "NMT", "publication_ref": [ "b1", "b45", "b46", "b47", "b48", "b49", "b50", "b51", "b52", "b69", "b70", "b4", "b6" ], "table_ref": [], "text": "Many of the NMT techniques use an encoder-decoder architecture based on neural networks that performs translation between language pairs. Numerous enhancements, toolkits, and open frameworks are available to train NMT models, such as OpenNMT. OpenNMT is one of the open-source NMT frameworks [2], used to model natural language tasks such as text summarization, tagging, and text generation. This toolkit is used for model architectures, feature representations, and source modalities in NMT research. Multilingual and zero-shot NMT have also been applied to achieve state-of-the-art results on different language pairs by using a single standard NMT model for multiple languages [5].
Table 2. Comparison of some existing work. ✓ and ✗ represent presence and absence of a particular feature, respectively.
[46] HI↔MR, ES↔PT
[47] HI↔MR
[48] HI↔MR
[49] NE↔HI
[50] HI↔MR
[51] HI↔MR
[52] HI↔MR
[53] ES↔PT, CS↔PL, NE↔HI
[70] 11 Indian languages
[71] 11 Indic languages and English
Proposed approach {GU,MR,NE,MAI,PA,UR}↔HI
Note - HI: Hindi, MR: Marathi, ES: Spanish, PT: Portuguese, NE: Nepali, CS: Czech, PL: Polish, GU: Gujarati, MAI: Maithili, PA: Punjabi, UR: Urdu
Furthermore, the introduction of 'attention' in NMT has drastically improved the results [7], as for many other problems. As shown in Figure 1, NMT is an encoder-decoder sequence-based model consisting of recurrent neural network (RNN) units.
The encoder consists of RNN units (E_0, E_1, E_2) and takes as input the embedding of words from sentences and produces the context vector (C) as follows:" }, { "figure_ref": [], "heading": "Paper Similar Language Reducing Morphological Statistical Neural WX Language Pair Complexity", "publication_ref": [], "table_ref": [], "text": "C = Encoder(X_1, X_2, X_3, ..., X_n)	(1)
where {X_1, X_2, X_3, ..., X_n} is the input source sequence. The decoder consists of RNN units (D_0, D_1, D_2, D_3) and decodes these context vectors into target sentences terminated with an <END> (end of sentence) symbol as follows:
Decoder(C, Y_1, Y_2, Y_3, ..., Y_n) = Y'_1, Y'_2, Y'_3, ..., Y'_m	(2)
where {Y_1, Y_2, Y_3, ..., Y_n} and {Y'_1, Y'_2, Y'_3, ..., Y'_m} are the target and predicted sequences, respectively." }, { "figure_ref": [], "heading": "Transformer-based NMT", "publication_ref": [ "b5", "b7", "b8", "b5" ], "table_ref": [], "text": "The Transformer can be characterized by its breakthrough in elegantly combining five innovations in a single architecture. The first is the attention mechanism [6]. It maps a query and a set of key-value pairs to an output. A compatibility function of the query with the corresponding key computes the weights. The second extends the first by using multi-head self-attention. The third is the use of positional encoding in terms of relative positions, which allows it to learn temporal relationships and dependencies. The fourth is the use of masking, which has proved to be immensely effective in many other later models. The fifth is the use of residual connections. Together, the elegant combination of these innovations not only allows the Transformer to learn much better models, but also obviates the need for recurrent units in the architecture, which in turn allows a great degree of parallelism during training. In other words, the Transformer not only learns much better models, but does so in much less time during the training phase. Moreover, the problem of overfitting is also much less severe with Transformer-based models.
There are numerous state-of-the-art results reported for machine translation systems using a Transformer. Currey and Heafield [8] incorporated syntax into the Transformer using a mixed encoder model and multi-task machine translation. Multi-head attention is one key feature of self-attention. Fixing the attention heads on the encoder side of the Transformer increases BLEU scores by up to 3 points in low-resource scenarios [9]. The most common attention functions are additive attention and dot-product attention. The Transformer computes the scaled dot-product attention as follows [6]:
attn_i = softmax(Q_i K_i^T / √d_k) V_i	(3)
where Q_i, K_i, V_i and d_k are the query, key, value and the dimension of the key, respectively." }, { "figure_ref": [], "heading": "BART", "publication_ref": [ "b9", "b3" ], "table_ref": [], "text": "BART is a denoising autoencoder for pre-training sequence-to-sequence models [10]. It uses a standard Transformer-based NMT architecture to generalize BERT, GPT, and many other recent pre-training schemes. BART uses the standard Transformer architecture, except that it replaces the ReLU activation functions with GeLUs. Its mBART variant is a sequence-to-sequence denoising autoencoder pre-trained on monolingual corpora in multiple languages using the BART objective [4]."
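As a concrete illustration of the scaled dot-product attention in Eq. (3), the following minimal sketch implements the operation for a single head; the tensor shapes in the toy usage are our own example values.

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """attn = softmax(Q K^T / sqrt(d_k)) V, as in Eq. (3)."""
    d_k = K.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # (batch, n_queries, n_keys)
    weights = torch.softmax(scores, dim=-1)            # attention weights over the keys
    return weights @ V                                 # (batch, n_queries, d_v)

# Toy usage: a batch of 1 sequence with 5 positions and key/value dimension 64.
Q = torch.randn(1, 5, 64)
K = torch.randn(1, 5, 64)
V = torch.randn(1, 5, 64)
out = scaled_dot_product_attention(Q, K, V)  # shape: (1, 5, 64)
```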
}, { "figure_ref": [], "heading": "Back-translation", "publication_ref": [ "b10", "b12" ], "table_ref": [], "text": "Back-translation is a method to prepare synthetic parallel corpus from a monolingual corpus for NMT [11].\nIn low-resource settings, back-translation can be a very effective method. Iterative back-translation is a further improvement [13]. It iterates over two back-translation systems multiple times. " }, { "figure_ref": [], "heading": "Similar languages", "publication_ref": [ "b24", "b41", "b42" ], "table_ref": [], "text": "Similar languages refer to a group of languages that share common ancestry or extensive contact for an extended period, or both, with each other, leading them to exhibit structural and linguistic similarities even across language families. Examples of languages that share common ancestors are Indo-Aryan languages, Romance languages, and Slavic languages. Languages in contact for a long period lead to the convergence of linguistic features even if languages do not belong to common ancestors. Prolonged contact among languages could lead to the formation of linguistic areas or sprachbunds.\nExamples of such linguistic areas are the Indian subcontinent [25], the Balkan [42], and Standard Average European [43] linguistic areas.\nSimilarities between languages depend on various factors. Some of the factors are lexical similarity, structural correspondence, and morphological isomorphisms. Lexical similarity means that the languages share many words with similar forms (spelling/ pronunciation) and meaning, e.g. Sunday is written as रिववार (ravivAra) in Hindi and रिबवार (rabiVra) in Bhojpuri (both are proximate and related Indo-Aryan languages). These lexically similar words could be cognates, lateral borrowings, or loan words from other languages. Structural correspondence means, for example, that languages have the same basic word order, viz. SOV (Subject-Object-Verb) or SVO (Subject-Verb-Object). Morphological isomorphisms refers to the one-to-one correspondence between inflectional affixes. While content words are borrowed or inherited across similar languages, function words are generally not lexically similar across languages. However, function words in related languages (whether suffixes or free words) tend to have a one-one correspondence to varying degrees and for various linguistic functions." }, { "figure_ref": [], "heading": "Transformer-based NMT + Back-translation", "publication_ref": [ "b2" ], "table_ref": [], "text": "Guzmán et.al [3], in their work, first trained a Transformer on Nepali-English and Sinhala-English language pairs in both directions, and then they used the trained model to translate monolingual target language corpora to source languages. Finally, the source language sentence corpus was merged with generated source language sentences and was given as input to the Transformer for training and producing the translation." }, { "figure_ref": [ "fig_1" ], "heading": "Proposed Approach", "publication_ref": [], "table_ref": [], "text": "To tackle the morphological richness related problems in NMT training for Indian languages and to be able work with very little resources, we propose a simple but effective approach for translating low-resource languages that are similar in features and behaviour.\nThe proposed approach consists of three modules: Text Encoder, Model Trainer, and Text Decoder (Figure 2), as discussed in the following section. 
" }, { "figure_ref": [], "heading": "Text Encoder", "publication_ref": [ "b0" ], "table_ref": [], "text": "The proposed model first encodes the source and target corpora of parallel languages into an intermediate representation, the WX-notation 2 [1]. The primary reason behind encoding the source and target language corpora into WX-notation is to encode different languages with the same or different scripts into a common representation by projecting them onto a common phoneticorthographic character space so that BPE can be linguistically better informed. WX-notation is a transliteration scheme for representing Indian languages in ASCII format, and as described earlier, it has many advantaged as an intermediate representation, even compared to using Devaganari or any other single Brahmi-based script. It implicitly helps the Transformer encoder model more cognates, loan words, and morphologically similar words between the languages, as well as model other kinds of similarities for better translation." }, { "figure_ref": [], "heading": "Model Training", "publication_ref": [ "b32", "b33" ], "table_ref": [], "text": "The intermediate representation of the source language text is passed to the Transformer encoder. The Transformer encoder-decoder model learns the relationship between languages. We have used the SentencePiece 3 li-2 https://pypi.org/project/wxconv/,https://github.com/ irshadbhat/indic-wx-converter 3 https://github.com/google/sentencepiece brary for tokenization of the text. SentencePiece is used as a pre-processing task for the WX-encoded sourcetarget text in the concerned language pair. Sentence-Piece is a language-independent sub-word tokenizer and detokenizer designed for Neural-based text processing, including neural machine translation. It implements two subword segmentation algorithms, Byte-Pair Encoding (BPE) and unigram language model, with direct training from raw sentences [33,34]. Therefore, it already indirectly, to some extent, provides cognates, loan words, and morphologically similar words to the Transformer, and our prior conversion to WX allows it to do so better. It may be noted that the approach is generalizable to other multilingual transliteration notations, perhaps even to IPA 4,5 , which is almost truly phonetic notation for written text." }, { "figure_ref": [], "heading": "Text Decoder", "publication_ref": [], "table_ref": [], "text": "After convergence of the training algorithm, the WXencoded generated target sentences are decoded back to the plain text format to evaluate the model." }, { "figure_ref": [], "heading": "Corpus and Experimental Settings", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the corpus statistics and experimental settings we used for our experiment." }, { "figure_ref": [], "heading": "Corpus description", "publication_ref": [ "b13", "b14", "b28", "b44" ], "table_ref": [], "text": "We evaluate the proposed model in an extremely lowresource scenario on the mutually similar languages which we selected for our experiments. These are Hindi (HI), Gujarati (GU), Marathi (MR), Nepali (NE), Maithili (MAI), Punjabi (PA), Urdu (UR), Bhojpuri (BHO), Magahi (MAG), Malayalam (ML), Tamil (TA) and Telgu (TE). We perform experiments on the following language pairs involving Hindi: GU↔HI, NE↔HI, MR↔HI, MAI↔HI, PA↔HI, and UR↔HI. Parallel corpora of GU↔HI, ML↔HI, TA↔HI, and TE↔HI for training, testing, and validation are downloaded from CVIT-PIB [14]. MR↔HI parallel corpus is collected from WMT 2020 shared tasks 6 . 
The NE↔HI language pair corpus is made up of those collected from the WMT 2019 shared tasks 7, Opus 8, and TDIL 9 repositories. We use a monolingual corpus of Gujarati, Hindi, and Marathi for similarity computation in section 5.1 from the PM India dataset described in [15]. The rest of the monolingual corpora are collected from the Opus collection for similarity computation in section 5.1 [29]. We use SentencePiece [45] to pre-process the source and target sentences. We use 5K merge operations to learn BPE with the SentencePiece model and restrict the source and target vocabularies to at most 5K tokens. There are some places where code-switching occurs in the employed dataset.\nThe WX-transliteration tool ignores code-switched data and keeps it in the datasets as it is." }, { "figure_ref": [], "heading": "Training details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Proposed approach", "publication_ref": [ "b17" ], "table_ref": [], "text": "We use the WX-notation tool 10 for transliterating the text and the fairseq 11 [18] toolkit, a sequence modelling toolkit, to train the Transformer. We use five encoder and decoder layers. The encoder and decoder embedding dimensions are set to 512. The encoder and decoder feed-forward embedding dimensions are set to 2048. The number of encoder and decoder attention heads is set to 2. The dropout, the attention dropout, and the ReLU dropout are set to 0.4, 0.2, and 0.2, respectively. The weight decay is set to 0.0001, and the label smoothing is set to 0.2. We use the Adam optimizer, with β_1 and β_2 set to 0.9 and 0.98. The learning rate schedule is inverse square root, with an initial learning rate of 1e-3 and a minimum learning rate of 1e-9. The maximum number of tokens is set to 4000. The maximum number of epochs for training is set to 100. We use a beam size of 5 for generating translations of the test set." }, { "figure_ref": [], "heading": "Guzmán et al. [3]", "publication_ref": [ "b2", "b2" ], "table_ref": [], "text": "In Guzmán et al. [3], the authors demonstrated experiments on extremely low-resource languages using the Transformer. Our proposed approach is based on the Transformer described in Guzmán et al. [3] with the addition of two extra modules, the Text Encoder and the Text Decoder. We use the Transformer model described in [3] as the NMT baseline. The projection to WX could be used for any other NMT approach as well that uses subword embeddings." }, { "figure_ref": [], "heading": "SMT", "publication_ref": [ "b53", "b54", "b55", "b56" ], "table_ref": [], "text": "We use Moses 12, an open-source toolkit, to train SMT [54].\nFor obtaining the phrase/word alignments from parallel corpora, we use GIZA++ [55]. A 5-gram KenLM language model is used for training [56]. The parameters are tuned on the validation set using MERT and tested with a test set [57]." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [ "b2", "b57", "b58", "b30", "b29", "b11" ], "table_ref": [ "tab_2", "tab_5", "tab_8", "tab_2", "tab_5" ], "text": "We compare the proposed approach with the Moses-based SMT and the Transformer-based NMT model [3], where the latter is used as the baseline for NMT. We use six evaluation metrics, BLEU 13 [12], LEBLEU [58], WupLeBleu [59], TER [31], WER, and chrF2 [30], for a better comparison of the proposed approach. We see from Tables 4 and 5 that the proposed approach improves upon the baseline for most of the pairs.
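For the automatic evaluation, BLEU, chrF2 and TER can be computed with the sacrebleu library; the paper does not state which implementations were used, and LEBLEU, WupLeBleu and WER require separate tools, so the snippet below is only an illustration on placeholder data:

from sacrebleu.metrics import BLEU, CHRF, TER

hyps = ['a first decoded sentence', 'a second decoded sentence']         # system output
refs = [['a first reference sentence', 'a second reference sentence']]   # one reference stream

print(BLEU().corpus_score(hyps, refs))   # corpus-level BLEU
print(CHRF().corpus_score(hyps, refs))   # chrF2 (character n-gram F-score, beta = 2)
print(TER().corpus_score(hyps, refs))    # translation edit rate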
The BLEU score, although a simple metric based on a comparison of n-grams, is a standard metric accepted by NLP researchers to obtain the accuracy of predicted translated outputs compared to the human-translated reference sentences. This is because it has been observed that the value of the BLEU score correlates well with the human-judged quality of translations. The formula for the BLEU score is as follows [12]:
BLEU = min(1, output_length / reference_length) · ∏_{i=1}^{4} precision_i    (4)
where the output_length and the reference_length are the lengths of the predicted sentences and the reference sentences, respectively. We also perform a comparison between SMT without WX-transliteration and SMT with it. These two sets of results are also compared with the proposed approach as shown in Table 6. In the case of SMT too, we can easily note that the performance improves in most cases by using WX as the intermediate notation, even though SMT does not use subword embeddings.
We also present some basic analysis of the scores as shown in Tables 4 and 5. We use corpus-based language relatedness and complexity measures for further analysis in the next section." }, { "figure_ref": [], "heading": "Similarity between languages", "publication_ref": [], "table_ref": [], "text": "Since there are no definitive methods to judge the similarity between two languages, we use the following techniques to compute the similarity between the languages:" }, { "figure_ref": [], "heading": "SSNGLMScore", "publication_ref": [ "b27", "b31" ], "table_ref": [ "tab_2", "tab_5" ], "text": "We use the character-level n-gram language-model-based SSNGLMScore to measure the relatedness between languages [28,32]. SSNGLMScore is computed as follows:
S_{sl,tl} = ∑_{tl=1}^{m} p_{sl,tl}(w_n | w_1^{n-1})    (5)
where S stands for the Scaled Sum of n-gram language model scores.
MS_{sl,tl} = (S_{sl,tl} - min(S_{SL,TL})) / (max(S_{SL,TL}) - min(S_{SL,TL}))    (6)
where sl and tl represent the source language and the target language, respectively. Moreover, sl ∈ SL(Gujarati, Marathi, Maithili, Nepali, Urdu, Punjabi, Hindi, Malayalam, Tamil, Telugu, Bhojpuri, Magahi) and m is the total number of sentences in the target language tl ∈ TL(Gujarati, Marathi, Maithili, Nepali, Urdu, Punjabi, Hindi, Malayalam, Tamil, Telugu, Bhojpuri, Magahi).
We train the language model as a 6-gram character-level KenLM model on the source monolingual corpus (sl). Each language model is tested on the target language (tl), and the scores are reported. Table 7 lists the cross-lingual similarity scores of Hindi, Gujarati, Marathi, Nepali, Maithili, Punjabi, Malayalam, Tamil, Telugu, Bhojpuri, Magahi, and Urdu with each other. Based on SSNGLMScore, Bhojpuri, Maithili and Magahi are the closest to Hindi, which matches linguistic knowledge about them, whereas Urdu seems to be as far from Hindi as Malayalam, and farther than Telugu. The reason Urdu is far from Hindi is partly that Urdu is in a different kind of script from Hindi which does not have a straightforward mapping to WX, but mainly because, though grammatically almost identical, the two use very different vocabularies in written and formal forms. Maithili is also the second official language of Nepal and is also highly similar to Nepali, perhaps due to prolonged close contact. What is more surprising is that the similarity between Urdu and Nepali is relatively high, whereas that between Urdu and Hindi is among the lowest. This could be because of the nature of the corpus.
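The SSNGLMScore of Eqs. (5) and (6) can be approximated with the KenLM Python bindings as sketched below. The model path is a placeholder for a 6-gram character-level ARPA model trained on the source-language monolingual corpus, and we sum log-probabilities, since the excerpt does not specify whether raw or log probabilities are summed before the min-max scaling:

import kenlm

def as_characters(sentence):
    # space-separated characters, so the n-gram model works at the character level
    return ' '.join(ch for ch in sentence.strip() if not ch.isspace())

def raw_score(source_lm_path, target_sentences):
    # Eq. (5): sum of character-LM scores of target sentences under a source-language model
    model = kenlm.Model(source_lm_path)
    return sum(model.score(as_characters(s), bos=True, eos=True) for s in target_sentences)

def scaled(scores):
    # Eq. (6): min-max scale the raw sums across all language pairs into [0, 1]
    lo, hi = min(scores.values()), max(scores.values())
    return {pair: (s - lo) / (hi - lo) for pair, s in scores.items()}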
Going through Tables 4 and 5, we find that there is an improvement in every metric except WER and TER in a majority of cases when we apply the proposed method on the translation direction from Maithili, Gujarati, Marathi, Nepali, Punjabi, and Urdu to Hindi. This observation allows us to assert that the proposed approach improves performance for translation between similar languages. Thus, even though the similarity measure we used mixes different kinds of similarities, it is suitable for our purposes because our method is based on sub-word and multilingual modelling.\nWe also see a gain of +1.34 BLEU points on Hindi to Urdu despite Urdu being far away from the rest of the language pairs in terms of the similarity score we used. There is a considerable improvement of +11.46 BLEU points on HI→NE and +10.63 BLEU points on NE→HI language pairs." }, { "figure_ref": [], "heading": "char-BLEU, TER and chrF2", "publication_ref": [ "b43" ], "table_ref": [ "tab_9" ], "text": "To better understand the slight fall in BLEU points despite the similarity for MAI → HI and the large increment in the case of NE↔HI (where Nepali and Maithili are known to be close), we also compute similarity by applying char-BLEU [44], chrF2, and TER on the training datasets of all language pairs. The reason behind using char-BLEU and chrF2 for similarity is that, since they are character-based metrics, there is a greater chance of covering the morphological aspects. Before calculating the char-BLEU, the TER, and the chrF2 evaluation metrics, the data must be in the same script. So, we convert the corpus from UTF-8 to WX-notation. Table 8 contains the char-BLEU score of language pairs, whereas Table 9 contains the TER and chrF2 scores of each language pair. We see from Tables 8 and 9 that HI and MAI are still more similar compared to other pairs. We can only hypothesize that this is due to the nature of the data that we have used." }, { "figure_ref": [], "heading": "Analysis on language complexity", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Morphological complexity", "publication_ref": [ "b73", "b34", "b35" ], "table_ref": [ "tab_10" ], "text": "Since Indian languages are morphologically rich, machine translation systems based on word tokens have difficulty with them. Therefore, we also tried to relate the results to estimates of such complexity obtained from character-level entropy. It is reasonable to assume that the greater the character-level entropy, the more morphologically complex a language is likely to be.\nCharacter-level entropy: We used character-level word entropy to estimate morphological redundancy, following Bharati et al. [74] and Bentz and Alikaniotis 2016 [35].\nA \"word\" is defined in our experiments as a space-separated token, i.e., a string of alphanumeric Unicode characters delimited by white spaces. The average information content of character types for words is then calculated in terms of Shannon entropy [36]:
H(T) = -∑_{i=1}^{V} p(c_i) log_2 p(c_i)    (7)
where V is the number of characters (c_i) in a word. Table 10 lists the word (unigram) entropy of languages at the character level, which indirectly represents languages' lexical richness, i.e., how complex word forms are in terms of the characters they are made up of. Since we compute the unigram entropy based on characters, we can say that lexical richness also indicates morphological complexity, both derivational and inflectional.
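One way to compute the character-level entropy of Eq. (7) is sketched below; whether the character distribution is estimated per word or over all word tokens of the corpus is not fully specified in the text, so this corpus-level estimate and the placeholder file name are assumptions:

import math
from collections import Counter

def char_entropy(tokens):
    # Eq. (7): Shannon entropy of the character distribution of the word tokens
    counts = Counter(ch for tok in tokens for ch in tok)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

with open('mono.hi', encoding='utf-8') as f:   # placeholder corpus path
    tokens = f.read().split()                  # "words" are whitespace-delimited tokens
print(round(char_entropy(tokens), 4))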
Based on the corpus-based word entropy values, it appears that Hindi is more morphologically complex than the other six languages. However, this may be more a matter of derivational complexity than inflectional complexity, as Hindi is relatively simpler in terms of inflectional morphology. The high derivational complexity of Hindi is because it is the official language of India and is more standardized than most other Indian languages. It, therefore, has borrowed and coined a large number of complicated words and technical terms, whether from Persian or Sanskrit or English. This adds a great deal to the derivational complexity of written formal Hindi, compared to commonly spoken Hindi. At least, this is our hypothesis based on the similarity and complexity results.\nWe also find that our approach shows a considerable improvement of more than 10 BLEU points in both directions for the Hindi-Nepali language pair, i.e., NE→HI and HI→NE. Such improvement may be attributed to the effect caused by projecting to a common multilingual orthographic-phonetic notation, that is, WX. This probably helps the Transformer learn the context between languages better with the help of the SentencePiece tokenizer.\nIn Tables 11, 12 and 13, we present the values of word entropy and redundancy at the character level. These tables show that the entropy increases when converting to WX and the redundancy decreases. This is evidence of the fact that the projection to a common orthographic and phonetic space causes the entropy to increase and the redundancy to decrease, thus allowing more compact representations to be learnt from the data after conversion to WX in our case." }, { "figure_ref": [], "heading": "Syntactic complexity", "publication_ref": [ "b27" ], "table_ref": [ "tab_14" ], "text": "Perplexity: The perplexity (PP) of a language can be seen as a weighted average of the reciprocal of its branching factor [28]. The branching factor is the number of possible words that can succeed any given word based on the context. Therefore, perplexity, as a kind of mean branching factor, is a mean representative of the possible succeeding words given a word. Thus, it can be seen as a rough measure of syntactic complexity. If the model is a good enough representation of the true distribution for the language, then the PP value will actually indicate syntactic complexity.\nTo estimate distances of other languages from Hindi using perplexity, we trained the perplexity model on the Hindi corpus and tested it on the corpora of other languages.
PP(C) = [1 / P(S_1, S_2, S_3, ..., S_n)]^(1/W)    (8)
where corpus C contains n sentences with W words. Tables 14 and 15 contain the asymmetric and symmetric perplexity values (the symmetric values being the average of the two translation directions) between the concerned language pairs and indicate their distances from Hindi based on a character-level language model. A higher perplexity score for a pair means that the languages are more distant. We see that the Urdu-Hindi language pair has higher perplexity scores. This is mostly because these two languages, though almost identical in spoken form and in terms of core syntax and core vocabulary, use very different extended vocabularies for written and formal purposes, besides using very different writing systems.
Standard written Urdu uses Persian, Arabic, and Turkish words heavily, whether adapted phonologically or not.\nGiven the small amounts of data, it is not surprising that the values of perplexity are different in the two translation directions.\nSimilarly, standard and written Hindi uses words much more heavily derived or borrowed or even coined from Sanskrit. Despite higher perplexity between these two languages, our approach gives a +2 increment in the BLEU score, probably because the common core syntax and core vocabulary manifest themselves in every phrase or sentence and thus have higher probabilistic weight. They are, in fact, completely mutually intelligible in the spoken forms and partly in the written form. There are also a lot of Indians who can comfortably read and understand both these languages, even in their standard, written, and literary forms. The use of WX perhaps allows the models to exploit the core similarities better." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "This section discusses ablation studies conducted using the proposed method on distant and zero-shot language pairs and back-translation." }, { "figure_ref": [], "heading": "Analysis of the proposed approach on more distant language pairs", "publication_ref": [], "table_ref": [ "tab_15" ], "text": "To see whether and to what extent our approach generalizes to more distant language pairs, we also analyze the performance of the proposed approach on (ML↔HI, TA↔HI, and TE↔HI). Malayalam, Tamil, and Telugu belong the Dravidian family, and Hindi is from the Indo-Aryan family. We note that translating between these three Dravidian languages and Hindi still leads to improvement, considering both chrF2 and BLEU scores.\nThe results are shown in Table 16." }, { "figure_ref": [], "heading": "Unsupervised settings", "publication_ref": [ "b64" ], "table_ref": [ "tab_16" ], "text": "We also demonstrate the proposed approach under unsupervised scenarios on zero-shot language pairs, Bhojpuri-Hindi and Magahi-Hindi, for which no parallel train-ing corpora is available. The validation datasets for zero-shot experiments are collected from LoResMT 2020 shared tasks 14 . For training the model, we use NE↔HI language pairs and use language transfer on zero-shot pairs to evaluate the model on validation datasets. The reason behind using NE↔HI language pairs for training the model in unsupervised experiments on Bhojpuri-Hindi and Magahi-Hindi is the higher similarity between NE↔HI language pairs with both Bhojpuri-Hindi and Magahi-Hindi zero-shot language pairs based on [65].\nThe results are shown in Table 17, demonstrating the improvement in unsupervised settings also." }, { "figure_ref": [], "heading": "Back-translation", "publication_ref": [], "table_ref": [ "tab_17" ], "text": "Finally we report results on using the approach along with Back-Translation, which has been shown to benefit machine translation for very low resource languages. We selected Gujarati and Hindi language pairs for performing Back-Translation (BT) with the proposed approach. With Back-Translation also, the proposed approach shows an improvement of BLEU point +0.97 on HI→GU and +1.36 on GU→HI language pairs, as shown in Table 18." 
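The back-translation step of this ablation can be sketched with fairseq's hub interface as below, assuming a reverse-direction HI→GU model trained on the WX-encoded data; the paths are placeholders, and the WX conversion and SentencePiece segmentation of the monolingual text are omitted for brevity:

from fairseq.models.transformer import TransformerModel

# Load the reverse-direction (HI -> GU) checkpoint; paths are placeholders.
backward = TransformerModel.from_pretrained(
    'checkpoints/hi-gu', checkpoint_file='checkpoint_best.pt',
    data_name_or_path='data-bin/hi-gu')

with open('mono.wx.hi', encoding='utf-8') as f:   # preprocessed Hindi monolingual text
    mono_hi = [line.strip() for line in f if line.strip()]

# Synthetic Gujarati side produced with beam search (beam = 5).
synthetic_gu = [backward.translate(s, beam=5) for s in mono_hi]

# The synthetic (GU, HI) pairs are appended to the genuine parallel corpus,
# and the forward GU -> HI model is retrained on the augmented data.
with open('bt.gu', 'w', encoding='utf-8') as g, open('bt.hi', 'w', encoding='utf-8') as h:
    g.write('\n'.join(synthetic_gu) + '\n')
    h.write('\n'.join(mono_hi) + '\n')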
}, { "figure_ref": [], "heading": "Conclusion and Future Scope", "publication_ref": [], "table_ref": [], "text": "In this work, we have proposed a simple but effective MT system approach by encoding the source and target script into an intermediate representation, WX-notation, that helps the models to be learnt in a common phonetic and orthographic space. This language projection reduces the surface complexity of the algorithm and allows the neural network to better model the relationships between languages to provide an improved translation. Further, we have investigated these results by estimating the similarities and complexities of language pairs and individual languages to verify that our results are consistent and agree with the intuitively known facts about the closeness or distances between various language pairs. Moreover, this approach works well under unsupervised settings and works fine for some distant language pairs. The proposed approach improves baseline approaches by 0.01 BLEU points to 11.46 BLEU points. The proposed approach has some limitations and boundary conditions. First, it requires a common transliteration script, which may not be available for all morphologically rich languages. Second, it is only applicable to Indian languages. Third, we can see from Table 16 that performance on distant language pairs falls short of expectations.\nIn the future, we plan to extend this approach to the various ways described below: a. Multilingual NMT system: Since the proposed approach transforms all the Indian language scripts into a common notation called WX, this conversion favours the subword embeddings to work as character embedding. It may be, therefore, more beneficial to implement this approach in the multilingual system(s) for all Indian languages." }, { "figure_ref": [], "heading": "b. BART, MBART, and other representations:", "publication_ref": [], "table_ref": [], "text": "We tried the MBART-based translation of Gujarati to Hindi and Hindi to Gujarati, and the results are worse than a vanilla transformer. So, we plan to extend the proposed approach to more representations like BART, MBART, and other state-of-the-art representation techniques for Deep Learning.\nc. Dravidian languages and the rest of the Indo-Aryan language family: We also plan to extend the proposed approach to the Dravidian language family and the rest of the Indo-Aryan languages. " } ]
The use of subword embedding has proved to be a major innovation in Neural Machine Translation (NMT). It helps NMT to learn better context vectors for Low Resource Languages (LRLs) so as to predict the target words by better modelling the morphologies of the two languages and also the morphosyntax transfer. Even so, their performance for translation in Indian language to Indian language scenario is still not as good as for resource-rich languages. One reason for this is the relative morphological richness of Indian languages, while another is that most of them fall into the extremely low resource or zero-shot categories. Since most major Indian languages use Indic or Brahmi origin scripts, the text written in them is highly phonetic in nature and phonetically similar in terms of abstract letters and their arrangements. We use these characteristics of Indian languages and their scripts to propose an approach based on common multilingual Latin-based encodings (WX notation) that take advantage of language similarity while addressing the morphological complexity issue in NMT. These multilingual Latin-based encodings in NMT, together with Byte Pair Embedding (BPE) allow us to better exploit their phonetic and orthographic as well as lexical similarities to improve the translation quality by projecting different but similar languages on the same orthographic-phonetic character space. We verify the proposed approach by demonstrating experiments on similar language pairs (Gujarati↔Hindi, Marathi↔Hindi, Nepali↔Hindi, Maithili↔Hindi, Punjabi↔Hindi, and Urdu↔Hindi) under low resource conditions. The proposed approach shows an improvement in a majority of cases, in one case as much as ∼10 BLEU points compared to baseline techniques for similar language pairs. We also get up to ∼1 BLEU points improvement on distant and zero-shot language pairs.
Machine Translation by Projecting Text into the Same Phonetic-Orthographic Space Using a Common Encoding
[ { "figure_caption": "Figure 1 .1Figure 1. Vanilla NMT.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Proposed architecture.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Corpus Statistics showing the number of training, validation, and test sentences for each domain", "figure_data": "Lang-Pairs Train Validation TestDomainGU↔HI1578410001973PM IndiaNE↔HI13699130003000 WMT 2019 corpus, Agriculture, Entertainment, BibleMR↔HI4327410001411News, PM India, Indic WordNetPA↔HI22557671997200GNOME, KDE4, Ubuntu, wikimedia, TED2020MAI↔HI9313629722973GNOME, KDE4, wikimedia, UbuntuUR↔HI10817634523453Tanzil, GNOME, KDE4, wikimedia, UbuntuML↔HI17333500500PM IndiaTA↔HI43538500500PM IndiaTE↔HI2584500500PM IndiaBHO↔HI0500500Movie subtitles, Literature, NewsMAG↔HI0500500Movie subtitles, Literature, NewsNote: HI: Hindi, MR: Marathi, NE: Nepali, GU: Gujarati, MAI: Maithili, PA: Punjabi, UR: Urdu, ML:Malayalam, TA: Tamil, TE: Telgu, BHO: Bhojpuri, MAG: Magahi", "figure_id": "tab_0", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Experiment results (BLEU, chrF2, and TER scores).", "figure_data": "Languages(xx)BLEUchrF2TERXX→HIGuzmán et.", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "GU33.1433.1558570.5410.548NE30.5141.9746490.6580.652MR16.8722.3743440.7070.709PA78.5681.0582820.2200.216UR28.7430.0845450.6680.657MAI79.4981.8082810.2420.251HI→XXGuzmán et.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "GU25.4725.8256560.6160.619NE32.8943.5250510.6300.637MR14.0514.7641440.7890.762PA80.0181.8783840.2060.203UR22.7424.3546470.5970.596MAI86.5883.8289860.1480.168", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "LEBLEU, WupLeBleu and WER scores.", "figure_data": "Languages(xx)LEBLEUWupLeBLEUWERXX→HIGuzmán et.", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "GU0.6630.6570.6630.65766.7766.29NE0.5430.5470.5430.54766.9967.71MR0.4950.5410.4950.54172.7873.36PA0.8530.8530.8530.85322.2921.83UR0.5640.5660.5640.56668.3467.20MAI0.8650.8510.8650.85124.3425.23HI→XXGuzmán et.", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "GU0.6220.6230.6220.62373.1173.33NE0.5470.5190.5470.51963.4165.31MR0.4850.4540.4850.45480.1077.46PA0.8580.8650.8580.86520.8820.57UR0.6190.6290.6190.62962.3562.27MAI0.9160.9080.9160.90814.8316.89", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "BLEU score-based comparison of SMT, SMT + WX and the proposed approaches.", "figure_data": "Languages(xx)BLEUXX→HISMT SMT + WX ProposedGU43.4930.6933.15NE40.1453.2141.97MR7.411.4622.37PA68.3471.2281.05UR19.2121.8430.08MAI79.5681.4681.80HI→XXSMT SMT + WX ProposedGU39.2025.8925.82NE40.2154.8443.52MR7.361.4814.76PA67.2170.6481.87UR18.2418.4124.35MAI79.1283.0683.82", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "TER and chrF2 scores on the training dataLanguages GU → HI MR → HI NE → HI MAI → HI PA → HI UR → HI", "figure_data": "TER1.0661.3001.0520.6100.9881.093chrF2382934653212Languages HI → GU HI → MR HI → NE HI → MAI HI → PA HI → URTER0.8840.9400.8870.5550.9061.044chrF2392936623010Note: Applying TER and chrF2 scores on the training data of 
both the languages of apair", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Character-based entropy of languages with or without applying WX-notation", "figure_data": "Languages Character Entropy Character Entropy* DifferenceGujarati5.03683.74541.2914Marathi5.02203.68461.3374Nepali4.67223.57701.0952Maithili5.11593.91621.1997Punjabi5.08343.79321.2902Urdu4.88214.11980.7623Hindi5.21953.79741.4221", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Entropy computed on Vocabulary", "figure_data": "Complete corpusRestricted corpusLanguageWithout WXWith WXWithout WXWith WXMaxMedian Average MaxMedian Average MaxMedian Average MaxMedian AverageHI3.1674 0.58970.61964.94331.24841.31483.1623 0.59290.62304.94141.24951.3158GU6.4712 0.81130.838917.9337 1.46771.51576.4735 0.81280.841022.2253 1.46811.5163NE3.0311 0.80080.82876.68451.43271.48351.8080 0.53500.56364.74871.12621.1575MR3.7534 0.59820.62817.73721.23311.29953.5845 0.80490.84597.74001.21301.2734PA2.2077 0.57780.60488.99781.03491.11052.1662 0.55000.575313.5759 0.96441.0405UR2.8580 0.64840.67863.0920.77480.80882.2477 0.62820.65743.32970.75230.7828MAI2.0163 0.50970.53264.31351.09041.14321.6417 0.47730.50033.89231.04011.0888", "figure_id": "tab_11", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Redundancy", "figure_data": "LanguagesComplete corpus Without WX WXRestricted corpus Without WX WXHI0.89550.7693 0.89490.7691GU0.86060.7401 0.86030.7400NE0.88060.7866 0.91110.8147MR0.90500.7993 0.86100.7807PA0.91860.8502 0.91940.8554UR0.89410.8741 0.89680.8750MAI0.91250.8121 0.91720.8171", "figure_id": "tab_12", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Entropy and Redundancy", "figure_data": "Language pairWithout WX Maximum Entropy Median EntropyAverage EntropyRedundancyWith WX Maximum EntropyMedian EntropyAverage EntropyRedundancyGU-HI4.82920.43224 0.49850.927917.77311.39581.45090.7512NE-HI3.02730.74140.77250.89487.14541.35611.41260.7988MR-HI3.75570.60030.63030.90477.73421.23091.29770.7995PA-HI1.66420.33590.35100.95439.02321.11991.18430.8414UR-HI1.98410.35470.38640.94894.01330.79280.84720.8783MAI-HI2.04830.53400.55550.90966.82701.10971.16560.8091", "figure_id": "tab_13", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Cross-lingual distance between languages after applying character-level language model using perplexity-based score (Unnormalized on language directions)", "figure_data": "Language BHOGUHIMAGMAIMLMRNEPATATEURBHO0.0010 0.0443 0.0280 0.02900.0617 0.1006 0.0418 0.1648 0.0507 0.1383 0.0790 0.3134GU0.0319 0.00.0312 0.05040.0704 0.0648 0.0302 0.1736 0.0663 0.1117 0.0556 0.2675HI0.0116 0.0312 0.0007 0.02900.0715 0.0900 0.0190 0.1670 0.0458 0.1393 0.0705 0.2933MAG0.0414 0.0992 0.0712 6.3465e-06 0.0739 0.1897 0.0924 0.1710 0.0834 0.2036 0.1693 0.3491MAI0.0806 0.0875 0.0891 0.13400.0002 0.1394 0.0986 0.1769 0.0941 0.2168 0.1295 0.4006ML0.0713 0.0667 0.0773 0.09620.0790 0.0002 0.0695 0.1323 0.1171 0.0497 0.0403 0.3785MR0.0308 0.0280 0.0314 0.05030.0682 0.0623 0.0007 0.1625 0.0644 0.1175 0.0445 0.3423NE0.0949 0.1536 0.1370 0.10650.0955 0.1962 0.1321 0.0003 0.2130 0.2506 0.1862 0.3350PA0.0545 0.0935 0.0612 0.07820.0892 0.1573 0.0785 0.2762 0.0003 0.1716 0.1485 0.3245TA0.1239 0.1439 0.1384 0.15950.1009 0.0487 0.1204 0.1761 0.1613 0.0003 0.0972 0.3910TE0.0511 0.0539 0.0562 0.07850.0783 0.0449 0.0510 0.1513 0.1102 0.1165 0.0002 0.3401UR1.00.2823 0.5221 0.47710.1984 0.4330 0.4014 0.6438 0.3150 0.3276 0.5548 
0.0001", "figure_id": "tab_14", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Experiments on distant language pairs.", "figure_data": "ModelBLEU chrF2 BLEU chrF2 BLEU chrF2HI → MLHI → TAHI → TEGuzmán et.al [3]5.12307.57417.1926Proposed3.61327.86444.5627ML → HITA → HITE → HIGuzmán et.al [3]9.082914.55377.9727Proposed9.963315.43409.0930", "figure_id": "tab_15", "figure_label": "16", "figure_type": "table" }, { "figure_caption": "Applying on zero-shot language pairs.", "figure_data": "ModelHI → BHOBHO → HIHI → MAGMAG → HIBLEU chrF2 BLEU chrF2 BLEU chrF2 BLEU chrF2Guzmán et.al [3]3.34144.58221.67134.8619Proposed3.13175.72272.68185.3225", "figure_id": "tab_16", "figure_label": "17", "figure_type": "table" }, { "figure_caption": "Experiments on back-translation.", "figure_data": "Model", "figure_id": "tab_17", "figure_label": "18", "figure_type": "table" } ]
Amit Kumar; Shantipriya Parida; Ajay Pratap; Anil Kumar Singh
[ { "authors": "B Gillon", "journal": "Computational Linguistics", "ref_id": "b0", "title": "Review of Natural language processing: a Paninian perspective by Akshar Bharati", "year": "1995" }, { "authors": "G Klein; Y Kim; Y Deng; J Senellart; A M Rush", "journal": "", "ref_id": "b1", "title": "OpenNMT: Open-Source Toolkit for Neural Machine Translation", "year": "2017" }, { "authors": "F Guzmán; P J Chen; M Ott; J Pino; G Lample; P Koehn; V Chaudhary; M A Ranzato", "journal": "", "ref_id": "b2", "title": "The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English", "year": "2019" }, { "authors": "Y Liu; J Gu; N Goyal; X Li; S Edunov; M Ghazvininejad; M Lewis; L Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b3", "title": "Multilingual Denoising Pre-training for Neural Machine Translation", "year": "2020" }, { "authors": "M Johnson; M Schuster; M Le Q V, Krikun; Y Wu; Z Chen; N Thorat; F Viégas; M Wattenberg; G Corrado; M Hughes; Dean J ", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b4", "title": "Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation", "year": "2017" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L Kaiser; I Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Attention is All you Need", "year": "2017" }, { "authors": "T Luong; H Pham; C D Manning", "journal": "", "ref_id": "b6", "title": "Effective Approaches to Attention-based Neural Machine Translation", "year": "2015" }, { "authors": "A Currey; K Heafield", "journal": "", "ref_id": "b7", "title": "Incorporating Source Syntax into Transformer-Based Neural Machine", "year": "2019" }, { "authors": "A Raganato; Y Scherrer; J Tiedemann", "journal": "", "ref_id": "b8", "title": "Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation", "year": "2020" }, { "authors": "M Lewis; Y Liu; N Goyal; M Ghazvininejad; Mohamed A Levy; O Stoyanov; V Zettlemoyer; L ", "journal": "", "ref_id": "b9", "title": "BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation, Translation, and Comprehension", "year": "2020" }, { "authors": "S Edunov; M Ott; M Auli; D Grangier", "journal": "", "ref_id": "b10", "title": "Understanding Back-Translation at Scale", "year": "2018" }, { "authors": "K Papineni; S Roukos; T Ward; W Zhu", "journal": "", "ref_id": "b11", "title": "Bleu: a Method for Automatic Evaluation of Machine Translation", "year": "2002" }, { "authors": "P Hoang V C D, Koehn; G Haffari; T Cohn", "journal": "", "ref_id": "b12", "title": "Iterative Back-Translation for Neural Machine Translation", "year": "2018" }, { "authors": "J Philip; S Siripragada; V Namboodiri; C V Jawahar", "journal": "Association for Computing Machinery", "ref_id": "b13", "title": "Revisiting Low Resource Status of Indian Languages in Machine Translation", "year": "2021" }, { "authors": "B Haddow; F Kirefu", "journal": "", "ref_id": "b14", "title": "PMIndia -A Collection of Parallel Corpora of Languages of India", "year": "2020" }, { "authors": "J Slocum", "journal": "Computational linguistics", "ref_id": "b15", "title": "A survey of machine translation: Its history, current status and future prospects", "year": "1985" }, { "authors": "P Koehn", "journal": "Cambridge University Press", "ref_id": "b16", "title": "Statistical Machine Translation", "year": "2009" }, 
{ "authors": "M Ott; S Edunov; A Baevski; A Fan; S Gross; N Ng; D Grangier; M Auli", "journal": "", "ref_id": "b17", "title": "fairseq: A Fast, Extensible Toolkit for Sequence Modeling", "year": "2019" }, { "authors": "T Dave", "journal": "", "ref_id": "b18", "title": "A study of the Gujarati language in the 16th century (VS), with special reference to the MS Balavabodha to Upadesamala", "year": "1931" }, { "authors": "A Booth", "journal": "Technology Press of the Massachusetts Institute of Technology and Wiley", "ref_id": "b19", "title": "Machine translation of languages, fourteen essays", "year": "1955" }, { "authors": "I Sutskever; O Vinyals; Le Q V ", "journal": "", "ref_id": "b20", "title": "Sequence to Sequence Learning with Neural Networks", "year": "2014" }, { "authors": "J Devlin; M W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b21", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Z Lan; M Chen; S Goodman; K Gimpel; P Sharma; R Soricut", "journal": "", "ref_id": "b22", "title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations", "year": "2020" }, { "authors": "L Barrault; O Bojar; C Costa-Jussà M R, Federmann; M Fishel; Y Graham; B Haddow; M Huck; P Koehn; S Malmasi; C Monz; M Müller; S Pal; M Post; M Zampieri", "journal": "", "ref_id": "b23", "title": "Findings of the 2019 Conference on Machine Translation (WMT19)", "year": "2019" }, { "authors": "M Emeneau", "journal": "Linguistic Society of America", "ref_id": "b24", "title": "India as a Lingustic Area. Language", "year": "1956" }, { "authors": "S Diwakar; P Goyal; R Gupta", "journal": "Saarland University Press", "ref_id": "b25", "title": "Transliteration among indian languages using WX notation", "year": "2010" }, { "authors": "M Post", "journal": "", "ref_id": "b26", "title": "A Call for Clarity in Reporting BLEU Scores", "year": "2018" }, { "authors": " Mundotiya R K; M K Singh; R Kapur; S Mishra; A K Singh", "journal": "", "ref_id": "b27", "title": "Basic Linguistic Resources and Baselines for Bhojpuri, Magahi and Maithili for Natural Language Processing", "year": "2020" }, { "authors": "J Tiedemann", "journal": "European Language Resources Association (ELRA", "ref_id": "b28", "title": "Parallel Data, Tools and Interfaces in OPUS", "year": "2012" }, { "authors": "M Popović", "journal": "", "ref_id": "b29", "title": "chrF: character n-gram F-score for automatic MT evaluation", "year": "2015" }, { "authors": "M Snover; B Dorr; R Schwartz; L Micciulla; J Makhoul", "journal": "", "ref_id": "b30", "title": "A study of translation edit rate with targeted human annotation", "year": "2006" }, { "authors": "T Rama; A K Singh", "journal": "", "ref_id": "b31", "title": "From Bag of Languages to Family Trees From Noisy Corpus", "year": "2009" }, { "authors": "R Sennrich; B Haddow; A Birch", "journal": "", "ref_id": "b32", "title": "Neural Machine Translation of Rare Words with Subword Units", "year": "2016" }, { "authors": "T Kudo", "journal": "", "ref_id": "b33", "title": "Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates", "year": "2018" }, { "authors": "C Bentz; D Alikaniotis", "journal": "", "ref_id": "b34", "title": "The word entropy of natural languages", "year": "2016" }, { "authors": "C Shannon; W Weaver", "journal": "The University of Illinois Press", "ref_id": "b35", "title": "The mathematical theory of communication", "year": "1949" }, { "authors": "K 
Kettunen", "journal": "Journal of Quantitative Linguistics Taylor & Francis", "ref_id": "b36", "title": "Can type-token ratio be used to show morphological complexity of languages", "year": "2014" }, { "authors": "A Singh", "journal": "", "ref_id": "b37", "title": "Modeling and Application of Linguistic Similarity", "year": "2010" }, { "authors": "A Singh", "journal": "", "ref_id": "b38", "title": "Using a single framework for computational modeling of linguistic similarity for solving many NLP problems", "year": "2007" }, { "authors": "A Singh", "journal": "", "ref_id": "b39", "title": "A computational phonetic model for indian language scripts", "year": "2006" }, { "authors": "K Singh; T Rama; P Dasigi", "journal": "", "ref_id": "b40", "title": "A Computational Model of the Phonetic Space and Its Applications", "year": "2009" }, { "authors": "N Trubetzkoy", "journal": "", "ref_id": "b41", "title": "Proposition 16", "year": "1928" }, { "authors": "M Haspelmath", "journal": "De Gruyter Mouton", "ref_id": "b42", "title": "The European linguistic area: standard average European. Halbband Language Typology and Language Universals 2.Teilband", "year": "2001" }, { "authors": "E Denoual; Y Lepage", "journal": "", "ref_id": "b43", "title": "BLEU in Characters: Towards Automatic MT Evaluation in Languages without Word Delimiters", "year": "2005" }, { "authors": "T Kudo; J Richardson", "journal": "", "ref_id": "b44", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing", "year": "2018" }, { "authors": "L Madaan; Sharma; P Singla", "journal": "", "ref_id": "b45", "title": "Transfer Learning for Related Languages: Submissions to the WMT20 Similar Language Translation Task", "year": "2020" }, { "authors": "V Mujadia; D Sharma", "journal": "", "ref_id": "b46", "title": "NMT based Similar Language Translation for Hindi -Marathi", "year": "2020" }, { "authors": "K Rathinasamy; A Singh; B Sivasambagupta; Prasad Neerchal; P Sivasankaran; V ", "journal": "Proc. 
Fifth Conference on Machine Translation", "ref_id": "b47", "title": "Infosys Machine Translation System for WMT20 Similar Language Translation Task", "year": "2020" }, { "authors": "S R Laskar; Pakray; S Bandyopadhyay", "journal": "ACL", "ref_id": "b48", "title": "Neural Machine Translation: Hindi-Nepali", "year": "2019" }, { "authors": "A K Ojha; P Rani; A Bansal; B R Chakravarthi; Kumar; J Mccrae", "journal": "", "ref_id": "b49", "title": "NUIG-Panlingua-KMI Hindi-Marathi MT Systems for Similar Language Translation Task @ WMT", "year": "2020" }, { "authors": "A Kumar; R Baruah; R Mundotiya; A Singh", "journal": "", "ref_id": "b50", "title": "Transformer-based Neural Machine Translation System for Hindi -Marathi: WMT20 Shared Task", "year": "2020" }, { "authors": "S Pal; M Zampieri", "journal": "", "ref_id": "b51", "title": "Neural Machine Translation for Similar Languages: The Case of Indo-Aryan Languages", "year": "2020" }, { "authors": "M Przystupa; Abdul-Mageed ; M ", "journal": "ACL", "ref_id": "b52", "title": "Neural Machine Translation of Low-Resource and Similar Languages with Backtranslation", "year": "2019" }, { "authors": "P Koehn; H Hoang; A Birch; C Callison-Burch; M Federico; N Bertoldi; B Cowan; W Shen; C Moran; R Zens; C Dyer; O Bojar; Constantin; E Herbst", "journal": "", "ref_id": "b53", "title": "Moses: Open Source Toolkit for Statistical Machine Translation", "year": "2007" }, { "authors": "F Och; H Ney", "journal": "Computational Linguistics", "ref_id": "b54", "title": "A Systematic Comparison of Various Statistical Alignment Models", "year": "2003" }, { "authors": "K Heafield", "journal": "", "ref_id": "b55", "title": "KenLM: Faster and Smaller Language Model Queries", "year": "2011" }, { "authors": "F Och", "journal": "", "ref_id": "b56", "title": "Minimum Error Rate Training in Statistical Machine Translation", "year": "2003" }, { "authors": "S Virpioja; S Grönroos", "journal": "", "ref_id": "b57", "title": "LeBLEU: N-gram-based Translation Evaluation Score for Morphologically Complex Languages", "year": "2015" }, { "authors": "D Banik; P Bhattacharyya", "journal": "NLPAI", "ref_id": "b58", "title": "The wordnet-based evaluation metric for machine translation", "year": "2018" }, { "authors": "T Kim", "journal": "Korean J Anesthesiol", "ref_id": "b59", "title": "T test as a parametric statistic", "year": "2015" }, { "authors": "Y Balashov", "journal": "Inquiry", "ref_id": "b60", "title": "The boundaries of meaning: a case study in neural machine translation", "year": "2022" }, { "authors": "D Banik; P Bhattacharyya", "journal": "Sādhanā", "ref_id": "b61", "title": "Statistical machine translation based on weighted syntax-semantics", "year": "2020" }, { "authors": "W Bao; J Zhang; Pan; X Yin", "journal": "", "ref_id": "b62", "title": "A Novel Chinese Dialect TTS Frontend with Non-Autoregressive Neural Machine Translation", "year": "2022" }, { "authors": "D Banik; P Bhattacharyya; S Bhattacharyya", "journal": "Applied Soft Computing", "ref_id": "b63", "title": "Assembling translations from multi-engine machine translation outputs", "year": "2019" }, { "authors": "A Kumar; R K Mundotiya; A Pratap; A Singh", "journal": "Journal of King Saud University -Computer and Information Sciences", "ref_id": "b64", "title": "TLSPG: Transfer learning-based semisupervised pseudo-corpus generation approach for zero-shot translation", "year": "2022" }, { "authors": "D Banik", "journal": "International Journal of Speech Technology", "ref_id": "b65", "title": "Phrase table re-adjustment 
for statistical machine translation", "year": "2021" }, { "authors": "Bharathi Raja; C Rani; P Arcan; M Mccrae; J ", "journal": "SN Computer Science", "ref_id": "b66", "title": "A survey of orthographic information in machine translation", "year": "2021" }, { "authors": "A Kumar; A Pratap; A Singh", "journal": "IEEE Transactions on Emerging Topics in Computational Intelligence", "ref_id": "b67", "title": "Generative Adversarial Neural Machine Translation for Phonetic Languages via Reinforcement Learning", "year": "2023" }, { "authors": "A Kumar; A Pratap; K Singh; S Saha", "journal": "Expert Systems with Applications", "ref_id": "b68", "title": "Addressing domain shift in neural machine translation via reinforcement learning", "year": "2022" }, { "authors": "D Kakwani; A Kunchukuttan; S Golla; N C ; G Bhattacharyya; A Khapra; M M Kumar; P ", "journal": "", "ref_id": "b69", "title": "IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages", "year": "2020" }, { "authors": "R Dabre; H Shrotriya; A Kunchukuttan; R Puduppully; M Khapra; P Kumar", "journal": "", "ref_id": "b70", "title": "In-dicBART: A Pre-trained Model for Indic Natural Language Generation", "year": "2022" }, { "authors": "C Cieri; M Maxwell; S Strassel; J Tracey", "journal": "ELRA", "ref_id": "b71", "title": "Selection Criteria for Low Resource Language Programs", "year": "2016" }, { "authors": "S Sitaram", "journal": "", "ref_id": "b72", "title": "Pronunciation Modeling for Synthesis of Low Resource Languages", "year": "2015" }, { "authors": "A Bharati; K P Rao; R Sangal; S Bendre", "journal": "", "ref_id": "b73", "title": "Basic statistical analysis of corpus and cross comparison among corpora", "year": "2000" } ]
[ { "formula_coordinates": [ 4, 105.15, 398.11, 185.4, 10.95 ], "formula_id": "formula_0", "formula_text": "C = Encoder(X 1 , X 2 , X 3 , ..., X n )(1)" }, { "formula_coordinates": [ 4, 49.61, 484.66, 240.95, 33.88 ], "formula_id": "formula_1", "formula_text": "Decoder(C, Y 1 , Y 2 , Y 3 , ..., Y n ) = Y ′ 1 , Y ′ 2 , Y ′ 3 , ..., Y ′ m (2) where, {Y 1 , Y 2 , Y 3 ,..., Y n } and {Y ′ 1 , Y ′ 2 , Y ′ 3 ,..., Y ′ m }" }, { "formula_coordinates": [ 4, 367.63, 453.2, 178.04, 32.6 ], "formula_id": "formula_2", "formula_text": "attn i = so f tmax ( Q i K i T √ d k ) V i(3)" }, { "formula_coordinates": [ 7, 310.19, 671.22, 235.48, 37.18 ], "formula_id": "formula_3", "formula_text": "BLEU = min ( 1, output_length re f erence_length )         4 ∏ i=1 precision i         ,(4)" }, { "formula_coordinates": [ 8, 374.33, 556.76, 171.34, 30.54 ], "formula_id": "formula_4", "formula_text": "S sl,tl = m ∑ tl=1 p sl.tl (w n |w n-1 1 ),(5)" }, { "formula_coordinates": [ 8, 349.36, 627.55, 196.31, 23.92 ], "formula_id": "formula_5", "formula_text": "MS sl,tl = S sl,tl -min(S S L,T L ) max(S S L,T L ) -min(S S L,T L ) ,(6)" }, { "formula_coordinates": [ 9, 365.74, 581.85, 179.93, 30.5 ], "formula_id": "formula_6", "formula_text": "H(T ) = - V ∑ i=1 p(c i ) log 2 (p(c i ))(7)" }, { "formula_coordinates": [ 10, 360.43, 480.91, 185.24, 35.29 ], "formula_id": "formula_7", "formula_text": "PP(C) = W √ 1 P(S 1 , S 2 , S 3 , ..., S n )(8)" } ]
10.18653/v1/P17-2082
2023-05-21
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b15", "b23", "b0", "b3", "b46", "b7", "b22", "b18", "b5", "b4", "b44", "b1", "b35", "b42", "b26", "b25", "b44", "b21", "b10", "b34", "b2", "b34", "b19", "b34", "b2", "b2", "b34", "b2", "b2", "b35", "b42", "b26", "b25", "b21", "b19", "b34", "b2", "b19", "b14", "b34", "b2", "b37", "b34", "b2" ], "table_ref": [], "text": "Information Extraction (IE) aims at extracting structured information from unstructured texts to classify and reconstruct massive amounts of content automatically. It covers a great variety of tasks, such as named entity recognition (NER) (Li et al., 2020;Ma et al., 2023), relation extraction (Abad et al., 2017;Bekoulis et al., 2018) and (1) an event trigger (deployed) and its argument (17000 U.S. Army soldiers), and (2) two entities corresponding to subject (17000 U.S. Army soldiers) and object (Persian Gulf) respectively.\nevent extraction (Zhou et al., 2016;Chen et al., 2017). Recently, thanks to the breakthroughs of deep learning, neural network models have achieved significant improvements in various IE tasks (Luan et al., 2019;Lin et al., 2020;Chen et al., 2022). However, it is too expensive to annotate a large amount of data in low-resource languages for supervised IE training. Therefore, cross-lingual IE under low-resource (Cabral et al., 2020;Yarmohammadi et al., 2021;Agirre, 2022) settings has attracted considerable attention. In this paper, we study zero-shot cross-lingual IE, where annotated training data is available only in the source language, but not in the target language.\nExisting methods on reducing the need for annotated data in cross-lingual IE tasks can be generally categorized into shared representation spacebased (Tsai et al., 2016;Wu and Dredze, 2019;M'hamdi et al., 2019), translation-based (Mayhew et al., 2017;Yarmohammadi et al., 2021;Lou et al., 2022) and language-universal features-based methods (Huang et al., 2018;Subburathinam et al., 2019;Ahmad et al., 2021). In this work, we study the last one. Subburathinam et al. (2019) have verified that language-universal symbolic and distributional representations are complementary for cross-lingual structure transfer. Language-universal features-based methods aggregate universal and complementary information to obtain schema consistency across languages for effective knowledge transfer (Liu et al., 2019;Subburathinam et al., 2019;Ahmad et al., 2021). For example, Ahmad et al. (2021) utilized part-of-speech (POS), entity type and dependency features to construct a cross-lingual transfer framework.\nAlthough language-universal features-based methods have achieved competitive performance on zero-shot cross-lingual IE, the features used in previous work are still incapable of capturing sufficient contextualized semantics. Existing methods have never explored the potential of establishing interactions between these features and contextual representations, which leads to a representation gap between them and information loss. Besides, previous studies focus on employing dependency structures for various IE tasks (Subburathinam et al., 2019;Ahmad et al., 2021), describing only the dependency relationships between two words. Essentially, most IE tasks aim at identifying and categorizing phrase spans, thus the span-level information such as the constituent span attributes and the relationships between multiple spans are crucial. 
However, capturing this information is well beyond the scope of features studied extensively in prior work (Ahmad et al., 2021). Here, we give an example to illustrate the importance of constituent span information as shown in Figure 1. For event extraction, given an event trigger deployed and two argument candidates U.S. Army and 17000 U.S. Army soldiers, the former candidate may be recognized as an argument without any guidance. However, the latter candidate is a constituent span which shares the same parent node labeled VP with deployed, then it can be recognized correctly as an argument under this signal. This property is common in many languages, so it is beneficial to be able to model this universal information when transferred across languages.\nOn account of the above issues, a syntaxaugmented hierarchical interactive encoder (SHINE) is proposed in this paper to transfer crosslingual IE knowledge. It is capable of interactively capturing complementary information between language-universal features and contextual information, so as to derive language-agnostic contextualized representations for various IE tasks. On the one hand, a multi-level interaction network is designed to encourage the distributions of features and contextual representations to approximate each other at three levels via an interactive loss mechanism. Specifically, the global-level interaction operates on the entire sentence, the local-level one operates on sub-spans in a sentence, and the task-level one operates on task-related mentions. In this way, the model can hierarchically interact the complementary information to strengthen its domain adaptability. Features of POS, dependency relation and entity type used in previous studies are adopted to verify the effectiveness of the proposed method.1 Additionally, a new syntax feature of constituency structure is introduced to explicitly utilize the span-level information in the text. Considering the overlap between constituent spans, these spans are first converted into word-level embeddings. Then a frequency matrix is designed where each element represents the number of occurrences of a sub-span in all constituent spans. These not only enrich the attribute information of constituent spans, but also model the importance of each sub-span, so that task-related spans can be accurately captured for effective cross-language transfer.\nTo measure the effectiveness of the proposed method and to test its generalization ability, SHINE is evaluated on three IE tasks including NER, relation extraction and event extraction. Experiments across seven languages on four benchmarks are conducted for evaluation. Results show that SHINE achieves highly promising performance, verifying the importance of interactions between universal features and contextual representations, as well as the effectiveness of constituency structure for IE. To facilitate others to reproduce our results, we will publish all source code later.\nIn summary, our contributions in this paper are three-fold: (1) A multi-level interaction framework is proposed to interact the complementary structured information contained in language-universal features and contextual representations. (2) The constituency feature is first introduced to explicitly utilize the constituent span information for crosslingual IE. 
(3) Experiments across seven languages on three tasks and four benchmarks verify the effectiveness and generalization ability of SHINE.\nMany researchers have investigated shared representation space-based, translation-based and language-universal features-based methods for zero-shot cross-lingual IE tasks. Shared representation space-based models capture features of labeled source-language data using multilingual pre-trained models, and then they are applied to the target languages directly (Tsai et al., 2016;Wu and Dredze, 2019;M'hamdi et al., 2019). However, this type of methods can transfer only superficial information due to representation discrepancy between source and target languages. Besides, translation-based methods translate texts from the source language to the target languages, and then project annotations accordingly to create silver annotations (Mayhew et al., 2017;Lou et al., 2022). But noise from translation and projection might degrade performance. Languageuniversal features-based methods are effective in cross-lingual IE by utilizing universal and complementary information to learn multi-lingual common space representations (Liu et al., 2019;Subburathinam et al., 2019;Ahmad et al., 2021). Liu et al. (2019) utilized GCN (Kipf and Welling, 2017) to learn representations based on universal dependency parses to improve cross-lingual transfer for IE. Subburathinam et al. (2019) exploited other features such as POS and entity type for embedding. Ahmad et al. (2021) employed transformer (Vaswani et al., 2017) to fuse structural information to learn the dependencies between words with different syntactic distances.\nCompared with Subburathinam et al. (2019) and Ahmad et al. (2021) that are the most relevant to this work, the main differences are highlighted. These methods have never explored the potential of establishing interactions between languageuniversal features and contextual representations. Besides, the well-studied features focus on word attributes and the relationship between words. They cannot model span-level information such as the constituent span attributes and the relationships between multiple spans which are crucial for crosslingual IE. To the best of our knowledge, this paper makes the first attempt to interactively capture complementary information between universal features and contextual information, and to introduce constituency structure to explicitly utilize spanlevel information." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we present the detailed framework of the proposed SHINE. For one thing, a multi-level interaction network is introduced. Specifically, three classes of adaptation methods are adopted to interact language-universal with contextual information via an interactive loss mechanism. In order to verify the effectiveness of the proposed framework, features such as POS, dependency relation and entity type used in previous studies are adopted. Furthermore, a new syntax feature of constituency structure is introduced to make up for the deficiencies of the existing work in explicitly modeling and utilizing the constituent span information." }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [], "table_ref": [], "text": "Denote one sentence as x = {x i } L i=1 with its language-universal features x l and annotations y, where x i denotes a word and L denotes the length of the sentence. An IE model generates predictions ȳ. 
The labeled training data D S train = {(x, x l , y)} is available for the source language, while only labeled testing data D T test = {(x, x l , y)} is available for the target language. Formally, zero-shot cross-lingual IE aims at achieving good performance on D T test by leveraging D S train ." }, { "figure_ref": [], "heading": "Basic Encoder", "publication_ref": [ "b8", "b2" ], "table_ref": [], "text": "The basic encoder in this paper consists of an mBERT (Devlin et al., 2019) and an L-layer Transformer denoted as Trans-L. They are utilized to extract the contextual representations h c and language-universal representations h l respectively. Following Ahmad et al. (2021), given a sentence x of length L from source language data D S train , three well-studied language-universal features are denoted as one-hot vectors x p for POS, x d for dependency relation, and x e for entity type. Besides, the proposed constituent type vector x c is not a one-hot vector and its construction is described in Section 3.4. These calculations are formulated as:\nx l = [x p ; x d ; x e ; x c ],\n(1)\nh c = mBERT(x),(2)\nh l = Trans-L(x l ),(3)\nwhere x l is the concatenation of four languageuniversal features.\nh c = {h c i } L i=1 and h l = {h l i } L i=1\n. h c i and h l i are representations of x i and x l i respectively. The overall structure of the proposed SHINE. h c and h l refer to the contextual and language-universal representations, respectively. h w is the concatenation of h c and h l . h f is the final representation of the model." }, { "figure_ref": [], "heading": "Multi-level Interaction Network", "publication_ref": [ "b19", "b11", "b39" ], "table_ref": [], "text": "Each type of language-universal features emphasizes its respective property during cross-language transfer, thus versatile abilities are exhibited when various features are available. For example, POS can help augment the connection between words of the same part of speech across different languages, and dependency models the relationship between words to mitigate the word-order problem (Liu et al., 2019). However, there is usually a semantic gap between these features and the contextual representations, leading to information loss since no explicit interactions are established between them when fusing these features.\nIn this section, a multi-level interaction framework is designed to hierarchically interact the contextual representations h c and the languageuniversal representations h l at the global-, localand task-level respectively, via an interactive loss mechanism. As Figure 2 depicts, the global-level interaction operates over the entire sentence to enhance the interaction between h c and h l as a whole. Besides, the local-level divides a sentence into fixed-length sub-spans\nX s = {X s i } T i=1 where X s i = {x j } i+P -1 j=i\n, enhancing attention to span information. Here P is the length of each span and T is the number of all sub-spans. Lastly, the task-level interaction utilizes task-related mentions (such as entities, event triggers, etc.) to strengthen the model adaptation ability to different tasks. In this way, various syntax features and contextual information can directly interact with each other at different levels, so that the representation capability can be enhanced at both word-level and spanlevel. 
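To make the interaction described above concrete, the following is a minimal PyTorch-style sketch of how the global-, local- and task-level objectives could be computed from h^c and h^l using the symmetrized KL divergence formalized in Eqs. (4)-(6) below. The function and argument names are ours, and the softmax step that turns the representations into distributions is an assumption the paper leaves implicit; treat this as an illustration rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def symmetric_kl(p_feats, q_feats):
    """Symmetrized KL between two sets of representations (cf. Eqs. 4-6).

    The representations are turned into distributions with a softmax over the
    hidden dimension -- one plausible reading; the paper leaves this
    normalization step implicit."""
    p = F.softmax(p_feats, dim=-1)
    q = F.softmax(q_feats, dim=-1)
    log_p, log_q = torch.log(p + 1e-12), torch.log(q + 1e-12)
    return ((p * (log_p - log_q)).sum(-1) + (q * (log_q - log_p)).sum(-1)).mean()

def interaction_losses(h_c, h_l, mention_mask, span_len=4):
    """Global-, local- and task-level interaction losses for one sentence.

    h_c, h_l:     (L, d) contextual / language-universal representations
    mention_mask: (L,) boolean mask over task-related mention tokens
    span_len:     length P of the fixed sub-spans used at the local level
    """
    # Global level (Eq. 4): the whole sentence interacts as a single unit.
    loss_g = symmetric_kl(h_c, h_l)

    # Local level (Eq. 5): average over fixed-length sub-spans x_i .. x_{i+P-1}.
    length = h_c.size(0)
    starts = range(max(length - span_len + 1, 1))
    loss_l = torch.stack(
        [symmetric_kl(h_c[i:i + span_len], h_l[i:i + span_len]) for i in starts]
    ).mean()

    # Task level (Eq. 6): only task-related mentions (entities, triggers, ...).
    loss_t = symmetric_kl(h_c[mention_mask], h_l[mention_mask]) if mention_mask.any() \
        else h_c.new_zeros(())

    return loss_g, loss_l, loss_t
```

The three terms are later summed with the weight α and added to the task loss, as in Eq. (15).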
To measure the distribution discrepancy of two different random variables and effectively enable information sharing, a symmetrized Kullback-Leibler (KL) divergence (Pérez-Cruz, 2008) is employed following Jiang et al. (2020) and Wang et al. (2021) at each level of interaction. KL(P Q) = k p k log (p k /q k ) denotes the KLdivergence of two discrete distributions P and Q with the associated parameters of p k and q k , respectively. These calculations are formulated as:\nLg = KL(h c ||h l ) + KL(h l ||h c ),(4)\nL l = 1 T T i=1 [KL(H c i ||H l i )+ KL(H l i ||H c i )],\n(5)\nLt = KL(H c t ||H l t ) + KL(H l t ||H c t ),(6)\nwhere\nH c i = {h c j } i+P -1 j=i and H l i = {h l j } i+P -1 j=i . H c\nt and H l t are the task-related mention representations from mBERT and Transformer respectively." }, { "figure_ref": [], "heading": "Constituency Structure Modeling", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "As aforementioned, previous studies focus on encoding dependency structures, which describe only the dependency relationships between two words. However, the attributes of a constituent span in text and the relationships between multiple spans are crucial to many IE tasks. To this end, we introduce the constituency structure to construct span representation and to model the importance of each sub-span, to explicitly utilize span-level information to accurately find task-related spans for effective cross-language transfer. Then the \"deployment orders\" is represented as \"B-NP I-NP\". The details are shown in Table 1. The count-based representation can reflect the depth of words in the constituency tree to a certain extent and deeper words have larger counts. In this way, the relationship between each word and each constituent type can be constructed to enrich representation information.\n0 1 0 have 0 0 1 0 0 1 received 0 0 1 1 0 1 deployment 1 0 0 2 0 1 orders 0 1 0 2 0 1\nSub-span Importance Construction Intuitively, a task-related phrase is more likely to be a constituent span or a sub-span of a constituent span. Despite the constituent type information for each word is modeled, the importance of each sub-span in the text also matters. Those constituent spans or spans contained within constituent spans should be assigned higher importance to help find task-related phrases. In this work, the importance of each sub-span in the text is modeled based on existing constituent spans.\nIn detail, a frequency matrix denoted as F ∈ R L×L is constructed, where F ij represents the number of occurrences of span (i, j) in all constituent spans. An example is illustrated in Table 2. 2 (1,4,VP) denotes that a span constructed with 1st to 4th words in a sentence is a VP, i.e., verb phrase. Words he bought apple juice here he\n1 1 1 1 1 bought 1 1 2 2 1 apple 1 2 1 3 1 juice 1 2 3 1 1 here 1 1 1 1 1\nTable 2: The frequency matrix shows the times of each sub-span in text is a sub-span of constituent spans. The annotated constituent spans are: (0,0,NP), (1,3,VP), (2,3,NP), (0,4,S). Considering that the attention between two words is bidirectional, we simply symmetrize the matrix. For the span (1,2), it is the subspan of both (1,3) and (0,4), so the frequency is 2. Since a non-constituent word is usually not task-related (e.g. 
not an entity), we set it to 1.\nA variant of Transformer RTrans-N(E, F ) shown in Figure 2 is designed by modifying the selfattention mechanism as:\nAttn-F(Q, K, V ) = G(softmax QK T √ d k V ).(7)\nHere, softmax function produces an attention matrix A ∈ R L×L where A ij denotes the attention weight that the i-th token pays to the j-th token in the sentence. G is a function that integrates the frequency matrix to obtain modified attention weights. The (i, j)-th element of the original attention matrix A is modified as:\nG(A) ij = F ij A ij j F ij A ij .(8)\nThus, the importance of each sub-span of a sentence is modeled according to the frequency it occurs across all constituent spans, with larger frequency values indicating higher importance. In this way, the model assigns more attention to subspans with higher importance. Taking Table 2 as an example, the entity \"apple juice\" has the highest importance, so the model could find it accurately. The final representation of the sentence h f is formulated as:\nh w = Linear([h c ; h l ]),(9)\nh f = RTrans-N(h w , F ), (10\n)\nwhere Linear is a linear transformation. h w is the concatenation of contextual representations and language-universal representations.\nThree downstream tasks are employed to evaluate the effectiveness of the proposed SHINE encoder as comprehensively as possible, including the tasks of NER, relation extraction and event extraction." }, { "figure_ref": [], "heading": "Named Entity Recognition", "publication_ref": [ "b6", "b16", "b9" ], "table_ref": [], "text": "This task targets to locate and classify named entities in a text sequence. Denote one sentence as x = {x i } L i=1 with its labels y = {y i } L i=1 and representation h f x from Eq. ( 10), where y i denotes the label of its corresponding word x i and L denotes the length of the sentence. It is worth noting that this task aims at extracting entities, so the entity type x e is removed from Eq. (1). Following Wu et al. (2020a) ,h f x is fed into a softmax classification layer to calculate the probability of each word denoted as p(x i ) and cross-entropy loss is utilized to optimize the model:\nL e = 1 L L i=1 L CE (p(x i ), y i ) .(11)\nAdditionally, previous studies on NER also focus on distillation methods (Wu et al., 2020a;Chen et al., 2021;Li et al., 2022). In this work, the distillation method used by Wu et al. (2020a) is adopted in this task for a fair and comprehensive comparison. The source language model is denoted as the teacher model Θ tea . During distillation, a student model Θ stu with the same structure as Θ tea is distilled based on the unlabeled target language data D T train , which is fed into Θ tea to obtain its soft labels. Given a sentence x of length L from D T train , the objective for distillation (Hinton et al., 2015) is to minimize the mean squared error (MSE) loss as:\nL KD = 1 L L i=1 MSE (ptea x i ), pstu x i .\n(12)" }, { "figure_ref": [], "heading": "Relation Extraction", "publication_ref": [ "b2" ], "table_ref": [], "text": "This task aims at predicting the relationship label of a pair of subject and object entities in a sentence. Given two entity mentions x m and x n with representations h f m and h f n are derived respectively from Eq. (10). Following Ahmad et al. (2021), a sentence representation h f s is also obtained. Maxpooling is applied over these three vectors to derive the ĥf m , ĥf n , and ĥf s . 
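As a concrete illustration of the constituency-based reweighting above, the sketch below builds the frequency matrix F of Table 2 from a list of (start, end, label) constituent spans and applies the renormalized reweighting G(·) of Eq. (8) to a softmax attention matrix. The function names are ours and the snippet is a simplified NumPy approximation, not the released code; following the Table 2 example, the diagonal and word pairs not covered by any constituent span are set to 1.

```python
import numpy as np

def build_frequency_matrix(seq_len, constituent_spans):
    """F[i, j] = number of annotated constituent spans that contain the sub-span (i, j).

    constituent_spans: (start, end, label) triples with inclusive word indices,
    e.g. [(0, 0, "NP"), (1, 3, "VP"), (2, 3, "NP"), (0, 4, "S")] for the
    sentence "he bought apple juice here" from Table 2.
    """
    F = np.ones((seq_len, seq_len), dtype=np.float32)
    for i in range(seq_len):
        for j in range(i + 1, seq_len):
            count = sum(1 for (s, e, _lab) in constituent_spans if s <= i and j <= e)
            # Pairs not covered by any constituent span default to 1,
            # mirroring the "set it to 1" rule in the Table 2 caption.
            value = max(count, 1)
            F[i, j] = F[j, i] = value   # attention is bidirectional -> symmetric F
    # The diagonal stays at 1, matching the worked example in Table 2.
    return F

def frequency_reweighted_attention(A, F):
    """G(.) from Eq. (8): rescale the softmax attention A row-wise by F
    and renormalize so that each row again sums to one."""
    weighted = F * A
    return weighted / weighted.sum(axis=-1, keepdims=True)
```

Calling build_frequency_matrix(5, [(0, 0, "NP"), (1, 3, "VP"), (2, 3, "NP"), (0, 4, "S")]) reproduces the matrix shown in Table 2 for "he bought apple juice here".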
Then the concatenation of the three vectors is fed into a softmax classification layer to predict the label as follows:\np mn = softmax W r [ ĥf m ; ĥf n ; ĥf s ] + br ,(13)\nwhere W r ∈ R 3d model ×r and b r ∈ R r are trainable parameters, and r is the total number of relation types. The objective of this task is to minimize the cross-entropy loss as:\nL r = M m=1 R i n=1 L CE (p mn , y mn ) , (14\n)\nwhere M is the number of entity mentions, R i is the number of entity candidates for i-th entity mention and y mn denotes the ground truth relation type between x m and x n ." }, { "figure_ref": [], "heading": "Event Extraction", "publication_ref": [ "b2" ], "table_ref": [], "text": "This task can be decomposed into two sub-tasks of Event Detection and Event Argument Role Labeling (EARL). Event detection aims at identifying event triggers and their types. EARL predicts the argument candidate of an event trigger and assigns a role label to each argument from a pre-defined set of labels. In this paper, we focus on EARL and assume event triggers of the input sentence are provided following Ahmad et al. (2021). Given an event trigger x t and an argument mention x a with representation h f t and h f a respectively. The concatenation of the three vectors ĥf t , ĥf a , and ĥf s is fed into a softmax classification layer to calculate the probability of the argument role label p ta , following Eq. ( 13). The objective of this task L a is to minimize the cross-entropy loss following Eq. ( 14). We change M and R i with N and E i respectively, where N is the number of event triggers, E i is the number of argument candidates for i-th event triggers. y mn is changed with y ta , denoting the ground truth argument role type between x t and x a .\nFinally, the loss for SHINE is as follows:\nL f = L task + α(L g + L l + L t ), (15\n)\nwhere α is a manually set hyperparameter and L task is the task loss L e , L r or L a ." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b30", "b31", "b28", "b2" ], "table_ref": [], "text": "We adopted CoNLL-2002(Sang, 2002), CoNLL-2003 (Sang andMeulder, 2003) and WikiAnn (Pan et al., 2017) For NER, following previous work (Wu et al., 2020a), English was employed as the source language and the other languages were employed as the target languages. Unlabeled target language data in the training set and its language-universal features were utilized for distillation in NER. As for relation extraction and EARL, all models were individually trained on one source language, and directly evaluated on the other two target languages following Ahmad et al. (2021)." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b18", "b2", "b34", "b2" ], "table_ref": [], "text": "For NER, an entity mention was correct if its offsets and type matched a reference entity. Following Sang ( 2002), entity-level F1-score was used as the evaluation metric. The relation-level F1 score was employed for relation extraction (Lin et al., 2020;Ahmad et al., 2021) was correct if its predicted type and the offsets of the two associated entity mentions were correct.\nThe argument-level F1 score was considered for EARL (Subburathinam et al., 2019;Ahmad et al., 2021). 
An event argument role label was correct if its event type, offsets, and argument role type matched any of the reference argument mentions.
Readers can refer to Appendix A.2 for the details of the metric calculations." }, { "figure_ref": [], "heading": "Baselines and Implementation Details", "publication_ref": [ "b6", "b17", "b16", "b43", "b42", "b45", "b34", "b27", "b18", "b21" ], "table_ref": [], "text": "To compare SHINE on NER, the following approaches were chosen as baselines: (1) distillation-based methods: TSL (Wu et al., 2020a), Unitrans (Wu et al., 2020b), AdvPicker (Chen et al., 2021), RIKD (Liang et al., 2021), and MTMT (Li et al., 2022), and (2) non-distillation-based methods: BWET (Xie et al., 2018), BS (Wu and Dredze, 2019) and TOF (Zhang et al., 2021).
As for the relation extraction and EARL tasks, this method was mainly compared with the following: CL-GCN (Subburathinam et al., 2019), CL-RNN (Ni and Florian, 2019), OneIE (Lin et al., 2020), GATE (Ahmad et al., 2021) and CLEAE (Lou et al., 2022). Readers can refer to Appendix A.3 and Appendix A.4 for the implementation details of the baseline models and the proposed SHINE respectively.
Tables 3, 4 and 5 report the zero-shot cross-lingual IE results of different methods on 3 tasks and 4 benchmarks, covering 7 target languages. For the NER task, the results show that the proposed SHINE w. Distill method outperformed MTMT (previous SOTA) on average by absolute margins of 0.06% and 2.74% in terms of the CoNLL and WikiAnn datasets respectively. Besides, the proposed SHINE also significantly outperformed the baseline distillation method TSL on average by absolute margins of 1.16% and 4.32% in terms of the CoNLL and WikiAnn datasets respectively, demonstrating the effectiveness of these language-universal features. For a fair comparison, we also compared SHINE against the versions of AdvPicker w/o. KD, RIKD w/o. KD and MTMT w/o. KD, and the results showed that SHINE significantly outperformed them. Our results also demonstrated the compatibility and scalability between SHINE and distillation." }, { "figure_ref": [], "heading": "Results and Comparison", "publication_ref": [], "table_ref": [], "text": "As for relation extraction and EARL, SHINE outperformed CLEAE (previous SOTA) in almost all transfer directions, with average improvements of 1.95% and 1.49% on the relation extraction and EARL tasks respectively. It is worth noting that GATE used mBERT as the contextual representation extractor without fine-tuning. This might lead to the cross-lingual information in mBERT not being fully exploited, subsequently degrading the performance of the model. Our results clearly demonstrate that SHINE is highly effective and generalizes well across languages and tasks." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Ablation Study To validate the contributions of different components in SHINE, the following variants were constructed for the ablation study: (1) SHINE w/o. interaction, which removed the hierarchical interaction framework; the constituency feature was still used during training. (2) SHINE w/o. frequency, which removed the frequency matrix; the constituent span embeddings constructed in Section 3.4 and the interaction framework were still used during training. (3) SHINE w/o. constituency, which removed both the constituent span embeddings and the frequency matrix; the interaction framework was still utilized. (4) SHINE w/o. all, which removed all the components mentioned above and corresponds to the base structure of the proposed SHINE.
Results of the ablation experiments are shown in the bottom four lines of Tables 3, 4 and 5 respectively. 
Additional in-depth analyses reveal:\n(1) removal of the interaction network (SHINE vs SHINE w/o. interaction) caused a significant performance drop, demonstrating the importance of establishing interaction between languageuniversal features and contextual information, and (2) removing constituency features caused significant performance drops (SHINE vs SHINE w/o. frequency and SHINE w/o. constituency). Both constituent type embeddings and sub-span importance information were useful.\nThe ablation study validated the effectiveness of all components. Moreover, the subtle integration of these modules achieved highly promising performance. Not only hierarchical interaction should be established to capture the complementary information of features and contextual information, but also constituency structure should be modeled for effective cross-lingual transfer.\nCase Study To further illustrate the effectiveness of SHINE and to explore how language-universal features play a role in cross-language transfer, the embedding distribution of three models was shown in Appendix A.5. Distribution discrepancy within SHINE was significantly smaller than the base model. It shows that with the guidance of languageuniversal features, the proposed encoder could capture complementary information to alleviate discrepancy between languages to derive languageagnostic contextualized representations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a syntax-augmented hierarchical interactive encoder for zero-shot crosslingual IE. A multi-level interaction framework is designed to interact the complementary structured information contained in language-universal features and contextual representations. Besides, the constituency structure is first introduced to explicitly model and utilize the constituent span information. Experiments show that the proposed method achieves highly promising performance across seven languages on three tasks and four benchmarks. In the future, we will extend this method to more languages and tasks. Besides, we will explore other sources of language-universal features as well as the relationships between features to augment representation capability." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although the proposed method has shown great performance for zero-shot cross-lingual IE, we should realize that the proposed method still can be further improved. For example, relationships between different language-universal features should be considered because there is an implicit mutual influence between them, explicitly modeling it can make these features organically form a whole. Then the model can use this information more efficiently and it will be a part of our future work. In addition, some languages cannot be supported by existing language tools. Although toolkits for some similar languages can be utilized for annotating, denoising these annotations is worth studying." 
}, { "figure_ref": [], "heading": "A Appendices", "publication_ref": [ "b30", "b31", "b28", "b38", "b34" ], "table_ref": [ "tab_6", "tab_7" ], "text": "A.1 Datasets We adopted CoNLL5 (Sang, 2002;Sang and Meulder, 2003) and WikiAnn6 (Pan et al., 2017) datasets for NER, Automatic Content Extraction (ACE) 2005 dataset7 (Walker et al., 2006) for relation extraction and EARL.\nCoNLL included two datasets: (1) CoNLL-2002 (Spanish, Dutch); (2) CoNLL-2003 (English, German); They were annotated with 4 entity types: LOC, MISC, ORG, and PER.\nWikiAnn included English, Arabic, Hindi, and Chinese. It was annotated with 3 entity types: LOC, ORG, and PER. CoNLL and WikiAnn datasets were annotated with the BIO entity labelling scheme and were divided into the training, development and testing sets. Table 6 shows the statistics of these datasets.\nACE 2005 included three languages (English, Chinese and Arabic). It defined an ontology that included 7 entity types, 18 relation subtypes, and 33 event subtypes. We added a class label None to denote that two entity mentions or a pair of an event mention and an argument candidate under consideration did not have a relationship belonging to the target ontology. Since the official did not divide the training, development and testing sets, We adopted the same dataset-splitting strategy following Subburathinam et al. (2019). Besides, we re-implemented these baseline models based on the code provided by authors using default settings, which were described in Appendix A.3 . Table 7 shows the statistics of the dataset." }, { "figure_ref": [], "heading": "A.2 Evaluation Metrics", "publication_ref": [ "b18", "b2", "b2", "b6", "b17", "b16", "b43", "b42", "b45", "b34", "b14", "b2", "b27", "b18" ], "table_ref": [], "text": "For NER, following Sang ( 2002), entity-level F1score was used as the evaluation metric. Denote A as the number of all entities classified by the model, B as the number of all correct entities classified by the model, and E as the number of all correct entities, the precision (P), recall (R), and entitylevel F1-score (E-F1) of the model were:\nP = B A , R = B E , E-F1 = 2 × P × R P + R .(16)\nFollowing previous works (Lin et al., 2020;Ahmad et al., 2021) \nP = B A , R = B E , R-F1 = 2 × P × R P + R .(17)\nAn event argument role label was correct if its event type, offsets, and argument role type matched any of the reference argument mentions (Ahmad et al., 2021). Denote A as the number of all arguments classified by the model, B as the number of all correct arguments classified by the model, and E as the number of all correct arguments, the argument-level F1 score (A-F1) was defined as similar to relation-level F1 score:\nP = B A , R = B E , A-F1 = 2 × P × R P + R .(18)\nA.3 Baseline Models\nWe described the implementation details for all the models for NER as follows: TSL (Wu et al., 2020a) proposed a teacher-student learning model, via using source-language models as teachers to train a student model on unlabeled data in the target language for cross-lingual NER.\nThe number of parameters of this model was about 110M.\nUnitrans (Wu et al., 2020b) unified both model transfer and data transfer based on their complementarity via enhanced knowledge distillation on unlabeled target-language data. The number of parameters of this model was about 331M.\nAdvPicker (Chen et al., 2021) proposed a novel approach to combine the feature-based method and pseudo labeling via language adversarial learning for cross-lingual NER. 
The number of parameters of this model was about 178M.
RIKD (Liang et al., 2021) proposed a reinforced knowledge distillation framework. The number of parameters of this model was about 111M.
MTMT (Li et al., 2022) proposed an unsupervised multiple-task and multiple-teacher model for cross-lingual NER. The number of parameters of this model was about 220M.
In addition, BWET (Xie et al., 2018), BS (Wu and Dredze, 2019) and TOF (Zhang et al., 2021) were non-distillation-based methods. The numbers of parameters of these models were about 12M, 111M and 114M respectively.
Furthermore, baselines for relation extraction and EARL are shown below: CL-GCN (Subburathinam et al., 2019) used GCN (Kipf and Welling, 2017) to embed the language-universal features and context to learn structured space representations. We refer to the released code from Ahmad et al. (2021) 8 to re-implement it. CL-RNN (Ni and Florian, 2019) utilized BiLSTM to embed the language-universal features and contextual representations to learn structured space representations.
For NER, the bottom three layers of the mBERT used in the teacher model and the student model were frozen. All models were trained for 10 epochs and the best checkpoint was chosen on the target dev set. For relation extraction and EARL, the other parameters were set following Lin et al. (2020). We optimized our model with Adam (Kingma and Ba, 2015) for 80 epochs with a learning rate of 5e-5 and a dropout rate of 0.1. Furthermore, each experiment was conducted 5 times and the mean F1-score was reported.
The number of parameters of our model was about 130M. The whole training of SHINE was implemented with one GeForce RTX 3090, which consumed about 3 hours for NER and 8 hours for relation extraction and EARL." }, { "figure_ref": [], "heading": "A.5 Case Study", "publication_ref": [ "b30", "b31", "b16" ], "table_ref": [], "text": "Figure 3 shows the representation distribution for the two languages across the three models. The representations are obtained by randomly sampling 100 unannotated English (source) and Spanish (target) sentences from the training set of the CoNLL datasets (Sang, 2002; Sang and Meulder, 2003). \"mbert\" refers to the untrained mBERT and \"base\" refers to the \"BERT-Softmax\" model without language-universal features, which is the backbone of Wu et al. (2020a) and Li et al. (2022). \"SHINE\" refers to the model proposed in this paper trained on the English portion of the CoNLL dataset. Each point refers to the average token representation of a sample in the source/target languages."
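The visualization in Figure 3 can be reproduced with standard tooling. The sketch below assumes a hypothetical encode_tokens function that returns per-token representations for a sentence from one of the three models, and uses scikit-learn's t-SNE with matplotlib; these are illustrative choices, not necessarily the exact setup used for the figure.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def average_sentence_embeddings(sentences, encode_tokens):
    """encode_tokens(sentence) -> (num_tokens, hidden_dim) array; each sentence
    is reduced to its mean token vector, as described for Figure 3."""
    return np.stack([encode_tokens(s).mean(axis=0) for s in sentences])

def plot_semantic_domains(src_sents, tgt_sents, encode_tokens, out_path="tsne.png"):
    # e.g. 100 English (source) and 100 Spanish (target) sentences
    src = average_sentence_embeddings(src_sents, encode_tokens)
    tgt = average_sentence_embeddings(tgt_sents, encode_tokens)
    points = TSNE(n_components=2, random_state=0).fit_transform(np.vstack([src, tgt]))
    n = len(src)
    plt.scatter(points[:n, 0], points[:n, 1], label="en (source)", marker="o")
    plt.scatter(points[n:, 0], points[n:, 1], label="es (target)", marker="x")
    plt.legend()
    plt.savefig(out_path)
```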
Zero-shot cross-lingual information extraction (IE) aims at constructing an IE model for some low-resource target languages, given annotations exclusively in some rich-resource languages. Recent studies based on language-universal features have shown their effectiveness and are attracting increasing attention. However, prior work has neither explored the potential of establishing interactions between language-universal features and contextual representations nor incorporated features that can effectively model constituent span attributes and relationships between multiple spans. In this study, a syntax-augmented hierarchical interactive encoder (SHINE) is proposed to transfer cross-lingual IE knowledge. The proposed encoder is capable of interactively capturing complementary information between features and contextual information, to derive language-agnostic representations for various IE tasks. Concretely, a multilevel interaction network is designed to hierarchically interact the complementary information to strengthen domain adaptability. Besides, in addition to the well-studied syntax features of part-of-speech and dependency relation, a new syntax feature of constituency structure is introduced to model the constituent span information which is crucial for IE. Experiments across seven languages on three IE tasks and four benchmarks verify the effectiveness and generalization ability of the proposed method.
SHINE: Syntax-augmented Hierarchical Interactive Encoder for Zero-shot Cross-lingual Information Extraction
[ { "figure_caption": "Figure 1 :1Figure 1: An example of constituency tree, including:(1) an event trigger (deployed) and its argument (17000 U.S. Army soldiers), and (2) two entities corresponding to subject (17000 U.S. Army soldiers) and object (Persian Gulf) respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure2: The overall structure of the proposed SHINE. h c and h l refer to the contextual and language-universal representations, respectively. h w is the concatenation of h c and h l . h f is the final representation of the model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Words B-NP I-NP B-VP I-VP B-S I-", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3: T-SNE visualization (Van der Maaten and Hinton, 2008) of semantic domains of different models by randomly sampling 100 unannotated English (en, source) and Spanish (es, target) sentences from the training set of the CoNLL datasets(Sang, 2002;Sang and Meulder, 2003). \"mbert\" refers to the untrained mBERT and \"base\" refers to the \"BERT-Softmax\" model without language-universal features, which is the backbone ofWu et al. (2020a) andLi et al. (2022). \"SHINE\" refers to the model proposed in this paper trained on English language of the CoNLL dataset. Each point refers to the average token representation of a sample in source/target languages.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "An example of the span representation construction for constituent spans in one sentence.", "figure_data": "Each element denotes the times each word occurs in allspans. For instance, (1,4,VP) and (2,4,VP) both contain\"orders\" in (orders, I-VP, 2), so the value is 2.Span Representation Construction Since theconstituent spans overlap each other, we firstconvert each span into a series of word-level one-hot vectors with BIO annotations (such as NP →B-NP, I-NP) (Sang, 2002). Then, these vectorsare summed to derive x", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation results (%) of entity-level F1-score on the test set of the WikiAnn dataset(Pan et al., 2017). Results except ours were cited from the published literature. For a fair comparison, scores of RIKD (mBERT) was listed. Numbers marked with † denoted that the improvement over the best performing baseline was statistically significant (t-test with p-value <0.05).", "figure_data": ". A relation mention4 https://stanfordnlp.github.io/CoreNLP/", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "4 and 5 reported the zero-shot crosslingual IE results of different methods on 3 tasks and 4 benchmarks, containing 7 target languages. 
For the NER task, the results show that theSubburathinam et al., 2019) 41.16 42.74 46.60 39.90 39.12 36.45 40.99 38.89 39.70 36.58 36.99 35.57 37.06 37.46 CL-RNN (Ni and Florian, 2019) 47.01 45.60 48.18 39.80 40.31 42.26 43.86 44.97 39.05 40.69 40.16 35.34 37.22 39.57 OneIE (Lin et al., 2020) 55.23 46.50 53.71 41.80 43.59 42.51 47.22 51.80 43.53 46.17 41.39 39.04 41.93 43.98 † 47.62 56.41 † 46.66 † 44.56 † 47.22 † 50.31 † 57.92 † 51.60 51.35 † 48.11 † 41.86 45.49 † 49.38 †", "figure_data": "Relation ExtractionEvent Argument Role LabelingEnEnZhZhArArEnEnZhZhArArMethod⇓⇓⇓⇓⇓⇓Avg⇓⇓⇓⇓⇓⇓AvgZhArEnArEnZhZhArEnArEnZhCL-GCN (GATE (Ahmad et al., 2021)53.52 50.77 52.25 45.36 41.67 44.14 47.95 48.61 50.18 45.99 45.04 42.52 38.39 45.12CLEAE (Lou et al., 2022)54.63 46.91 55.87 44.52 42.23 46.03 48.36 53.96 51.12 47.83 45.91 45.14 43.36 47.89SHINE 59.39 SHINE w/o. interaction 58.26 47.03 56.30 45.15 42.89 46.07 49.28 56.86 51.01 51.30 46.94 41.10 42.04 48.21SHINE w/o. frequency58.87 46.96 56.23 45.31 43.36 45.68 49.40 56.99 50.90 50.92 47.31 40.79 43.67 48.43SHINE w/o. constituency56.79 46.52 55.56 44.64 42.01 45.58 48.51 56.60 49.77 50.65 46.42 39.37 40.81 47.27SHINE w/o. all56.05 45.98 55.03 42.83 40.76 44.63 47.54 56.12 48.18 50.54 44.67 38.29 37.18 45.83", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Evaluation results (%) of F1-score on the test set of the ACE 2005 dataset(Walker et al., 2006) for relation extraction and EARL. Results except ours were obtained by implementing the source code of the baseline models provided by the authors. Languages on top and bottom of ⇓ denoted the source and target languages respectively. Numbers marked with † denoted that the improvement over the best performing baseline was statistically significant (t-test with p-value <0.05).", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": ", the relation-level F1 score", "figure_data": "LanguageTypeTrainDevTestCoNLL dataset (Sang, 2002; Sang and Meulder, 2003)English-enSentence 14,987 3,4663,684(CoNLL-2003)Entity23,499 5,9425,648German-deSentence 12,705 3,0683,160(CoNLL-2003)Entity11,851 4,8333,673Spanish-esSentence 8,3231,9151,517(CoNLL-2002)Entity18,798 4,3513,558Dutch-nlSentence 15,806 2,8955,195(CoNLL-2002)Entity13,344 2,6163,941WikiAnn dataset (Pan et al., 2017)English-enSentence 20,000 10,000 10,000 Entity 27,931 14,146 13,958Arabic-arSentence 20,000 10,000 10,000 Entity 22,500 11,266 11,259Hindi-hiSentence 5,000 Entity 6,1241,000 1,2261,000 1,228Chinese-zhSentence 20,000 10,000 10,000 Entity 25,031 12,493 12,532", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The statistics of the CoNLL(Sang, 2002;Sang and Meulder, 2003) and WikiAnn(Pan et al., 2017) datasets.", "figure_data": "LanguageTypeTrainDevTestSentence14,671873711English-enEvent Argument4,317 7,814492 933422 892Relation5,247550509Sentence5,847715733Chinese-zhEvent Argument2,610 6,281325 727336 793Relation5,392807664Sentence2,367203210Arabic-arEvent Argument921 2,523141 328127 367Relation2,389266268", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "The statistics of the ACE 2005 dataset(Walker et al., 2006).", "figure_data": "was considered for relation extraction. A relationmention was correct if its predicted type and theoffsets of the two associated entity mentions arecorrect. 
Denote A as the number of all relationmentions classified by the model, B as the numberof all correct relation mentions classified by themodel, and E as the number of all correct relationmentions, the precision (P), recall (R), and relation-level F1-score (R-F1) of the model were:", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
Jun-Yu Ma; Jia-Chen Gu; Zhen-Hua Ling; Quan Liu; Cong Liu; Guoping Hu
[ { "authors": "Azad Abad; Moin Nabi; Alessandro Moschitti", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Self-crowdsourcing training for relation extraction", "year": "2017-07-30" }, { "authors": "Eneko Agirre", "journal": "ACM", "ref_id": "b1", "title": "Few-shot information extraction is here: Pre-train, prompt and entail", "year": "2022-07-11" }, { "authors": "Ahmad Wasi Uddin; Nanyun Peng; Kai-Wei Chang", "journal": "AAAI Press", "ref_id": "b2", "title": "GATE: graph attention transformer encoder for cross-lingual relation and event extraction", "year": "2021-02-02" }, { "authors": "Giannis Bekoulis; Johannes Deleu; Thomas Demeester; Chris Develder", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Adversarial training for multi-context joint entity and relation extraction", "year": "2018-10-31" }, { "authors": "Bruno Souza Cabral; Rafael Glauber; Marlo Souza; Daniela Barreiro; Claro ", "journal": "Springer", "ref_id": "b4", "title": "Crossoie: Crosslingual classifier for open information extraction", "year": "2020-03-02" }, { "authors": "Beiduo Chen; Jun-Yu Ma; Jiajun Qi; Wu Guo; Zhen-Hua Ling; Quan Liu", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "USTC-NELSLIP at semeval-2022 task 11: Gazetteeradapted integration network for multilingual complex named entity recognition", "year": "2022-07-14" }, { "authors": "Weile Chen; Huiqiang Jiang; Qianhui Wu; Börje Karlsson; Yi Guan", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Advpicker: Effectively leveraging unlabeled data via adversarial discriminator for cross-lingual NER", "year": "2021-08-01" }, { "authors": "Yubo Chen; Shulin Liu; Xiang Zhang; Kang Liu; Jun Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Automatically labeled data generation for large scale event extraction", "year": "2017-07-30" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Geoffrey E Hinton; Oriol Vinyals; Jeffrey Dean", "journal": "", "ref_id": "b9", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Lifu Huang; Heng Ji; Kyunghyun Cho; Ido Dagan; Sebastian Riedel; Clare R Voss", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Zeroshot transfer learning for event extraction", "year": "2018-07-15" }, { "authors": "Haoming Jiang; Pengcheng He; Weizhu Chen; Xiaodong Liu; Jianfeng Gao; Tuo Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "SMART: robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization", "year": "2020-07-05" }, { "authors": "Phillip Keung; Yichao Lu; Vikas Bhardwaj", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER", "year": "2019-11-03" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b13", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "Thomas N Kipf; Max Welling", "journal": "", "ref_id": "b14", "title": "Semisupervised classification with graph convolutional 
networks", "year": "2017-04-24" }, { "authors": "Xiaoya Li; Jingrong Feng; Yuxian Meng; Qinghong Han; Fei Wu; Jiwei Li", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "A unified MRC framework for named entity recognition", "year": "2020-07-05" }, { "authors": "Zhuoran Li; Chunming Hu; Xiaohui Guo; Junfan Chen; Wenyi Qin; Richong Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "An unsupervised multiple-task and multiple-teacher model for cross-lingual named entity recognition", "year": "2022-05-22" }, { "authors": "Shining Liang; Ming Gong; Jian Pei; Linjun Shou; Wanli Zuo; Xianglin Zuo; Daxin Jiang", "journal": "ACM", "ref_id": "b17", "title": "Reinforced iterative knowledge distillation for crosslingual named entity recognition", "year": "2021-08-14" }, { "authors": "Ying Lin; Heng Ji; Fei Huang; Lingfei Wu", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "A joint neural model for information extraction with global features", "year": "2020-07-05" }, { "authors": "Jian Liu; Yubo Chen; Kang Liu; Jun Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Neural cross-lingual event detection with minimal parallel resources", "year": "2019-11-03" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b20", "title": "Decoupled weight decay regularization", "year": "2019-05-06" }, { "authors": "Chenwei Lou; Jun Gao; Changlong Yu; Wei Wang; Huan Zhao; Weiwei Tu; Ruifeng Xu", "journal": "ACM", "ref_id": "b21", "title": "Translation-based implicit annotation projection for zero-shot cross-lingual event argument extraction", "year": "2022-07-11" }, { "authors": "Yi Luan; Dave Wadden; Luheng He; Amy Shah; Mari Ostendorf; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "A general framework for information extraction using dynamic span graphs", "year": "2019-06-02" }, { "authors": "Jun-Yu Ma; Jia-Chen Gu; Jiajun Qi; Zhen-Hua Ling; Quan Liu; Xiaoyi Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "USTC-NELSLIP at semeval-2023 task 2: Statistical construction and dual adaptation of gazetteer for multilingual complex NER", "year": "2023-07-13" }, { "authors": "D Christopher; Mihai Manning; John Surdeanu; Jenny Rose Bauer; Steven Finkel; David Bethard; Mcclosky", "journal": "The Association for Computer Linguistics", "ref_id": "b24", "title": "The stanford corenlp natural language processing toolkit", "year": "2014-06-22" }, { "authors": "Stephen Mayhew; Chen-Tse Tsai; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Cheap translation for cross-lingual named entity recognition", "year": "2017-09-09" }, { "authors": "Marjorie Meryem M'hamdi; Jonathan Freedman; May", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Contextualized cross-lingual event trigger extraction with minimal resources", "year": "2019-11-03" }, { "authors": "Jian Ni; Radu Florian", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Neural crosslingual relation extraction based on bilingual word embedding mapping", "year": "2019-11-03" }, { "authors": "Xiaoman Pan; Boliang Zhang; Jonathan May; Joel Nothman; Kevin Knight; Heng Ji", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Crosslingual name tagging and linking for 282 languages", 
"year": "2017-07-30" }, { "authors": "Fernando Pérez-Cruz", "journal": "IEEE", "ref_id": "b29", "title": "Kullback-leibler divergence estimation of continuous distributions", "year": "2008-07-06" }, { "authors": "Erik F Tjong; Kim Sang", "journal": "ACL", "ref_id": "b30", "title": "Introduction to the conll-2002 shared task: Language-independent named entity recognition", "year": "2002" }, { "authors": "Erik F Tjong; Kim Sang; Fien De; Meulder ", "journal": "", "ref_id": "b31", "title": "Introduction to the conll-2003 shared task: Language-independent named entity recognition", "year": "2003-05-31" }, { "authors": " ", "journal": "", "ref_id": "b32", "title": "", "year": "" }, { "authors": "Milan Straka; Jana Straková", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with udpipe", "year": "2017-08-03" }, { "authors": "Ananya Subburathinam; Di Lu; Heng Ji; Jonathan May; Shih-Fu Chang; Avirup Sil; Clare R Voss", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Cross-lingual structure transfer for relation and event extraction", "year": "2019-11-03" }, { "authors": "Chen-Tse Tsai; Stephen Mayhew; Dan Roth", "journal": "ACL", "ref_id": "b35", "title": "Cross-lingual named entity recognition via wikification", "year": "2016-08-11" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b36", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b37", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Christopher Walker; Stephanie Strassel; Julie Medero; Kazuaki Maeda", "journal": "", "ref_id": "b38", "title": "Ace 2005 multilingual training corpus", "year": "2006" }, { "authors": "Xinyu Wang; Yong Jiang; Nguyen Bach; Tao Wang; Zhongqiang Huang; Fei Huang; Kewei Tu", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Improving named entity recognition by external context retrieving and cooperative learning", "year": "2021-08-01" }, { "authors": "Qianhui Wu; Zijia Lin; Börje Karlsson; Jianguang Lou; Biqing Huang", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "a. 
Single-/multi-source cross-lingual NER via teacher-student learning on unlabeled data in target language", "year": "2020-07-05" }, { "authors": "Qianhui Wu; Zijia Lin; Börje F Karlsson; Biqing Huang; Jianguang Lou", "journal": "", "ref_id": "b41", "title": "Unitrans : Unifying model transfer and data transfer for crosslingual named entity recognition with unlabeled data", "year": "2020" }, { "authors": "Shijie Wu; Mark Dredze", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "year": "2019-11-03" }, { "authors": "Jiateng Xie; Zhilin Yang; Graham Neubig; Noah A Smith; Jaime G Carbonell", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Neural crosslingual named entity recognition with minimal resources", "year": "2018-10-31" }, { "authors": "Mahsa Yarmohammadi; Shijie Wu; Marc Marone; Haoran Xu; Seth Ebner; Guanghui Qin; Yunmo Chen; Jialiang Guo; Craig Harman; Kenton Murray; Aaron Steven White; Mark Dredze; Benjamin Van Durme", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Everything is all it takes: A multipronged strategy for zero-shot crosslingual information extraction", "year": "2021-07-11" }, { "authors": "Ying Zhang; Fandong Meng; Yufeng Chen; Jinan Xu; Jie Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Target-oriented fine-tuning for zero-resource named entity recognition", "year": "2021-08-01" }, { "authors": "Deyu Zhou; Tianmeng Gao; Yulan He", "journal": "The Association for Computer Linguistics", "ref_id": "b46", "title": "Jointly event extraction and visualization on twitter via probabilistic modelling", "year": "2016-08-07" } ]
[ { "formula_coordinates": [ 3, 366.6, 659.88, 98.84, 12.09 ], "formula_id": "formula_0", "formula_text": "x l = [x p ; x d ; x e ; x c ]," }, { "formula_coordinates": [ 3, 365.11, 676.42, 159.3, 12.33 ], "formula_id": "formula_1", "formula_text": "h c = mBERT(x),(2)" }, { "formula_coordinates": [ 3, 366.5, 694.51, 157.91, 12.33 ], "formula_id": "formula_2", "formula_text": "h l = Trans-L(x l ),(3)" }, { "formula_coordinates": [ 3, 306.14, 734.04, 218.27, 28.04 ], "formula_id": "formula_3", "formula_text": "h c = {h c i } L i=1 and h l = {h l i } L i=1" }, { "formula_coordinates": [ 4, 70.87, 639.28, 218.27, 28.27 ], "formula_id": "formula_4", "formula_text": "X s = {X s i } T i=1 where X s i = {x j } i+P -1 j=i" }, { "formula_coordinates": [ 4, 351.87, 473.86, 172.54, 10.76 ], "formula_id": "formula_5", "formula_text": "Lg = KL(h c ||h l ) + KL(h l ||h c ),(4)" }, { "formula_coordinates": [ 4, 361.62, 488.24, 107.31, 41.85 ], "formula_id": "formula_6", "formula_text": "L l = 1 T T i=1 [KL(H c i ||H l i )+ KL(H l i ||H c i )]," }, { "formula_coordinates": [ 4, 345.22, 535.59, 179.19, 11.14 ], "formula_id": "formula_7", "formula_text": "Lt = KL(H c t ||H l t ) + KL(H l t ||H c t ),(6)" }, { "formula_coordinates": [ 4, 306.14, 557.25, 220.18, 27.45 ], "formula_id": "formula_8", "formula_text": "H c i = {h c j } i+P -1 j=i and H l i = {h l j } i+P -1 j=i . H c" }, { "formula_coordinates": [ 5, 83.58, 93.25, 190.09, 65.25 ], "formula_id": "formula_9", "formula_text": "0 1 0 have 0 0 1 0 0 1 received 0 0 1 1 0 1 deployment 1 0 0 2 0 1 orders 0 1 0 2 0 1" }, { "formula_coordinates": [ 5, 339.1, 108.68, 151.99, 65.25 ], "formula_id": "formula_10", "formula_text": "1 1 1 1 1 bought 1 1 2 2 1 apple 1 2 1 3 1 juice 1 2 3 1 1 here 1 1 1 1 1" }, { "formula_coordinates": [ 5, 317.93, 361.31, 206.48, 22.88 ], "formula_id": "formula_11", "formula_text": "Attn-F(Q, K, V ) = G(softmax QK T √ d k V ).(7)" }, { "formula_coordinates": [ 5, 365.32, 500.91, 159.09, 27.15 ], "formula_id": "formula_12", "formula_text": "G(A) ij = F ij A ij j F ij A ij .(8)" }, { "formula_coordinates": [ 5, 356.75, 689.12, 167.66, 12.33 ], "formula_id": "formula_13", "formula_text": "h w = Linear([h c ; h l ]),(9)" }, { "formula_coordinates": [ 5, 358.34, 707.21, 161.53, 12.33 ], "formula_id": "formula_14", "formula_text": "h f = RTrans-N(h w , F ), (10" }, { "formula_coordinates": [ 5, 519.87, 710.08, 4.54, 9.46 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 6, 115.35, 341.94, 173.78, 33.71 ], "formula_id": "formula_16", "formula_text": "L e = 1 L L i=1 L CE (p(x i ), y i ) .(11)" }, { "formula_coordinates": [ 6, 87.55, 563.79, 166.72, 26.84 ], "formula_id": "formula_17", "formula_text": "L KD = 1 L L i=1 MSE (ptea x i ), pstu x i ." 
}, { "formula_coordinates": [ 6, 88.53, 756.8, 200.61, 11.79 ], "formula_id": "formula_18", "formula_text": "p mn = softmax W r [ ĥf m ; ĥf n ; ĥf s ] + br ,(13)" }, { "formula_coordinates": [ 6, 343.83, 132.42, 176.03, 33.83 ], "formula_id": "formula_19", "formula_text": "L r = M m=1 R i n=1 L CE (p mn , y mn ) , (14" }, { "formula_coordinates": [ 6, 519.87, 144.65, 4.54, 9.46 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 6, 343.16, 571.95, 176.7, 10.77 ], "formula_id": "formula_21", "formula_text": "L f = L task + α(L g + L l + L t ), (15" }, { "formula_coordinates": [ 6, 519.87, 572.3, 4.54, 9.46 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 13, 80.82, 678.5, 208.31, 24.43 ], "formula_id": "formula_23", "formula_text": "P = B A , R = B E , E-F1 = 2 × P × R P + R .(16)" }, { "formula_coordinates": [ 13, 315.79, 689.07, 208.62, 24.43 ], "formula_id": "formula_24", "formula_text": "P = B A , R = B E , R-F1 = 2 × P × R P + R .(17)" }, { "formula_coordinates": [ 14, 80.44, 148.31, 208.69, 24.43 ], "formula_id": "formula_25", "formula_text": "P = B A , R = B E , A-F1 = 2 × P × R P + R .(18)" } ]
10.18653/v1/2021.naacl-main.278
2024-02-08
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b44", "b19" ], "table_ref": [], "text": "Large language models (LLMs) like ChatGPT and GPT-4 (OpenAI, 2023) have been quite successful in solving different generative and reasoning tasks. The combination of their abilities in leveraging in-context learning as well as instruction following have unlocked new state-of-the-art results across the natural language processing (NLP) field. The existing LLMs are mostly pre-trained on a huge volume of unstructured data from the internet including books, articles, webtexts, repositories, Wikipedia, etc. Training on unstructured data naturally leads to relatively poor performance when dealing with tasks that demand organizing text into structured machine-readable format.\nA semantic graphs, as a form of graph-structured data, stores information in a machine-accessible 1 Our code is at https://github.com/Jiuzhouh/PiVe. way (van Harmelen et al., 2008). Generating a semantic graph from text is known as text-tograph (T2G) generation and is previously attempted mostly by fine-tuning small language models (Xu et al., 2022;Guo et al., 2020). However, generating graph-structured data remains a challenge for LLMs even in the presence of reasonable number of few-shot examples. In fact, regardless of the number of few-shot examples or prompting style the outputs from LLMs (e.g., ) still contain errors and require correction ( §4.4)." }, { "figure_ref": [ "fig_0" ], "heading": "Large Language Model Text", "publication_ref": [], "table_ref": [], "text": "Output Prompt Verifier Module\nIn this paper, we focus on how to improve the graph-based generative capability of LLMs. To this end, we propose the Prompting through Iterative Verification (PiVe) framework shown in Figure 1. Specially, PiVe involves leveraging an external verifier module (i.e., a much smaller LM) and incorporating the feedback from verifier module into the prompt. PiVe iteratively utilises the verifier module and refines the prompts, via corrective instructions, before sending them back into the LLM, leading to substantially improved quality of the generated semantic graphs.\nIn particular, to train the verifier modules, we start from a seed dataset of text and graph (T,G) pairs, and construct an arbitrarily large graphperturbation dataset via a simple procedure which takes any graph G from the seed set and perturbs it arbitrarily on its entities (E), relations (R), or triples (Tr). The text and perturbed graph ( Ḡ), along with a corrective description to invert the ap-plied perturbation (IP) form a verification dataset of (T, Ḡ,IP) triples which serve as the training data for self-supervised learning of our verifier module. The verification dataset could be as large as desired (i.e., for any seed dataset D, containing graphs of |E| entities, |R| relations, |Tr| triples, it could produce O(|D|×|E|× |R|×|Tr|) perturbations only by deleting. 2 We then devise fine-tuning and instruction-tuning to train domain-specific and unified verifiers, respectively.\nDuring the T2G generation via the LLM (e.g., in the zero-shot setting \"Transform the text into a semantic graph: Text: ... Graph:\"), the verifier takes the text T, the output graph from the LLM, and sends a corrective signal to the LLM (e.g., \"Transform the text into a semantic graph and add the given triples to the generated semantic graph: Text: ... Triples: ... Graph:\"). This process continues till the verifier module verifies the output as correct and terminates. 
We refer to this as Iterative Prompting. Additionally, there is another (more cost effective) mode to the verifier module, which starts by calling the LLM once at the start to get an initial graph, and then the rest of the corrective steps are all applied step-by-step and iteratively through the verifier offline. We refer to this as Iterative Offline Correction.\nOur extensive experiment results on three graphbased datasets demonstrate the effectiveness of the proposed PiVe framework in consistently improving the quality of the LLM output via providing iterative corrective guidance by an average of 26% across 3 datasets. We also create GenWiki-HIQ, a high-quality text-graph dataset and show how verifier module could be leveraged as a data augmentation technique to improve the quality of automatically constructed text-graph datasets." }, { "figure_ref": [], "heading": "Basic Definitions", "publication_ref": [], "table_ref": [], "text": "A semantic graph is a network that represents semantic relations between entities. Each semantic graph has its corresponding verbalisation, and can have different textual representations. A set of triples (i.e., [subject, predicate, object]) represents a semantic graph. Given a text, the task of text-to-graph generation is to query an LLM to generate a semantic graph of the text. The semantic graph should cover the information in the text as much as possible.\nTo prompt the LLM, we use few-shot by showing an example of T2G in the prompt to specify the expected format of the semantic graph (i.e., set of triples). We report experiments under various number of shots ( §4.6). The basic form of instruction we use in the prompt is \"Transform the text into a semantic graph.\" followed by a text and a semantic graph pair as a demonstration. We also show results under different prompting strategies ( §4.7). Different demonstrations are used for different datasets to adapt to the style of different datasets. We show the used demonstrations in Appendix I." }, { "figure_ref": [], "heading": "The PiVe Framework", "publication_ref": [], "table_ref": [], "text": "We first explain our training protocol for the verifier module ( §3.1), and then present our framework of iterative verification prompting ( §3.2)." }, { "figure_ref": [], "heading": "Verifier Module", "publication_ref": [], "table_ref": [], "text": "The quality of the generated semantic graph from LLM prompting could be quite poor. For instance the LLM often misses triples in the generated graph. In other words, some semantic relations between entities in the text are difficult to be captured for LLMs when they are generating a semantic graph. To detect the missing or incorrect parts of the generated semantic graph, we design a verifier module.\nThe verifier module is trained on a small pretrained LM ( §4.2). A typical graph-based dataset contains parallel text and semantic graph (T,G) pairs. For different graph-based datasets, we use their corresponding training data for the seed dataset to create data for the verifier module. In particular, for each text-graph pair in the original dataset, we create one correct instance and one perturbed instance. We concatenate the text with graph using a separator token <S> and the target is to generate a specific output, denoted as IP, during training. For correct instances, the IP is simply the word \"Correct\". 
For perturbed instances, we have two methods to create them:\n• Random method: if the graph contains more than one triple, we randomly omit one triple from it and concatenate the text with perturbed graph using a separated token <S>. The target is to generate the missing part (e.g., triple Tr).\n• Heuristic method: Based on the observation that LLMs tend to miss the triples whose subject and object are not in the text, in addition to randomly omitting one triple from the graph we also omit the triple from the graph if subject and object of it are not in the text.\nThe output to generate for perturbed examples is the missing triple, Tr. By utilising these two methods, we can create an arbitrarily large verification dataset to train a verifier module which will be used at inference time during prompting the LLM." }, { "figure_ref": [], "heading": "Iterative Prompting", "publication_ref": [], "table_ref": [], "text": "During the LLM prompting, the generated semantic graphs from LLMs is fed into the verifier module and the outputs from the verifier module is collected. If verifier generated \"Correct\" in its output, it means we do not need to make changes to the generated graph. Otherwise, the generated output from the verifier is added to the original prompt to create a new prompt. The new prompt is then used to query the LLM again. We repeat the whole process iteratively. The iteration process will stop when no missing triple is predicted or a maximum number of iterations is reached.\nNew Prompt Design As with the prompt used in the first iteration, we still provide an example in the new prompt for the subsequent iterations. The new prompt is \"Transform the text into a semantic graph and also add the given triples to the generated semantic graph.\" In addition to the text, we also include some triples predicted by the verifier module which LLMs are likely to miss. This explicitly instructs the LLM to generate semantic graph and include the given triples. The given triple set contains the predicted missing triples from each iteration, which prevents the LLM from making the same mistakes as in previous iterations. See Appendix I for the used demonstrations." }, { "figure_ref": [], "heading": "Iterative Offline Correction", "publication_ref": [], "table_ref": [], "text": "Similar to Iterative Prompting, the Offline Correction starts from the online LLM, but then continues with the step-by-step verification and correction steps offline. This approach is more cost effective as it relies only on one API call per instance (as opposed to several API calls of iterative prompting), however it is potentially weaker as it relies on the capability of the small verifier LM to both verify and apply the needed corrections. The offline correction stop under the same stopping criterion to Iterative Prompting." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We describe the datasets and pre-processing method ( §4.1), introduce the models and implementation details ( §4.2) and the evaluation metrics ( §4.3). In Subsection 4.4, we describe the main result from PiVe, and compare the two modes of verifier: Iterative Prompting vs. Iterative Offline Correction ( §4.5). We then conduct various configurations of shots ( §4.6), and prompting ( §4.7).\nIn Subsection 5, we show how PiVe could be used for data augmentation of automatically generated graph-text datasets (e.g., GenWiki)." 
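For concreteness, the construction of verifier training instances described in Section 3.1 (one correct instance plus random and heuristic perturbations) can be sketched as follows. The function names and the simple substring test used for the heuristic are our own illustrative simplifications, not the released implementation.

```python
import random

SEP = " <S> "

def linearise(graph):
    """Render a list of [subject, predicate, object] triples as text."""
    return ", ".join("[" + ", ".join(t) + "]" for t in graph)

def make_verifier_examples(text, graph, heuristic=True):
    """Create one correct and one or more perturbed training instances from a
    (text, graph) pair. Inputs are 'text <S> linearised graph'; targets are
    either 'Correct' or the omitted triple."""
    examples = [(text + SEP + linearise(graph), "Correct")]

    # Random method: drop one randomly chosen triple (only if the graph has >1 triple).
    if len(graph) > 1:
        drop = random.randrange(len(graph))
        kept = [t for i, t in enumerate(graph) if i != drop]
        examples.append((text + SEP + linearise(kept), linearise([graph[drop]])))

    # Heuristic method: additionally drop a triple whose subject and object
    # do not appear verbatim in the text (a simplified containment test).
    if heuristic:
        for i, (subj, _pred, obj) in enumerate(graph):
            if subj.lower() not in text.lower() and obj.lower() not in text.lower():
                kept = [t for j, t in enumerate(graph) if j != i]
                examples.append((text + SEP + linearise(kept), linearise([graph[i]])))
                break
    return examples
```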
}, { "figure_ref": [], "heading": "Datasets and Preprocessing", "publication_ref": [ "b9", "b18", "b22", "b10" ], "table_ref": [ "tab_0" ], "text": "We evaluate PiVe on three graph-based datasets, KELM (Agarwal et al., 2021), WebNLG+2020 (Gardent et al., 2017), GenWiki (Jin et al., 2020).\nKELM is a large-scale synthetic corpus that consists of the English Wikidata KG and the corresponding natural text. It has ∼15M sentences synthetically generated using a fine-tuned T5 model. Each graph in KELM is a linearised KG containing a list of triples of the form [subject, relation, object]. If a triple has a sub-property, then it is quadruplet instead. We use a subset (∼60K) of KELM which is named as KELM-sub. The creation of KELM-sub follows two criteria. We found that most graphs in KELM contain no more than six triples and only around 2,500 graphs contain more than six triples. Therefore, 1) we only consider the graphs with no more than six triples, and 2) we do not consider the graphs containing any triple with a sub-property. Based on these two criteria, for each size of graph (from one triple to six triples), we sampled equal number of (T,G) pairs. In total, the created KELM-sub contains 60,000/1,800/1,800 samples as train/validation/test set.\nWebNLG+2020 contains a set of triples extracted from DBpedia (Auer et al., 2007) We use the method described in Section 3.1 to create the data for training the verifier module. Table 1 shows the statistics of the created training data on these three seed datasets." }, { "figure_ref": [], "heading": "The LLM and Verifier Modules", "publication_ref": [ "b34", "b21", "b33", "b43", "b23", "b29", "b16" ], "table_ref": [], "text": "ChatGPT (gpt-3.5-turbo) is used as our default LLM to perform the T2G task. 3 We also experiment with GPT-4 in Subsection 4.7. For verifier module, we use T5-Large (Raffel et al., 2020), and Flan-T5-XXL (Chung et al., 2022) as the backbone models for dataset-specific verifier module, and unified verifier module, respectively. T5 models follow the encoder-decoder architecture and treat all NLP tasks as unified text-to-text transduction tasks. Flan-T5 is instruction-fine-tuned version of T5 which was trained on 1,836 NLP tasks initialized from fine-tuned T5 checkpoint. For T5-large, we fine-tune all parameters for separate verifier modules per each dataset. While for Flan-T5-XXL, we use LoRA (Hu et al., 2022) as a parameterefficient fine-tuning method, to train a unified verifier module which can follow the instruction. When using the unified verifier, we specify the dataset name in the instructions as datasets have different naming convention for relations.\nThe verifier are implemented using Pytorch (Paszke et al., 2019) and Transformers (Wolf et al., 2020). For the training, we use Adam optimizer (Kingma and Ba, 2015). Details about hyperparameter setting is provided in Appendix A. For the implementation of parameter efficient training method used in Flan-T5-XXL, we use PEFT (Mangrulkar et al., 2022) and 8-bit quantization technique (Dettmers et al., 2022). All training was done using a single A40 GPU with 48GB of RAM." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b35", "b45", "b8" ], "table_ref": [], "text": "To evaluate the quality of the generated graphs given the ground-truth graphs, we use four automatic evaluation metrics:\nTriple Match F1 (T-F1) calculates F1 score based on the precision and recall between the triples in the generated graph and the ground-truth. 
We calculate the F1 scores for all test samples and compute the average F1 score as the final triple Match F1 score.\nGraph Match F1 (G-F1) focuses on the entirety of the graph and evaluates how many graphs are exactly produced the same. For all test samples, we calculate the F1 score based on the precision and recall between all predicted graphs and all groundtruth graphs. This F1 score is the final Graph Match F1 score. Since graphs are represented in a linearised way, we could not simply use the string match method to check whether two graphs are the same. Instead, we first build directed graphs from linearised graphs using NetworkX (Laboratory et al., 2008), then we consider the two graphs to be the same when all node and edge attributes match.\nG-BERTScore (G-BS) is a semantic-level metric proposed by (Saha et al., 2021), which extends the BERTScore (Zhang et al., 2020) for graphmatching. It takes graphs as a set of edges and solve a matching problem which finds the best alignment between the edges in predicted graph and those in ground-truth graph. Each edge is considered as a sentence and BERTScore is used to calculate the score between a pair of predicted and ground-truth edges. Based on the best alignment and the overall matching score, the computed F1 score is used as the final G-BERTScore.\nGraph Edit Distance (GED) (Abu-Aisheh et al., 2015) computes the distance between the predicted graph and the ground-truth graph. It measures how many edit operations (addition, deletion, and replacement of nodes and edges) are required for transforming the predicted graph to a graph isomorphic to the ground-truth graph. Lower GED between two graphs indicates the two graphs are more similar. In practice, the cost of each operation is set to be 1. For each sample, GED is normalized by a normalizing constant which is the upper bound of GED to make sure it is between 0 and 1. For demonstration, we multiply GED by 100 to show more decimals.\nSingle Verifier Module Unified Verifier Module T-F1↑ G-F1↑ G-BS↑ GED↓ T-F1↑ G-F1↑ G-BS↑ GED↓ KELM-" }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_0" ], "text": "We report the evaluation results of using PiVe with ChatGPT on the test set of three datasets in Table 2. All results presented under Base mean the direct output of the LLM without any verification. By utilising PiVe, on each dataset, we can see the consistent improvement of the quality of the generated graphs. For instance, in GenWiki which uses the same verifier module that was trained on the training data of WebNLG, the improvement of the scores over all metrics indicates the effectiveness of PiVe. Since the graphs are generated by the LLM through one-shot learning, G-F1 as the most strict metric, it is hard to get high G-F1 score (basically aiming for exact match without any minor deviation in wording, spelling, entities, or relations). On WebNLG and GenWiki datasets, single verifier module performs slightly better than unified verifier module. While on KELM-sub dataset, unified module performs far better. We speculate this is due to the size of training data for KELM-sub verifier module being larger than that for WebNLG and GenWiki (as shown in Table 1). Since unified verifier module combines the training data of different datasets, more training data leads to better performance for instruction-tuning. We demonstrate two qualitative examples in Appendix G and also conducted human evaluation which we include in Appendix F due to page limit." 
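For reference, the triple-level and graph-level matching described above can be sketched as follows. Graphs are assumed to be lists of [subject, predicate, object] triples; the paper's implementation additionally builds NetworkX directed graphs for the exact-match check and computes G-BS and normalised GED, which are omitted in this simplified version.

```python
# A simplified sketch of T-F1 and the exact-match test behind G-F1 (Section 4.3),
# assuming each graph is a list of [subject, predicate, object] triples.

def normalise(triple):
    return tuple(part.strip().lower() for part in triple)


def triple_f1(pred, gold):
    """Per-sample F1 over triples; T-F1 averages this over the test set."""
    p = {normalise(t) for t in pred}
    g = {normalise(t) for t in gold}
    overlap = len(p & g)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)


def exact_graph_match(pred, gold):
    """True only when every node and every labelled edge matches (used for G-F1)."""
    p = {normalise(t) for t in pred}
    g = {normalise(t) for t in gold}
    p_nodes = {x for s, _, o in p for x in (s, o)}
    g_nodes = {x for s, _, o in g for x in (s, o)}
    return p == g and p_nodes == g_nodes


pred = [["Shotgate Thickets", "country", "United Kingdom"]]
gold = [["Shotgate Thickets", "country", "United Kingdom"],
        ["Shotgate Thickets", "instance of", "Nature reserve"]]
print(triple_f1(pred, gold), exact_graph_match(pred, gold))  # ~0.67 False
```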
}, { "figure_ref": [], "heading": "Iterative Prompting vs. Iterative Offline Correction", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Instead of iteratively prompting the LLM, another way to utilise the results from verifier module is to append the predicted missing triples to the previously generated graph. The results of the comparison between iterative prompting and iterative offline correction using single verifier module and unified verifier module on KELM dataset is shown in Table 3. Iterative offline correction performs worse than iteratively prompting. This might be because iteratively prompting has the chance of doing self-correction. In each iteration, when we prompt the LLM, the generated graphs can probably correct the mistakes that were made in previous iteration. For example, in Figure 6 of the Appendix, in Base the predicted relation regarding birth date is \"birth year\", while the reference is \"date of birth\". As the PiVe iteration continues, in Iteration 2, the relation \"birth year\" is regenerated as \"date of birth\" even though we didn't mention this in the prompt. Due to the page limit, we report the comparison results on WebNLG and GenWiki datasets in Appendix B. Similarly, iterative prompting can achieve better results than iterative offline correction over all using different verifier modules." }, { "figure_ref": [], "heading": "Impact of More Shots", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "While in our main experiments, for cost reason, we used only one-shot demonstrations for the LLM prompting (i.e., GPT-3.5), we show that PiVe is effective in improving the results regardless of the underlying number of shots. Here we report the results of k-shot (k=6, 8, 10) with the iterative offline correction (i.e., only using the LLM once to get the initial graph, while the correction steps are all applied step-by-step and offline). Figure 2 demonstrates the results on KELM-sub using unified verifier with iterative offline correction (for detailed numbers see Table 10 in Appendix). The results show, as expected, that PiVe still provides consistent improvement even with the increase in the number of shots. Additionally, as the shots grow the improvement from PiVe also increases.\nIterative Prompting Iterative Offline Correction T-F1↑ G-F1↑ G-BS↑ GED↓ T-F1↑ G-F1↑ G-BS↑ GED↓" }, { "figure_ref": [ "fig_1" ], "heading": "Baselines on LLMs", "publication_ref": [ "b41", "b24" ], "table_ref": [ "tab_0", "tab_8" ], "text": "To probe other prompting techniques as baselines of generating graphs from the LLM, we compare three diverse prompts. The first one we use is the default prompt used across our main experiments. This prompt is fairly direct and simple. Prompt 1:\nTransform the text into a semantic graph.\nIn the second prompt, we aim to instruct the LLM to generate larger graph with more triples. This is to increase the chance of LLM recovering more triples during the generation. Prompt 2: Transform the text into a semantic graph consisting of a set of triples. Generate as many triples as possible. For the third prompt, inspired by Chain-of-thought (Wei et al., 2022;Kojima et al., 2022) approach, we also ask the LLM to generate the semantic graph in two steps. Prompt 3: Transform the text into a semantic graph consisting of a set of triples. 
First produce all relations possible, then produce the graph.\nWe conduct experiments on Chat-GPT (gpt-3.5-turbo) and GPT-4 (gpt-4) in 6-shot learning on KELM-sub, using unified verifier with iterative offline correction. The results are shown in Figure 3 (for detailed numbers see Table 11 and Table 12 in Appendix). In general, as expected, GPT-4 performs far better than ChatGPT on the T2G task, but the effect of different prompts varies across these two models. Specifically, on ChatGPT, Prompt 2 achieves the best results while on GPT-4, Prompt 1 is outperforming the rest on most metrics. PiVe can consistently improve the results across all different settings, with the biggest jump in performance emerging in the first iteration, with slight improvements also observed between the second and third iterations of correction." }, { "figure_ref": [], "heading": "Computational Cost and Trade-off", "publication_ref": [], "table_ref": [], "text": "Training and Inference The training and inference of both single verifiers and unified verifier are on a single A40 GPU. Each single verifier takes around 6 hours and the unified verifier takes around 40 hours to train. The computation cost for training of verifiers is a feasible one-off cost. Once the training is finished, the inference of the verification of each instance takes 0.15s for single verifier, and 3.5s for unified verifier. Different verifiers present performance-speed trade-offs and are significantly effective in augmenting the LLMs." }, { "figure_ref": [], "heading": "Stopping Criterion", "publication_ref": [ "b32", "b11", "b37" ], "table_ref": [], "text": "In theory, the verification module could run till no missing triple is predicted or a maximum number of iterations is reached. However, running more iterations increases the associated cost (i.e., OpenAI API charges). We set a maximum of 3 iterations. First produce all relations possible, then produce the graph. To evaluate the effectiveness of the verifier module as data augmentation tool, as well the quality of the generated graph, first we use Flan-T5-XL model to generate a description of each graph in zero-shot setting by using the prompt \"Transform the semantic graph into a description.\" for each iteration. Then we leverage automatic quality evaluation metrics to calculate the score between the generated description and the corresponding text. Ideally, the higher the similarity between the graph and the corresponding text, the higher the score of the generated description and corresponding text. We use four commonly used quality evaluation metrics which are BLEU (Papineni et al., 2002), ME-TEOR (Banerjee and Lavie, 2005), TER (Snover et al., 2006), BERTScore." }, { "figure_ref": [ "fig_2" ], "heading": "GenWiki-HIQ", "publication_ref": [], "table_ref": [], "text": "Result. We used the dataset-specific verifier module to do the data augmentation. We conducted four iterations and the evaluation results are shown in Table 4. The results in Base represent the scores over non-parallel graph-text pairs from GenWiki FINE , which have low overlap between graph and text. By using verifier module iteratively, we add more missing triples to the original graph, thus leading the higher quality scores. As the iteration progresses, fewer missing triples are added and we take the augmented graph-text pairs from the last iteration as the final created GenWiki-HIQ dataset. We also conducted G2T experiments in Appendix E to further demonstrate the quality of GenWiki-HIQ. 
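For reference, the scoring step used to validate the augmented graph-text pairs can be sketched as follows. The Hugging Face identifiers (google/flan-t5-xl, sacrebleu, bertscore) are the standard ones and are assumed here; METEOR and TER are loaded the same way, and the exact generation settings used by the authors are not specified.

```python
# A sketch of the graph-to-text round-trip check used to score GenWiki-HIQ pairs:
# verbalise the (augmented) graph with Flan-T5-XL zero-shot, then compare the
# verbalisation with the original text. Model/metric names are assumptions.
from transformers import pipeline
import evaluate

verbaliser = pipeline("text2text-generation", model="google/flan-t5-xl")
bleu = evaluate.load("sacrebleu")
bertscore = evaluate.load("bertscore")  # METEOR ("meteor") and TER ("ter") likewise


def score_pair(linearised_graph, text):
    prompt = f"Transform the semantic graph into a description. {linearised_graph}"
    description = verbaliser(prompt, max_new_tokens=128)[0]["generated_text"]
    return {
        "bleu": bleu.compute(predictions=[description], references=[[text]])["score"],
        "bertscore_f1": bertscore.compute(predictions=[description],
                                          references=[text], lang="en")["f1"][0],
    }
```

A higher score between the generated description and the original text is taken as a proxy for higher text-graph overlap, which is how the iteration-by-iteration numbers in Table 4 are obtained.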
The G2T model trained on GenWiki-HIQ performs far better than the G2T model trained on GenWiki FINE-f on the human annotated GenWiki test set. This indicates that GenWiki-HIQ contains parallel text-graph pairs with high overlap.\nQualitative Example. In Figure 4, we demonstrate an example from the created GenWiki-HIQ dataset and the original graph in GenWiki FINE . After the data augmentation process, the graph in GenWiki-HIQ contains more information in text. 6 Background and Related Work" }, { "figure_ref": [], "heading": "In-Context Learning", "publication_ref": [ "b12", "b13", "b17", "b26", "b28" ], "table_ref": [], "text": "With the scaling of model size and training corpus size (Brown et al., 2020;Chowdhery et al., 2022), LLMs demonstrate new abilities of learning from a few demonstrations which contain some training examples (Dong et al., 2023). As a new paradigm, In-Context Learning does not require parameter updates and directly performs predictions on the pre-trained language models. The provided demonstration examples in the prompt follow the same format, which are usually written in natural language templates. By concatenating a query question with the demonstrations in the prompt, LLMs can learn from the given examples and make a prediction of the query question. Previous research (Liu et al., 2022;Lu et al., 2022) has shown that the number and order of the demonstrations can influence the In-Context Learning performance. These are further points of future investigation, which could potentially improve the initial graph produced by the LLM, which could further be corrected with the PiVe framework." }, { "figure_ref": [], "heading": "Instruction-Tuning", "publication_ref": [ "b30", "b40", "b27", "b36", "b39", "b46" ], "table_ref": [], "text": "Instruction-Tuning (Mishra et al., 2022;Wang et al., 2022;Longpre et al., 2023) is a framework of doing multi-task learning, which enables the use of human-readable instructions to guide the prediction of LLMs. This novel training paradigm can improve the performance of the downstream tasks and also shows great generalisation ability on unseen tasks (Chung et al., 2022;Sanh et al., 2022). Wang et al. (2023) proposed a unified information extraction framework based on multi-task instruction-tuning. Zhou et al. (2023) utilised instruction-tuning to perform controlled text generation following certain constraints. In our work, we use instruction-tuning to train a unified verifier module, which can follow the instruction to perform predictions on different datasets." }, { "figure_ref": [], "heading": "Verifiers", "publication_ref": [ "b15", "b42", "b42" ], "table_ref": [], "text": "Leveraging small models could further improve the performance of LLMs. Cobbe et al. (2021) proposed to solve math word problem by utilising verifier. The verifier is used to judge the correctness of model-generated solutions. During test time, based on multiple candidate solutions generated, verifier calculates the correctness probability and the final answer will be selected by the verifier from the ranked list. Welleck et al. (2023) proposed selfcorrection, an approach that trains a small models to iteratively apply self-correction. The idea of self-correction looks similar to our PiVe. While Welleck et al. (2023) focuses on the design of a selfcorrecting language model, PiVe presents a very simple verifier module design and a simple data perturbation strategy to train such model. The ideas presented in our work are developed concurrently and independently." 
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We proposed PiVe, an iterative verification framework, to improve the graph-based generative capability of LLMs. We illustrated how a simple perturbation technique could be used to build data for training a verifier module which both verifies and corrects outputs from an LLM. We used different training strategies to build both dataset-specific verifiers with fine-tuning, and a unified verifier with instruction-tuning. Our verifier module could act both as an iterative prompting guide to improve outputs of an LLM, as well as an iterative offline correction system that starts from an LLM outputs but continuously improves it offline. The experimental results on three graph-based datasets demonstrates the effectiveness of PiVe. Furthermore, PiVe can also be used as a data augmentation technique to help improve the quality of automatically generated parallel text-graph datasets. By using verifier module, we created GenWiki-HIQ, a dataset containing 110K parallel text and graphs with high overlap for future research in text-graph domain." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although the proposed framework is a straightforward and effective method of improving the generative capabilities of black box LLMs in graph generation, it still has some limitations. Firstly, PiVe is only designed for few-shot prompting setting on LLMs, using an external verifier module to enhance their generative capabilities. The improvement is less significant when utilising PiVe on LMs that have been fine-tuned on the task data. Secondly, PiVe is not designed for free-form text generation tasks. Due to the unique aspect of graph, which has a specific structure, it allows for a much more fine-grained detection of errors and enables a richer corrective feedback. Translation between text and other similar modalities of data (e.g., table, SQL) can also effectively leverage our verification mechanism. Thirdly, in this work, we only focus on the triple missing mistake made by LLMs, so that the verifier module is not sensitive to the order of the head entity and tail entity. This means when the order of the head entity and tail entity in a triple of a generated graph from LLMs is incorrect, verifier module is not able to detect this type of mistake. It would be more effective if other error-detection heuristic methods are developed for creating the training dataset of the verifier. " }, { "figure_ref": [], "heading": "Hyperparameter Assignment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "A Hyperparameter Setting" }, { "figure_ref": [], "heading": "B Additional Experiment Result", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "Table 7 and Table 8 show the results of the comparison between iteratively prompt and iterative offline correction on WebNLG and GenWiki datasets." }, { "figure_ref": [], "heading": "C Effect of Perturbation Method", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "As described in Section 3.1, we perturb the graph by omitting one triple when building the verifier module of PiVe. In addition, we also investigated other perturbation methods to train a verifier module, such as perturbing the head entity, relation and tail entity. 
To be specific, for head entity perturbation, if the graph contains more than one triple, we randomly choose one triple and replace the head entity with a different head entity from other triples of the same graph. Likewise, we replace the relation and tail entity for relation perturbation and tail entity perturbation, respectively. The target is to predict the original triple. Then we train different verifier modules using these three perturbation methods on KELM-sub. The results of doing different perturbations using Iterative Offline Correction is shown in Table 9.\nComparing with the result of omitting triple perturbation method shown in Table 3 using Single Verifier with Iterative Offline Correction, these three perturbation methods have varying effects. While the relational perturbation works in terms of T-F1, with more iterations, the G-BS score generally goes down for all these perturbations. This indicates the verifier module could potentially inject wrong corrections if not trained with the proper perturbation mechanism. We speculate the reason is because LLMs are less likely to make mistakes at entity level, so these perturbation methods are not useful for training a verifier module. This also indicates when building a verifier module, choosing reasonable perturbation methods is significant and necessary." }, { "figure_ref": [], "heading": "D ChatGPT vs GPT-3", "publication_ref": [], "table_ref": [ "tab_9", "tab_2" ], "text": "To further highlight the generalisation ability of PiVe, in addition to ChatGPT, we also experiment with as the backbone LLM to perform the T2G task. We perform experiments on KELM-sub dataset using iterative prompting and iterative offline correction with different verifiers. The results are shown in Table 13. Compared with the results of using ChatGPT (shown in Table 3), GPT-3 has a better graph-based generative capability. Nonetheless, PiVe can still consistently further improve its results over all settings. Using iterative prompting with the unified verifier can achieve the best result on KELM-sub." }, { "figure_ref": [], "heading": "E G2T Results on GenWiki-HIQ", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "To further verify the quality of GenWiki-HIQ dataset, we use T5-large as the backbone model to train a G2T model, which generates the corresponding text based on the graph. Then we test it on the original GenWiki test set containing a 1,000 high-quality human annotated parallel text-graph pairs. As comparison, we also train another G2T model on GenWiki FINE-f which is the seed dataset of GenWiki-HIQ.\nThe result is demonstrated in Table 14. On original Genwiki test set, the model trained on GenWiki-HIQ performs far better than the model trained on GenWiki FINE-f across all metrics. This indicates that GenWiki-HIQ contains parallel textgraph pairs with high overlap." }, { "figure_ref": [ "fig_3" ], "heading": "F Human Evaluation", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "We conducted a human evaluation on 105 randomly sampled instances from three datasets (KELM-sub, WebNLG, GenWiki). Specifically, for each dataset, After annotation, we took majority voting over the result of each instance, then calculated the number of wins for ChatGPT with or without PiVe. The results is shown in Table 15. From the results, we can see ChatGPT with PiVe wins on 85 out of 105 samples and the total winning rate is over 80%. 
This indicates the PiVe can effectively improve the graph-based generative capability of LLMs.\nIterative Prompting Iterative Offline Correction T-F1↑ G-F1↑ G-BS↑ GED↓ T-F1↑ G-F1↑ G-BS↑ GED↓\nFor the cases that PiVe did not result in any improvement, we did error analysis and found that there were mainly two types of mistakes that PiVe made: redundancy and inaccuracy. In Figure 5, we demonstrate two examples containing these two types of mistakes shown in red text. In the Text: While pop rock can trace its stylistic roots back to rock music, Reggae music evolved out of different musical genre, known as ska. Interestingly, the Train song, Mermaid, belongs to the genre of pop rock, but is also considered to be of the reggae genre as well. " }, { "figure_ref": [], "heading": "G PiVe Examples", "publication_ref": [], "table_ref": [], "text": "In Figure 6 it. Then, the verifier predicts the missing triple\n[\"Francisco Uranga\", \"sex or gender\", \"male\"]. In Iteration 2, both of these two missing triples are included in the prediction from LLM, and at this time, the verifier predicts \"Correct\". The prediction from Iteration 2 contains all information correctly in the reference.\nIn Figure 7, we illustrate another example of PiVe from WebNLG test set using single verification module.\nIn Base, the verification module predict the missing triple [\"Agremiação Sportiva Arapiraquense\", \"ground\", \"Estádio Municipal Coaracy da Mata Fonseca\"], even though there is a similar triple but containing mistakes in the prediction from the LLM. In Iteration 1, the LLM corrects the mistakes in the previous iteration, and also includes the predicted missing triple. Based on the prediction from the LLM, the verification module predict the missing triple [\"Campeonato Brasileiro Série C\", \"country\", \"Brazil\"]. In Iteration 2, the verification module predict \"Correct\" to the final prediction from the LLM. After three iterations using PiVe, the predicted graph contains all information in the reference." }, { "figure_ref": [], "heading": "H Comparison with Fine-tuned Baselines", "publication_ref": [ "b34" ], "table_ref": [ "tab_12" ], "text": "While our work focuses on the fundamental question of \"How can we improve the generative capabilities of black box LLMs in graph generation?\", for completeness we also provide results of finetuned T5 (Raffel et al., 2020) in Table 16. As expected, fine-tuning on large amount of data surpasses few-shot prompting." }, { "figure_ref": [], "heading": "I Demonstrations in Prompt", "publication_ref": [], "table_ref": [], "text": "Figure 8 shows the demonstrations used for KELMsub and Figure 9 shows the demonstrations used for WebNLG and GenWiki. In Iteration 1, we use the demonstration that does not contain the missing triples. For subsequent iterations, we include the missing triples in the demonstration." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Our work is built on top of existing pre-trained language models. Our goal was not to attend to alleviate the well-documented issues (e.g., privacy, undesired biases, etc) that such models embody. For this reason, we share the similar potential risks and concerns posed by these models." }, { "figure_ref": [], "heading": "Demonstration for Iteration 1:", "publication_ref": [], "table_ref": [], "text": "Transform the text into a semantic graph. Example: Text: Shotgate Thickets is a nature reserve in the United Kingdom operated by the Essex Wildlife Trust. 
Semantic Graph: [[\"Shotgate Thickets\", \"instance of\", \"Nature reserve\"], [\"Shotgate Thickets\", \"country\", \"United Kingdom\"], [\"Shotgate Thickets\", \"operator\", \"Essex Wildlife Trust\"]]" }, { "figure_ref": [], "heading": "Demonstration for Subsequent Iterations:", "publication_ref": [], "table_ref": [], "text": "Transform the text into a semantic graph and also add the given triples to the generated semantic graph. Example: Text: Shotgate Thickets is a nature reserve in the United Kingdom operated by the Essex Wildlife Trust. Triples: [\"Shotgate Thickets\", \"instance of\", \"Nature reserve\"], [\"Shotgate Thickets\", \"country\", \"United Kingdom\"] Semantic graph: [[\"Shot gate Thickets\", \"instance of\", \"Nature reserve\"], [\"Shotgate Thickets\", \"country\", \"United Kingdom\"], [\"Shotgate Thickets\", \"operator\", \"Essex Wildlife Trust\"]]\nFigure 8: The demonstrations used in prompt for KELM-sub." }, { "figure_ref": [], "heading": "Demonstration for Iteration 1:", "publication_ref": [], "table_ref": [], "text": "Transform the text into a semantic graph. Example: Text: Sportpark De Toekomst is located in Ouder-Amstel, Netherlands. It is owned and operated by AFC Ajax N.V. and their tenants include the Ajax Youth Academy. Semantic graph: [[\"Sport park De Toekomst\", \"location\", \"Ouder-Amstel\"], [\"Sportpark De Toekomst\", \"country\", \"Netherlands\"], [\"Sportpark De Toekomst\", \"owner\", \"AFC Ajax N.V.\"], [\"Sportpark De Toekomst\", \"operator\", \"AFC Ajax N.V.\"], [\"Sportpark De Toekomst\", \"tenant\", \"Ajax Youth Academy\"]]" }, { "figure_ref": [], "heading": "Demonstration for Subsequent Iterations:", "publication_ref": [], "table_ref": [], "text": "Transform the text into a semantic graph and also add the given triples to the generated semantic graph. Example: Text: Sportpark De Toekomst is located in Ouder-Amstel, Netherlands. It is owned and operated by AFC Ajax N.V. and their tenants include the Ajax Youth Academy. Triples: [\"Sportpark De Toekomst\", \"country\", \"Netherlands\"], [\"Sportpark De Toekomst\", \"operator\", \"AFC Ajax N.V.\"] Semantic graph: [[\"Sport park De Toekomst\", \"location\", \"Ouder-Amstel\"], [\"Sportpark De Toekomst\", \"country\", \"Netherlands\"], [\"Sportpark De Toekomst\", \"owner\", \"AFC Ajax N.V.\"], [\"Sportpark De Toekomst\", \"operator\", \"AFC Ajax N.V.\"], [\"Sportpark De Toekomst\", \"tenant\", \"Ajax Youth Academy\"]] " } ]
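For illustration, the demonstrations in Figures 8 and 9 are combined with a new input roughly as sketched below. The field labels and instructions mirror the templates above; the exact whitespace and the choice to send everything as a single user message to gpt-3.5-turbo are our assumptions.

```python
# How a demonstration (here, the Figure 8 one for KELM-sub) is assembled into the
# prompt for the first and for subsequent iterations. Exact formatting is assumed.

FIRST_INSTRUCTION = "Transform the text into a semantic graph."
LATER_INSTRUCTION = ("Transform the text into a semantic graph and also add the "
                     "given triples to the generated semantic graph.")

DEMO_TEXT = ("Shotgate Thickets is a nature reserve in the United Kingdom "
             "operated by the Essex Wildlife Trust.")
DEMO_GRAPH = ('[["Shotgate Thickets", "instance of", "Nature reserve"], '
              '["Shotgate Thickets", "country", "United Kingdom"], '
              '["Shotgate Thickets", "operator", "Essex Wildlife Trust"]]')


def first_iteration_prompt(text):
    return (f"{FIRST_INSTRUCTION}\nExample:\nText: {DEMO_TEXT}\n"
            f"Semantic graph: {DEMO_GRAPH}\nText: {text}\nSemantic graph:")


def later_iteration_prompt(text, missing_triples, demo_triples):
    # demo_triples is the "Triples:" line shown in the subsequent-iteration demo
    return (f"{LATER_INSTRUCTION}\nExample:\nText: {DEMO_TEXT}\n"
            f"Triples: {demo_triples}\nSemantic graph: {DEMO_GRAPH}\n"
            f"Text: {text}\nTriples: {missing_triples}\nSemantic graph:")
```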
Large language models (LLMs) have shown strong abilities in solving a wide range of natural language tasks across domains. However, owing to their training objective and pre-training data, LLMs are not well equipped for tasks that involve structured data generation. We propose a framework, Prompting with Iterative Verification (PiVe), to improve the graph-based generative capability of LLMs. We show how a small language model can be trained to act as a verifier module for the output of an LLM (e.g., ChatGPT, GPT-4) and to iteratively improve its performance via fine-grained corrective instructions. We also show how the verifier module can apply iterative corrections offline, offering a more cost-effective solution to the text-to-graph generation task. Experiments on three graph-based datasets show consistent improvements gained via PiVe. Additionally, we create GenWiki-HIQ and highlight that the verifier module can be used as a data augmentation tool to help improve the quality of automatically generated parallel text-graph datasets.
PiVe: Prompting with Iterative Verification Improving Graph-based Generative Capability of LLMs
[ { "figure_caption": "Figure 1 :1Figure 1: Framework of PiVe.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Results of using 3 diverse prompts with 6shot on KELM-sub with Iterative Offline Correction on ChatGPT and GPT-4. The colors represent Base, and corrective iterations 1, 2, 3. Prompt 1: Transform the text into a semantic graph. Prompt 2: Transform the text into a semantic graph consisting of a set of triples. Generate as many triples as possible. Prompt 3: Transform the text into a semantic graph consisting of a set of triples.First produce all relations possible, then produce the graph.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: An example from GenWiki-HIQ compared to the original graph in GenWiki FINE .", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Two examples of PiVe making two types of mistakes: redundancy and inaccuracy.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Statistics of the seed datasets for training the verifier modules on three datasets. cannot use it to train the verifier module. However, the relation types in GenWiki and WebNLG have some overlaps, so we use the verifier module trained on WebNLG+2020 and test it on GenWiki test set.", "figure_data": "in 16", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of using PiVe on three datasets across all metrics. Single verifier module represents single dataset-specific verifier module trained on T5-Large and Unified verifier module is trained on Flan-T5-XXL using instruction-tuning.", "figure_data": "Base13.504.8983.9213.20 13.504.8983.9213.20subIteration 1 17.92 Iteration 2 19.465.78 6.4485.91 86.5712.37 19.64 12.08 22.116.00 6.4486.39 87.3112.08 11.68Iteration 3 20.176.6186.8311.95 23.117.5087.7011.35Base17.29 13.4389.5911.46 17.29 13.4389.5911.46WebNLGIteration 1 18.32 14.0089.7411.23 18.22 13.8389.6711.23Iteration 2 18.57 14.0089.8211.22 18.55 13.8889.7411.20Base20.136.6088.4810.99 20.136.6088.4810.99GenWikiIteration 1 20.546.8088.7010.87 20.886.7088.6610.90Iteration 2 21.096.8088.7810.83 20.996.7088.9110.88", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison between Iterative Prompting and Iterative Offline Correction on KELM-sub dataset across all metrics using Single Verifier and Unified Verifier. Results of various number of shots (k=6, 8, 10) on KELM-sub with Iterative Offline Correction. The colors represent Base, and corrective iterations 1, 2, 3.", "figure_data": "Base13.504.8983.9213.20 13.504.8983.9213.20Single VerifierIteration 1 17.92 Iteration 2 19.465.78 6.4485.91 86.5712.37 17.76 12.08 18.515.83 6.1186.42 86.9112.37 12.19Iteration 3 20.176.6186.8311.95 18.556.1786.9412.18Base13.504.8983.9213.20 13.504.8983.9213.20Unified VerifierIteration 1 19.64 Iteration 2 22.116.00 6.4486.39 87.3112.08 16.99 11.68 17.765.67 5.6787.08 87.4812.95 12.96Iteration 3 23.117.5087.7011.35 17.855.6787.5212.9610-shot8-shot6-shotTF135363738394086878889 GBS10-shot8-shot6-shotGF112.515.017.58.59.09.510.0GEDIter. 
1 2 3 4Figure 2:", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Hyperparameters of single verification module.", "figure_data": "ModelT5-LargeEpoch5Batch Size16OptimizerAdamLearning Rate2 × 10 -5Warm-up Step500Beam Size5HyperparameterAssignmentModelFLAN-T5-XXLEpoch2Batch Size48OptimizerAdamLearning Rate3 × 10 -5Warm-up Step100Beam Size4", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Hyperparameters of unified verification module.", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparison between Iterative Prompting and Iterative Offline Correction on WebNLG dataset across all metrics using Single Verifier and Unified Verifier.", "figure_data": "Base17.29 13.4389.5911.46 17.29 13.4389.5911.46Single VerifierIteration 1 18.32 14.0089.7411.23 18.03 13.5589.1611.52Iteration 2 18.57 14.0089.8211.22 18.10 13.5589.1911.51Base17.29 13.4389.5911.46 17.29 13.4389.5911.46Unified VerifierIteration 1 18.22 13.8389.6711.23 18.02 13.4989.2111.61Iteration 2 18.55 13.8889.7411.20 18.06 13.4989.0711.65Iterative PromptingIterative Offline CorrectionT-F1↑ G-F1↑ G-BS↑ GED↓ T-F1↑ G-F1↑ G-BS↑ GED↓Base20.136.6088.4810.99 20.136.6088.4810.99Single VerifierIteration 1 20.546.8088.7010.87 20.246.7089.0010.95Iteration 2 21.096.8088.7810.83 20.326.8089.0710.93Base20.136.6088.4810.99 20.136.6088.4810.99Unified VerifierIteration 1 20.886.7088.6610.90 20.376.6089.0810.96Iteration 2 20.996.7088.9110.88 20.426.6089.1110.94", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Comparison between Iterative Prompting and Iterative Offline Correction on GenWiki dataset across all metrics using Single Verifier and Unified Verifier.", "figure_data": "T-F1↑ G-F1↑ G-BS↑ GED↓T-F1↑ G-F1↑ G-BS↑ GED↓Base13.504.8983.9213.20Base35.25 10.7885.4310.19HeadIteration 1 13.66 Iteration 2 13.654.89 4.8983.23 81.9913.31 13.316-shotIteration 1 36.96 12.56 Iteration 2 37.25 12.6188.20 88.4710.14 10.13Iteration 3 13.654.8980.8313.31Iteration 3 37.37 12.6888.5410.13Base13.504.8983.9213.20Base36.40 14.8986.639.59RelationIteration 1 15.09 Iteration 2 15.294.94 4.9483.68 82.9012.97 12.958-shotIteration 1 39.51 17.78 Iteration 2 39.80 18.1189.24 89.388.86 8.81Iteration 3 15.334.9482.0912.96Iteration 3 39.82 18.1789.388.81Base13.504.8983.9213.20Base37.14 14.7286.579.46TailIteration 1 13.52 Iteration 2 13.514.89 4.8983.83 83.7413.21 13.2210-shotIteration 1 40.40 18.61 Iteration 2 40.69 19.2889.46 89.668.64 8.56Iteration 3 13.504.8983.6413.23Iteration 3 40.70 19.3389.688.55Table 9: Results of doing different perturbations to the graph on KELM-sub to train a Single Verifier with Iter-ative Offline Correction.Table 10: Results of doing k-shot (k=6, 8, 10) learning on KELM-sub with Iterative Offline Correction.first we took the test set outputs from the first itera-tion and the last iteration, then we randomly sam-pled 35 instances from those with different outputs.The output from the first iteration is the originalChatGPT output without using PiVe, and the out-put from the last iteration is the result after usingPiVe. For the evaluation process we recruited threeannotators (1 PhD graduate and 2 PhD students inComputer Science and NLP) to select, for a giventext and two graph outputs, which graph matchesthe text better. 
Each annotator should only chooseone graph per each instance and evaluate all 105instances.", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Results of using diverse prompts with 6-shot learning on KELM-sub with Iterative Offline Correction on GPT-4.", "figure_data": "T-F1↑ G-F1↑ G-BS↑ GED↓T-F1↑ G-F1↑ G-BS↑ GED↓Base35.25 10.7885.4310.19Base43.46 26.5087.607.97Prompt 1Iteration 1 36.96 12.56 Iteration 2 37.25 12.6188.20 88.4710.14 10.13Prompt 1Iteration 1 45.68 32.50 Iteration 2 45.87 33.0689.86 90.047.39 7.32Iteration 3 37.37 12.6888.5410.13Iteration 3 45.87 33.0690.057.31Base34.46 11.5686.0710.08Base41.67 22.6787.288.47Prompt 2Iteration 1 37.38 14.22 Iteration 2 37.81 14.7288.48 88.619.35 9.24Prompt 2Iteration 1 43.64 26.61 Iteration 2 43.79 27.2288.87 88.997.98 7.91Iteration 3 37.85 14.8988.629.23Iteration 3 43.79 27.2889.007.91Iteration 1 31.899.7884.1610.58Base44.30 23.8987.368.11Prompt 3Iteration 2 36.15 13.22 Iteration 3 37.03 13.9487.89 88.299.46 9.23Prompt 3Iteration 1 46.65 29.61 Iteration 2 46.84 30.0689.22 89.287.49 7.45Iteration 4 37.11 13.9588.349.23Iteration 3 46.85 30.0889.307.45Table 11: Results of using diverse prompts with 6-shot learning on KELM-sub with Iterative Offline Correction on ChatGPT.first example, the triple [\"Train song Mermaid\",\"instrument\",\"Singing\"] predicted by PiVe isredundant. In the second example, the relation\"date Of Retirement\" in the triple [\"AlanShepard\",\"date Of Retirement\",\"1963\"] isinaccurate. We speculate these behaviours arecaused due to the presence of many similar textswith similar graphs in the training data. Duringtraining, PiVe learned the potential connectionsbetween these similar graphs, thus leading to re-", "figure_id": "tab_8", "figure_label": "12", "figure_type": "table" }, { "figure_caption": ", we demonstrate an example from KELM-sub test set using unified verifier. In Base, based on the prediction from the LLM, the verifier module predicts the missing triple [\"Francisco Uranga\", \"occupation\", \"swimmer\"]. By suggesting this missing triple in the next iteration of prompt, the prediction from LLM includes Results of using GPT-3-davinci as the backbone LLM on KELM-sub dataset over different settings.", "figure_data": "Iterative PromptingIterative Offline CorrectionT-F1↑ G-F1↑ G-BS↑ GED↓ T-F1↑ G-F1↑ G-BS↑ GED↓Base15.117.7283.6312.91 15.117.7283.6312.91Single VerifierIteration 1 19.55 Iteration 2 21.578.78 9.3385.90 86.5911.98 18.97 11.56 19.658.72 9.0086.47 86.7612.10 11.96Iteration 3 22.499.8987.1011.37 19.699.0086.7711.95Base15.117.7283.6312.91 15.117.7283.6312.91Unified VerifierIteration 1 21.40 Iteration 2 24.378.78 9.5086.43 87.5011.69 20.46 11.12 21.548.78 9.0687.18 87.5611.90 11.71Iteration 3 26.06 10.2287.9910.83 21.579.0687.5711.71BLEU↑ METEOR↑ TER↓ BERTScore↑GenWiki FINE-f 35.71 GenWiki-HIQ 48.1736.67 42.0365.19 41.9493.74 95.44", "figure_id": "tab_9", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Results of G2T generation on original GenWiki test set training on different datasets. GenWiki FINE-f contains the filtered 110K text-graph pairs from original GenWiki FINE as described in Section 5. 
GenWiki-HIQ is the augmented dataset based on GenWiki FINE-f .", "figure_data": "Dataset# with PiVe wins # w/o PiVe winsKELM-sub314WebNLG287GenWiki269Total8520", "figure_id": "tab_10", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Human evaluation results on 105 samples from three datasets using ChatGPT with or without PiVe.", "figure_data": "", "figure_id": "tab_11", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "Fine-tuning results of text-to-graph generation on three datasets on T5-Large model.", "figure_data": "T-F1↑ G-F1↑ G-BS↑ GED↓KELM-sub 58.45 47.26 94.12 8.48WebNLG54.77 45.31 93.51 9.11GenWiki36.34 29.69 91.14 9.74", "figure_id": "tab_12", "figure_label": "16", "figure_type": "table" } ]
Jiuzhou Han; Nigel Collier; Wray Buntine; Ehsan Shareghi
[ { "authors": "", "journal": "GenWiki-FINE", "ref_id": "b0", "title": "Timma is a village development committee in Bhojpur District in the Kosi Zone of eastern Nepal. At the time of the 1991 Nepal census it had a population of 3336 persons living in 621 individual households", "year": "" }, { "authors": " Timma", "journal": "", "ref_id": "b1", "title": "", "year": "" }, { "authors": " Timma", "journal": "GenWiki-HIQ", "ref_id": "b2", "title": "pushpinMap, Nepal", "year": "" }, { "authors": " Timma", "journal": "", "ref_id": "b3", "title": "", "year": "" }, { "authors": " Timma", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": " Timma", "journal": "", "ref_id": "b5", "title": "", "year": "" }, { "authors": " Timma", "journal": "", "ref_id": "b6", "title": "is Part Of, Bhojpur District Kosi Zone", "year": "" }, { "authors": " Timma", "journal": "", "ref_id": "b7", "title": "county Development Committee, Bhojpur District", "year": "" }, { "authors": "Zeina Abu-Aisheh; Romain Raveaux; Jean-Yves Ramel; Patrick Martineau", "journal": "", "ref_id": "b8", "title": "An exact graph edit distance algorithm for solving pattern recognition problems", "year": "2015-01" }, { "authors": "Oshin Agarwal; Heming Ge; Siamak Shakeri; Rami Al-Rfou", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training", "year": "2021-06-06" }, { "authors": "Sören Auer; Christian Bizer; Georgi Kobilarov; Jens Lehmann; Richard Cyganiak; Zachary G Ives", "journal": "Springer", "ref_id": "b10", "title": "Dbpedia: A nucleus for a web of open data", "year": "2007-11-11" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "METEOR: an automatic metric for MT evaluation with improved correlation with human judgments", "year": "2005-06-29" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b12", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; 
Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b13", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Sharan Chowdhery; Gaurav Narang; Adams Mishra; Vincent Y Yu; Yanping Zhao; Andrew M Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b14", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Karl Cobbe; Vineet Kosaraju; Mohammad Bavarian; Jacob Hilton; Reiichiro Nakano; Christopher Hesse; John Schulman", "journal": "", "ref_id": "b15", "title": "Training verifiers to solve math word problems", "year": "2021" }, { "authors": "Tim Dettmers; Mike Lewis; Sam Shleifer; Luke Zettlemoyer", "journal": "", "ref_id": "b16", "title": "8-bit optimizers via block-wise quantization", "year": "2022-04-25" }, { "authors": "Qingxiu Dong; Lei Li; Damai Dai; Ce Zheng; Zhiyong Wu; Baobao Chang; Xu Sun; Jingjing Xu; Lei Li; Zhifang Sui", "journal": "", "ref_id": "b17", "title": "A survey for in-context learning", "year": "2023" }, { "authors": "Claire Gardent; Anastasia Shimorina; Shashi Narayan; Laura Perez-Beltrachini", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "The webnlg challenge: Generating text from RDF data", "year": "2017-09-04" }, { "authors": "Qipeng Guo; Zhijing Jin; Xipeng Qiu; Weinan Zhang; David Wipf; Zheng Zhang", "journal": "", "ref_id": "b19", "title": "Cyclegt: Unsupervised graph-to-text and text-to-graph generation via cycle training", "year": "2020" }, { "authors": "Jiuzhou Han; Ehsan Shareghi", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Self-supervised graph masking pre-training for graph-to-text generation", "year": "2022" }, { "authors": "Edward J Hu; Yelong Shen; Phillip Wallis; Zeyuan Allen-Zhu; Yuanzhi Li; Shean Wang; Lu Wang; Weizhu Chen", "journal": "", "ref_id": "b21", "title": "Lora: Low-rank adaptation of large language models", "year": "2022-04-25" }, { "authors": "Zhijing Jin; Qipeng Guo; Xipeng Qiu; Zheng Zhang", "journal": "International Committee on Computational Linguistics", "ref_id": "b22", "title": "Genwiki: A dataset of 1.3 million contentsharing text and graphs for unsupervised graph-totext generation", "year": "2020-12-08" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b23", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b24", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "", "journal": "United States. Department of Energy", "ref_id": "b25", "title": "Exploring Network Structure, Dynamics, and Function Using Networkx", "year": "2008" }, { "authors": "Jiachang Liu; Dinghan Shen; Yizhe Zhang; Bill Dolan; Lawrence Carin; Weizhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "What makes good in-context examples for gpt-3? 
In Proceedings of Deep Learning Inside Out: The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, DeeLIO@ACL 2022", "year": "2022-05-27" }, { "authors": "Shayne Longpre; Le Hou; Tu Vu; Albert Webson; Hyung Won Chung; Yi Tay; Denny Zhou; Quoc V Le; Barret Zoph; Jason Wei; Adam Roberts", "journal": "", "ref_id": "b27", "title": "The flan collection: Designing data and methods for effective instruction tuning", "year": "2023" }, { "authors": "Yao Lu; Max Bartolo; Alastair Moore; Sebastian Riedel; Pontus Stenetorp", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity", "year": "2022-05-22" }, { "authors": "Sourab Mangrulkar; Sylvain Gugger; Lysandre Debut; Younes Belkada; Sayak Paul", "journal": "", "ref_id": "b29", "title": "Peft: Stateof-the-art parameter-efficient fine-tuning methods", "year": "2022" }, { "authors": "Swaroop Mishra; Daniel Khashabi; Chitta Baral; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Cross-task generalization via natural language crowdsourcing instructions", "year": "2022-05-22" }, { "authors": " Openai", "journal": "", "ref_id": "b31", "title": "GPT-4 technical report", "year": "2023" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "ACL", "ref_id": "b32", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002-07-06" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Köpf; Edward Z Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b33", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019-12-08" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b34", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Swarnadeep Saha; Prateek Yadav; Lisa Bauer; Mohit Bansal", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Explagraphs: An explanation graph generation task for structured commonsense reasoning", "year": "2021-07-11" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b36", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022-04-25" }, { "authors": "G Matthew; Bonnie J Snover; Richard M Dorr; Linnea Schwartz; John Micciulla; Makhoul", "journal": "Association for Machine Translation in the Americas", "ref_id": "b37", "title": "A study of translation edit rate with targeted human annotation", "year": "2006-08-08" }, { "authors": "", "journal": "Foundations of Artificial Intelligence", "ref_id": "b38", "title": "Handbook of Knowledge Representation", "year": "2008" }, { "authors": "Xiao Wang; Weikang Zhou; Can Zu; Han Xia; Tianze Chen; Yuansen Zhang; Rui Zheng; Junjie Ye; Qi Zhang; Tao Gui; Jihua Kang; Jingsheng Yang; Siyuan Li; Chunsai Du", "journal": "", "ref_id": "b39", "title": "Instructuie: Multitask instruction tuning for unified information extraction", "year": "2023" }, { "authors": "Yizhong Wang; Swaroop Mishra; Pegah Alipoormolabashi; Yeganeh Kordi; Amirreza Mirzaei; Atharva Naik; Arjun Ashok; Arut Selvan Dhanasekaran; Anjana Arunkumar; David Stap; Eshaan Pathak; Giannis Karamanolakis; Gary Haizhi; Ishan Lai; Ishani Purohit; Jacob Mondal; Kirby Anderson; Krima Kuznia; Kuntal Doshi; Maitreya Kumar Pal; Mehrad Patel; Mihir Moradshahi; Mirali Parmar; Neeraj Purohit; Varshney; Rohitha Phani; Pulkit Kaza; Ravsehaj Verma; Rushang Singh Puri; Savan Karia; Doshi; Keyur Shailaja; Siddhartha Sampat; Sujan Mishra; A Reddy; Sumanta Patro; Tanay Dixit; Xudong Shen", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Super-naturalinstructions: Generalization via declarative instructions on 1600+ NLP tasks", "year": "2022-12-07" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b41", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Sean Welleck; Ximing Lu; Peter West; Faeze Brahman; Tianxiao Shen; Daniel Khashabi; Yejin Choi", "journal": "", "ref_id": "b42", "title": "Generating sequences by learning to self-correct", "year": "2023" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander M Lhoest; Rush", "journal": 
"Association for Computational Linguistics", "ref_id": "b43", "title": "Transformers: State-of-the-art natural language processing", "year": "2020-11-16" }, { "authors": "Yi Xu; Luoyi Fu; Zhouhan Lin; Jiexing Qi; Xinbing Wang", "journal": "", "ref_id": "b44", "title": "INFINITY: A simple yet effective unsupervised framework for graph-text mutual conversion", "year": "2022" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b45", "title": "Bertscore: Evaluating text generation with BERT", "year": "2020-04-26" }, { "authors": "Wangchunshu Zhou; Yuchen ; Eleanor Jiang; Ethan Wilcox; Ryan Cotterell; Mrinmaya Sachan", "journal": "", "ref_id": "b46", "title": "Controlled text generation with natural language instructions", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Pop rock\", \"stylistic Origin", "year": "" }, { "authors": " Chatgpt W/O Pive", "journal": "", "ref_id": "b48", "title": "Train song Mermaid\", \"also considered", "year": "" }, { "authors": "M A Shepard", "journal": "", "ref_id": "b49", "title": "and was also the Chief of the Astronaut Office in 1963", "year": "1957" }, { "authors": "Alan Shepard", "journal": "NWC M.A", "ref_id": "b50", "title": "served As Chief Of The Astronaut Office In", "year": "1957" }, { "authors": " Chatgpt W/O Pive", "journal": "", "ref_id": "b51", "title": "Alan Shepard", "year": "1923-11-18" }, { "authors": "Francisco Uranga", "journal": "", "ref_id": "b52", "title": "Fra ncisco Uranga", "year": "1905-01-01" }, { "authors": "", "journal": "", "ref_id": "b53", "title": "Francisco Uranga\", \"occupation\", \"swimmer\"] Base LLM Prediction", "year": "" }, { "authors": "Francisco Uranga", "journal": "", "ref_id": "b54", "title": "represented", "year": "1905" }, { "authors": "", "journal": "", "ref_id": "b55", "title": "Verification Module Output: Correct Iteration 2 Figure 7: An example from WebNLG test set using single verification module", "year": "" } ]
[ { "formula_coordinates": [ 5, 79.38, 73.22, 436.52, 58.41 ], "formula_id": "formula_0", "formula_text": "Single Verifier Module Unified Verifier Module T-F1↑ G-F1↑ G-BS↑ GED↓ T-F1↑ G-F1↑ G-BS↑ GED↓ KELM-" }, { "formula_coordinates": [ 6, 212.28, 73.13, 305.07, 22.42 ], "formula_id": "formula_1", "formula_text": "Iterative Prompting Iterative Offline Correction T-F1↑ G-F1↑ G-BS↑ GED↓ T-F1↑ G-F1↑ G-BS↑ GED↓" }, { "formula_coordinates": [ 13, 212.28, 73.13, 305.07, 22.42 ], "formula_id": "formula_2", "formula_text": "Iterative Prompting Iterative Offline Correction T-F1↑ G-F1↑ G-BS↑ GED↓ T-F1↑ G-F1↑ G-BS↑ GED↓" } ]
10.18653/v1/2021.acl-long.568
2023-05-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b18", "b29", "b33", "b6", "b17", "b24", "b3", "b9", "b34", "b20", "b15", "b21", "b12", "b31", "b29", "b19", "b25", "b33", "b17", "b6", "b8" ], "table_ref": [], "text": "Pre-trained language models (PLMs) (Devlin et al., 2019;Radford et al., 2018) have significantly advanced the state-of-the-art in various natural language processing tasks (Wang et al., 2018;Zhou and Lampouras, 2020;Dušek et al., 2020;Radev et al., 2020). However, these models often contain a vast amount of parameters, posing nontrivial requirements for storage and computation. Due to this inefficiency, the applications of PLMs in resource-constrained scenarios are still limited.\nTo resolve the above challenge, model compression (Sun et al., 2019;Ben Noach and Goldberg, 2020;Lan et al., 2020) has been actively studied to make PLMs meet the practical requirement. Among them, iterative pruning methods are widely adopted at only a tiny expense of model performance when adapting PLMs to downstream tasks. During the course of iterative pruning, model parameters can not only be updated but also * The corresponding author. be pruned based on the rank of their importance scores in order to satisfy the cardinality constraint. Prevalent importance criteria are based on the parameter's magnitude (Zhu and Gupta, 2017;Renda et al., 2020) or sensitivity (Louizos et al., 2018;Sanh et al., 2020;Liang et al., 2021;Zhang et al., 2022). Parameters with low importance scores are pruned and are expected to have little impact on model performance.\nDespite the empirical success, existing importance criteria for model pruning still face two major limitations: (1) they are heuristically defined and may not accurately quantify a parameter's contribution to the learning process, e.g., absolute weight value in magnitude-based pruning and gradient-weight product in sensitivity-based pruning; (2) they determine the importance of each parameter individually without considering the effect of coinstantaneous parameter updates on model performance, e.g., sensitivity is estimated by the absolute change in training error if only a single parameter is pruned and others remain unchanged.\nIn this paper, we rethink the design of the importance criterion for model pruning from an optimization perspective. We begin by analyzing the temporal variation of any given learning objective based on a single-step gradient descent update under the iterative pruning setting. We show that finding the optimal pruning decision can be framed as solving an equality-constrained 0-1 Integer Linear Programming (ILP) problem, where the constraint is defined by the specified sparsity. The resulting problem is a particular case of a general 0-1 Knapsack problem in which the weight for each item is the same. The solution to this problem naturally leads to a principled importance criterion which we use to rank all model parameters and derive the optimal stepwise pruning decision.\nWhen a high sparsity (e.g., 80%∼90%) is pursued, the limited capacity often renders the pruned model fails to retain satisfactory performance with conventional fine-tuning. To further improve the model's generalization ability, we propose a selfregularization scheme, where the model prediction is regularized by the latest best-performing model checkpoint during pruning. 
We show that such a scheme eases model learning with decreasing capacity and effectively yields a tighter upper bound of expected generalization error than learning from training data alone.\nTo validate the effectiveness of our approach, dubbed PINS (Pruning with principled Importance aNd Self-regularization), we conducted extensive experiments with various pre-trained language models on a wide variety of tasks, including natural language understanding on GLUE (Wang et al., 2018)), question answering on SQuAD (Rajpurkar et al., 2016), named entity recognition on CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003), and data-to-text generation on WebNLG (Zhou and Lampouras, 2020), DART (Radev et al., 2020), and E2E (Dušek et al., 2020). Experimental results show that PINS provides more accurate models at different sparsity levels. Detailed analysis shed further light on some intriguing properties of models pruned by PINS. By exploiting the resulting high sparsity, we show that the storage/inference can be reduced/accelerated by 8.9x and 2.7x using CSR format and a sparsityaware inference runtime (Kurtz et al., 2020) on consumer-level CPUs1 .\nIn summary, our contributions are:\n• We establish the equivalence between the optimal pruning decision and the solution to an equality-constrained 0-1 Integer Linear Programming problem. The solution to this problem leads to a principled importance criterion that can be used to rank parameters during iterative pruning.\n• We propose a simple yet effective selfregularization scheme to enhance the model's generalization capability, especially under a high-sparsity regime.\n• Comprehensive experiments and analyses confirm the effectiveness of our approach at various sparsity levels." }, { "figure_ref": [], "heading": "Background and Related Work", "publication_ref": [], "table_ref": [], "text": "In this section, we review the necessary background on Transformer-based pre-trained language models and popular importance criteria for iterative pruning." }, { "figure_ref": [], "heading": "Transformer-based Pre-trained Language Models", "publication_ref": [ "b18", "b5", "b30", "b4", "b27" ], "table_ref": [], "text": "Most existing pre-trained neural language models (Radford et al., 2018;Devlin et al., 2019;Wang et al., 2020;Clark et al., 2020) are based on the Transformer (Vaswani et al., 2017) architecture, which consists of several identical blocks of self-attention and feedforward network. After pre-training on a massive amount of unlabeled general-domain corpus in a self-supervised learning manner, these models exhibit superior performance on various downstream tasks via finetuning. However, good generalization performance comes at the cost of a vast amount of parameters. For example, the base version of BERT has 110M parameters and leads to more than 400MB of disk storage. Therefore, how to effectively reduce model size while preserving as much task accuracy as possible remains a challenging research problem." }, { "figure_ref": [], "heading": "Iterative Pruning", "publication_ref": [ "b10", "b7", "b15", "b21", "b31" ], "table_ref": [], "text": "Pruning methods can be divided into two categories: one-shot pruning (Lee et al., 2018;Frankle and Carbin, 2018) and iterative pruning (Louizos et al., 2018;Sanh et al., 2020;Zhang et al., 2022). One-shot pruning removes parameters of low importance after training. It is efficient but ignores the complicated training dynamics when applied to modern large neural language models. 
On the contrary, iterative pruning performs training and pruning simultaneously. Therefore, the resulting sparsity pattern is aware of the complex dynamics of parameters through the course of training and delivers considerable improvement compared to one-shot pruning.\nLet\nθ (t) = {θ (t) 1 θ (t) 2 , ..., θ(t)\nd } denote the ddimensional model parameters at t-th training iteration, the typical updating rule of iterative pruning can be formulated as:\nθ(t+1) = θ (t) -η (t) ∇ θ L(θ (t) )\n(1)\nθ (t+1) = θ(t+1) M (t)(2)\nwhere η (t) is the learning rate at time step t and L is the learning objective. The temporarily updated θ(t+1) is further pruned by the binary mask M (t) ∈ {0, 1} d , which is computed based on a given importance criterion S (t) :\nM (t) i = 1, if S (t) i is in the top-r (t) of S (t) 0, otherwise(3)\nwhere r (t) ≤ d indicates the number of remaining parameters at time step t according to a given sparsity scheduler." }, { "figure_ref": [], "heading": "Importance Criteria for Model Pruning", "publication_ref": [], "table_ref": [], "text": "Popular importance criteria for model pruning include parameters' magnitude and sensitivity.\nMagnitude is a simple yet effective importance criterion that is widely used for model pruning. It estimates the importance of each parameter as its absolute value, i.e., S\ni = |θ (t) i |.(t)\nDespite its simplicity, the magnitude cannot accurately gauge the importance of parameters because even parameters with small magnitude can have a large impact on the model prediction due to the complex compositional structure of PLMs.\nSensitivity is another useful importance criterion. It estimates the importance of each parameter as the absolute change of the learning objective if the parameter is pruned, i.e., set to zero. The mathematical formulation of the sensitivity of i-th parameter is given by:\nS (t) i = |L(θ (t) -i ) -L(θ (t) )| (4) ≈ |g (t) i θ (t) i |(5)\nwhere\nθ (t)\n-i is identical to θ (t) except that the i-th entry is set to zero and g (t) i is the gradient of i-th entry. Though taking the training dynamics into account, sensitivity still estimates the importance of each parameter individually without considering the effect of holistic parameter update." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Instead of heuristically defining the importance criterion as in prior pruning methods, we take a step back and rethink the design of the importance criterion for model pruning from an optimization perspective. From our analysis, we draw an equivalence between finding the optimal stepwise pruning decision and solving an equality-constrained 0-1 Integer Linear Programming problem. We further show that the optimal solution to this problem leads to a new importance criterion for model pruning. Moreover, we propose a simple yet effective self-regularization scheme to facilitate the generalization ability of the sparse model. We elucidate our analysis in Section 3.1 and describe our self-regularization scheme in Section 3.2." }, { "figure_ref": [], "heading": "Rethinking Importance Criterion from the Optimization Perspective", "publication_ref": [ "b23" ], "table_ref": [], "text": "Without loss of generality, we denote L as the learning objective when adapting a pre-trained language model f with parameter θ to a downstream task. 
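Before the derivation that follows, the generic iterative-pruning step of Eqs. (1)-(3), with the magnitude and sensitivity scores of Section 2.3 as plug-in criteria, can be sketched as below. This is an illustrative simplification for a single parameter tensor, not the authors' released code; the function names, the learning-rate argument, and the flat top-r selection are our own assumptions.

```python
import torch

def importance_scores(theta, grad, criterion="sensitivity"):
    # magnitude (Sec. 2.3): |theta_i|;  sensitivity (Eq. (5)): |g_i * theta_i|
    if criterion == "magnitude":
        return theta.abs()
    return (grad * theta).abs()

def iterative_pruning_step(theta, grad, lr, r, criterion="sensitivity"):
    """One step of Eqs. (1)-(3) for a single parameter tensor."""
    theta_tilde = theta - lr * grad                       # Eq. (1): temporary gradient update
    scores = importance_scores(theta, grad, criterion)    # importance S^(t), Sec. 2.3
    mask = torch.zeros_like(theta).flatten()              # Eq. (3): binary mask keeping top-r entries
    mask[torch.topk(scores.flatten(), k=r).indices] = 1.0
    return theta_tilde * mask.view_as(theta)              # Eq. (2): apply the mask
```

In practice the scores are computed per layer or globally over all prunable tensors, and r is supplied by the sparsity scheduler discussed later.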
At t-th training iteration, we denote the current model parameters as θ (t) and the evaluated learning objective as L(θ (t) ).\nThe temporal variation of the learning objective L(θ (t) ) at time step t is given by the second-order Taylor series expansion:\n∆L (t) = L(θ (t) + ∆θ (t) ) -L(θ (t) ) (6) = ∇ θ L(θ (t) ) ∆θ (t) + 1 2 ∆θ (t) H (t) ∆θ (t) + o(|∆θ (t) | 2 ) (7)\nwhere H (t) is the Hessian matrix at step t. It is known that the largest eigenvalue λ max of Hessian matrices in a PLM is typically small (Shen et al., 2019), i.e., ∆θ (t) H (t) ∆θ (t) ≤ λ max |∆θ (t) | 2 2 ≈ 0. Thus, we ignore the second-order term as well as the infinitesimal of higher order in Eq. ( 7):\n∆L (t) = ∇ θ L(θ (t) ) ∆θ (t) = d i=1 g (t) i • ∆θ (t) i (8)\nUnder the iterative pruning setting, the actual temporal variation ∆θ (t) i of i-th parameter depends on whether it is allowed to be updated or forced to zeroed out. Formally, we use a binary variable x (t) i to indicate the pruning decision of i-th parameter at time step t, i.e., x\n(t) i = 1 means θ (t) i is updated and x (t) i = 0 means θ (t)\ni is pruned. The temporal variation in Eq. ( 8) can now be rewritten as:\n∆L (t) = d i=1 g (t) i (x (t) i ∆ θ(t) i + (1 -x (t) i )(-θ (t) i ))(9)\nwhere ∆ θ(t\n) i = -η (t) g (t)\ni is the gradient descent update. Finding the optimal pruning decision that leads to the smallest ∆L (t) is now converted to an equality-constrained 0-1 integer linear programming (ILP) problem of variables x (t) :\nx(t) = arg min x (t) ∆L (t) s.t. d i=1 x (t) i = r (t) , x (t) i ∈ {0, 1}(10)\nwhere r (t) is the number of remaining parameters at step t according to the pre-defined sparsity scheduler. If we consider each parameter θ\n(t)\ni as an item and r (t) as the total capacity, the problem that Eq. ( 10) defines can be treated as a special case of 0-1 Knapsack problem where the weight for each item is one and the value for each item is given by:\nS (t) i = -g (t) i ∆ θ(t) i -g (t) i θ (t) i(11)\nContrary to the general 0-1 Knapsack problem which is known to be NP-complete, fortunately, the equal-weight 0-1 Knapsack is a P problem. Its optimal solution can be obtained by sorting items in descending order according to their values and selecting the top-r (t) ones:\nx(t) i = 1, if S (t) i is in the top-r (t) of S (t) 0, otherwise(12)\nPutting it in the context of iterative pruning, our analysis theoretically reveals the validity of: (1) selecting parameters based on the ranking of certain importance criterion; (2) using Eq. ( 11) as a principled new importance criterion." }, { "figure_ref": [], "heading": "Self-regularization", "publication_ref": [ "b13", "b16", "b21", "b26" ], "table_ref": [], "text": "In vanilla fine-tuning, the learning objective L is defined as the training error L er (a.k.a empirical risk in statistical learning) over the empirical data distribution. However, minimizing such training error does not translate to good generalization. Moreover, as iterative pruning proceeds, the number of non-zero parameters in the model monotonically decreases. The reduced model capacity increases the learning difficulty (Lopez-Paz et al., 2015;Mirzadeh et al., 2019) and usually leads to degenerated generalization performance of the sparsified model (Sanh et al., 2020).\nConfronting the above challenges, we propose an effective self-regularization scheme tailored to improving the model's generalization ability during iterative pruning. 
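Before turning to self-regularization, the importance score of Eq. (11) and the stepwise pruning decision of Eq. (12) admit a direct sketch. The code below is a hedged illustration under the single-tensor view (function names are ours), not the released implementation.

```python
import torch

def pins_importance(theta, grad, lr):
    # Eq. (11): S_i = -g_i * dtheta_i - g_i * theta_i, with dtheta_i = -lr * g_i,
    # i.e. S_i = lr * g_i**2 - g_i * theta_i
    delta = -lr * grad
    return -grad * delta - grad * theta

def pins_pruning_decision(scores, r):
    # Eq. (12): solution of the equal-weight 0-1 knapsack -- keep the r largest scores
    x = torch.zeros_like(scores).flatten()
    x[torch.topk(scores.flatten(), k=r).indices] = 1.0
    return x.view_as(scores)
```

Unlike sensitivity, the score keeps the sign of the gradient-weight product, so parameters whose removal alone would reduce the training objective receive lower scores.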
Concretely, besides learning from the hard label of training data, the output of the current model with parameter θ (t) is also regularized by the output of the latest best-performing model checkpoint with parameter θ (t l ) , where t l ≤ t denotes the time step at which the latest checkpoint was saved. The learning objective of selfregularization is defined as:\nL sr = D(y θ (t) , y θ (t l ) )(13)\nwhere D can be any divergence metric, e.g., KLdivergence for classification tasks. L sr is then integrated with the original learning objective, i.e., L = L er + L sr .\nWhy does self-regularization work? Our selfregularization is similar to teacher-student knowledge distillation in the sense that the model output is regularized by the output of another model. However, the most critical difference is that the \"teacher\" in self-regularization is instantiated by checkpoint with increasing sparsity, such that the capacity gap between \"teacher\" and \"student\" is dynamically adjusted. We theoretically justify the effectiveness of self-regularization as follows:\nTheorem 1. (Vapnik, 1998), we have the following asymptotic generalization bounds hold:\nR(f θ (t←t i ) ) ≤ O( |F θ (t) | C n α i ) + inf F θ (t←t i ) R(f θ (t) ) bound(f θ (t←t i ) ) R(f θ (t←t j ) ) ≤ O( |F θ (t) | C n α j ) + inf F θ (t←t j ) R(f θ (t) ) bound(f θ (t←t j ) )\nBecause θ (t i ) is a later checkpoint with higher sparsity than θ (t j ) , we have the learning speed 1 ≥ α i ≥ α j ≥ 1 2 , then the following inequality holds with high probability:\nbound(f θ (t←t i ) ) ≤ bound(f θ (t←t j ) )\nIn summary, self-regularization works by enabling a tighter generalization bound compared to learning from training data alone or a static dense teacher as in knowledge distillation. Please refer to Appendix B for detailed derivation." }, { "figure_ref": [], "heading": "The Algorithm", "publication_ref": [], "table_ref": [], "text": "Here we formally summarize our algorithm PINS (Pruning with principled Importance aNd Self-regularization) in Algorithm 1:\nAlgorithm 1 PINS Input: Training set D tr = {(x i , y i )} N i=1 ;\nValidation set D val ; pre-trained parameters θ; maximum training steps T ; evaluation interval t eval . Initialize: θ (0) ← θ, t l ← 0, best validation accuracy acc t l ← -INF.\n1: for t = 0 to T -1 do 2:\nSample a mini-batch (x, y) from D tr 3:\nCompute current model's output y θ (t)" }, { "figure_ref": [], "heading": "4:", "publication_ref": [], "table_ref": [], "text": "Compute latest best-performing checkpoint's output y θ (t l )" }, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "Compute L based on y θ (t) , y θ (t l ) and y 6:\nCompute S (t) via Eq. ( 11) 7:\nCompute θ (t+1) via Eq. ( 2) and Eq. (3) 8:\nif t%t eval = 0 and acc t >acc t l then 9: acc t l ← acc t , θ (t l ) ← θ (t) Output: the pruned parameters θ (T ) ." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, We compare PINS with state-ofthe-art pruning algorithms and perform detailed analysis to understand the effectiveness of PINS." }, { "figure_ref": [], "heading": "Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Tasks", "publication_ref": [ "b29", "b19", "b31", "b25", "b6", "b17", "b33" ], "table_ref": [], "text": "We conduct experiments on a comprehensive spectrum of tasks following standard data splits. Natural Language Understanding. 
We opt for tasks from the GLUE (Wang et al., 2018) benchmark, including linguistic acceptability (CoLA), natural language inference (RTE, QNLI, MNLI), paraphrase (MRPC, QQP), sentiment analysis (SST-2) and textual similarity (STS-B). Because the official test set of GLUE is hidden, we randomly split a small portion of training set as validation set and treat the original validation set as test set. Question Answering. We use SQuAD v1.1 (Rajpurkar et al., 2016) as a representative dataset for extractive question answering following previous work (Zhang et al., 2022). Named Entity Recognition. We also examine our approach on CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) for token-level named entity recognition task. Data-to-Text Generation.\nBesides language understanding tasks, we also extend our evaluation to data-to-text generation on three datasets: E2E (Dušek et al., 2020), DART (Radev et al., 2020), and WebNLG (Zhou and Lampouras, 2020), which involves generating a piece of fluent text from a set of structured relational triples." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b34", "b15", "b21", "b11", "b31" ], "table_ref": [], "text": "Magnitude-based. Iterative magnitude pruning (IMP) (Zhu and Gupta, 2017) is the state-ofthe-art magnitude-based approach. Sensitivity-based.\nl 0 -regularization (Louizos et al., 2018) trains masking variables via reparametrization trick with l 0 penalty; SMvP (Sanh et al., 2020) uses accumulated sensitivity as importance metric; PST (Li et al., 2022) proposed a hybrid importance criterion combining both magnitude and sensitivity; PLATON (Zhang et al., 2022) uses a modified variant of sensitivity by exponential moving average and uncertainty re-weighting." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b5", "b30", "b4", "b18", "b11", "b21", "b31", "b31", "b14" ], "table_ref": [], "text": "We mainly conduct experiments on the pre-trained BERT base (Devlin et al., 2019) as a pruning target for all tasks except data-to-text generation. We defer the pruning results of MiniLM 12L-384H (Wang et al., 2020) and Electra base (Clark et al., 2020) to Appendix A. For data-to-text generation, we adopt the pre-trained GPT-2 (Radford et al., 2018) following a prior study (Li et al., 2022).\nDuring pruning, we employ the cubic sparsity scheduler (Sanh et al., 2020;Zhang et al., 2022) to gradually increase the sparsity level from 0 to the specified target sparsity. To avoid tremendous computation cost brought by hyper-parameter tuning, we only search the batch size from {16, 32} and fix the learning rate as 3e-5 for all experiments on GLUE and CoNLL. For SQuAD v1.1, we fix the batch size as 16 and the learning rate as 3e-5 following Zhang et al. (2022). We adopt AdamW (Loshchilov and Hutter, 2017) as the default optimizer. To reduce the variance induced by mini-batch sampling, we adopt a smoothing technique similar to PLATON. We run each experi- able to retain 97.5% overall performance of finetuning, outperforming 95.4% of the previous best method PLATON. Notably, PINS even surpasses fine-tuning on RTE and MRPC at 80% sparsity. This can be attributed to the fact that PLMs are heavily over-parameterized and PINS can effectively identify parameters crucial to the task to realize low bias and low variance simultaneously. 2 summarizes the pruning results on SQuAD v1.1. 
Interestingly, IMP outperforms all sensitivity-based methods except for PLATON at all considered sparsity levels, in contrast to the observations on GLUE. Our method, however, consistently yields the best performance at all sparsity settings. when further increasing sparsity." }, { "figure_ref": [], "heading": "Question answering Table", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Named entity recognition", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Data-to-text generation Table 4 shows the pruning results on E2E, DART and WebNLG at 80% sparsity. PINS achieves the best performance on all three datasets in all evaluation metrics. In particular, PINS delivers performance even better than fine-tuning on the E2E dataset by 0.7 ROUGE-L and 0.4 METEOR scores, respectively. We posit that this is due to the relative easiness of E2E compared to the other two datasets." }, { "figure_ref": [], "heading": "Results at Medium-to-Low Sparsity", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "The typical utility of pruning is to produce a sparse yet competitive model that can benefit downstream applications in terms of efficiency without sacrificing much task accuracy. We hypothesize that PINS might also bring a regularization effect compared to vanilla fine-tuning under the medium-to-low sparsity regime. As shown in Table 5, when specifying a medium-to-low sparsity, e.g., 50%∼30%, our method can effectively play a role of regularization and improve model performance compared to vanilla fine-tuning. With half of the parameters being pruned, the sparse model produced by PINS outperforms fine-tuning by 1 percentage point on the GLUE score. This observation suggests that appropriate pruning can effectively reduce variance without hurting model expressiveness." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_5", "tab_1" ], "text": "The self-regularization scheme is proposed and integrated into PINS to improve model generaliza- tion. Here we investigate the effectiveness of selfregularization by comparing it to the conventional knowledge distillation scheme and the classical empirical risk minimization scheme.\nThe pruning results of using the three different learning objectives on RTE, CoLA, and MRPC are listed in Table 6. Pruning with PINS using classical empirical risk minimization still achieves performance better than existing baselines (Table 1). Learning from a densely fine-tuned BERT base as the teacher does not always improve and sometime may even hurt performance. In contrast, our proposed self-regularization consistently boosts model performance, which echoes our theoretical justification in Section 3.2." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "We provide an in-depth analysis of various importance criteria to uncover more valuable insights." }, { "figure_ref": [], "heading": "Sparsity pattern of weight matrices", "publication_ref": [ "b1", "b0" ], "table_ref": [], "text": "We are interested in the sparsity pattern produced by different pruning criteria. To this end, we plot the remaining parameters' distribution of the same weight matrix in BERT base pruned via magnitude, sensitivity, and PINS in Figure 1. We observe that magnitude-based pruning generates a sparsity pattern close to randomness. Sensitivity-based pruning produces a more structured pattern where the remaining parameters tend to occupy complete rows. 
Interestingly, the sparsity pattern produced by PINS exhibits the highest concentration on specific rows. This implies that the parameters contributing most to the end-task are preferably distributed in a structured way and PINS is more effective at extracting such patterns.\nLayerwise rank distribution The highly structured sparsity pattern generated by PINS intrigues our interest to further analyze the intrinsic property of parameter matrices after pruning. Specifically, we inspect the matrix rank as it is usually associated with the complexity of matrix. To this end, we visualize the layerwise rank distribution of BERT base pruned using different importance criteria on SST-2 dataset. As shown in Figure 4, magnitude pruning produces sparse matrices that are still near full-rank despite containing 80% zeros. Sensitivity pruning tends to generate sparsity pattern with lower rank compared to magnitude pruning. Notably, model pruned by PINS shows consistently lower matrix rank than the other two criteria. This implies that PINS is more effective at identifying the low-dimensional task representation during adaptation, which is usually correlated with tighter generalization bounds (Arora et al., 2018;Aghajanyan et al., 2021)." }, { "figure_ref": [ "fig_1" ], "heading": "Empirical validation of importance criterion", "publication_ref": [], "table_ref": [], "text": "In Section 3.1 we prove that the pruning decision derived by our importance criterion is theoretically optimal. Here we empirically validate this point by visualizing the change of learning objective as pruning proceeds. Figure 3 illustrates that our importance criterion indeed leads to the most significant decrease in the learning objective compared to heuristical ones like magnitude and sensitivity." }, { "figure_ref": [], "heading": "Efficiency Gain", "publication_ref": [ "b8" ], "table_ref": [ "tab_6" ], "text": "We can exploit the resulting high sparsity to attain practical efficiency gain on storage and inference speed. We first apply quantization upon the pruned model and transform it into INT8 data type before saving it using Compressed Sparse Row (CSR) format. We then leverage a sparsity-aware runtime (Kurtz et al., 2020) for accelerating inference.\nAs shown in Table 7, on the RTE dataset, the disk space and inference time of BERT base pruned at 80% sparsity can be reduced by roughly 8.9x and 2.7x respectively with negligible accuracy loss." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present PINS, a new iterative pruning method that hinges on a principled weight importance criterion to deliver the optimal stepwise pruning decision. Integrated with a self-regularization scheme tailored to pruning-during-adaptation, PINS allows for provably better generalization ability. Empirical experiments and analyses confirm the effectiveness of our method and shed further light on the different sparsity patterns produced by PINS and other existing methods." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [ "tab_7", "tab_8" ], "text": "Compared to the empirical risk minimization scheme, the introduced self-regularization scheme incurs certain overhead because each mini-batch of data will go through two models. For BERT base scale pre-trained language models, the additional memory overhead is about 27% and the additional training time overhead is about 30%. 
Nevertheless, once pruned, the sparsified model can enjoy considerable efficiency gains in terms of storage and inference time. Therefore, this is a trade-off that future practitioners might need to consider.\nA Results with More PLMs on subset of GLUE\nIn addition the widely used BERT and GPT-2 models, we also perform pruning experiments upon other two pre-trained language models: Electra base and MiniLM 12L-384H to further verify the effectiveness of our method. Due to computing resource constraint, we restrict our experiments on a subset of GLUE task, including RTE, CoLA and QNLI at 80% and 90% sparsity. We compare PINS against IMP and PLATON as two representative baselines for magnitude-based and sensitivity-based pruning methods. We fix the batch size as 32 and learning rate as 3e-5 similar to the BERT experiments. We illustrate the pruning results on Table 8 andTable 9. At both sparsity levels, PINS consistently outperforms IMP and PLATON on all three datasets, verifying the general effectiveness of PINS for language model pruning. where t,ti is the approximation error of function class F θ (t←t i ) with respect to f θ (t i ) . t,tj is defined in analogy. Because: (1) θ (t i ) is a later checkpoint with higher sparsity than θ (t j ) , we have the learning speed 1 ≥ α i ≥ α j ≥ 1 2 ;\n(2) f θ (t i ) has lower generalization error than f θ (t j ) , we have the following inequality holds with high probability: bound(f θ (t←t i ) ) ≤ bound(f θ (t←t j ) )" }, { "figure_ref": [], "heading": "C More Post-pruning Analyses", "publication_ref": [], "table_ref": [], "text": "This section presents more visualized analyses of models sparsified by different pruning methods.\nFigure 5 shows the layerwise rank distribution of BERT base pruned using different importance criteria on the RTE dataset. The observation here is similar to what is discussed in the main body of the paper: PINS exhibits the lowest average matrix rank in the sparsified model compared to the other two criteria.\nFigure 4 illustrates the weight distribution of BERT base pruning using different importance criteria. From the left figure we can see that magnitude-based pruning tends to keep parameters with high absolute values, which is expected based on its definition. Sensitivity and PINS produce similar weight value distribution mainly because the two methods both contain the gθ term in their importance calculation. Despite the similarity, we can still observe that PINS produces smoother distribution than sensitivity and covers more weights with larger absolute values.\nThe right figure shows the layerwise distribution of remaining parameters after pruning. A clear trend is that PINS tends to retain more parameters in the middle layers (4-7), which also coincided with the inter-model sparsity pattern analysis in the main body of our paper. Both sensitivity and PINS remove a large proportion of parameters in the top layers (10-12) while magnitude-based pruning has no preference for model layers." 
}, { "figure_ref": [], "heading": "D Sparsity Scheduler", "publication_ref": [], "table_ref": [], "text": "The proportion of remaining weights is controlled by the sparsity scheduler, here we adopt the commonly used cubic sparsity schedule to progressively reach target sparsity, i.e., r (t) at time step t within the maximum time steps T is given by:\n     r i t ∈ [0, t i ) r f + (r i -r f )( T -t f -t T -t f -t i ) 3 t ∈ [t i , T -t f ) r f otherwise (14\n)\nwhere r i = 1.0, r f is the final percent of remained parameters, t i and t f are the warmup and cooldown steps." }, { "figure_ref": [], "heading": "E Accelerating Inference and Reducing Storage", "publication_ref": [], "table_ref": [], "text": "We attain practical efficiency gain in terms of inference time and disk storage space using different sets of off-the-shelf techniques. Specifically, we use DeepSparse2 , a sparsity-aware inference runtime to accelerate inference of sparse model on CPUs. We also utilize the Pytorch builtin quantization function3 and Compressed Sparse Row (CSR) format4 to achieve a much smaller disk space requirement." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was generously supported by the CMB Credit Card Center & SJTU joint research grant, and Meituan-SJTU joint research grant." } ]
Iterative pruning is one of the most effective compression methods for pre-trained language models. We discovered that finding the optimal pruning decision is an equality-constrained 0-1 Integer Linear Programming problem. The solution to this optimization problem leads to a principled importance criterion which we use to rank parameters during iterative model pruning. To mitigate the poor generalization at high sparsity levels, we propose a self-regularization scheme where model prediction is regularized by the latest checkpoint with increasing sparsity throughout pruning. Our experiments on natural language understanding, question answering, named entity recognition, and data-to-text generation with various Transformer-based PLMs show the effectiveness of the approach at various sparsity levels.
Pruning Pre-trained Language Models with Principled Importance and Self-regularization
[ { "figure_caption": "Figure 1 :Figure 2 :12Figure 1: Sparsity pattern (80%) of the same weight matrix in BERT base trained on SST-2. See Appendix C for more details on the matrix rank distribution.", "figure_data": "", "figure_id": "fig_0", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Change of learning objective (cross-entropy) during iterative pruning on SST-2.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: Weight distributions of BERT base pruned using different importance criteria on RTE dataset. Left figure shows the value distribution and the right figure shows how remaining parameters are distributed at different model layers.", "figure_data": "", "figure_id": "fig_2", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Results with BERT base on the GLUE development set. For MNLI, the results are averaged on MNLI-m and MNLI-mm. † indicates the results are directly quoted fromZhang et al. (2022) while ‡ indicates the results are reported byLi et al. (2022).", "figure_data": "SparsityMethodRTE AccMRPC F1STS-B PearsonCoLA MccSST-2 AccQNLI AccMNLI AccQQP AccAvg.0%Fine-tune †69.390.390.258.392.491.384.091.583.4IMP †65.786.286.842.584.389.282.286.077.9l0-regularization †63.280.282.80.085.085.080.888.570.780%SMvP † PST62.8 63.086.7 87.487.8 88.048.5 44.689.0 89.388.3 88.381.9 79.390.6 88.979.5 78.6PLATON †68.689.889.054.591.290.183.390.782.2PINS (ours)72.790.989.257.191.991.283.990.983.5IMP †57.480.383.418.380.786.678.978.870.5l0-regularizatio †59.979.582.70.082.582.878.487.669.190%SMvP † PST ‡58.8 62.885.9 85.686.5 81.70.0 42.587.4 88.786.6 86.080.9 76.790.2 83.972.1 76.0PLATON †65.388.887.444.390.588.981.890.279.6PINS (ours)68.590.187.949.891.089.582.790.681.3Sparsity80% 70% 60% 50%Sparsity MethodPRF1Fine-tune †88.10%Fine-tune93.5 94.6 94.0IMP †82.9 86.5 86.7 87.0IMP90.7 91.8 91.2l0-regularization † 81.9 82.8 83.9 84.670%SMvP92.9 94.1 93.5SMvP †-84.6-85.8PINS(ours) 93.5 94.3 93.9PLATON †86.1 86.7 86.9 87.2IMP84.4 87.3 85.8PINS (ours)86.4 86.9 87.4 88.080%SMvP92.1 93.1 92.6PINS(ours) 92.8 93.8 93.3Table 2: Results with BERT base on SQuAD v1.1. †indicates numbers reported from Zhang et al. (2022).F1 score is reported as evaluation metric.ment five times with different random seeds andreport the average results (significance tests withp-value < 0.05 are conducted for all performancegains).4.2 Main Results4.2.1 Comparison with BaselinesNatural language understanding We presentthe experimental results on GLUE at high spar-sity, i.e., 80% and 90% in Table 1. Amongall baselines, sensitivity-based methods generallyachieve better results than magnitude-based IMP,which implies the importance of training dynam-ics when designing pruning criteria. We can seethat PINS delivers more accurate sparsified mod-els on all datasets at both sparsity levels. The ad-vantage of PINS is more evident on small datasets.For example, PINS outperforms the previous best-performing baseline (PLATON) by 4.1 and 2.6points on RTE and CoLA at 80% sparsity, wherethere are only a few thousand training data. Un-der extremely high sparsity, i.e., 90%, PINS is still", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results with BERT base on CoNLL 2003. 
P and R stands for Precision and Recall respectively.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table3demonstrates the pruning results on CoNLL 2003 dataset for named entity recognition. At 70% sparsity, our method almost matches the performance of fine-tuning, outperforming baselines on all evaluation metrics. The gain of PINS is more prominent Results with GPT-2 on data-to-text generation datasets. The higher the BLEU, ROUGE-L, METEOR, and BLEURT scores are, the better the performance.", "figure_data": "SparsityMethodE2E BLEU ROUGE-L METEOR BLEU BLEURT BLEU BLEURT DART WebNLG0%Fine-tune69.471.146.246.60.3046.90.23IMP69.371.045.844.90.2239.90.0080%PST69.470.845.944.10.2244.30.16PINS (ours)69.671.846.646.20.2945.50.18SparsityMethodRTE AccMRPC F1STS-B PearsonCoLA MccSST-2 AccQNLI AccMNLI AccQQP AccAvg.0%Fine-tune69.390.390.258.392.491.384.091.583.450%PINS70.891.489.760.692.991.885.191.384.230%PINS71.791.289.860.493.392.085.191.584.4", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results with BRET base on the GLUE development set under medium-to-low sparsity regime. Numbers are the mean of five trials with different random seeds. PINS outperforms fine-tuning at medium-to-low sparsity.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation Study with BERT base on the learning objective during iterative pruning at 80% sparsity.", "figure_data": "LRTE CoLA MRPCempirical risk70.955.490.6w/ knowledge distillatiojn 70.356.090.6w/ self-regularization72.757.190.9", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Practical time and storage efficiency gain on RTE with Deepsparse and CSR format. Inference is perform on Intel Xeon E5-2640 CPU with batch size 1.", "figure_data": "SparsityTime(s)Storage(MB) Acc.0%0.110 (1.0x)340 (1.0x)69.380%0.041 (2.7x)38 (8.9x)69.0", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Results with MiniLM 12L-384H on the GLUE development set.", "figure_data": "SparsityMethodRTE AccCoLA MccQNLI Acc0%Fine-tune73.058.591.5IMP60.521.687.580%PLATON68.254.189.8PINS (ours)69.554.490.4IMP57.514.183.990%PLATON63.138.888.0PINS (ours)66.244.888.6SparsityMethodRTE AccCoLA MccQNLI Acc0%Fine-tune81.969.093.1IMP59.911.287.580%PLATON73.660.091.0PINS (ours)75.563.792.0IMP52.90.083.090%PLATON69.948.089.7PINS (ours)72.349.290.2", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Results with Electra base on the GLUE development set.", "figure_data": "", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" } ]
Siyu Ren; Kenny Q Zhu
[ { "authors": "Armen Aghajanyan; Sonal Gupta; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Intrinsic dimensionality explains the effectiveness of language model fine-tuning", "year": "2021" }, { "authors": "Sanjeev Arora; Rong Ge; Behnam Neyshabur; Yi Zhang", "journal": "", "ref_id": "b1", "title": "Stronger generalization bounds for deep nets via a compression approach", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b2", "title": "", "year": "" }, { "authors": "Matan Ben; Noach ; Yoav Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Compressing pre-trained language models by matrix decomposition", "year": "2020" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b4", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Ondřej Dušek; Jekaterina Novikova; Verena Rieser", "journal": "Computer Speech & Language", "ref_id": "b6", "title": "Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge", "year": "2020" }, { "authors": "Jonathan Frankle; Michael Carbin", "journal": "", "ref_id": "b7", "title": "The lottery ticket hypothesis: Training pruned neural networks", "year": "2018" }, { "authors": "Mark Kurtz; Justin Kopinsky; Rati Gelashvili; Alexander Matveev; John Carr; Michael Goin; William Leiserson; Sage Moore; Bill Nell; Nir Shavit; Dan Alistarh", "journal": "Virtual. 
PMLR", "ref_id": "b8", "title": "Inducing and exploiting activation sparsity for fast inference on deep neural networks", "year": "2020" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b9", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2020" }, { "authors": "Namhoon Lee; Thalaiyasingam Ajanthan; Philip Torr", "journal": "", "ref_id": "b10", "title": "Snip: Single-shot network pruning based on connection sensitivity", "year": "2018" }, { "authors": "Yuchao Li; Fuli Luo; Chuanqi Tan; Mengdi Wang; Songfang Huang; Shen Li; Junjie Bai", "journal": "International Joint Conferences on Artificial Intelligence Organization", "ref_id": "b11", "title": "Parameter-efficient sparsity for large language models fine-tuning", "year": "2022" }, { "authors": "Chen Liang; Simiao Zuo; Minshuo Chen; Haoming Jiang; Xiaodong Liu; Pengcheng He; Tuo Zhao; Weizhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Super tickets in pre-trained language models: From model compression to improving generalization", "year": "2021" }, { "authors": "David Lopez-Paz; Léon Bottou; Bernhard Schölkopf; Vladimir Vapnik", "journal": "", "ref_id": "b13", "title": "Unifying distillation and privileged information", "year": "2015" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b14", "title": "Fixing weight decay regularization in adam", "year": "2017" }, { "authors": "Christos Louizos; Max Welling; Diederik P Kingma", "journal": "", "ref_id": "b15", "title": "Learning sparse neural networks through l_0 regularization", "year": "2018" }, { "authors": "Seyed-Iman Mirzadeh; Mehrdad Farajtabar; Ang Li; Hassan Ghasemzadeh", "journal": "", "ref_id": "b16", "title": "Improved knowledge distillation via teacher assistant: Bridging the gap between student and teacher", "year": "2019" }, { "authors": "Dragomir Radev; Rui Zhang; Amrit Rau; Abhinand Sivaprasad; Chiachun Hsieh; Nazneen Fatema Rajani; Xiangru Tang; Aadit Vyas; Neha Verma; Pranav Krishna; Yangxiaokang Liu; Nadia Irwanto; Jessica Pan; Faiaz Rahman; Ahmad Zaidi; Murori Mutuma; Yasin Tarabar; Ankit Gupta; Tao Yu; Yi Chern Tan; Xi Victoria Lin; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b17", "title": "Dart: Open-domain structured data record to text generation", "year": "2020" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b18", "title": "Language models are unsupervised multitask learners", "year": "2018" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "", "ref_id": "b19", "title": "Squad: 100, 000+ questions for machine comprehension of text", "year": "2016" }, { "authors": "Alex Renda; Jonathan Frankle; Michael Carbin", "journal": "", "ref_id": "b20", "title": "Comparing rewinding and fine-tuning in neural network pruning", "year": "2020" }, { "authors": "Victor Sanh; Thomas Wolf; Alexander Rush", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Movement pruning: Adaptive sparsity by fine-tuning", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "Sheng Shen; Zhen Dong; Jiayu Ye; Linjian Ma; Zhewei Yao; Amir Gholami; Michael W Mahoney; Kurt Keutzer", "journal": "", "ref_id": "b23", "title": "Q-BERT: hessian based ultra low 
precision quantization of BERT", "year": "2019" }, { "authors": "Siqi Sun; Yu Cheng; Zhe Gan; Jingjing Liu", "journal": "", "ref_id": "b24", "title": "Patient knowledge distillation for BERT model compression", "year": "2019" }, { "authors": "Erik F Tjong; Kim Sang; Fien De; Meulder ", "journal": "", "ref_id": "b25", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "year": "2003" }, { "authors": "Vladimir Vapnik", "journal": "Wiley", "ref_id": "b26", "title": "Statistical learning theory", "year": "1998" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b28", "title": "", "year": "" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b29", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Wenhui Wang; Furu Wei; Li Dong; Hangbo Bao; Nan Yang; Ming Zhou", "journal": "", "ref_id": "b30", "title": "Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers", "year": "2020" }, { "authors": "Qingru Zhang; Simiao Zuo; Chen Liang; Alexander Bukharin; Pengcheng He; Weizhu Chen; Tuo Zhao", "journal": "", "ref_id": "b31", "title": "Platon: Pruning large transformer models with upper confidence bound of weight importance", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b32", "title": "", "year": "" }, { "authors": "Giulio Zhou; Gerasimos Lampouras", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "WebNLG challenge 2020: Language agnostic delexicalisation for multilingual RDF-to-text generation", "year": "2020" }, { "authors": "Michael Zhu; Suyog Gupta", "journal": "", "ref_id": "b34", "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "year": "2017" } ]
[ { "formula_coordinates": [ 2, 337.02, 644.97, 113.43, 15.86 ], "formula_id": "formula_0", "formula_text": "θ (t) = {θ (t) 1 θ (t) 2 , ..., θ(t)" }, { "formula_coordinates": [ 2, 349.28, 706.83, 133.55, 13.65 ], "formula_id": "formula_1", "formula_text": "θ(t+1) = θ (t) -η (t) ∇ θ L(θ (t) )" }, { "formula_coordinates": [ 2, 347.72, 725.36, 176.69, 12.69 ], "formula_id": "formula_2", "formula_text": "θ (t+1) = θ(t+1) M (t)(2)" }, { "formula_coordinates": [ 3, 80.75, 124.56, 208.38, 45.17 ], "formula_id": "formula_3", "formula_text": "M (t) i = 1, if S (t) i is in the top-r (t) of S (t) 0, otherwise(3)" }, { "formula_coordinates": [ 3, 165.53, 328.84, 51.13, 16 ], "formula_id": "formula_4", "formula_text": "i = |θ (t) i |.(t)" }, { "formula_coordinates": [ 3, 121.26, 512.97, 167.88, 35.8 ], "formula_id": "formula_5", "formula_text": "S (t) i = |L(θ (t) -i ) -L(θ (t) )| (4) ≈ |g (t) i θ (t) i |(5)" }, { "formula_coordinates": [ 3, 100.98, 561.3, 16.12, 13.31 ], "formula_id": "formula_6", "formula_text": "θ (t)" }, { "formula_coordinates": [ 3, 320.09, 349.48, 204.32, 60.54 ], "formula_id": "formula_7", "formula_text": "∆L (t) = L(θ (t) + ∆θ (t) ) -L(θ (t) ) (6) = ∇ θ L(θ (t) ) ∆θ (t) + 1 2 ∆θ (t) H (t) ∆θ (t) + o(|∆θ (t) | 2 ) (7)" }, { "formula_coordinates": [ 3, 354.62, 505.98, 169.79, 52.89 ], "formula_id": "formula_8", "formula_text": "∆L (t) = ∇ θ L(θ (t) ) ∆θ (t) = d i=1 g (t) i • ∆θ (t) i (8)" }, { "formula_coordinates": [ 3, 306.14, 635.88, 218.26, 31.98 ], "formula_id": "formula_9", "formula_text": "(t) i = 1 means θ (t) i is updated and x (t) i = 0 means θ (t)" }, { "formula_coordinates": [ 3, 306.14, 689.04, 218.43, 46.1 ], "formula_id": "formula_10", "formula_text": "∆L (t) = d i=1 g (t) i (x (t) i ∆ θ(t) i + (1 -x (t) i )(-θ (t) i ))(9)" }, { "formula_coordinates": [ 3, 351.63, 746.28, 68.75, 16 ], "formula_id": "formula_11", "formula_text": ") i = -η (t) g (t)" }, { "formula_coordinates": [ 4, 94.23, 123.4, 194.9, 60.69 ], "formula_id": "formula_12", "formula_text": "x(t) = arg min x (t) ∆L (t) s.t. 
d i=1 x (t) i = r (t) , x (t) i ∈ {0, 1}(10)" }, { "formula_coordinates": [ 4, 266.25, 222.88, 9.64, 6.99 ], "formula_id": "formula_13", "formula_text": "(t)" }, { "formula_coordinates": [ 4, 115.02, 316.58, 174.11, 16 ], "formula_id": "formula_14", "formula_text": "S (t) i = -g (t) i ∆ θ(t) i -g (t) i θ (t) i(11)" }, { "formula_coordinates": [ 4, 85.23, 434.7, 203.91, 45.17 ], "formula_id": "formula_15", "formula_text": "x(t) i = 1, if S (t) i is in the top-r (t) of S (t) 0, otherwise(12)" }, { "formula_coordinates": [ 4, 367.58, 209.84, 156.83, 12.67 ], "formula_id": "formula_16", "formula_text": "L sr = D(y θ (t) , y θ (t l ) )(13)" }, { "formula_coordinates": [ 4, 313.62, 575.95, 203.31, 102.96 ], "formula_id": "formula_17", "formula_text": "R(f θ (t←t i ) ) ≤ O( |F θ (t) | C n α i ) + inf F θ (t←t i ) R(f θ (t) ) bound(f θ (t←t i ) ) R(f θ (t←t j ) ) ≤ O( |F θ (t) | C n α j ) + inf F θ (t←t j ) R(f θ (t) ) bound(f θ (t←t j ) )" }, { "formula_coordinates": [ 4, 337.45, 763.57, 155.65, 13.64 ], "formula_id": "formula_18", "formula_text": "bound(f θ (t←t i ) ) ≤ bound(f θ (t←t j ) )" }, { "formula_coordinates": [ 5, 70.87, 219.17, 183.8, 28.13 ], "formula_id": "formula_19", "formula_text": "Algorithm 1 PINS Input: Training set D tr = {(x i , y i )} N i=1 ;" }, { "formula_coordinates": [ 5, 76.98, 305.1, 107.62, 23.01 ], "formula_id": "formula_20", "formula_text": "1: for t = 0 to T -1 do 2:" }, { "formula_coordinates": [ 12, 314.15, 423.35, 205.72, 61.31 ], "formula_id": "formula_21", "formula_text": "     r i t ∈ [0, t i ) r f + (r i -r f )( T -t f -t T -t f -t i ) 3 t ∈ [t i , T -t f ) r f otherwise (14" }, { "formula_coordinates": [ 12, 519.87, 475.19, 4.54, 9.46 ], "formula_id": "formula_22", "formula_text": ")" } ]
2023-11-09
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b8", "b7", "b9", "b10", "b11", "b12", "b13", "b13", "b0", "b1", "b3", "b4" ], "table_ref": [], "text": ". Feature selection (FS) is a critical technique in machine learning that identifies informative features within the original high-dimensional data. By removing irrelevant features, FS speeds up the learning process and enhances computational efficiency. In many real-world applications such as image processing, bioinformatics, and text mining [1][2][3], FS techniques are widely used to identify important features, thereby providing some explanations about the results and boosting the learning performance [4][5][6][7].\nWhile numerous FS methods have been proposed in both supervised and unsupervised settings, as several studies highlighted [8,9], the nature of FS is more unsupervised due to the unavailability of task-specific labels in advance. The selected features should be versatile to arbitrary downstream tasks, which motivates us to focus on the unsupervised FS in this study. Related works have recently resorted to neural networks to exploit the nonlinear information within feature space. For example, AEFS [9] uses the group Lasso to regularize the parameters in the first layer of the autoencoder, so as to reconstruct original features based on the restricted use of original features. Another well-known method is CAE [8], which selects features by learning a concrete distribution over the input features. However, most unsupervised deep methods rely on the reconstruction performance to select useful features. On the one hand, if there exists noise in the dataset, the reconstruction performance will be terrible even if useful features are selected, since the noise cannot be reconstructed with these informative features (see our reconstruction experiments on Madelon in Section 4.3). On the other hand, it is difficult to explain why selected features reconstruct the original data well. These issues prompt us to seek a new target for unsupervised deep FS.\nAs the saying goes, \"birds of a feather flock together\", the homophily principle [10] suggests that similar samples tend to be connected in a natural graph structure within real-world data. This graph structure is useful for describing the intrinsic structure of the feature space, and is commonly used in machine learning studies [11,12]. Building upon this graph structure, He et al. [13] introduce the Dirichlet Energy, which they call \"locality preserving power\", as a powerful tool for unsupervised FS that is able to identify informative features reflecting the intrinsic structure of the feature space.\nIn many practical applications, the graph structure is not naturally defined and needs to be constructed manually based on the input features using some similarity measurements. The quality of features affects the quality of the constructed graph. As highlighted in [14], the useless features increase the amount of unstructured information, which hinders the exploration of inherent manifold structure within data points and deteriorates the quality of constructed graph. Therefore, the reference [14] proposes the UDFS method to discard such nuisance features and constructs a k-nearest-neighbor (NN) graph on the selected features using the heat kernel. Despite the good performance of UDFS, constructing graphs using the heat kernel may not reflect the intrinsic structure of the feature space. 
Besides, the sorting algorithms in learning the k-NN graph in UDFS is non-differentiable in neural networks, which restricts its application in downstream networks.\nIn this paper: (1) We propose a deep unsupervised FS network that performs simultaneous feature selection and graph learning by minimizing the Dirichlet Energy, thereby revealing and harnessing the intrinsic structure in the dataset. (2) Within the network, a Unique Feature Selector (UFS) is devised to approximate discrete and distinct feature selection using the Gumbel Softmax technique combined with decomposition algorithms. (3) Moreover, a Differentiable Graph Learner (DGL) is devised based on the Optimal Transport theory, which is capable of obtaining a differentiable k-NN graph that more accurately reflects the intrinsic structure of the data than traditional graph constructing methods. Due to the differentiability, DGL is also theoretically capable of serving as a learnable graph module for other graph-based networks. (4) The entire framework is developed algorithmically. Unlike most deep learning networks with complex components that are tough to decipher, each core module in our framework has an algorithmic and physically interpretable design, which greatly facilitates observing and understanding the network's internal operations during the learning process. (5) Experimental results on both synthetic datasets and real-world datasets demonstrate the effectiveness of our method.\nNotations. For an arbitrary matrix M ∈ R a×b , m i , m i , and m i,j denote the i-th row, the i-th column, and the (i, j)-th entry of M , respectively. Given a vector m ∈ R b , its ℓ 2 -norm is defined as ∥m∥ 2 = b i=1 m 2 i . Based on this, the Frobenius norm of M is defined as ∥M ∥ F = a i=1 ∥m i ∥ 2 2 . When a = b, the trace of M is defined as tr(M ) = a i=1 m i,i . Given two matrices M , N ∈ R a×b , we define their inner product as ⟨M , N ⟩ = a i=1 b j=1 m i,j n i,j . 1 b denotes a b-dimensional column vector with all entries being 1, and I b denotes a b-order identity matrix. Bool(cond) is a boolean operator that equals 1 if cond is true, otherwise it equals 0. Moreover, given a vector m ∈ R b , we define its sorting permutation in ascending order as σ ∈ R b , namely, \nm σ1 ≤ m σ2 ≤ • • • ≤ m σ b . (a) (b) (c)" }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Dirichlet Energy", "publication_ref": [ "b9", "b14", "b15", "b10", "b12" ], "table_ref": [], "text": "Let X ∈ R n×d be the data matrix with the n samples and d-dimensional features. In this paper, we assume that the features have zero means and normalized variances,2 namely, 1 ⊤ n x i = 0 and x ⊤ i x i = 1 for i ∈ {1, . . . , d}. According to the homophily assumption [10], we assume that data X forms an inherent graph structure G with nodes standing for samples and edges standing for their correlations. In G, similar samples are more likely to connect to each other than dissimilar ones. Graph G can be represented with a similarity matrix S ∈ R n×n + , where s i,j denotes the similarity between x i and x j . If s i,j = 0, it means that there is no connection between x i and x j in G, which is common in k-NN graphs since we only consider the local structure of the data space.\nGiven an adjacency matrix S, 3 we define the Laplacian matrix [15] as L S = D -S, where D is a diagonal matrix whose diagonal entries represent the degrees of data points, namely, d i,i = n j=1 s i,j . 
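As a point of reference before the Dirichlet Energy is introduced, the similarity matrix S and the Laplacian L_S = D - S assumed above can be formed, for example, with a k-NN heat-kernel construction of the kind mentioned in Section 1. The sketch below is only illustrative: the bandwidth sigma, the neighborhood size k, and the symmetrization by elementwise maximum are our assumptions, not the paper's differentiable graph learner.

```python
import torch

def knn_heat_kernel_graph(X, k=5, sigma=1.0):
    """Symmetric k-NN similarity matrix S and Laplacian L = D - S from data X (n x d)."""
    dist2 = torch.cdist(X, X).pow(2)                      # pairwise squared distances
    sim = torch.exp(-dist2 / (2 * sigma ** 2))            # heat-kernel similarities
    sim.fill_diagonal_(0.0)
    topk = torch.topk(sim, k=k, dim=1).indices            # each sample's k nearest neighbors
    S = torch.zeros_like(sim).scatter_(1, topk, sim.gather(1, topk))
    S = torch.maximum(S, S.T)                             # symmetrize the adjacency
    L = torch.diag(S.sum(dim=1)) - S                      # L_S = D - S
    return S, L
```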
Based on the Laplacian matrix L S , we introduce the Dirichlet Energy [16] as a powerful tool to identify important features. Specifically, given the Laplacian matrix L S , the Dirichlet Energy of a graph signal v is defined as\nL dir (v) = 1 2 n i=1 n j=1 s i,j (v i -v j ) 2 = v ⊤ L S v.(1)\nIn graph theory, each dimensional feature x i ∈ R n can be seen as a graph signal on G. The Dirichlet Energy in Eq. ( 1) provides a measure of the local smoothness [11] of each feature on graph G, which is small when the nodes that are close to each other on G have similar feature values. Hence, the Dirichlet Energy can be used to identify informative features by evaluating the consistency of the distribution of feature values with the inherent data structure. To demonstrate this, we provide an example in Fig. 1, where we generate a 2-NN graph G including two bubbles, and compose the data X using the two-dimensional coordinates of the graph nodes. Then we set the graph signal v as the first coordinate x 1 and visualize it on G in Fig. 1 Based on the Dirichlet Energy, a well-known FS method called Laplacian Score (LS) 4 is proposed in [13]. However, the Laplacian matrix in LS is precomputed and fixed. If X contains too many irrelevant features, the quality of the graph G will be poor and not reflect the underlying structure. As illustrated in Fig. 1(c), a poor-quality graph will lead to the poor smoothness even if the right feature is selected, this insight motivates us to learn graph and features jointly during the learning process." }, { "figure_ref": [], "heading": "Proposed Method", "publication_ref": [], "table_ref": [], "text": "In this paper, we devise a collaborative neural network driven by the Dirichlet Energy for joint feature and graph learning, as illustrated in Fig. 2. Generally, the proposed framework consists of two 𝝃𝝃 𝑖𝑖 modules: the Unique Feature Selector (UFS) and the Differentiable k-NN Graph Learner (DGL). At the beginning, the input features X are selected with the learnable feature mask F generated by UFS, which is carefully designed to avoid the duplicate feature selection. Based on the selected data X, we measure the distances between different samples, and feed the resulting distance vectors of each sample into DGL to learn their k nearest neighbors. The adaptive graph structure and informative features are learned jointly under the Dirichlet Energy, so as to identify the optimal feature subset that effectively captures the underlying data structure." }, { "figure_ref": [], "heading": "Unique Feature Selector", "publication_ref": [ "b1" ], "table_ref": [], "text": "Based on the original data X, the goal of FS is to identify a feature subset X ∈ R n×m from the original features by minimizing a prespecified target L obj ( X): min\nF L obj ( X) s.t. X = XF , F ∈ {0, 1} d×m , F ⊤ F = I m ,(2)\nwhere m ≤ d denotes the number of selected features, and F ∈ R d×m denotes the selection matrix selecting m features from X. Different from existing methods that use the reconstruction error as L obj ( X), in this paper, we utilize the Dirichlet Energy in Eq. ( 1) for FS as follows:\nL obj ( X) = m i=1 L dir (x i ) = tr( X⊤ L S X).(3)\nGiven the selection number m, L obj updates the network parameters by minimizing the Dirichlet Energy, thereby selecting m features that best reflect the intrinsic structure.\nThe constraints in problem (2) indicate that an ideal result F should be exact and unique. 
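Before unpacking these two requirements, here is a minimal PyTorch sketch of how the objective in Eqs. (1) and (3) can be evaluated for a candidate selection matrix F on a fixed graph; all tensors below are illustrative placeholders rather than the actual training code.

```python
# Minimal sketch: Dirichlet-Energy objective tr(X_sel^T L X_sel) of Eq. (3)
# for selected features X_sel = X F, with a fixed (symmetric) similarity graph.
import torch

def dirichlet_energy(X_sel: torch.Tensor, L: torch.Tensor) -> torch.Tensor:
    """Sum of per-feature Dirichlet energies x_i^T L x_i over selected features."""
    return torch.trace(X_sel.T @ L @ X_sel)

n, d, m = 6, 4, 2
X = torch.randn(n, d)                     # placeholder data matrix
F = torch.zeros(d, m)
F[0, 0] = 1.0                             # select feature 0
F[2, 1] = 1.0                             # select feature 2 (one-hot, orthogonal columns)
S = torch.rand(n, n)
S = 0.5 * (S + S.T)                       # symmetric toy similarity matrix
S.fill_diagonal_(0.0)
L = torch.diag(S.sum(dim=1)) - S          # Laplacian L_S
loss = dirichlet_energy(X @ F, L)         # scalar; smaller = smoother selected features
```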
Exact means the result should exactly be the original features, instead of their linear combinations. Unique means each feature should be selected only once under a given number m. These two properties require F to be a binary and column-full-rank matrix including m orthogonal one-hot column vectors." }, { "figure_ref": [], "heading": "Approximating Discrete Feature Selection", "publication_ref": [ "b7", "b16", "b17", "b7" ], "table_ref": [], "text": "It is difficult to learn a discrete F in neural networks due to its non-differentiable property. Inspired by [8], we propose to learn the discrete distribution using the Gumbel Softmax [17,18] technique:\nf i = softmax((log w i + g i )/T ) with g i,j = -log(-log u i,j ), u i,j ∼ Uniform(0, 1),(4)\nwhere W = [w 1 , w 2 , . . . , w m ] denotes a learnable parameter. The random vector g i consists of d Gumbel-distributed variables g i,j , which is generated with u i,j sampled from Uniform distribution. Based on w i and g i , we obtain the approximated FS vector f i that represents the i-th selected feature. The distribution of f i is controlled by a non-negative temperature parameter T . A smaller value of parameter T will generate a better approximation of the one-hot vector, but will be more likely to be stuck in a poor local minimum. As suggested in [8], we employ the annealing schedule on T by initializing it with a high value and then gradually decreasing it during the learning process." }, { "figure_ref": [], "heading": "Selecting Unique Features Algorithm 1 UFS", "publication_ref": [], "table_ref": [], "text": "1: procedure UFS( F , ϵ)\n2: P ΛP ⊤ = F ⊤ F + ϵI m 3: LL ⊤ = F ⊤ F + ϵI m 4: F = Λ 1/2 P ⊤ 0 (L -1 ) ⊤ 5:\nreturn F 6: end procedure Despite having obtained the approximated FS vectors in neural networks, Eq. ( 4) does not consider the uniqueness requirement of FS. This is because Eq. ( 4) learns each selected feature separately, and does not consider the orthogonal constraint between columns in F , which is prone to result in the repeated selection of the same features. To address this issue, we develop a unique feature selector (UFS) in Algorithm 1, where 0 ∈ R (d-m)×m denotes the zero matrix. First, we add a small enough perturbation ϵI m (ϵ > 0) on F ⊤ F . Next, we perform the eigendecomposition (line 2) and the Cholesky decomposition (line 3) on the perturbed result respectively, and correspondingly obtain the diagonal matrix Λ ∈ R m×m , the orthogonal matrix P ∈ R m×m , and the lower triangle matrix L ∈ R m×m . Based on Λ, P , and L, we obtain the selection matrix F in line 4 and have the following conclusion: Proposition 3.1. Given any real matrix F ∈ R d×m , one can always generate a column-orthogonal matrix F through Algorithm 1.\nThe proof of Proposition 3.1 is provided in Appendix S1. On the one hand, the small perturbation ϵI m guarantees the column-full-rank property of F , thereby avoiding the duplicate selection results. On the other hand, the orthogonality property in Proposition 3.1 facilitates the approximation of discrete FS based on the matrix F . 5 We verify the efficacy of UFS in Section 4.2." }, { "figure_ref": [ "fig_0" ], "heading": "Differentiable k-NN Graph Learner", "publication_ref": [], "table_ref": [], "text": "The existence of noise and irrelevant features may negatively affect the quality of the constructed graph. As depicted in Fig. 1, a low-quality graph structure can significantly perturb the smoothness of features and undermine the performance of feature selection. 
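Before turning to graph learning, the following PyTorch sketch combines the relaxed selector of Eq. (4) with Algorithm 1 as stated above (Appendix S2 describes the slightly different variant used in practice). The logits, the perturbation ε, the temperature, and the shapes are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: Gumbel-Softmax relaxation (Eq. (4)) followed by UFS (Algorithm 1).
import torch

def gumbel_softmax_selector(log_w: torch.Tensor, T: float) -> torch.Tensor:
    """One relaxed one-hot selection vector per column of log_w (shape d x m)."""
    u = torch.rand_like(log_w).clamp_min(1e-10)
    g = -torch.log(-torch.log(u))                 # Gumbel(0, 1) noise
    return torch.softmax((log_w + g) / T, dim=0)

def ufs(F_tilde: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Algorithm 1: map F_tilde (d x m) to a column-orthogonal matrix F."""
    d, m = F_tilde.shape
    A = F_tilde.T @ F_tilde + eps * torch.eye(m)  # perturbed Gram matrix
    lam, P = torch.linalg.eigh(A)                 # A = P diag(lam) P^T
    L = torch.linalg.cholesky(A)                  # A = L L^T (lower triangular)
    Q = torch.cat([torch.diag(lam.sqrt()) @ P.T,
                   torch.zeros(d - m, m)], dim=0) # [Lambda^{1/2} P^T; 0]
    return Q @ torch.linalg.inv(L).T

log_w = torch.randn(20, 2)                        # learnable logits, d = 20, m = 2
F_tilde = gumbel_softmax_selector(log_w, T=0.5)
F = ufs(F_tilde)
print(torch.allclose(F.T @ F, torch.eye(2), atol=1e-4))  # Proposition 3.1 check
```

The final check verifies the column-orthogonality guaranteed by Proposition 3.1.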
Hence, we propose to learn an adaptive graph during the learning process using the selected features." }, { "figure_ref": [], "heading": "Learning an Adaptive k-NN Graph Using Dirichlet Energy", "publication_ref": [ "b18", "b19", "b20", "b21" ], "table_ref": [], "text": "Considering the objective function in Eq. ( 3), a natural way is to learn the similarity matrix S based on the Dirichlet Energy in L obj . However, this may yield a trivial solution where, for sample x i , only the nearest data point can serve as its neighbour with probability 1, while all the other data points will not be its neighbours. To avoid this trivial solution, we propose to learn an adaptive graph by incorporating the Tikhonov regularization [19] of S into the Dirichlet Energy:\nmin S tr( X⊤ L S X) + α 2 ∥S∥ 2 F s.t. S1 n = 1 n , s i,j ≥ 0, s i,i = 0,(5)\nwhere α denotes the trade-off parameter between the Dirichlet Energy and the Tikhonov regularization. Note that each row s i in S can be solved separately, instead of tuning α manually, we model α as a sample-specific parameter α i and determine it algorithmically, which plays an important role in learning k nearest neighbors for each sample. Based on problem (5), we define the distance matrix E with its entries being e i,j = ∥(x ixj )∥ 2 2 , then we solve each row s i in problem (5) separately as\nmin s i 1 2 ∥s i + e i 2α i ∥ 2 2 s.t. s i 1 n = 1, s i,j ≥ 0, s i,i = 0.(6)\nProblem ( 6) can be solved easily by constructing the Lagrangian function and then using the Karush-Kuhn-Tucker(KKT) conditions [20]. By doing so, we obtain the solution of s i,j as\ns i,j = ( 1 k + 1 k e i δ (k) i 2α i - e i,j 2α i ) + with δ (k) i,j = Bool(e i,j ≤ e i,σ k ),(7)\nwhere σ = [σ 1 , . . . , σ n ] denotes the sorting permutation over e i , i.e. e i,σ1 ≤ • • • ≤ e i,σn and δ\n(k) i\ndenotes the selection vector identifying the k minimal values in e i .\nRecall that we aim to learn k nearest neighbors for each sample, which implies that there are only k nonzero elements in s i corresponding to the nearest neighbors. To this end, we determine the trade-off parameters α i such that s i,σ k > 0 and s i,σ k+1 ≤ 0. Then we have:\n1 2 (ke i ξ (k) i -e i δ (k) i ) < α i ≤ 1 2 (ke i ξ (k+1) i -e i δ (k) i ) with ξ (k) i,j = Bool(e i,j = e i,σ k ),(8)\nwhere ξ\n(k) i denotes an indicator vector identifying the k-th minimal value in e i . Setting α i as the maximum and substituting it into Eq. ( 7), we obtain the final solution as:\ns i,σj = e i ξ (k+1) i -e i,σj ke i ξ (k+1) i -e i δ (k) i • Bool(1 ≤ j ≤ k).(9)\nThe detailed derivation of solution ( 9) can be found in Appendix S3. We note that the formulation in problem ( 6) bears similarity to CLR proposed in [21]. In Appendix S4, we discuss the connection between our method and CLR, and highlight the differences between the two w.r.t. the feature utilization and the sorting operation. Remarkably, the k-NN can be obtained easily in CLR using offthe-shelf sorting algorithms, which is not the case for neural networks due to the non-differentiability of sorting algorithms. To address this issue, we propose to transform the k-NN selection into a differentiable operator utilizing the Optimal Transport (OT) [22] technique as follows." 
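For reference, when differentiability is not required, the closed-form weights of Eq. (9) can be computed directly with ordinary sorting. The NumPy sketch below assumes k < n − 1 and no duplicate distances; it only serves to clarify what the differentiable selector introduced next has to reproduce.

```python
# Reference (non-differentiable) implementation of Eq. (9): adaptive k-NN weights
# from squared Euclidean distances over the selected features X_sel.
import numpy as np

def adaptive_knn_graph(X_sel: np.ndarray, k: int) -> np.ndarray:
    n = X_sel.shape[0]
    E = np.square(X_sel[:, None, :] - X_sel[None, :, :]).sum(-1)  # e_ij = ||x_i - x_j||^2
    np.fill_diagonal(E, np.inf)             # forbid self-loops (s_ii = 0)
    S = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(E[i])            # sorting permutation sigma for row i
        e_k1 = E[i, order[k]]               # (k+1)-th smallest distance
        denom = k * e_k1 - E[i, order[:k]].sum()   # > 0 when distances are distinct
        S[i, order[:k]] = (e_k1 - E[i, order[:k]]) / denom
    return S                                # each row sums to 1 with k nonzeros
```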
}, { "figure_ref": [], "heading": "Differentiable k-NN Selector", "publication_ref": [ "b22", "b23", "b24", "b24", "b9", "b9", "b25", "b9", "b24", "b26" ], "table_ref": [], "text": "Let µ = [µ 1 , µ 2 , • • • , µ n1 ] ⊤ and ν = [ν 1 , ν 2 , • • • , ν n2\n] ⊤ be two discrete probability distributions defined on the supports A = {a i } n1 i=1 and B = {b j } n2 j=1 respectively . The goal of OT is to find an optimal transport plan Γ ∈ R n1×n2 between A and B by minimizing the following transport cost:\nmin Γ ⟨C, Γ⟩, s.t. Γ1 n2 = µ, Γ ⊤ 1 n1 = ν, Γ i,j ≥ 0,(10)\nwhere C ∈ R n1×n2 denotes the cost matrix with c i,j = h(a i -b j ) > 0 being the transport cost from a i to b j . It is widely known that the solution of the OT problem between two discrete univariate measures boils down to the sorting permutation [23][24][25]. As stated in [25], if h is convex, the optimal assignment can be achieved by assigning the smallest element in A to b 1 , the second smallest to b 2 , and so forth, which eventually yields the sorting permutation of A.\nGiven a distance vector e, to learn selection vectors δ (k) and ξ (k+1) , we set A = e, and B = [0, 1, . . . , k + 1], and define µ, ν, and c ij as\nµ i = 1 n , ν j = 1/n, 1 ≤ j ≤ k + 1 (n -k -1)/n, j = k + 2 , c ij = (a i -b j ) 2 = (e i -j + 1) 2 . (11\n)\nThe optimal transport plan of problem (10) assigns the i-th smallest value e σi to b i if\n1 ≤ i ≤ k + 1,\nand assigns the remaining nk -1 values in e to b k+2 . Namely, Given a sample p, once we obtain the optimal transport assignment Γ based on e p , we calculate the variables δ (k) p and ξ (k+1) p as follows:\nΓ σi,j = 1/n, if (1 ≤ i ≤ k + 1 and j = i) or (k + 1 < i ≤ n and j = k + 2) 0, if (1 ≤ i ≤ k + 1 and j ̸ = i) or (k + 1 < i ≤ n and j ̸ = k + 2) . (12\n)\nδ (k) p = n k i=1 Γ i , ξ (k+1) p = nΓ k+1 ,(13)\nwhere Γ i and Γ k+1 denote the i-th and the (k + 1)-th column of Γ, respectively. However, problem (10) is still non-differentiable. To address this issue, we consider the following entropy regularized OT problem:\nmin Γ ⟨C, Γ⟩ + γ i,j Γ i,j log Γ i,j s.t. Γ1 k+2 = µ, Γ ⊤ 1 n = ν, Γ i,j ≥ 0, (14\n)\nwhere γ is a hyperparameter. The differentiability of problem ( 14) has been proven using the implicit function theorem (see [26,Theorem 1]). Note that a smaller γ yields a better approximation to the original solution in problem (10), but may compromise the differentiability of problem ( 14) [25]. Problem ( 14) can be solved efficiently using the iterative Bregman projections algorithm [27], the details of which are provided in Appendix S5." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "Our experiments fall into three parts: (1) Toy Experiments: First, we verify the FS ability and the graph learning ability of the proposed method on synthetic datasets. (2) Quantitative Analysis: Next, we compare the performance of selected features in various downstream tasks on real-world datasets and compare our method with other unsupervised FS methods. (3) Ablation Study: Finally, we verify the effect of UFS and DGL by testing the performance of the corresponding ablated variants. We also provide the sensitivity analysis in Appendix S6.6. The implementation details of all experiments can be found in Appendix S6.1." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "For the toy experiments, we generate three 20-dimensional datasets named Blobs, Moons, and Circles (see Appendix S6.1.1 for generation details). 
On top of that, we evaluate the proposed method on twelve real-world datasets that include text, biological, image, and artificial data. Table 1 exhibits the details of these datasets, which include many high-dimensional datasets to test the performance of our method. We standardize all features to zero means and normalize them with the standard deviation." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_4", "fig_4", "fig_4", "fig_4", "fig_4" ], "heading": "Toy Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we consider three synthetic binary datasets with increasing difficulty in separating the two classes. The first two dimensions in each dataset contain useful features that indicate the underlying structure, while the remaining 18 dimensions are random noise sampled from N (0, 1).\nThe presence of noise obscures the inherent structure of the data, which makes the graph learning process highly challenging. To see this, we generate 3-D plots of each dataset using the two useful features and one noise feature, along with their 2-D projections on each plane, as shown in Fig. 3(a). We can see that the noise blurs the boundary between the classes, especially in Moons and Circles. In addition, we use a heat kernel (abbreviated as Heat) with σ = 1 to learn the 5-NN graph on the 20-dimensional features, as shown in Fig. 3(b). The heavy noise obscures the underlying structure of the data points, resulting in a chaotic graph.\nResults. We test our method on the toy datasets for selecting m = 2 target features. The results are presented in Fig. 3(c) and Fig. 3(d), which demonstrate that our method learns the target features and the intrinsic structure simultaneously. Moreover, Fig. 3(d) shows that the proposed network obtains approximately discrete FS vectors.\nLearning Without Unique Feature Selector. In addition, we conduct an ablation study by removing the UFS module from the network and updating F using Eq. (4) only. The results are shown in Fig. 3(e) and Fig. 3(f), where we can see that, without UFS, the ablated model repeatedly selects the same feature on all datasets, which verifies the efficacy of UFS. It is also noteworthy that the nodes in the graph are mostly connected either horizontally or vertically, indicating that DGL is able to learn the local structure relying only on the single selected feature." }, { "figure_ref": [], "heading": "Quantitative Analysis", "publication_ref": [ "b38", "b39", "b7", "b13", "b40", "b8", "b12", "b41", "b42", "b28", "b13", "b1" ], "table_ref": [ "tab_2", "tab_2" ], "text": "Experimental Settings. In this section, we evaluate our method on real-world data. We partition each dataset into training data and testing data using an 8:2 ratio and identify useful features using the training data. We then evaluate the performance of the selected features on three downstream tasks: (1) Classification Accuracy: We train a random forest (RF) [39] classifier with 1000 trees on the selected features and evaluate the prediction accuracy on the testing data. (2) Clustering Accuracy: We cluster the testing set with the selected features using k-means [40], where the cluster number is set to #Classes. Then we align the results with the true labels and calculate the accuracy. (3) Reconstruction RMSE: We build a 1-hidden-layer network with ReLU activation to reconstruct the original data from the selected features. The hidden dimension is set to 3m/2 except for AllFea, where the hidden size is set to d. The network is learned on the training set with the selected features and evaluated on the testing set using the root mean square error (RMSE) normalized by d.\nCompeting methods. We compare our method with four deep methods (CAE [8], DUFS [14], WAST [41], AEFS [9]) and three classical methods (LS [13], RSR [42], UDFS [43]). Besides, we use all features (AllFea) as a baseline. To evaluate the performance of each FS method on the downstream tasks, we average the results over 10 random runs with the feature number m varied in {25, 50, 75, 100, 150, 200, 300}, except for Madelon, where m is varied in {5, 10, 15, 20} since Madelon contains only 20 useful features [29]. Appendix S6.1 provides details of the overall evaluation workflow, including the implementation and the parameter selection of each method.\nResults. Similar to [14], we report the best result w.r.t. m in Table 2; the standard deviations are provided in Appendix S6.2. We also present some reconstruction results on PIX10 obtained by our method in Appendix S6.3. From Table 2, we find that: (1) Our method generally achieves the best performance in all three tasks, indicating that it selects more useful features. (2) In particular, we beat DUFS in all classification and clustering tasks, as well as in most cases of the reconstruction task. Recall that DUFS also selects features based on the Dirichlet Energy; this result shows that model (5) in our method explores a superior graph structure compared to the traditional Heat construction. (3) In classification and clustering, the best performance is mostly achieved by FS methods with fewer features, which verifies the necessity of FS.
(4) AllFea achieves five optima in Reconstruction, which is not surprising since, theoretically, AllFea can be projected onto the original features with an identity matrix. However, on the biological data, the best reconstruction results are achieved by FS methods, probably because the high dimensionality leads to overfitting in the networks. (5) It is noteworthy that the reconstruction of Madelon poses a significant challenge for all FS methods, indicating the difficulty of reconstructing noise even when useful features are selected. This observation supports our claim in Section 1 that selecting features based on reconstruction performance is not always reasonable." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In this experiment, we demonstrate the efficacy of the UFS and the DGL modules through ablation studies on six datasets: Madelon, PCMAC, Jaffe, PIX10, GLIOMA, and PROSTATE." }, { "figure_ref": [ "fig_4" ], "heading": "Effect of FS.", "publication_ref": [], "table_ref": [ "tab_3", "tab_3" ], "text": "Recall that we have demonstrated the efficacy of UFS in Fig. 3. To further verify the efficacy of FS in graph learning, we remove the entire FS module from the framework and learn the graph using all features based on DGL. We also compare with the graph learned by Heat. We cluster the obtained graphs with the spectral clustering (SC) method to verify their quality. We tune the parameter σ of Heat in {1, 2, . . . , 5}, and fix k = 5 for our method and the variant. The results are shown in Table 3, which shows that FS has a positive effect on graph learning compared with \"DGL only\". Besides, in Appendix S6.4, we visualize the learned graphs on COIL-20 and Jaffe using t-SNE, which shows that, using fewer features, we achieve separable graphs that contain fewer inter-class connections than the other methods.\nEffect of DGL. To verify the efficacy of DGL, we remove it from the model and learn the ablated variant with a fixed graph constructed by Heat. Similar to Section 4.3, we first select features with each variant and then evaluate them in downstream tasks. We present the classification results in Table 3 and leave the other results to Appendix S6.5 due to limited space. We can see that our method significantly outperforms the ablated variant, especially on Madelon. This is probably because the noise undermines the graph structure and disrupts the learning of informative features." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "Conclusion. This paper proposes a deep unsupervised FS method that learns informative features and a k-NN graph jointly using the Dirichlet Energy. The network is fully differentiable, and all modules are developed algorithmically to provide versatility and interpretability. We demonstrate the performance of our method with extensive experiments on both synthetic and real-world datasets.\nBroader Impact. This paper presents not only an effective deep FS method, but also a differentiable k-NN graph learning strategy in the context of deep learning. This technique is particularly useful for end-to-end learning scenarios that require graph learning during the training process, and we do notice this practical need in the existing literature; see [44] for example. We believe our study will inspire researchers who work on dimensionality reduction and graph-related research.\nLimitations.
The major limitation of the proposed method is the lack of scalability, for which we do not evaluate our method on large datasets. This is because problem ( 14) requires an iterative solution, requiring storage of all intermediate results for back-propagation. While literature [26] proposes a memory-saving approach by deriving the expression of the derivative of Γ mathematically (see [26,Section 3]), it still requires at least O(nk) space to update all intermediate variables to learn k nearest neighbors for a singe sample, which results in a O(n 2 k) space complexity to learn for all n samples. This is a huge memory cost on large datasets. Although learning in batch seems to be the most straightforward solution, in our method, the neighbours of each sample are determined based on the global information of L S , which has an n × n size. This requires to load the entire batch's information during each iteration, for which we cannot employ subgraph sampling as other graph learning methods did to mitigate memory overhead. Another limitation of the proposed method is the low computational speed, as it is reported that the OT-based sorting can be slow [45].\nThe future developments of the proposed method are twofold. First, we will try more differentiable sorting algorithms to enhance computational speed. For example, reference [45] proposes to construct differentiable sorting operators as projections onto the permutahedron, which achieves a O(n log n) forward complexity and a O(n) backward complexity. Second, due to the large cost of the global relationship in L S , we are considering adopting a bipartite graph [16,46] to make batch learning feasible. This graph introduces a small number of anchor points, which are representative of the entire feature space. By doing this, smoothness can be measured based on the distance between samples to anchors, for which sample-to-sample relationships are no longer needed and the batch learning is enabled. It is worth noting that this idea is still in its conceptual stage, and we will explore its feasibility in upcoming research.\nS1 Proof of Proposition 3.1 Proposition 3.1. Given any real matrix F ∈ R d×m , one can always generate a column-orthogonal matrix F through Algorithm 1.\nProof. We begin our proof by showing the feasibility of Algorithm 1 for any real matrix F , as the eigendecomposition, Cholesky decomposition, and inverse mentioned in the algorithm are subject to specific conditions. For simplicity, we represent A = F ⊤ F + ϵI m , with ϵ > 0. Note that for any nonzero real column vector z ∈ R m , we have\nz ⊤ Az = z ⊤ ( F ⊤ F + ϵI m )z = ( F z) ⊤ ( F z) + ϵz ⊤ z = d i=1 ( f i z) 2 + ϵ m i=1 z 2 i > 0. (S1)\nHence, the matrix A is positive-definite and can be eigendecomposed as A = P ΛP -1 , where P ∈ R m×m is the square matrix whose i-th column p i is the eigenvector of A and Λ ∈ R m×m is the diagonal matrix whose diagonal entries are the corresponding eigenvalues. Moreover, it is easy to show that A is symmetric, for which we have P ⊤ = P -1 . Therefore, we prove that A can be decomposed as A = P ΛP ⊤ (line 2 in Algorithm 1).\nSince A is symmetric and positive-definite, we will be able to perform Cholesky decomposition on A as LL ⊤ = A (line 3 in Algorithm 1), which yields a lower triangular matrix L ∈ R m×m whose diagonal entries are all real and positive. This means that the determinant of L is larger than zero and L is invertible, which provides the feasibility of L -1 (line 4 in Algorithm 1). 
Consequently, the feasibility of Algorithm 1 for any real matrix F is proved.\nNext, we show that F is a column-orthogonal matrix. We denote Q = Λ 1/2 P ⊤ 0 and have:\nQ ⊤ Q = P Λ 1/2 , 0⊤ Λ 1/2 P ⊤ 0 = P ΛP ⊤ = A = LL ⊤ . (S2)\nThen we prove the orthogonality of the matrix F as follows:\nF ⊤ F =L -1 Q ⊤ Q(L -1 ) ⊤ =L -1 LL ⊤ (L -1 ) ⊤ =L -1 LL ⊤ (L ⊤ ) -1 =I m (S3)\nThe proof is completed." }, { "figure_ref": [], "heading": "S2 Discussion on Algorithm 1", "publication_ref": [], "table_ref": [], "text": "Although Algorithm 1 theoretically guarantees the orthogonality of the selection matrix F , utilizing this algorithm directly would bring us back to the problem of how to choose features and how to obtain discrete results. On one hand, the non-uniqueness of eigendecomposition in line 2 prevents us from ensuring the discrete properties of matrix F . On the other hand, it is important to note that in line 4 of Algorithm 1, we aim to construct a column full-rank matrix Q = Λ 1/2 P ⊤ 0 , whereas the construction of Q is also not unique since we can insert dm zero rows at any position within the original matrix Λ 1/2 P ⊤ to achieve the column full-rankness. The placement of these zero rows directly affects the result of feature selection.\nGuided by Algorithm 1, we devise a more empirical approach by calculating F with F = F (L -1 ) ⊤ , which effectively tackles the above two concerns. By doing so, we avoid the non-uniqueness of eigendecomposition, thereby obtaining a solution that is as discrete as F . Additionally, this approach ensures that the information of feature selection in F is retained within the column full-rank matrix.\nActually, F is an ϵ-approximation of column-orthogonal matrix, since we have:\nF ⊤ F = L -1 F ⊤ F (L -1 ) ⊤ = L -1 (A -ϵI m )(L -1 ) ⊤ = L -1 A(L -1 ) ⊤ -ϵL -1 (L -1 ) ⊤ = L -1 LL ⊤ (L -1 ) ⊤ -ϵL -1 (L -1 ) ⊤ = I m -ϵL -1 (L -1 ) ⊤ . (S4)\nThe experimental results in Section 4.2 verify the effectiveness of this approach in successfully avoiding duplicate feature selection." }, { "figure_ref": [], "heading": "S3 Derivation of the Solution to Problem 5", "publication_ref": [ "b19", "b40", "b13", "b7", "b8", "b40", "b12", "b41", "b42" ], "table_ref": [], "text": "Recall that we aim to solve the following problem to learn an adaptive k-NN graph:\nmin S tr( X⊤ L S X) + α 2 ∥S∥ 2 F , s.t. S1 n = 1 n , s i,j ≥ 0, s i,i = 0, = min si,j 1 2 n i=1 n j=1 ∥(x i -xj )∥ 2 2 s i,j + α i s 2 i,j , s.t. n j=1 s i,j = 1, s i,j ≥ 0, s i,i = 0. (S5)\nBased on X, we define the quantity e i,j = ∥(x ixj )∥ 2 2 , then we solve each row in problem (S5) separately as:\nmin si,j 1 2 n j=1 e i,j s i,j + α i s 2 i,j s.t. n j=1 s i,j = 1, s i,j ≥ 0, s i,i = 0, = min si,j 1 2 n j=1 (s i,j + e i,j 2α i ) 2 s.t. n j=1 s i,j = 1, s i,j ≥ 0, s i,i = 0, = min s i 1 2 ∥s i + e i 2α i ∥ 2 2 s.t. s i 1 n = 1, s i,j ≥ 0, s i,i = 0.(S6)\nWe first omit the constraint s i,i = 0 and consider it later, and solve problem (S6) with the first two constraints, the Lagrangian function of which is as follows:\nL(s i , λ i , β i ) = 1 2 ∥s i + e i 2α i ∥ 2 -λ i (s i 1 n -1) - n j=1 s i,j β i,j ,(S7)\nwhere λ i and β i,j are Lagrange multipliers. The derivative of L(s i , λ i , β i ) w.r.t. 
s i,j is:\n∂L ∂s i,j = s i,j + e i,j 2α i -λ i -β i,j(S8)\nThen we have the Karush-Kuhn-Tucker(KKT) conditions [20] of problem (S7) as follows:\n                       s i,j + e i,j 2α i -λ i -β i,j = 0 n j=1 s i,j = 1 s i,j ≥ 0 β i,j ≥ 0 β i,j s i,j = 0.(S9)\nThen we have:\ns i,j = (λ i - e i,j 2α i ) + (S10)\nRecall that there are only k nonzero elements in s i corresponding to the nearest neighbors of sample i, according to the constraint n j=1 s i,j = 1 on k nonzero entries in s i , we have:\nk j=1 (λ i - e i,σj 2α i ) = 1 ⇒ λ i = 1 k + 1 k e i δ (k) i 2α i with δ (k) i,j = Bool(e i,j ≤ e i,σ k ),(S11) where δ\n(k) i denotes the selection vector identifying the k minimal values in e i , and σ = [σ 1 , . . . , σ n ] denotes the sorting permutation over e i , i.e. e i,σ1 ≤ • • • ≤ e i,σn . Without loss of generality, we assume e i has no duplicates, namely e i,σ1 < • • • < e i,σn . Considering the constraint s i,i = 0, since e i,i = 0 being the minimal value in e i holds for all samples, we replace e i,i with a sufficiently large value to skip over this trivial solution.\nSubstituting (S11) into (S10), we have:\ns i,j = ( 1 k + 1 k e i δ (k) i 2α i - e i,j 2α i ) + (S12)\nRecall that there are only k nonzero entries in s i , we have\n1 k + 1 k e i δ (k) i 2α i - e i,σ k 2α i > 0, 1 k + 1 k e i δ (k) i 2α i - e i,σ k+1 2α i ≤ 0. (S13)\nNote that we assume α i > 0, then we have\n1 2 (ke i ξ (k) i -e i δ (k) i ) < α i ≤ 1 2 (ke i ξ (k+1) i -e i δ (k) i ) with ξ (k) i,j = Bool(e i,j = e i,σ k ),(S14) where ξ (k) i\nis an indicator vector identifying the k-th minimal value in e i . According to (S14), we set α i as its maximal value as follows:\nα i = 1 2 (ke i ξ (k+1) i -e i δ (k) i ).(S15)\nSubstituting S15 into Eq. S12, we have:\ns i,j = ( 1 k + 1 k e i δ (k) i 2α i - e i,j2α\ni ) + = ( 2α i + e i δ (k) i -ke i,j 2kα i ) + = ( ke i ξ (k+1) i -e i δ (k) i + e i δ (k) i -ke i,j k(ke i ξ (k+1) i -e i δ (k) i ) ) + = ( e i ξ (k+1) i -e i,j ke i ξ (k+1) i -e i δ (k) i ) + (S16)\nEq. (S16) is used for implementation in our code. Note that since e i,σ1 < • • • < e i,σ k < e i,σ k+1 < . . . e i,σn , we have\nke i ξ (k+1) i -e i δ (k) i = ke i,σ k+1 - k p=1 e i,σp = k p=1 (e i,σ k+1 -e i,σp ) > 0. (S17)\nThen we obtain the solution of s i,j as\ns i,σj =      e i ξ (k+1) i -e i,σj ke i ξ (k+1) i -e i δ (k) i , 1 ≤ j ≤ k 0, otherwise ,(S18)\nwhich is exactly the solution of Eq. ( 9) in our main paper:\ns i,σj = e i ξ (k+1) i -e i,σj ke i ξ (k+1) i -e i δ (k) i • Bool(1 ≤ j ≤ k).(S19)\n• Where to Pay Attention in Sparse Training (WAST) [41]: We use the official code released in https://github.com/GhadaSokar/WAST. The parameter settings were adopted in accordance with Appendix A.1 of the original paper. Specifically, we train each dataset for 10 epochs using stochastic gradient descent with a learning rate of 0.1 for all datasets except for SMK, where the learning rate is set to 0.01. For the parameter λ, we set λ = 0.9 on Madelon and PCMAC, λ = 0.4 on all image datasets, λ = 0.1 on all biological datasets except SMK, and λ = 0.01 on SMK. The remaining parameters are kept as they were in the original paper. • Differentiable Unsupervised Feature Selection (DUFS) [14]: We use the official code released in https://github.com/Ofirlin/DUFS and use the parameter-free loss version of DUFS. For all datasets, we set k = 2, and train the method with SGD with a learning rate of 1 for 10000 epochs according to Appendix S7 in the original paper. 
We set the parameter C = 5 on all datasets except for SRBCT, COIL, and PIX10, where C is set to 2. • Concrete AutoEncoder (CAE) [8]: We use the official code released in https://github. com/mfbalin/Concrete-Autoencoders. Since we could not find too much description about the parameter settings on different datasets in the original paper, we run CAE with default settings in the code. • AutoEncoder Feature Selector (AEFS) [9]: The original code provided by the authors is implemented in MATLAB, and it requires a prohibitively long time to run this method on MATLAB. Therefore, following the treatment in [41], we use the code provided by the authors of CAE in https://github.com/Ofirlin/DUFS (see experiments/generate_comparison_figures.py in their repository). We search the parameter α in {10 -9 , 10 -6 , 10 -3 , 10 0 , 10 3 , 10 6 , 10 9 }, and the size of the hidden layer in {128, 256, 512, 1024}. • Laplacian Score (LS) [13]: We use the official code released in http://www.cad.zju.edu. cn/home/dengcai/Data/ReproduceExp.html#LaplacianScore. We use the heat kernel for graph construction, and fix the size of neighbors k as 5 for all datasets. • Regularized Self-Representation (RSR) [42]: We use the official code released in https://github.com/AISKYEYE-TJU/RSR-PR2015. We search the parameter λ in {10 -9 , 10 -6 , 10 -3 , 10 0 , 10 3 , 10 6 , 10 9 }. • Unsupervised Discriminative Feature Selection (UDFS) [43]: We use the code provided in https://guijiejie.github.io/code.html. We use the heat kernel for graph construction, and fix the size of neighbors k as 5 for all datasets. We search the parameter γ in {10 -9 , 10 -6 , 10 -3 , 10 0 , 10 3 , 10 6 , 10 9 }." }, { "figure_ref": [], "heading": "S6.1.3 Evaluation Workflow", "publication_ref": [], "table_ref": [], "text": "The overall evaluation workflow in Section 4.3 is shown in Algorithm S2, which includes two steps:\n1. Given the dataset X and a prespecified feature number m, we first randomly split the dataset using an 8 : 2 ratio and select features based on the training set X tr by FS method F using different parameters Θ, as shown in Algorithm S1. This allows us to obtain FS results under different parameter candidates θ i , along with the corresponding reduced training data X ′ tr and testing data X ′ te . Based on the reduced data, we perform classification tasks on these datasets with the random forest, thereby obtaining the classification performance for each parameter combination. We select the parameter combination with the best classification performance as the optimal parameter θ * for F. 2. Based on the optimal parameter θ * , we construct the FS model F θ * and evaluate its performance in different downstream tasks. To avoid randomness, we randomly split the dataset 10 times. With each random split, we use the training set X tr to select features, and obtain reduced training set X ′ tr and testing set X ′ te . We use these sets for downstream tasks including classification, clustering, and reconstruction, and obtain corresponding performance metrics. For each downstream task, we calculate the average metric over 10 runs as the performance of F for the given number m.\nFor each dataset, we vary the value of m and follow the aforementioned procedure to obtain the corresponding performance. For each downstream task, we report the best metric and the corresponding feature number m as the performance of the FS method in this downstream task." 
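A minimal scikit-learn sketch of this protocol for a single split is given below; the helper names, the Hungarian-style label alignment, and the assumption of integer class labels 0, ..., C−1 are illustrative simplifications rather than the exact evaluation script (Algorithms S1 and S2 give the authoritative description).

```python
# Minimal sketch: classification and clustering evaluation of a set of selected
# feature indices `idx` on one random 8:2 split.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Align cluster labels to ground truth (Hungarian algorithm), then score."""
    n_cls = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((n_cls, n_cls))
    for t, p in zip(y_true, y_pred):
        cost[t, p] -= 1                       # maximizing matches = minimizing -matches
    rows, cols = linear_sum_assignment(cost)
    mapping = dict(zip(cols, rows))           # predicted cluster -> true class
    return float(np.mean([mapping[p] == t for t, p in zip(y_true, y_pred)]))

def evaluate(X, y, idx, n_classes, seed=0):
    Xtr, Xte, ytr, yte = train_test_split(X[:, idx], y, test_size=0.2, random_state=seed)
    rf = RandomForestClassifier(n_estimators=1000, random_state=seed).fit(Xtr, ytr)
    cla_acc = rf.score(Xte, yte)                                   # classification accuracy
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(Xte)
    clu_acc = clustering_accuracy(yte, km.labels_)                 # clustering accuracy
    return cla_acc, clu_acc
```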
}, { "figure_ref": [], "heading": "Algorithm S1 Param_tuning", "publication_ref": [], "table_ref": [], "text": "Input: Training data (X tr , y tr ), testing data (X te , y te ), selected number m, FS method F, and parameter set Θ = {θ i }. Output: Optimal parameter θ * .\n1: for θ i in Θ do 2:\nξ = F θi (X tr , m); ▷ Determining selected features ξ by F under the parameter θ i .\n3:\nX ′ tr = X tr (:, ξ), X ′ te = X te (:, ξ); ▷ Generating reduced datasets using selected features. Partitioning the dataset into training data (X tr , y tr ) and testing data (X te , y te )." }, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "ξ * = F θ * (X tr , m);\nX ′ tr = X tr (:, ξ * ), X ′ te = X te (:, ξ * );\n7:\nfor T i in T do 8:\nm{i, j} = T i (X ′ tr , y tr , X ′ te , y te ); ▷ Evaluating the performance in downstream tasks. M i = Average(m{i, :}); 13: end for" }, { "figure_ref": [], "heading": "S6.1.4 Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "We employ two metrics in our experiments: the accuracy (ACC) and the root mean square error (RMSE) normalized by d." }, { "figure_ref": [], "heading": "The formulation of ACC is", "publication_ref": [], "table_ref": [], "text": "ACC(y, ŷ) = n i=1 Bool(y i = ŷi ) n ,(S23)\nwhere y ∈ R n denotes the groundtruth labels and ŷ ∈ R n denotes the prediction label.\nThe formulation of RMSE normalized by d is\nRMSE(X, X) = n i=1 ∥x i -xi ∥ 2 2 n × d ,(S24)\nwhere X = [x 1 ; x 2 ; . . . ;\nx n ] and X = [x 1 ; x2 ; . . . ; xn ] denote the original feature matrix and the reconstructed feature matrix, respectively." }, { "figure_ref": [], "heading": "S6.1.5 Formulation of Heat Kernel Method", "publication_ref": [], "table_ref": [], "text": "Here we describe the heat kernel (Heat) method compared in this paper. To implement Heat, we first compute the similarity matrix Ŝ as follows:\nŝi,j = exp(-\n∥x i -x j ∥ 2 2 2σ 2 ),(S25)\nBased on Ŝ, we keep the k-nearest neighbors for each sample. Namely, for each sample i, we obtain its similarity vector s i as\ns i,j = ŝi,j , x j ∈ K(x i ) 0, otherwise ,(S26)\nwhere K(x i ) denotes the k nearest neighbors of x i . When we need to calculate the Laplacian matrix using S (for example, when we analyze the effect of DGL in Section 4.4), we use the symmetrized version of S:\nS = S ⊤ + S 2 (S27)" }, { "figure_ref": [], "heading": "S6.2 Standard Deviations of Quantitative Analysis", "publication_ref": [], "table_ref": [ "tab_5", "tab_6", "tab_7", "tab_2" ], "text": "Table S1, Table S2, and Table S3 exhibit the mean and the standard deviations of the results in Table 2 in Section 4.3. " }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "S6.6 Additional Experiment: Parameter Sensitivity Analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the effect of the parameters of our method, including the learning rate, the number of nearest neighbors k, the hyperparameter γ in the entropy regularized OT problem 14, and the selected feature number m. We use four real-world datasets, including a text dataset PCMAC, an artificial dataset Madelon, an image dataset Jaffe, and a biological dataset PROSTATE.\nThe analysis is based on the optimal parameter obtained in Section 4.3. For each dataset, we fix the values of the remaining parameters and vary the value of one parameter at a time. We retrain our method using the updated parameter combination and evaluate the corresponding FS result with the random forest. 
This allows us to observe the impact of different parameters on the performance of our method. For example, to analyze the effect of the learning rate on Madelon, we keep k, γ, and m at their optimal values, then we vary the learning rate in {10 -4 , 10 -3 , 10 -2 , 10 -1 , 10 0 , 10 1 } (the range as we described in Appendix S6.1.2), and evaluate their corresponding performance in the classification task with the random forest. The overall results are shown in Fig. S3, where the stars represent the results using optimal parameters.\nOne the one hand, we observe that the variations in the learning rate, k, γ have little impact on the performance of our method across different datasets. This suggests that we can set a value within a proper range for these parameters, without the need to determine their values on different datasets. On the other hand, the most sensitive parameter is m, where a higher number of features contributes to better results, aligning with intuition and observations from existing literature. However, it is important to emphasize that more features are not always better. As demonstrated in Section 4.3, the FS methods consistently outperform AllFea in most classification and clustering tasks. Fewer features not only result in lower computational costs but also contribute to faster learning speeds. This suggests the need to adjust the value of m, for example, starting with a relatively small value and gradually increasing it until the model performance begins to decline. Figure S3: Parameter sensitivity analysis using the random forest, where the starred point denotes the performance on the optimal parameter combination." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the National Natural Science Foundation of China under Grant 62276212 and Grant 61872190, in part by the National Key Research and Development Program of China under Grant 2022YFB3303800, and in part by the Key Research and Development Program of Jiangsu Province under Grant BE2021093." }, { "figure_ref": [], "heading": "S4 Connection to CLR", "publication_ref": [ "b20" ], "table_ref": [], "text": "We note that a similar formulation to problem 6 has been proposed in [21] (coined CLR), which expects closer samples to have higher similarity. It aligns with the notion of \"smoothness\" as we mentioned in Section 2. However, our method differs from CLR in at least two crucial aspects: Firstly, CLR measures the distance quantity e i,j across all original features, making it more sensitive to the noise and irrelevant features in the original data. In contrast, our approach learns the graph structure using only informative features, resulting in enhanced robustness against noisy features. Secondly, it is important to note that CLR is proposed in the context of traditional machine learning, where optimization is straightforward, as δ (k) i and ξ (k+1) i can be updated using off-the-shelf sorting algorithms. Different from CLR, problem 6 is introduced in the realm of deep learning, where conventional sorting algorithms are non-differentiable and not applicable. This poses a huge challenge in learning an adaptive k-NN graph in neural networks. To overcome this challenge, we proposed to transform the top-k selection into a differentiable operator using the Optimal Transport technique." 
}, { "figure_ref": [], "heading": "S5 Iterative Bregman Projections", "publication_ref": [ "b26" ], "table_ref": [], "text": "In this paper, we employ the iterative Bregman projections [27] algorithm to solve the following problem:\nWe first initialize two variables u ∈ R k+2 and K ∈ R n×(k+2) as u i = 1/(k+2) and k i,j = e -ci,j /γ , respectively. Then based on the following formulations, we repeatedly updating u and v for ζ iterations:\nwhere the division in Eq. ( S21) is element-wise. In this paper, we set ζ as 200. After updating ζ iteration, we obtain the optimal transport plan Γ as" }, { "figure_ref": [], "heading": "S6 Supplementary Experimental Details S6.1 Implementation Details", "publication_ref": [], "table_ref": [], "text": "All experiments are conducted on a server equipped with an RTX 3090 GPU and an Intel Xeon Gold 6240 (18C36T) @ 2.6GHz x 2 (36 cores in total) CPU." }, { "figure_ref": [], "heading": "S6.1.1 Synthetic Datasets", "publication_ref": [ "b46" ], "table_ref": [], "text": "We generate three datasets for toy experiments: (1) Blobs, (2) Moons, and (3) Circles. For each dataset, we generate the first two features using the scikit-learn library [47] by adding noise sampled from N (0, 0.1). Additionally, we generate 18-dimensional noise features sampled from N (0, 1)." }, { "figure_ref": [], "heading": "S6.1.2 Competing Methods", "publication_ref": [ "b47", "b25" ], "table_ref": [], "text": "The implementation details of different methods, as well as their corresponding parameter selections are provided below:\n• Our Method: Our method is implemented using the PyTorch framework [48]. We train our method using the Adam optimizer for 1000 epochs on all datasets, with the learning rate searched from {10 -4 , 10 -3 , 10 -2 , 10 -1 , 10 0 , 10 1 }. We search the parameter γ in {10 -3 , 10 -2 , 10 -1 } and the parameter k in {1, 2, 3, 4, 5}. Note that the implementation of differentiable top-k selector is based on the code provided by [26] in https://papers.nips.cc/paper_files/paper/2020/hash/ ec24a54d62ce57ba93a531b460fa8d18-Abstract.html, which provides a more memory-saving backward implementation compared to directly using the autograd method in PyTorch." }, { "figure_ref": [], "heading": "S6.3 Reconstruction Results on PIX10", "publication_ref": [], "table_ref": [], "text": "Fig. S1 presents the reconstruction results on PIX10 achieved by our method. We can see that our method is able to reconstruct the original images of 10000 dimensions reasonably well with only 300 features. Notably, the reconstructions capture important appearance details, including reflections, hair tips, and facial features, demonstrating the effectiveness of our method." }, { "figure_ref": [], "heading": "Reconstructed Images", "publication_ref": [], "table_ref": [], "text": "Groundtruth Images " }, { "figure_ref": [], "heading": "S6.4 Graph Visualization", "publication_ref": [], "table_ref": [], "text": "We visualize COIL-20 and Jaffe with t-SNE, and plot the graph structures obtained by different methods. The results are shown in Fig. S2, where red lines represent the intra-class connections and blue lines represent inter-class connections. Unlike \"Heat\" and \"DGL only\" that use the original features for visualization, we visualize the data points using only selected features. Remarkably, our method successfully achieves separable structures using t-SNE, demonstrating its ability to capture features that reflect the intrinsic data structure. " } ]
Feature selection (FS) plays an important role in machine learning, which extracts important features and accelerates the learning process. In this paper, we propose a deep FS method that simultaneously conducts feature selection and differentiable k-NN graph learning based on the Dirichlet Energy. The Dirichlet Energy identifies important features by measuring their smoothness on the graph structure, and facilitates the learning of a new graph that reflects the inherent structure in new feature subspace. We employ Optimal Transport theory to address the non-differentiability issue of learning k-NN graphs in neural networks, which theoretically makes our method applicable to other graph neural networks for dynamic graph learning. Furthermore, the proposed framework is interpretable, since all modules are designed algorithmically. We validate the effectiveness of our model with extensive experiments on both synthetic and real-world datasets.
Joint Feature and Differentiable k-NN Graph Learning using Dirichlet Energy
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the Dirichlet Energy on various graph structures and graph signals. Blue points, black edges, and red bars represent nodes, connections, and signal values on nodes, respectively. Upside bars represent positive values, and downside bars represent negative values.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a). In Fig.1(b) we change v to a random noise vector. While in Fig.1(c), we change the graph structure to a random 2-NN graph. We compute the Laplacian matrix L S of each figure and present the corresponding Dirichlet Energy in the figures. We can see that Fig.1(a) achieves the best smoothness, whereas both Fig.1(b) and Fig.1(c) have poor smoothness due to a mismatch between the graph signal and the graph structure.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 : ( 1 )21Figure 2: (1) Top Panel: Overview of the proposed framework, where smiley faces denote the value 1 representing that the feature is selected, while sad faces denote the value 0 representing that the feature is unused. (2) Bottom Left Panel: Illustration of the Unique Feature Selector (UFS), where green bars denote the value distributions of different vectors. (3) Bottom Right Panel: Illustration of the Differentiable k-NN Graph Learner (DGL), where the \"Differentiable k-NN Selector\" in deep blue shows how to learn k nearest neighbors with the Optimal Transport theory.", "figure_data": "", "figure_id": "fig_2", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Toy results on synthetic datasets, where higher similarities are presented with thicker connections in k-NN graphs, and we only present the connections to 5-NN for each sample. Blue bars and orange bars represent the distribution of f 1 and f 2 in the FS matrix F , respectively.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "end for 11: for T i in T do 12:", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "𝑤𝑤 11 𝑤𝑤 21 𝑤𝑤 31 𝑤𝑤 41 𝑤𝑤 𝑑𝑑1 𝑤𝑤 12 𝑤𝑤 22 𝑤𝑤 32 𝑤𝑤 42 𝑤𝑤 𝑑𝑑2 𝑤𝑤 1𝑚𝑚 𝑤𝑤 2𝑚𝑚 𝑤𝑤 3𝑚𝑚 𝑤𝑤 4𝑚𝑚 𝑤𝑤 𝑑𝑑𝑚𝑚", "figure_data": "Feature SelectionDGL𝒙𝒙 1� 𝒙𝒙 1𝒔𝒔 1…𝒙𝒙 2 𝒙𝒙 3� 𝒙𝒙 2Distance Measuring𝒔𝒔 2…𝒙𝒙 4� 𝑿𝑿𝑬𝑬𝒔𝒔 3……𝒙𝒙 𝑑𝑑� 𝒙𝒙 𝑚𝑚… 𝒔𝒔 𝑛𝑛Gumbel…softmaxUFSf𝑓11 f𝑓21 f𝑓31 f𝑓 41…f𝑓𝑑𝑑1𝜖𝜖𝑰𝑰𝑓𝑓 11 𝑓𝑓 21 𝑓𝑓 31 𝑓𝑓 41…𝑓𝑓 𝑑𝑑1Gumbel� 𝑭𝑭𝑭𝑭……softmaxf𝑓12 f𝑓22 f𝑓32 f𝑓 42…f𝑓𝑑𝑑2𝑓𝑓 12 𝑓𝑓 22 𝑓𝑓 32 𝑓𝑓 42 ……𝑓𝑓 𝑑𝑑2𝝃𝝃 𝑖𝑖 (1)Gumbel…softmax… 𝑓𝑓 1𝑚𝑚 𝑓𝑓 2𝑚𝑚 𝑓𝑓 3𝑚𝑚 𝑓𝑓 4𝑚𝑚 𝑓𝑓 𝑑𝑑𝑚𝑚f𝑓1𝑚𝑚 f𝑓2𝑚𝑚 f𝑓3𝑚𝑚 f𝑓 4𝑚𝑚…f𝑓𝑑𝑑𝑚𝑚-𝟏𝟏…", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Details of real-world data.", "figure_data": "TypeDataset#Samples #Features #ClassesTypeDataset#Samples #Features #ClassesTextPCMAC [28]194332892ArtificialMadelon [29]26005002GLIOMA [30]5044344COIL-20 [31]1440102420LUNG [32]20333125Yale [33]165102415BiologicalPROSTATE [34] SRBCT [36]102 835966 23082 4ImageJaffe [35] PIX10 [14]213 100676 1000010 10SMK [37]187199932warpPIE10P [38]210242010", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results in downstream tasks over 10 runs on optimal m that is shown in the bracket. \"Cla.\", \"Clu.\" and \"Rec.\" are short for classification, clustering, and reconstruction, respectively.", "figure_data": "TaskDatasetCAEDUFSWASTAEFSLSRSRUDFSOurAllFeaCla. 
Table 2 (tab_2): classification accuracy (ACC ↑), clustering accuracy (ACC ↑), and reconstruction error (RMSE ↓) of the compared feature selection methods on Madelon, PCMAC, COIL-20, Yale, Jaffe, PIX10, warpPIE10P, GLIOMA, LUNG, PROSTATE, SRBCT, and SMK, with the number of selected features in parentheses, plus the average ranking and the number of top-1 results per method.
Table 3 (tab_3): "Results in ablation studies, where 'w/o' is short for 'without'." Effect of FS (clustering with SC) comparing Heat, DGL only, and Our, and effect of DGL (classification with RF) comparing w/o DGL and Our, on Madelon, PCMAC, Jaffe, PIX10, GLIOMA, and PROSTATE.
Algorithm S2 (tab_4): "Overall evaluation workflow." Input: original dataset (X, y), selected feature number m, FS method F, parameter set Θ = {θ_i}, and downstream tasks T = {T_i}. Output: performance M = {M_i} in downstream tasks. The data are split into training and testing parts, selected features are evaluated with a random forest classifier, and the parameter θ* giving the best accuracy ACC* is retained.
Table S1 (tab_5): "Classification results with standard deviations." CAE, DUFS, WAST, AEFS, LS, RSR, UDFS, Our, and AllFea on the twelve datasets above.
Table S2 (tab_6): "Clustering results with standard deviations." Same methods and datasets.
Table S3 (tab_7): "Reconstruction results with standard deviations." Same methods and datasets.
Lei Xu; Lei Chen; Rong Wang; Feiping Nie; Xuelong Li
[ { "authors": "Y Li; C Luo; S M Chung", "journal": "IEEE Trans. Knowl. Data Eng", "ref_id": "b0", "title": "Text clustering with feature selection by using statistical data", "year": "2008" }, { "authors": "C He; K Li; Y Zhang; Y Zhang; Z Guo; X Li; M Danelljan; F Yu", "journal": "", "ref_id": "b1", "title": "Strategic preys make acute predators: Enhancing camouflaged object detectors by generating camouflaged objects", "year": "2023" }, { "authors": "H.-J Yu; D.-S Huang", "journal": "IEEE/ACM Trans. Comput. Biol. Bioinformatics", "ref_id": "b2", "title": "Normalized feature vectors: A novel alignment-free sequence comparison method based on the numbers of adjacent amino acids", "year": "2013" }, { "authors": "C He; K Li; Y Zhang; L Tang; Y Zhang; Z Guo; X Li", "journal": "", "ref_id": "b3", "title": "Camouflaged object detection with feature decomposition and edge reconstruction", "year": "2023" }, { "authors": "Z Sun; G Bebis; R Miller", "journal": "Pattern Recognit", "ref_id": "b4", "title": "Object detection using feature subset selection", "year": "2004" }, { "authors": "L Xu; R Wang; F Nie; X Li", "journal": "", "ref_id": "b5", "title": "Efficient top-k feature selection using coordinate descent method", "year": "2023" }, { "authors": "C He; K Li; Y Zhang; G Xu; L Tang; Y Zhang; Z Guo; X Li", "journal": "", "ref_id": "b6", "title": "Weakly-supervised concealed object segmentation with sam-based pseudo labeling and multi-scale feature grouping", "year": "2023" }, { "authors": "M F Balın; A Abid; J Zou", "journal": "", "ref_id": "b7", "title": "Concrete autoencoders: Differentiable feature selection and reconstruction", "year": "2019" }, { "authors": "K Han; Y Wang; C Zhang; C Li; C Xu", "journal": "", "ref_id": "b8", "title": "Autoencoder inspired unsupervised feature selection", "year": "2018" }, { "authors": "M Mcpherson; L Smith-Lovin; J M Cook", "journal": "Annu. Rev. Sociol", "ref_id": "b9", "title": "Birds of a feather: Homophily in social networks", "year": "2001" }, { "authors": "D I Shuman; S K Narang; P Frossard; A Ortega; P Vandergheynst", "journal": "IEEE Signal Process. Mag", "ref_id": "b10", "title": "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", "year": "2013" }, { "authors": "R Wang; P Wang; D Wu; Z Sun; F Nie; X Li", "journal": "IEEE Trans. Neural Netw. Learn. Syst", "ref_id": "b11", "title": "Multi-view and multi-order structured graph learning", "year": "2023" }, { "authors": "X He; D Cai; P Niyogi", "journal": "", "ref_id": "b12", "title": "Laplacian score for feature selection", "year": "2005" }, { "authors": "O Lindenbaum; U Shaham; E Peterfreund; J Svirsky; N Casey; Y Kluger", "journal": "", "ref_id": "b13", "title": "Differentiable unsupervised feature selection based on a gated laplacian", "year": "2021" }, { "authors": "U ; Von Luxburg", "journal": "Stat. 
Comput", "ref_id": "b14", "title": "A tutorial on spectral clustering", "year": "2007" }, { "authors": "F R Chung", "journal": "American Mathematical Soc", "ref_id": "b15", "title": "Spectral graph theory", "year": "1997" }, { "authors": "E Jang; S Gu; B Poole", "journal": "", "ref_id": "b16", "title": "Categorical reparameterization with gumbel-softmax", "year": "2017" }, { "authors": "C J Maddison; A Mnih; Y W Teh", "journal": "", "ref_id": "b17", "title": "The concrete distribution: A continuous relaxation of discrete random variables", "year": "2017" }, { "authors": "A Tikhonov; V Arsenin", "journal": "Winston", "ref_id": "b18", "title": "Solutions of Ill-posed Problems", "year": "1977" }, { "authors": "S P Boyd; L Vandenberghe", "journal": "Cambridge University Press", "ref_id": "b19", "title": "Convex Optimization", "year": "2014" }, { "authors": "F Nie; X Wang; M Jordan; H Huang", "journal": "", "ref_id": "b20", "title": "The constrained laplacian rank algorithm for graph-based clustering", "year": "2016" }, { "authors": "L V Kantorovich", "journal": "Manage. Sci", "ref_id": "b21", "title": "Mathematical methods of organizing and planning production", "year": "1960" }, { "authors": "F Santambrogio", "journal": "Birkhäuser", "ref_id": "b22", "title": "Optimal Transport for Applied Mathematicians", "year": "2015" }, { "authors": "G Peyré; M Cuturi", "journal": "", "ref_id": "b23", "title": "Computational optimal transport", "year": "2020" }, { "authors": "M Cuturi; O Teboul; J.-P Vert", "journal": "", "ref_id": "b24", "title": "Differentiable ranking and sorting using optimal transport", "year": "2019" }, { "authors": "Y Xie; H Dai; M Chen; B Dai; T Zhao; H Zha; W Wei; T Pfister", "journal": "", "ref_id": "b25", "title": "Differentiable top-k with optimal transport", "year": "2020" }, { "authors": "J.-D Benamou; G Carlier; M Cuturi; L Nenna; G Peyré", "journal": "SIAM J. Sci. Comput", "ref_id": "b26", "title": "Iterative bregman projections for regularized transportation problems", "year": "2015" }, { "authors": "K Lang", "journal": "Morgan Kaufmann", "ref_id": "b27", "title": "Newsweeder: Learning to filter netnews", "year": "1995" }, { "authors": "I Guyon; J Li; T Mader; P A Pletscher; G Schneider; M Uhr", "journal": "Pattern Recognit. Lett", "ref_id": "b28", "title": "Competitive baseline methods set new standards for the nips 2003 feature selection benchmark", "year": "2007" }, { "authors": "C L Nutt; D R Mani; R A Betensky; P Tamayo; J G Cairncross; C Ladd; U Pohl; C Hartmann; M E Mclaughlin; T T Batchelor; P M Black; A Deimling; S L Pomeroy; T R Golub; D N Louis", "journal": "Cancer Res", "ref_id": "b29", "title": "Gene Expression-based Classification of Malignant Gliomas Correlates Better with Survival than Histological Classification1", "year": "2003" }, { "authors": "S A Nene; S K Nayar; H Murase", "journal": "", "ref_id": "b30", "title": "Columbia object image library (coil-100)", "year": "1996" }, { "authors": "H Peng; F Long; C H Q Ding", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b31", "title": "Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy", "year": "2005" }, { "authors": "D Cai; C Zhang; X He", "journal": "", "ref_id": "b32", "title": "Unsupervised feature selection for multi-cluster data", "year": "2010" }, { "authors": "I Petricoin; Emanuel F ; D K Ornstein; C P Paweletz; A Ardekani; P S Hackett; B A Hitt; A Velassco; C Trucco; L Wiegand; K Wood; C B Simone; P J Levine; W M Linehan; M R Emmert-Buck; S M Steinberg; E C Kohn; L A Liotta", "journal": "J. Natl. Cancer Inst", "ref_id": "b33", "title": "Serum Proteomic Patterns for Detection of Prostate Cancer", "year": "2002" }, { "authors": "M J Lyons; J Budynek; S Akamatsu", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b34", "title": "Automatic classification of single facial images", "year": "1999" }, { "authors": "J Khan; J S Wei; M Ringner; L H Saal; M Ladanyi; F Westermann; F Berthold; M Schwab; C R Antonescu; C Peterson", "journal": "Nat. Med", "ref_id": "b35", "title": "Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks", "year": "2001" }, { "authors": "A Spira; J E Beane; V Shah; K Steiling; G Liu; F Schembri; S Gilman; Y.-M Dumas; P Calner; P Sebastiani", "journal": "Nat. Med", "ref_id": "b36", "title": "Airway epithelial gene expression in the diagnostic evaluation of smokers with suspect lung cancer", "year": "2007" }, { "authors": "T Sim; S Baker; M Bsat", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b37", "title": "The cmu pose, illumination, and expression database", "year": "2003" }, { "authors": "L Breiman", "journal": "Mach. Learn", "ref_id": "b38", "title": "Random forests", "year": "2001" }, { "authors": "J Macqueen", "journal": "", "ref_id": "b39", "title": "Classification and analysis of multivariate observations", "year": "1967" }, { "authors": "G Sokar; Z Atashgahi; M Pechenizkiy; D C Mocanu", "journal": "", "ref_id": "b40", "title": "Where to pay attention in sparse training for feature selection?", "year": "2022" }, { "authors": "P Zhu; W Zuo; L Zhang; Q Hu; S C Shiu", "journal": "Pattern Recognit", "ref_id": "b41", "title": "Unsupervised feature selection by regularized self-representation", "year": "2015" }, { "authors": "Y Yang; H T Shen; Z Ma; Z Huang; X Zhou", "journal": "", "ref_id": "b42", "title": "L2,1-norm regularized discriminative feature selection for unsupervised learning", "year": "2011" }, { "authors": "S Miao; Y Luo; M Liu; P Li", "journal": "", "ref_id": "b43", "title": "Interpretable geometric deep learning via learnable randomness injection", "year": "2023" }, { "authors": "M Blondel; O Teboul; Q Berthet; J Djolonga", "journal": "", "ref_id": "b44", "title": "Fast differentiable sorting and ranking", "year": "2020" }, { "authors": "W Liu; J He; S.-F Chang", "journal": "", "ref_id": "b45", "title": "Large graph construction for scalable semi-supervised learning", "year": "2010" }, { "authors": "F Pedregosa; G Varoquaux; A Gramfort; V Michel; B Thirion; O Grisel; M Blondel; P Prettenhofer; R Weiss; V Dubourg; J Vanderplas; A Passos; D Cournapeau; M Brucher; M Perrot; E Duchesnay", "journal": "J. Mach. Learn. 
Res", "ref_id": "b46", "title": "Scikit-learn: Machine learning in Python", "year": "2011" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "", "ref_id": "b47", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 384.89, 695.05, 106.98, 12.72 ], "formula_id": "formula_0", "formula_text": "m σ1 ≤ m σ2 ≤ • • • ≤ m σ b . (a) (b) (c)" }, { "formula_coordinates": [ 3, 210.92, 369.89, 293.74, 30.32 ], "formula_id": "formula_1", "formula_text": "L dir (v) = 1 2 n i=1 n j=1 s i,j (v i -v j ) 2 = v ⊤ L S v.(1)" }, { "formula_coordinates": [ 4, 187.25, 534.64, 317.42, 17.82 ], "formula_id": "formula_2", "formula_text": "F L obj ( X) s.t. X = XF , F ∈ {0, 1} d×m , F ⊤ F = I m ,(2)" }, { "formula_coordinates": [ 4, 219.15, 610.86, 285.52, 30.32 ], "formula_id": "formula_3", "formula_text": "L obj ( X) = m i=1 L dir (x i ) = tr( X⊤ L S X).(3)" }, { "formula_coordinates": [ 5, 130.16, 119.56, 374.5, 13.89 ], "formula_id": "formula_4", "formula_text": "f i = softmax((log w i + g i )/T ) with g i,j = -log(-log u i,j ), u i,j ∼ Uniform(0, 1),(4)" }, { "formula_coordinates": [ 5, 370.38, 270.89, 129.08, 64.14 ], "formula_id": "formula_5", "formula_text": "2: P ΛP ⊤ = F ⊤ F + ϵI m 3: LL ⊤ = F ⊤ F + ϵI m 4: F = Λ 1/2 P ⊤ 0 (L -1 ) ⊤ 5:" }, { "formula_coordinates": [ 5, 174.29, 645.23, 330.38, 23.54 ], "formula_id": "formula_6", "formula_text": "min S tr( X⊤ L S X) + α 2 ∥S∥ 2 F s.t. S1 n = 1 n , s i,j ≥ 0, s i,i = 0,(5)" }, { "formula_coordinates": [ 6, 197.44, 115.64, 307.22, 24.8 ], "formula_id": "formula_7", "formula_text": "min s i 1 2 ∥s i + e i 2α i ∥ 2 2 s.t. s i 1 n = 1, s i,j ≥ 0, s i,i = 0.(6)" }, { "formula_coordinates": [ 6, 169.74, 175.47, 334.93, 26.36 ], "formula_id": "formula_8", "formula_text": "s i,j = ( 1 k + 1 k e i δ (k) i 2α i - e i,j 2α i ) + with δ (k) i,j = Bool(e i,j ≤ e i,σ k ),(7)" }, { "formula_coordinates": [ 6, 492.87, 208.65, 10.63, 14.07 ], "formula_id": "formula_9", "formula_text": "(k) i" }, { "formula_coordinates": [ 6, 122.19, 278.9, 382.47, 23.54 ], "formula_id": "formula_10", "formula_text": "1 2 (ke i ξ (k) i -e i δ (k) i ) < α i ≤ 1 2 (ke i ξ (k+1) i -e i δ (k) i ) with ξ (k) i,j = Bool(e i,j = e i,σ k ),(8)" }, { "formula_coordinates": [ 6, 209.75, 339.29, 294.92, 30.72 ], "formula_id": "formula_11", "formula_text": "s i,σj = e i ξ (k+1) i -e i,σj ke i ξ (k+1) i -e i δ (k) i • Bool(1 ≤ j ≤ k).(9)" }, { "formula_coordinates": [ 6, 108, 481.06, 219.55, 12.05 ], "formula_id": "formula_12", "formula_text": "Let µ = [µ 1 , µ 2 , • • • , µ n1 ] ⊤ and ν = [ν 1 , ν 2 , • • • , ν n2" }, { "formula_coordinates": [ 6, 200.57, 524.11, 304.1, 17 ], "formula_id": "formula_13", "formula_text": "min Γ ⟨C, Γ⟩, s.t. Γ1 n2 = µ, Γ ⊤ 1 n1 = ν, Γ i,j ≥ 0,(10)" }, { "formula_coordinates": [ 6, 117.96, 639.97, 382.55, 24.05 ], "formula_id": "formula_14", "formula_text": "µ i = 1 n , ν j = 1/n, 1 ≤ j ≤ k + 1 (n -k -1)/n, j = k + 2 , c ij = (a i -b j ) 2 = (e i -j + 1) 2 . (11" }, { "formula_coordinates": [ 6, 500.52, 645.5, 4.15, 12 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 6, 445.14, 671.77, 60.1, 9.96 ], "formula_id": "formula_16", "formula_text": "1 ≤ i ≤ k + 1," }, { "formula_coordinates": [ 6, 123.88, 700.34, 376.64, 23.86 ], "formula_id": "formula_17", "formula_text": "Γ σi,j = 1/n, if (1 ≤ i ≤ k + 1 and j = i) or (k + 1 < i ≤ n and j = k + 2) 0, if (1 ≤ i ≤ k + 1 and j ̸ = i) or (k + 1 < i ≤ n and j ̸ = k + 2) . 
(12" }, { "formula_coordinates": [ 6, 500.52, 705.86, 4.15, 12 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 7, 231.61, 206.99, 273.06, 30.32 ], "formula_id": "formula_19", "formula_text": "δ (k) p = n k i=1 Γ i , ξ (k+1) p = nΓ k+1 ,(13)" }, { "formula_coordinates": [ 7, 159.55, 286.88, 340.96, 22.31 ], "formula_id": "formula_20", "formula_text": "min Γ ⟨C, Γ⟩ + γ i,j Γ i,j log Γ i,j s.t. Γ1 k+2 = µ, Γ ⊤ 1 n = ν, Γ i,j ≥ 0, (14" }, { "formula_coordinates": [ 7, 500.52, 286.88, 4.15, 12 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 14, 128.01, 193.46, 376.65, 30.32 ], "formula_id": "formula_22", "formula_text": "z ⊤ Az = z ⊤ ( F ⊤ F + ϵI m )z = ( F z) ⊤ ( F z) + ϵz ⊤ z = d i=1 ( f i z) 2 + ϵ m i=1 z 2 i > 0. (S1)" }, { "formula_coordinates": [ 14, 181.46, 394.71, 323.21, 21.82 ], "formula_id": "formula_23", "formula_text": "Q ⊤ Q = P Λ 1/2 , 0⊤ Λ 1/2 P ⊤ 0 = P ΛP ⊤ = A = LL ⊤ . (S2)" }, { "formula_coordinates": [ 14, 251.45, 445.77, 253.22, 58.83 ], "formula_id": "formula_24", "formula_text": "F ⊤ F =L -1 Q ⊤ Q(L -1 ) ⊤ =L -1 LL ⊤ (L -1 ) ⊤ =L -1 LL ⊤ (L ⊤ ) -1 =I m (S3)" }, { "formula_coordinates": [ 15, 217.62, 91.81, 287.05, 79.73 ], "formula_id": "formula_25", "formula_text": "F ⊤ F = L -1 F ⊤ F (L -1 ) ⊤ = L -1 (A -ϵI m )(L -1 ) ⊤ = L -1 A(L -1 ) ⊤ -ϵL -1 (L -1 ) ⊤ = L -1 LL ⊤ (L -1 ) ⊤ -ϵL -1 (L -1 ) ⊤ = I m -ϵL -1 (L -1 ) ⊤ . (S4)" }, { "formula_coordinates": [ 15, 141.3, 256.03, 363.37, 55.2 ], "formula_id": "formula_26", "formula_text": "min S tr( X⊤ L S X) + α 2 ∥S∥ 2 F , s.t. S1 n = 1 n , s i,j ≥ 0, s i,i = 0, = min si,j 1 2 n i=1 n j=1 ∥(x i -xj )∥ 2 2 s i,j + α i s 2 i,j , s.t. n j=1 s i,j = 1, s i,j ≥ 0, s i,i = 0. (S5)" }, { "formula_coordinates": [ 15, 172.03, 354.7, 332.64, 94.5 ], "formula_id": "formula_27", "formula_text": "min si,j 1 2 n j=1 e i,j s i,j + α i s 2 i,j s.t. n j=1 s i,j = 1, s i,j ≥ 0, s i,i = 0, = min si,j 1 2 n j=1 (s i,j + e i,j 2α i ) 2 s.t. n j=1 s i,j = 1, s i,j ≥ 0, s i,i = 0, = min s i 1 2 ∥s i + e i 2α i ∥ 2 2 s.t. s i 1 n = 1, s i,j ≥ 0, s i,i = 0.(S6)" }, { "formula_coordinates": [ 15, 180.79, 487.61, 323.88, 30.32 ], "formula_id": "formula_28", "formula_text": "L(s i , λ i , β i ) = 1 2 ∥s i + e i 2α i ∥ 2 -λ i (s i 1 n -1) - n j=1 s i,j β i,j ,(S7)" }, { "formula_coordinates": [ 15, 243.52, 545.53, 261.15, 23.89 ], "formula_id": "formula_29", "formula_text": "∂L ∂s i,j = s i,j + e i,j 2α i -λ i -β i,j(S8)" }, { "formula_coordinates": [ 15, 245.95, 590.39, 258.72, 99.35 ], "formula_id": "formula_30", "formula_text": "                       s i,j + e i,j 2α i -λ i -β i,j = 0 n j=1 s i,j = 1 s i,j ≥ 0 β i,j ≥ 0 β i,j s i,j = 0.(S9)" }, { "formula_coordinates": [ 15, 265.97, 702.44, 238.7, 23.89 ], "formula_id": "formula_31", "formula_text": "s i,j = (λ i - e i,j 2α i ) + (S10)" }, { "formula_coordinates": [ 16, 107.64, 104.8, 397.03, 51.04 ], "formula_id": "formula_32", "formula_text": "k j=1 (λ i - e i,σj 2α i ) = 1 ⇒ λ i = 1 k + 1 k e i δ (k) i 2α i with δ (k) i,j = Bool(e i,j ≤ e i,σ k ),(S11) where δ" }, { "formula_coordinates": [ 16, 242.58, 225.64, 262.08, 26.36 ], "formula_id": "formula_33", "formula_text": "s i,j = ( 1 k + 1 k e i δ (k) i 2α i - e i,j 2α i ) + (S12)" }, { "formula_coordinates": [ 16, 186.34, 274.52, 318.33, 26.36 ], "formula_id": "formula_34", "formula_text": "1 k + 1 k e i δ (k) i 2α i - e i,σ k 2α i > 0, 1 k + 1 k e i δ (k) i 2α i - e i,σ k+1 2α i ≤ 0. 
(S13)" }, { "formula_coordinates": [ 16, 107.64, 319.75, 397.03, 42.09 ], "formula_id": "formula_35", "formula_text": "1 2 (ke i ξ (k) i -e i δ (k) i ) < α i ≤ 1 2 (ke i ξ (k+1) i -e i δ (k) i ) with ξ (k) i,j = Bool(e i,j = e i,σ k ),(S14) where ξ (k) i" }, { "formula_coordinates": [ 16, 246.25, 375.58, 258.42, 23.54 ], "formula_id": "formula_36", "formula_text": "α i = 1 2 (ke i ξ (k+1) i -e i δ (k) i ).(S15)" }, { "formula_coordinates": [ 16, 209.93, 419.02, 113.92, 26.36 ], "formula_id": "formula_37", "formula_text": "s i,j = ( 1 k + 1 k e i δ (k) i 2α i - e i,j2α" }, { "formula_coordinates": [ 16, 226.75, 428.23, 277.92, 113.04 ], "formula_id": "formula_38", "formula_text": "i ) + = ( 2α i + e i δ (k) i -ke i,j 2kα i ) + = ( ke i ξ (k+1) i -e i δ (k) i + e i δ (k) i -ke i,j k(ke i ξ (k+1) i -e i δ (k) i ) ) + = ( e i ξ (k+1) i -e i,j ke i ξ (k+1) i -e i δ (k) i ) + (S16)" }, { "formula_coordinates": [ 16, 161.94, 574.65, 342.73, 30.2 ], "formula_id": "formula_39", "formula_text": "ke i ξ (k+1) i -e i δ (k) i = ke i,σ k+1 - k p=1 e i,σp = k p=1 (e i,σ k+1 -e i,σp ) > 0. (S17)" }, { "formula_coordinates": [ 16, 210.05, 628.95, 294.61, 43.48 ], "formula_id": "formula_40", "formula_text": "s i,σj =      e i ξ (k+1) i -e i,σj ke i ξ (k+1) i -e i δ (k) i , 1 ≤ j ≤ k 0, otherwise ,(S18)" }, { "formula_coordinates": [ 16, 209.75, 695.61, 294.92, 30.72 ], "formula_id": "formula_41", "formula_text": "s i,σj = e i ξ (k+1) i -e i,σj ke i ξ (k+1) i -e i δ (k) i • Bool(1 ≤ j ≤ k).(S19)" }, { "formula_coordinates": [ 19, 232.45, 511.91, 272.22, 25.97 ], "formula_id": "formula_43", "formula_text": "ACC(y, ŷ) = n i=1 Bool(y i = ŷi ) n ,(S23)" }, { "formula_coordinates": [ 19, 225.13, 581.9, 279.54, 25.97 ], "formula_id": "formula_44", "formula_text": "RMSE(X, X) = n i=1 ∥x i -xi ∥ 2 2 n × d ,(S24)" }, { "formula_coordinates": [ 19, 306.14, 703.03, 198.53, 24.44 ], "formula_id": "formula_45", "formula_text": "∥x i -x j ∥ 2 2 2σ 2 ),(S25)" }, { "formula_coordinates": [ 20, 244.35, 99.59, 260.32, 25.26 ], "formula_id": "formula_46", "formula_text": "s i,j = ŝi,j , x j ∈ K(x i ) 0, otherwise ,(S26)" }, { "formula_coordinates": [ 20, 278.78, 169.26, 225.88, 25.2 ], "formula_id": "formula_47", "formula_text": "S = S ⊤ + S 2 (S27)" } ]
2023-12-08
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b30", "b22", "b12", "b25", "b24", "b15", "b11", "b28", "b26", "b28", "b26", "b13" ], "table_ref": [], "text": "Audio-visual question answering (AVQA) has received considerable attention due to its potential applications in many real-world scenarios. It provides avenues to integrate multimodal information to achieve scene understanding ability as humans.\nAs shown in Figure 1, the AVQA model aims to answer questions regarding visual objects, sound patterns, and their spatio-temporal associations. Compared to traditional video question answering, the AVQA task presents specific challenges in the following areas. Firstly, it involves effectively fus-Figure 1: An illustration of the Audio-Visual Question Answering task. Concretely, the question is centered around the \"instruments\" (i.e., the target) and is broken down into \"how many\", \"did not sound\", and \"from beginning to end\" in terms of visual space, audio, and temporality, respectively. Identifying the three instruments that did not produce sound throughout the video may entail a significant time investment for a human viewer. Nonetheless, for an AI system with effective spatio-temporal reasoning capabilities, the task can be accomplished much more efficiently.\ning audio and visual information to obtain the correlation of the two modalities, especially when there are multiple sounding sources, such as ambient noise, or similar categories in either audio or visual feature space, such as guitars and ukuleles. Secondly, it requires capturing the question-relevant audiovisual features while maintaining their temporal synchronization in a multimedia video.\nAlthough there have been a number of promising works (Zhou et al., 2021;Tian et al., 2020;Lin and Wang, 2020) in the audio-visual scene understanding community that attempted to solve the first challenge, they are primarily a targetless parsing of the entire audio-visual scenes. Most of them (Xuan et al., 2020;Wu et al., 2019;Mercea et al., 2022) obtain untargeted sound-related visual regions by designing attention schemes performing on audio-to-visual while ignoring the questionoriented information from the text modality. However, the understanding of audio-visual scenes in AVQA tasks is often target-oriented. For exam-Figure 2: Comparison of different question-aware temporal grounding. (a.) The traditional approach usually adopts a dual-stream network that treats audio and video as separate entities. (b.) Our proposed cross-modal synchrony loss ensures the interaction between audio and visual modalities. (c.) Our proposed single-stream architecture is able to treat audio and video as a whole, thus incorporating temporal grounding and fusion.\nple, as illustrated in Figure 1, our focus lies solely on the subject of inquiry, i.e., instruments, disregarding the singing person or ambient sound. Traditional AVQA approaches (Li et al., 2022;Yun et al., 2021;Yang et al., 2022), inherited from the audio-visual scene understanding community, rely on aligning all audio-visual elements in the video to answer a question. This results in much irrelevant information and difficulties in identifying the relevant objects in complex scenes. As for the second challenge, most existing methods (Yun et al., 2021;Yang et al., 2022;Lin et al., 2023) employ a typical attention-based two-stream framework. 
As shown in Figure 2.a, such a two-stream architecture processes audio and video in each stream separately while overlooking the unity of audio and visual modalities. In particular, the temporal grounding and audio-visual fusion are isolated, with fusion occurring through an additional module.\nTo effectively address these two challenges, we propose a target-aware joint spatio-temporal grounding (TJSTG) network for AVQA. Our proposed approach has two key components.\nFirstly, we introduce the target-aware spatial grounding (TSG) module, which enables the model to focus on audio-visual cues relevant to the query subject, i.e., target, instead of all audio-visual elements. We exploit the explicit semantics of text modality in the question and introduce it during audio-visual alignment. In this way, there will be a noticeable distinction between concepts such as the ukulele and the guitar. Accordingly, we propose an attention-based target-aware (TA) module to recognize the query subject in the question sentence first and then focus on the interesting sounding area through spatial grounding.\nSecondly, we propose a cross-modal synchrony loss (CSL) and corresponding joint audio-visual temporal grounding (JTG) module. In contrast to the existing prevalent two-stream frameworks that treat audio and video as separate entities (Figure 2.a), the CSL enforces the question to have synchronized attention weights on visual and audio modalities during question-aware temporal grounding (Figure 2.b) via the JS divergence. Furthermore, it presents avenues to incorporate question-aware temporal grounding and audio-visual fusion into a more straightforward single-stream architecture (Figure 2.c), instead of the conventional approach of performing temporal grounding first and fusion later. In this way, the network is forced to jointly capture and fuse audio and visual features that are supposed to be united and temporally synchronized. This simpler architecture facilitates comparable or even better performance.\nThe main contributions of this paper are summarized as follows:\n• We propose a novel single-stream framework, the joint audio-visual temporal grounding (JTG) module, which treats audio and video as a unified entity and seamlessly integrates fusion and temporal grounding within a single module.\n• We propose a novel target-aware spatial grounding (TSG) module to introduce the explicit semantics of the question during audio-visual spatial grounding for capturing the visual features of interesting sounding areas. An attention-based target-aware (TA) module is proposed to recognize the target of interest from the question.\n• We propose a cross-modal synchrony loss (CSL) to facilitate the temporal synchronization between audio and video during question-aware temporal grounding." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Audio-Visual-Language Learning", "publication_ref": [ "b18", "b15", "b0", "b8", "b9", "b14", "b13", "b15", "b31", "b21", "b1", "b29", "b4", "b11" ], "table_ref": [], "text": "By integrating information from multiple modalities, it is expected to explore a sufficient understanding of the scene and reciprocally nurture the development of specific tasks within a single modality. AVLNet (Rouditchenko et al., 2020) and MCN (Chen et al., 2021a) utilize audio to enhance text-to-video retrieval. 
AVCA (Mercea et al., 2022) proposes to learn multi-modal representations from audio-visual data and exploit textual label embeddings for transferring knowledge from seen classes of videos to unseen classes. Compared to previous works in audio-visual learning, such as sounding object localization (Afouras et al., 2020; Hu et al., 2020, 2022) and audio-visual event localization (Liu et al., 2022; Lin et al., 2023), these works (Mercea et al., 2022; Zhu et al., 2020; Tan et al., 2023) have made great progress in integrating the naturally aligned visual and auditory properties of objects and enriching scenes with explicit semantic information by further introducing textual modalities. Besides, many works (Akbari et al., 2021; Zellers et al., 2022; Gabeur et al., 2020) propose to learn multimodal representations from audio, visual, and text modalities that can be directly exploited for multiple downstream tasks. Unlike previous works focused on learning single- or multi-modal representations, this work delves into the fundamental yet challenging task of spatio-temporal reasoning in scene understanding. Building upon MUSIC-AVQA (Li et al., 2022), our approach leverages textual explicit semantics to integrate audio-visual cues and to enhance the study of dynamic and long-term audio-visual scenes." }, { "figure_ref": [], "heading": "Audio-Visual Question Answering", "publication_ref": [ "b10", "b27", "b23", "b32", "b17", "b20", "b31", "b28", "b11", "b26", "b13", "b11" ], "table_ref": [], "text": "The demand for multimodal cognitive abilities in AI has grown alongside the advancements in deep learning techniques. Audio-Visual Question Answering (AVQA), which, unlike previous question answering (Lei et al., 2018; You et al., 2021; Chen et al., 2021b; Wang et al., 2021), exploits the natural multimodal medium of video, is attracting increasing attention from researchers (Zhuang et al., 2020; Miyanishi and Kawanabe, 2021; Schwartz et al., 2019; Zhu et al., 2020). Pano-AVQA (Yun et al., 2021) introduces audio-visual question answering in panoramic video and the corresponding Transformer-based encoder-decoder approach. MUSIC-AVQA (Li et al., 2022) offers a strong baseline by decomposing AVQA into audio-visual fusion through spatial correlation of audio-visual elements and question-aware temporal grounding through text-audio cross-attention and text-visual cross-attention. AVQA (Yang et al., 2022) proposed a hierarchical audio-visual fusing module to explore the impact of different fusion orders between the three modalities on performance. LAVISH (Lin et al., 2023) introduced a novel parameter-efficient framework to encode audio-visual scenes, which fuses the audio and visual modalities in the shallow layers of the feature extraction stage and thus achieves SOTA. Although LAVISH proposes a robust audio-visual backbone network, it still necessitates the spatio-temporal grounding network proposed in (Li et al., 2022), as MUSIC-AVQA contains dynamic and long-duration scenarios requiring a significant capability of spatio-temporal reasoning. Unlike previous works, we propose a TSG module to leverage the explicit semantics of the inquiry target and a JTG module to leverage the temporal synchronization between audio and video in a novel single-stream framework, thus improving the multimodal learning of audio-visual-language."
}, { "figure_ref": [ "fig_1" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "To solve the AVQA problem, we propose a targetaware joint spatio-temporal grounding network and ensure the integration between the audio-visual modalities by observing the natural integrity of the audio-visual cues. The aim is to achieve better audio-visual scene understanding and intentional spatio-temporal reasoning. An overview of the proposed framework is illustrated in Figure 3." }, { "figure_ref": [], "heading": "Audio-visual-language Input Embeddings", "publication_ref": [ "b11", "b19", "b7", "b16" ], "table_ref": [], "text": "Given an input video sequence containing both visual and audio tracks, it is first divided into T non-overlapping visual and audio segment pairs {V t , A t } T 1 . The question sentence Q consists of a maximum length of N words. To demonstrate the effectiveness of our proposed method, we followed the MUSIC-AVQA (Li et al., 2022) approach and used the same feature extraction backbone network.\nAudio Embedding. Each audio segment A t is encoded into f t a ∈ R da by the pretrained on Au-dioSet (Gemmeke et al., 2017a) VGGish (Gemmeke et al., 2017b) model, which is VGG-like 2D CNN network, employing over transformed audio spectrograms.\nVisual Embedding. A fixed number of frames are sampled from all video segments. Each sampled frame is encoded into visual feature map f t v,m ∈ R h×w×dv by the pretrained on ImageNet (Russakovsky et al., 2015) ResNet18 (He et al., 2016) for each segment V t , where h and w are the We introduce text modality with explicit semantics into the audio-visual spatial grounding to associate specific sound-related visual features with the subject of interest, i.e., the target. We exploit the proposed cross-modal synchrony loss to incorporate audiovisual fusion and question-aware temporal grounding within a single-stream architecture. Finally, simple fusion is employed to integrate audiovisual and question information for predicting the answer.\nheight and width of the feature maps, respectively.\nQuestion Embedding. The question sentence Q is tokenized into N individual words {q n } N n=1 by the wrod2vec (Mikolov et al., 2013). Next, a learnable LSTM is used to process word embeddings obtaining the word-level output f q ∈ R N ×dq and the last state vector\n[h N ; c N ] ∈ R 2×dq . [h N ; c N ] is then transformed using a MLP to yield h q ∈ R 1×dq\nas the encoded sentence-level question feature.\nNoted the used pretrained models are all frozen." }, { "figure_ref": [], "heading": "Target-aware Spatial Grounding Module", "publication_ref": [], "table_ref": [], "text": "While sound source localization in visual scenes reflects the spatial association between audio and visual modality, it is cumbersome to elaborately align all audio-visual elements during question answering due to the high complexity of audio-visual scenes. Therefore, the target-aware spatial grounding module (TSG) is proposed to encourage the model to focus on the truly interested query object by introducing text modality from the question.\nTarget-aware (TA) module. For the word-level question feature f q ∈ R N ×dq , we aim to locate the target subject, represented as f tgt ∈ R dq , which owns the explicit semantic associated with the audio-visual scenes. Specifically, we index the target according to the question-contributed scores. 
To compute the question-contributed scores, we use the sentence-level question feature $h_q$ as the query and the word-level question feature $f_q$ as the key and value to perform multi-head self-attention (MHA), computed as:

$$s = \sigma\!\left(\frac{h_q f_q^{\top}}{\sqrt{d}}\right), \quad (1)$$

where $f_q = [f_q^1; \cdots; f_q^N]$ and $d$ is a scaling factor with the same size as the feature dimension. $s \in \mathbb{R}^{1 \times N}$ represents the weight of each word's contribution to the final question feature. Next, we index the feature of the target, which will be enhanced in the subsequent spatial grounding, as:

$$idx = \arg\max_{n=1,2,\cdots,N} \{s(n)\}, \quad (2)$$
$$f_{tgt} = f_q^{idx}, \quad (3)$$

where $f_{tgt}$ has the highest contribution weight to the question feature.

Interesting spatial grounding module. One way of mapping man-made concepts to the natural environment is to incorporate explicit semantics into the understanding of audio-visual scenarios. For each video segment, the visual feature map $f_{v,m}^t$, the audio feature $f_a^t$, and the interesting target feature $f_{tgt}$ compose a matched triplet. Firstly, we reshape $f_{v,m}^t$ from $h \times w \times d_v$ to $hw \times d_v$. For each triplet, we can then compute the interesting sound-related visual feature $f_{v,i}^t$ as:

$$f_{v,i}^t = f_{v,m}^t \cdot \sigma(s_a \odot \hat{s}_q), \quad (4)$$
$$s_a = \sigma\big((f_a^t)^{\top} \cdot f_{v,m}^t\big), \quad (5)$$
$$s_q = \sigma\big((f_{tgt})^{\top} \cdot f_{v,m}^t\big), \quad (6)$$
$$\hat{s}_q = s_q \, \mathbb{I}(s_q - \tau), \quad (7)$$

where $s_a, s_q \in \mathbb{R}^{1 \times hw}$, $f_{v,i}^t \in \mathbb{R}^{1 \times d_v}$, $\sigma$ is the softmax function, and $(\cdot)^{\top}$ represents the transpose operator. In particular, we adopt a simple thresholding operation to better integrate the text modality. Specifically, $\tau$ is a hyper-parameter selecting the visual areas that are highly relevant to the query subject, and $\mathbb{I}(\cdot)$ is an indicator function that outputs 1 when its input is greater than or equal to 0 and outputs 0 otherwise. Computing the text-visual attention map encourages the preceding TA module to capture the visual-related entity in the question. Next, we perform the Hadamard product on the audio-visual attention map and the text-visual attention map to obtain the target-aware visual attention map. In this way, the TSG module focuses on the interesting sounding area instead of all sounding areas. To prevent possible loss of visual information, we average-pool the visual feature map $f_{v,m}^t$ to obtain the global visual feature $f_{v,g}^t$. The two visual features are fused as the visual representation $f_v^t = \mathrm{FC}(\tanh[f_{v,g}^t; f_{v,i}^t])$, where FC denotes fully-connected layers and $f_v^t \in \mathbb{R}^{1 \times d_v}$." }, { "figure_ref": [], "heading": "Joint Audio-visual Temporal Grounding", "publication_ref": [], "table_ref": [], "text": "In the natural environment, visual and audio information are different attributes of the same thing, i.e., the two are inseparable. Therefore, we propose the joint audio-visual temporal grounding (JTG) module and the cross-modal synchrony loss (CSL) to treat the visual and audio modalities as a whole instead of as separate entities.

Cross-modal synchrony loss (CSL). Temporal synchronization is a characteristic of the united audio and visual modalities, but in multimedia videos they do not strictly adhere to simple synchronization. We use the question feature as an intermediary to constrain the consistency of the temporal distributions of the audio and visual modalities, thus implicitly modeling the synchronization between audio and video.
Concretely, given $h_q$ and the audio-visual features $\{f_a^t, f_v^t\}_{t=1}^{T}$, we first compute the weight of association between the given question and the input sequence, based on how closely each timestamp is related to the question, as:

$$A_q = \sigma\!\left(\frac{h_q f_a^{\top}}{\sqrt{d}}\right), \quad (8)$$
$$V_q = \sigma\!\left(\frac{h_q f_v^{\top}}{\sqrt{d}}\right), \quad (9)$$

where $f_a = [f_a^1; \cdots; f_a^T]$ and $f_v = [f_v^1; \cdots; f_v^T]$; $h_q \in \mathbb{R}^{1 \times d_q}$, $f_a \in \mathbb{R}^{T \times d_a}$, $f_v \in \mathbb{R}^{T \times d_v}$; and $d$ is a scaling factor with the same size as the feature dimension. In this way, we obtain the question-aware weights $A_q, V_q \in \mathbb{R}^{1 \times T}$ of the audio and video sequences, respectively.

Next, we employ the Jensen-Shannon (JS) divergence as a constraint. Specifically, the JS divergence measures the similarity between the probability distributions of the two temporal weight vectors, corresponding to the audio and visual question-aware weights, respectively. By minimizing the JS divergence, we encourage the temporal distributions of the two modalities to be as close as possible, thus promoting their question-contributed consistency in the JTG process. The CSL can be formulated as:

$$\mathcal{L}_{csl} = \frac{1}{2} D_{KL}(A_q \| M) + \frac{1}{2} D_{KL}(V_q \| M), \quad (10)$$
$$M = \frac{1}{2}(A_q + V_q), \quad (11)$$
$$D_{KL}(P \| Q) = \sum_{t=1}^{T} P(t) \log \frac{P(t)}{Q(t)}. \quad (12)$$

Note that the JS divergence is symmetric, i.e., $JS(P \| Q) = JS(Q \| P)$.

Joint audio-visual temporal grounding (JTG) module. Previous approaches to joint audio-visual learning have typically used a dual-stream structure with a decoupled cross-modal fusion module. However, the proposed CSL makes single-stream networks for audio-visual learning possible and can naturally integrate audio-visual fusion and temporal grounding into one module. Specifically, we first interleave the LSTM-encoded video feature tensor and audio feature tensor along the rows, i.e., the temporal dimension, as:

$$f_{av} = \mathrm{IL}(f_v; f_a) = [f_v^1; f_a^1; \cdots; f_v^T; f_a^T], \quad (13)$$

where IL denotes that the features of the two modalities are InterLeaved by segments, $f_{av} \in \mathbb{R}^{2T \times d}$ represents the multimedia video features, and $d = d_v = d_a$. Next, we perform MHA to aggregate critical question-aware audio-visual features among the dynamic audio-visual scenes as:

$$f_{att} = \mathrm{MHA}(h_q, f_{av}, f_{av}) = \sum_{t=1}^{2T} w_{av}^t f_{av}^t, \quad (14)$$
$$w_{av} = \mathrm{Softmax}\big((h_q W_q)(f_{av} W_k)^{\top}\big), \quad (15)$$
$$f_{av}^q = f_{att} + \mathrm{MLP}(\mathrm{Avg}(f_{av})), \quad (16)$$

where $f_{av}^q \in \mathbb{R}^{1 \times d_c}$ represents the question-grounded audio-visual contextual embedding, which is more capable of predicting correct answers. The model assigns higher weights to segments that are more relevant to the asked question. Then, we can retrieve the temporal distribution weights specific to each modality from the output of the multi-head attention and apply our proposed CSL as follows:

$$\mathcal{L}_{csl} = JS(w_a \| w_v), \quad (17)$$
$$w_v = \{w_{av}^{2i}\}_{i=1,\cdots,T}, \quad (18)$$
$$w_a = \{w_{av}^{2i-1}\}_{i=1,\cdots,T}, \quad (19)$$

where $w_a, w_v \in \mathbb{R}^{1 \times T}$ are the question-aware temporal distribution weights of audio and video, respectively. By leveraging the CSL, the proposed JTG module can effectively perform both temporal grounding and audio-visual fusion while considering the synchronization between the audio and visual modalities. The resulting single-stream architecture simplifies the overall system and treats audio and video as a whole."
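Before moving on, the two modules above can be made concrete with short sketches. The first is a minimal PyTorch-style sketch of the target-aware spatial grounding module (Eqs. 1-7). It is an illustrative re-implementation rather than the authors' released code: a single attention head stands in for the attention described in the paper, and the batch dimension, the assumption that audio, visual, and question features share one dimension d, the default threshold, and all module and variable names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetAwareSpatialGrounding(nn.Module):
    """Sketch of the TSG module: pick the target word, then attend to the
    sounding regions that are also relevant to that target (Eqs. 1-7)."""

    def __init__(self, d: int = 512, tau: float = 0.005):
        super().__init__()
        self.tau = tau
        self.scale = d ** 0.5
        self.fc = nn.Linear(2 * d, d)  # fuses global and target-aware visual features

    def forward(self, h_q, f_q, f_a, f_vm):
        # h_q:  (B, d)         sentence-level question feature
        # f_q:  (B, N, d)      word-level question features
        # f_a:  (B, T, d)      audio feature per segment
        # f_vm: (B, T, HW, d)  visual feature map per segment, spatially flattened
        # Target-aware module (Eqs. 1-3): index the word contributing most to the question.
        s = F.softmax(torch.einsum('bd,bnd->bn', h_q, f_q) / self.scale, dim=-1)
        idx = s.argmax(dim=-1)                                   # (B,)
        f_tgt = f_q[torch.arange(f_q.size(0)), idx]              # (B, d)
        # Interesting spatial grounding (Eqs. 4-7).
        s_a = F.softmax(torch.einsum('btd,bthd->bth', f_a, f_vm), dim=-1)   # audio-visual map
        s_q = F.softmax(torch.einsum('bd,bthd->bth', f_tgt, f_vm), dim=-1)  # text-visual map
        s_q_hat = s_q * (s_q >= self.tau).float()                # thresholded target relevance
        w = F.softmax(s_a * s_q_hat, dim=-1)                     # target-aware visual attention
        f_vi = torch.einsum('bth,bthd->btd', w, f_vm)            # interesting sounding regions
        f_vg = f_vm.mean(dim=2)                                  # global average-pooled visual feature
        return self.fc(torch.tanh(torch.cat([f_vg, f_vi], dim=-1)))  # (B, T, d)
```

The thresholding step simply zeroes out spatial positions whose text-visual score falls below tau, so only regions strongly tied to the target word can re-weight the audio-driven attention map.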
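The single-stream temporal grounding (Eqs. 13-16) and the recovery of per-modality weights (Eqs. 18-19) can be sketched in the same spirit. This is again only a sketch under our assumptions: a shared feature dimension d, a single attention head in place of the multi-head attention used in the paper, and index parity taken from the interleaving order of Eq. 13, which places video segments at the even 0-based positions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointTemporalGrounding(nn.Module):
    """Sketch of the JTG module: interleave audio and video along time, run
    question-guided attention over the joint sequence, and split the attention
    weights back into per-modality temporal distributions."""

    def __init__(self, d: int = 512):
        super().__init__()
        self.w_q = nn.Linear(d, d, bias=False)
        self.w_k = nn.Linear(d, d, bias=False)
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, h_q, f_v, f_a):
        # h_q: (B, d)    sentence-level question feature
        # f_v: (B, T, d) visual features; f_a: (B, T, d) audio features
        B, T, d = f_v.shape
        # Eq. 13: interleave by segments -> [v1, a1, v2, a2, ...] of shape (B, 2T, d).
        f_av = torch.stack([f_v, f_a], dim=2).reshape(B, 2 * T, d)
        # Eqs. 14-15: scaled dot-product attention guided by the question.
        w_av = F.softmax(
            self.w_q(h_q).unsqueeze(1) @ self.w_k(f_av).transpose(1, 2) / d ** 0.5,
            dim=-1)                                   # (B, 1, 2T)
        f_att = (w_av @ f_av).squeeze(1)              # (B, d)
        # Eq. 16: residual connection with the average-pooled multimedia features.
        f_q_av = f_att + self.mlp(f_av.mean(dim=1))
        # Eqs. 18-19: per-modality temporal weights for the synchrony loss.
        w = w_av.squeeze(1)                           # (B, 2T)
        w_v, w_a = w[:, 0::2], w[:, 1::2]             # video / audio positions per Eq. 13
        return f_q_av, w_a, w_v
```

Because fusion and grounding happen in the same attention pass, no separate audio-visual fusion module is needed, which is what enables the single-stream design described above.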
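Finally, the cross-modal synchrony loss (Eqs. 10-12 and 17) reduces to a Jensen-Shannon divergence between the two temporal weight vectors retrieved above. A minimal sketch follows; re-normalising each modality's weights into a proper distribution over time and the epsilon for numerical stability are our additions, not details stated in the paper.

```python
import torch

def cross_modal_synchrony_loss(w_a: torch.Tensor, w_v: torch.Tensor,
                               eps: float = 1e-12) -> torch.Tensor:
    """JS divergence between audio (w_a) and video (w_v) question-aware temporal
    weights, both of shape (B, T); smaller values mean better synchronisation."""
    p = w_a.clamp_min(eps)
    p = p / p.sum(dim=-1, keepdim=True)
    q = w_v.clamp_min(eps)
    q = q / q.sum(dim=-1, keepdim=True)
    m = 0.5 * (p + q)                                  # Eq. 11
    kl_pm = (p * (p / m).log()).sum(dim=-1)            # D_KL(P || M)
    kl_qm = (q * (q / m).log()).sum(dim=-1)            # D_KL(Q || M)
    return (0.5 * kl_pm + 0.5 * kl_qm).mean()          # Eq. 10, averaged over the batch
```

In training, this term is added to the question-answering loss, which pushes the attention over the interleaved sequence to distribute question relevance consistently across the two modalities.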
}, { "figure_ref": [], "heading": "Answer Prediction", "publication_ref": [], "table_ref": [], "text": "In order to verify the audio-visual fusion of our proposed joint audio-visual temporal grounding module, we employ a simple element-wise multiplication operation to integrate the question features h q and the previously obtained audiovisual features f q av . Concretely, it can be formulated as:\ne = f q av ⊙ h q (20)\nNext, we aim to choose one correct answer from a pre-defined answer vocabulary. We utilize a linear layer and softmax function to output probabilities p ∈ R C for candidate answers. With the predicted probability vector and the corresponding groundtruth label y, we optimize it using a cross-entropy loss: L qa = -C c=1 y c log(p c ). During testing, the predicted answer would be ĉ = arg max c (p)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "This section presents the setup details and experimental results on the MUSIC-AVQA dataset. We also discuss the model's performance and specify the effectiveness of each sub-module in our model through ablation studies and qualitative results." }, { "figure_ref": [], "heading": "Experiments Setting", "publication_ref": [ "b11", "b11", "b11", "b11", "b11" ], "table_ref": [], "text": "Dataset. We conduct experiments on the MUSIC-AVQA dataset (Li et al., 2022), which contains 45,867 question-answer pairs distributed in 9,288 videos for over 150 hours. It was divided into sets with 32,087/4,595/9,185 QA pairs for training/validation/testing. The MUSIC-AVQA dataset is well-suited for studying spatio-temporal reasoning for dynamic and long-term audio-visual scenes.\nMetric. Answer prediction accuracy. We also evaluate the model's performance in answering different questions.\nImplementation details. The sampling rates of sounds and frames are 16 kHz and 1 fps. We divide the video into non-overlapping segments of the same length with 1s-long. For each video segment, we use 1 frame to generate the visual features of size 14×14×512. For each audio segment, we use a linear layer to process the extracted 128-D VGGish feature into a 512-D feature vector. The dimension of the word embedding is 512. In experiments, we used the same settings as in (Li et al., 2022) and sampled the videos by taking 1s every 6s. Batch size and number of epochs are 64 and 30, respectively. The initial learning rate is 2e-4 and will drop by multiplying 0.1 every 10 epochs. Our network is trained with the Adam optimizer. We use the torchsummary library in PyTorch to calculate the model's parameters. Our model is trained on an NVIDIA GeForce GTX 1080 and implemented in PyTorch.\nTraining Strategy. As previous methods (Li et al., 2022) use a two-stage training strategy, training the spatial grounding module first by designing a coarse-grained audio-visual pair matching task, formulated as: We utilize the pretrained stage I module in (Li et al., 2022) directly without retraining for certain layers that overlap with our approach. We use our proposed L = L qa + L csl + λL s to train for AVQA task, where λ is 0.5 following previous setting (Li et al., 2022).\nL s = L ce (y match , ŷt ) (21) ŷt = σ(MLP( f t v ; f t a ))(22)" }, { "figure_ref": [], "heading": "Comparisons with SOTA Methods", "publication_ref": [ "b20", "b28", "b11" ], "table_ref": [], "text": "We challenge our method against current SOTA methods on AVQA task. For a fair comparison, we choose the same audio and visual features as the current methods. 
As shown in Table 1, we compare our TJSTG approach with the AVSD (Schwartz et al., 2019), Pano-AVQA (Yun et al., 2021), and AVST (Li et al., 2022) methods. Our method achieves significant improvements on all audio and visual questions compared to the second-best method, AVST (on average 2.60%↑ and 2.48%↑, respectively). In particular, our method shows clear superiority when answering counting (on average 2.31%↑) and comparative (on average 2.12%↑) questions. These two types of questions require a high level of conceptual understanding and reasoning ability. The considerable improvement achieved by TJSTG can be attributed to our proposed TSG module, which introduces the textual modality with explicit semantics into the audio-visual spatial grounding process. Although we fall slightly behind AVST on the audio-visual temporal questions, we still achieve the highest accuracy of 70.13% on all audio-visual questions with a simpler single-stream architecture, outperforming AVST by 0.6%↑. Benefiting from our proposed JTG leveraging the natural audio-visual integration, our full model achieves the highest overall accuracy of 73.04% with a more straightforward architecture, which outperforms AVST by 1.45%↑." }, { "figure_ref": [], "heading": "Ablation studies", "publication_ref": [ "b11", "b11", "b11" ], "table_ref": [ "tab_1", "tab_1", "tab_2", "tab_3", "tab_4", "tab_5", "tab_5", "tab_2" ], "text": "The effectiveness of the different modules in our model. To verify the effectiveness of the proposed components, we remove them from the primary model and re-evaluate the new model on the MUSIC-AVQA dataset. Table 2 shows that after removing a single component, the overall model's performance decreases, and different modules have different performance effects. Firstly, when we remove the TA module and the target-aware process during spatial grounding (denoted as \"w/o T-A\") and use traditional audio-visual spatial grounding, the accuracy decreases by 1.24%, 0.83%, and 0.27% on audio, visual, and audio-visual questions, respectively. This shows that it is essential to have a targeting process before feature aggregation instead of attending to all the audio-visual cues. Secondly, we remove the proposed CSL (denoted as \"w/o $\mathcal{L}_{csl}$\"), and the overall accuracy drops to 72.28% (0.76% below our full model). Lastly, we remove both modules and employ a vanilla single-stream network (denoted as \"w/o TA+$\mathcal{L}_{csl}$\"), and the overall accuracy severely drops by 1.26%, from 73.04% to 71.78%. These results show that every component in our system plays an essential role in AVQA.

Effect of introducing text during audio-visual learning. As Table 2 shows, removing the TA module and the target-aware process results in a lower accuracy (75.17%) on audio questions, which consist of counting and comparative questions, compared to \"w/o $\mathcal{L}_{csl}$\" (75.73%) and our full model (76.47%). In Table 3, we utilize AVST (Li et al., 2022) as a baseline model to further validate the robustness and effectiveness of our proposed target-aware approach. We implement AVST with our proposed TA module and the corresponding target-aware process, denoted as \"AVST w/ T-A\", which surpasses AVST by 0.84% in overall accuracy (from 71.59% to 72.43%). These results demonstrate that the explicit semantics in the audio-visual spatial grounding can facilitate audio-visual question answering.

Effect of Target-aware module.
As shown in Table 4, we adopt different ways of introducing question information into the spatial grounding module, thus verifying the effectiveness of our proposed Target-aware module during the target-aware process. Specifically, we apply average-pooling (denoted as \"TSG w/ Avg\") and max-pooling (denoted as \"TSG w/ Max\") to the LSTM-encoded question embedding $f_q$ to represent the target feature, respectively. We also adopt the question feature vector $h_q$ as the target feature during spatial grounding. Compared to these variants, our approach (denoted as \"TSG w/ TA\") achieves the highest accuracy of 73.04%. The experimental results not only prove the superiority of our proposed target-aware module but also further demonstrate the effectiveness of introducing the textual modality, which carries explicit semantics, into the audio-visual learning stage.

In addition, we explore the impact of the hyper-parameter $\tau$ on model performance. As shown in Table 5, while $\tau$ plays a role in selecting relevant visual areas, our experiments reveal that it does not significantly impact performance within the context of the MUSIC-AVQA dataset (Li et al., 2022). The highest accuracy of 73.04% is achieved when $\tau = 0.005$. However, removing the thresholding operation ($\tau = 0.000$) causes a decrease of 0.81% in accuracy. This may be caused by information redundancy, and we believe that this phenomenon can be alleviated by utilizing a pretrained model with a priori information on image-text pairs in future work.

Effect of single-stream structure. We validate the effectiveness of our specially designed audio-visual interleaved pattern, i.e., IL(V;A), which maintains both the integrity of the audio-visual content at the segment level and the relative independence between the audio and visual content at the video level. As shown in Table 6, we explore different ways of arranging visual and audio features, and our interleaved-by-segments pattern is 0.41% higher on average than the concatenated-by-modals pattern. We also conduct a comprehensive comparison between single-stream and dual-stream networks. During the temporal grounding, we switch to the prevalent two-stream network structure as in (Li et al., 2022), but still with our proposed TSG module and cross-modal synchrony loss, which is denoted as \"Dual-stream Net\" in Table 7. As shown in Table 7, the \"Single-stream Net\" that omits the additional fusion module yields 0.15% higher accuracy with 3.5M fewer parameters than the \"Dual-stream Net\". This indicates the superiority of single-stream networks over two-stream networks, as they utilize the integration of the audio and visual modalities to simultaneously accomplish question-aware temporal grounding and audio-visual fusion.

Effect of cross-modal synchrony loss. Similarly, as shown in Table 3, we verify the validity of our proposed $\mathcal{L}_{csl}$ on AVST (denoted as \"AVST w/ $\mathcal{L}_{csl}$\"). \"AVST w/ $\mathcal{L}_{csl}$\" achieves an accuracy of 72.47%, exceeding the baseline model by 0.88%. We further consider the impact of the combination order of the multimedia video features $f_{av}$ on performance, as shown in Table 6. Specifically, we compose $f_{av}$ by interleaving the audio and visual features but putting the audio modality in front (denoted as \"IL(A;V) w/ $\mathcal{L}_{csl}$\"). Compared to our full model (denoted as \"IL(V;A) w/ $\mathcal{L}_{csl}$\"), the overall accuracy is the same (both are 73.04%). The concatenation operation shows similar results.
That is, the order in which the audio and visual features are combined does not have a significant impact on the performance of our entire system. These results validate the robustness and effectiveness of our proposed CSL." }, { "figure_ref": [ "fig_2" ], "heading": "Qualitative analysis", "publication_ref": [ "b11", "b11", "b11" ], "table_ref": [], "text": "In Figure 4, we provide several visualized target-aware spatial grounding results. The heatmaps indicate the location of the sounding source of interest. The results show that the sounding targets are visually captured, which can facilitate spatial reasoning. For example, in the case of Figure 4.(a), compared to AVST (Li et al., 2022), our proposed TJSTG method can focus on the target, i.e., the flute, during spatial grounding. The TSG module offers information about the sounding object of interest at each timestamp. In the case of Figure 4.(b), with multiple sound sources related to the target, i.e., instruments, our method also produces a more accurate spatial grounding than AVST (Li et al., 2022). When there is no target of interest in the video, as shown in Figure 4.(c) for the ukulele, our method presents an irregular distribution of spatial grounding in the background region instead of the undistinguished sounding area of the guitar and bass presented by AVST (Li et al., 2022). Furthermore, the JTG module aggregates the information of all timestamps based on the question. These results demonstrate that our proposed method can focus on the most question-relevant audio and visual elements, leading to more accurate answers." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This paper proposes a target-aware spatial grounding module and a joint audio-visual temporal grounding module to better address target-oriented audio-visual scene understanding within the AVQA task. The target-aware spatial grounding module exploits the explicit semantics of the question, enabling the model to focus on the query subjects when parsing the audio-visual scenes. Also, the joint audio-visual temporal grounding module treats audio and video as a whole through a single-stream structure and encourages the temporal association between audio and video with the proposed cross-modal synchrony loss. Extensive experiments have verified the superiority and robustness of the proposed modules. Our work offers an inspiring new direction for audio-visual scene understanding and spatio-temporal reasoning in question answering." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The inadequate modeling of audio-visual dynamic scenes potentially impacts the performance of question answering. Specifically, although our proposed TSG module enables the model to focus on question-related scene information, information not directly related to the question can sometimes also contribute to answering it. Experimental results show that adding the target-aware spatial grounding module to the basic model results in a marginal improvement in the accuracy of answering audio-visual questions compared to incorporating the cross-modal synchrony loss into the basic model. We believe this limits the overall performance of our approach, showing an incremental improvement for the audio-visual question type (0.6% on average) compared to a significant improvement for the uni-modal question type (2.5% on average).
In the future, we will explore better ways to integrate natural language into audio-visual scene parsing and mine scene information that is not only explicitly but also implicitly related to the question." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported partly by the National Natural Science Foundation of China (Grant No. 62173045, 62273054), partly by the Fundamental Research Funds for the Central Universities (Grant No. 2020XD-A04-3), and the Natural Science Foundation of Hainan Province (Grant No. 622RC675)." } ]
Audio-visual question answering (AVQA) is a challenging task that requires multi-step spatio-temporal reasoning over multimodal contexts. Recent works rely on elaborate target-agnostic parsing of audio-visual scenes for spatial grounding while mistakenly treating audio and video as separate entities for temporal grounding. This paper proposes a new target-aware joint spatio-temporal grounding network for AVQA. It consists of two key components: the target-aware spatial grounding module (TSG) and the single-stream joint audio-visual temporal grounding module (JTG). The TSG can focus on audio-visual cues relevant to the query subject by utilizing explicit semantics from the question. Unlike previous two-stream temporal grounding modules that require an additional audio-visual fusion module, JTG incorporates audio-visual fusion and question-aware temporal grounding into one module with a simpler single-stream architecture. The temporal synchronization between audio and video in the JTG is facilitated by our proposed cross-modal synchrony loss (CSL). Extensive experiments verified the effectiveness of our proposed method over existing state-of-the-art methods.
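The target-aware step summarized above can be made concrete with a short sketch following Eqs. 1-3 and 6-7: the sentence-level question feature attends over its own word embeddings to pick the target word, and the resulting target-visual similarity map is thresholded by τ. Tensor names, the softmax choice, and the toy shapes are illustrative assumptions.

```python
import torch

def select_target_feature(h_q: torch.Tensor, f_q: torch.Tensor) -> torch.Tensor:
    """Pick the most question-relevant word as the target (Eqs. 1-3).

    h_q: (B, D)    sentence-level question feature.
    f_q: (B, N, D) word-level question features.
    """
    d = h_q.size(-1)
    scores = torch.softmax((h_q.unsqueeze(1) * f_q).sum(-1) / d ** 0.5, dim=-1)  # (B, N)
    idx = scores.argmax(dim=-1)                                                  # (B,)
    return f_q[torch.arange(f_q.size(0)), idx]                                   # (B, D) target feature

def target_aware_map(f_tgt: torch.Tensor, f_vm: torch.Tensor, tau: float = 0.005) -> torch.Tensor:
    """Threshold the target-visual similarity map (Eqs. 6-7).

    f_tgt: (B, D) target feature; f_vm: (B, HW, D) flattened visual map features.
    """
    s_q = torch.softmax(torch.einsum("bd,bkd->bk", f_tgt, f_vm), dim=-1)  # (B, HW)
    return s_q * (s_q > tau).float()                                       # zero out weak regions
```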
Target-Aware Spatio-Temporal Reasoning via Answering Questions in Dynamic Audio-Visual Scenarios
[ { "figure_caption": "-wise product Text attention score Audio attention weights over time Visual attention weights over time", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: The proposed target-aware joint spatio-temporal grounding network. We introduce text modality with explicit semantics into the audio-visual spatial grounding to associate specific sound-related visual features with the subject of interest, i.e., the target. We exploit the proposed cross-modal synchrony loss to incorporate audiovisual fusion and question-aware temporal grounding within a single-stream architecture. Finally, simple fusion is employed to integrate audiovisual and question information for predicting the answer.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Visualized target-aware spatio-temporal grounding results. Based on the grounding results of our method, the sounding area of interest are accordingly highlighted in spatial perspectives in different cases (ac), respectively, which indicates that our method can focus on the query subject, facilitating the target-oriented scene understanding and reasoning.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Comparisons with state-of-the-art methods on the MUSIC-AVQA dataset. The top-2 results are highlighted.", "figure_data": "MethodAudio Question Counting Comparative Avg. Counting Location Avg. Existential Counting Location Comparative Temporal Avg. Avg. Visual Question Audio-Visual Question AllAVSD(Schwartz et al., 2019)72.4162.4668.7866.0074.5370.3180.7764.0357.9362.8561.0765.44 67.32Pano-AVQA(Yun et al., 2021)75.7165.9972.1370.5175.7673.1682.0965.3861.3063.6762.0466.97 69.53AVST(Li et al., 2022)77.7867.1773.8773.5275.2774.4082.4969.8864.2464.6765.8269.53 71.59TJSTG(Ours)80.3869.8776.4776.1977.5576.8882.5971.5464.2466.2164.8470.13 73.04MethodAudio Question Counting Comparative Avg. Counting Location Avg. Existential Counting Location Comparative Temporal Avg. Avg. Visual Question Audio-Visual Question Allw/o T-A79.3568.0175.1775.9476.1676.0582.7971.7064.2465.1264.1169.86 72.44w/o L csl79.9468.5275.7375.1977.0676.1482.0971.0763.8064.9463.5069.35 72.28w/o TA+L csl79.0668.5275.1774.3575.5174.9483.1070.3663.8063.0365.0969.21 71.78TSJTG(Ours)80.3869.8776.4776.1977.5576.8882.5971.5464.2466.2164.8470.13 73.04", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation studies of different modules on MUSIC-AVQA dataset. The top-2 results are highlighted.", "figure_data": "MethodA Avg. V Avg. AV Avg.AllAVST(Li et al., 2022) 73.8774.4069.5371.59AVST w/ T-A75.1776.3469.7072.43AVST w/ L csl76.1077.0469.1572.47", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation studies of different modules against baseline model. The top-2 results are highlighted.", "figure_data": "MethodA Avg. V Avg. AV Avg.AllTSG w/ Avg75.4276.3470.0272.65TSG w/ Max76.0276.7569.9272.70TSG w/ hidden75.8576.3470.0472.74TSG w/ TA (ours) 76.4776.8870.1373.04", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Effect of the Target-aware module on the accuracy(%). 
The top 2 results are highlighted.", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Impact of various values of τ on the system accuracy. The top-2 results are highlighted.", "figure_data": "MethodA Avg.V Avg.AV Avg.Allτ = 0.00075.9876.1469.1972.23τ = 0.00275.6176.8069.7472.65τ = 0.00576.4776.8870.1373.04τ = 0.01075.9276.6369.8472.71MethodA Avg. V Avg. AV Avg.AllCat(A;V) w/ L csl75.9276.6769.4972.53Cat(V;A) w/ L csl75.8575.9370.2372.74IL(A;V) w/ L csl76.7876.8870.0473.04IL(V;A) w/ L csl (ours) 76.4776.8870.1373.04Table 6: Effect of the Cross-modal synchrony loss onthe accuracy(%). \"IL\" denotes that audio and visualfeatures are interleaved in segments. \"Cat\" denotes thataudio and visual features are concatenated. The top 2results are highlighted.", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of dual-stream structure and singestream structure.", "figure_data": "MethodTrainable Param. (M)↓ Accuracy (%)Dual-stream Net14.672.89Single-stream Net11.173.05", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" } ]
Yuanyuan Jiang; Jianqin Yin
[ { "authors": "Triantafyllos Afouras; Andrew Owens; Joon ; Son Chung; Andrew Zisserman", "journal": "", "ref_id": "b0", "title": "Self-supervised learning of audio-visual objects from video", "year": "2020" }, { "authors": "Hassan Akbari; Liangzhe Yuan; Rui Qian; Wei-Hong Chuang; Shih-Fu Chang; Yin Cui; Boqing Gong", "journal": "NeurIPS", "ref_id": "b1", "title": "Vatt: Transformers for multimodal selfsupervised learning from raw video, audio and text", "year": "2021" }, { "authors": "Brian Chen; Andrew Rouditchenko; Kevin Duarte; Hilde Kuehne; Samuel Thomas; Angie Boggust; Rameswar Panda; Brian Kingsbury; Rogerio Feris; David Harwath", "journal": "", "ref_id": "b2", "title": "Multimodal clustering networks for self-supervised learning from unlabeled videos", "year": "2021" }, { "authors": "Feilong Chen; Xiuyi Chen; Can Xu; Daxin Jiang", "journal": "", "ref_id": "b3", "title": "Learning to ground visual objects for visual dialog", "year": "2021" }, { "authors": "Valentin Gabeur; Chen Sun; Karteek Alahari; Cordelia Schmid", "journal": "", "ref_id": "b4", "title": "Multi-modal transformer for video retrieval", "year": "2020" }, { "authors": " Jort F Gemmeke; P W Daniel; Dylan Ellis; Aren Freedman; Wade Jansen; R Channing Lawrence; Manoj Moore; Marvin Plakal; Ritter", "journal": "", "ref_id": "b5", "title": "a. Audio set: An ontology and human-labeled dataset for audio events", "year": "2017" }, { "authors": " Jort F Gemmeke; P W Daniel; Dylan Ellis; Aren Freedman; Wade Jansen; R Channing Lawrence; Manoj Moore; Marvin Plakal; Ritter", "journal": "", "ref_id": "b6", "title": "Audio set: An ontology and human-labeled dataset for audio events", "year": "2017" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b7", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Di Hu; Rui Qian; Minyue Jiang; Xiao Tan; Shilei Wen; Errui Ding; Weiyao Lin; Dejing Dou", "journal": "NeurIPS", "ref_id": "b8", "title": "Discriminative sounding objects localization via self-supervised audiovisual matching", "year": "2020" }, { "authors": "Xixi Hu; Ziyang Chen; Andrew Owens", "journal": "", "ref_id": "b9", "title": "Mix and localize: Localizing sound sources in mixtures", "year": "2022" }, { "authors": "Jie Lei; Licheng Yu; Mohit Bansal; Tamara Berg", "journal": "", "ref_id": "b10", "title": "Tvqa: Localized, compositional video question answering", "year": "2018" }, { "authors": "Guangyao Li; Yake Wei; Yapeng Tian; Chenliang Xu; Ji-Rong Wen; Di Hu", "journal": "", "ref_id": "b11", "title": "Learning to answer questions in dynamic audio-visual scenarios", "year": "2022" }, { "authors": "Yan-Bo Lin; Yu-Chiang Frank; Wang ", "journal": "", "ref_id": "b12", "title": "Audiovisual transformer with instance attention for audiovisual event localization", "year": "2020" }, { "authors": "Yi-Lin Lin; Yan-Bo An Sung; Jie Lei; Mohit Bansal; Gedas Bertasius", "journal": "", "ref_id": "b13", "title": "Vision transformers are parameter-efficient audio-visual learners", "year": "2023" }, { "authors": "Shuo Liu; Weize Quan; Chaoqun Wang; Yuan Liu; Bin Liu; Dong-Ming Yan", "journal": "TMM", "ref_id": "b14", "title": "Dense modality interaction network for audio-visual event localization", "year": "2022" }, { "authors": " Otniel-Bogdan; Lukas Mercea; Riesch; Zeynep Koepke; Akata", "journal": "", "ref_id": "b15", "title": "Audio-visual generalised zeroshot learning with cross-modal attention and language", "year": "2022" }, { "authors": "Tomas Mikolov; Kai 
Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b16", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "Taiki Miyanishi; Motoaki Kawanabe", "journal": "", "ref_id": "b17", "title": "Watch, listen, and answer: Open-ended videoqa with modulated multi-stream 3d convnets", "year": "2021" }, { "authors": "Andrew Rouditchenko; Angie Boggust; David Harwath; Brian Chen; Dhiraj Joshi; Samuel Thomas; Kartik Audhkhasi; Hilde Kuehne; Rameswar Panda; Rogerio Feris", "journal": "", "ref_id": "b18", "title": "Avlnet: Learning audio-visual language representations from instructional videos", "year": "2020" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein", "journal": "IJCV", "ref_id": "b19", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "Idan Schwartz; Alexander G Schwing; Tamir Hazan", "journal": "", "ref_id": "b20", "title": "A simple baseline for audio-visual sceneaware dialog", "year": "2019" }, { "authors": "Reuben Tan; Arijit Ray; Andrea Burns; Bryan A Plummer; Justin Salamon; Oriol Nieto; Bryan Russell; Kate Saenko", "journal": "", "ref_id": "b21", "title": "Language-guided audio-visual source separation via trimodal consistency", "year": "2023" }, { "authors": "Yapeng Tian; Dingzeyu Li; Chenliang Xu", "journal": "", "ref_id": "b22", "title": "Unified multisensory perception: Weakly-supervised audio-visual video parsing", "year": "2020" }, { "authors": "Junjie Wang; Yatai Ji; Jiaqi Sun; Yujiu Yang; Tetsuya Sakai", "journal": "", "ref_id": "b23", "title": "Mirtt: Learning multimodal interaction representations from trilinear transformers for visual question answering", "year": "2021" }, { "authors": "Yu Wu; Linchao Zhu; Yan Yan; Yi Yang", "journal": "", "ref_id": "b24", "title": "Dual attention matching for audio-visual event localization", "year": "2019" }, { "authors": "Zhenyu Hanyu Xuan; Shuo Zhang; Jian Chen; Yan Yang; Yan", "journal": "AAAI", "ref_id": "b25", "title": "Cross-modal attention network for temporal inconsistent audio-visual event localization", "year": "2020" }, { "authors": "Pinci Yang; Xin Wang; Xuguang Duan; Hong Chen; Runze Hou; Cong Jin; Wenwu Zhu", "journal": "", "ref_id": "b26", "title": "Avqa: A dataset for audio-visual question answering on videos", "year": "2022" }, { "authors": "Chenyu You; Nuo Chen; Yuexian Zou", "journal": "", "ref_id": "b27", "title": "Selfsupervised contrastive cross-modality representation learning for spoken question answering", "year": "2021" }, { "authors": "Heeseung Yun; Youngjae Yu; Wonsuk Yang; Kangil Lee; Gunhee Kim", "journal": "", "ref_id": "b28", "title": "Pano-avqa: Grounded audio-visual question answering on 360deg videos", "year": "2021" }, { "authors": "Rowan Zellers; Jiasen Lu; Ximing Lu; Youngjae Yu; Yanpeng Zhao; Mohammadreza Salehi; Aditya Kusupati; Jack Hessel; Ali Farhadi; Yejin Choi", "journal": "", "ref_id": "b29", "title": "Merlot reserve: Neural script knowledge through vision and language and sound", "year": "2022" }, { "authors": "Jinxing Zhou; Liang Zheng; Yiran Zhong; Shijie Hao; Meng Wang", "journal": "", "ref_id": "b30", "title": "Positive sample propagation along the audio-visual event line", "year": "2021" }, { "authors": "Ye Zhu; Yu Wu; Yi Yang; Yan Yan", "journal": "", "ref_id": "b31", "title": "Describing unseen videos via multi-modal cooperative dialog agents", "year": "2020" }, { 
"authors": "Yueting Zhuang; Dejing Xu; Xin Yan; Wenzhuo Cheng; Zhou Zhao; Shiliang Pu; Jun Xiao", "journal": "TOMM", "ref_id": "b32", "title": "Multichannel attention refinement for video question answering", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 70.87, 500.11, 218.27, 26.13 ], "formula_id": "formula_0", "formula_text": "[h N ; c N ] ∈ R 2×dq . [h N ; c N ] is then transformed using a MLP to yield h q ∈ R 1×dq" }, { "formula_coordinates": [ 4, 383.69, 509.18, 141.45, 29.45 ], "formula_id": "formula_1", "formula_text": "s = σ( h q f ⊤ q √ d )(1)" }, { "formula_coordinates": [ 4, 362.65, 644.71, 105.24, 18.38 ], "formula_id": "formula_2", "formula_text": "idx = arg max n=1,2,••• ,N {s(n)}" }, { "formula_coordinates": [ 4, 390.36, 668.26, 134.78, 14.19 ], "formula_id": "formula_3", "formula_text": "f tgt = f idx q (3)" }, { "formula_coordinates": [ 5, 70.87, 112.61, 217.48, 26.6 ], "formula_id": "formula_4", "formula_text": "f t v,m from h × w × d v to hw × d v ." }, { "formula_coordinates": [ 5, 125.17, 166.85, 164.69, 34.21 ], "formula_id": "formula_5", "formula_text": "f t v,i = f t v,m • σ(s a ⊙ ŝq ) (4) s a = σ((f t a ) ⊤ • f t v,m )(5)" }, { "formula_coordinates": [ 5, 129.06, 205.14, 160.81, 31.37 ], "formula_id": "formula_6", "formula_text": "s q = σ((f tgt ) ⊤ • f t v,m ) (6) ŝq = s q I(s q -τ ) (7)" }, { "formula_coordinates": [ 5, 101.89, 250.62, 142.52, 14 ], "formula_id": "formula_7", "formula_text": "s a , s q ∈ R 1×hw , f t v,i ∈ R 1×dv" }, { "formula_coordinates": [ 5, 118.11, 525.5, 119.37, 14 ], "formula_id": "formula_8", "formula_text": "f t v = FC(tanh f t v,g ; f t v,i )" }, { "formula_coordinates": [ 5, 379.56, 165.27, 145.58, 28.39 ], "formula_id": "formula_9", "formula_text": "A q = σ( h q f ⊤ a √ d )(8)" }, { "formula_coordinates": [ 5, 379.21, 196.8, 145.93, 28.39 ], "formula_id": "formula_10", "formula_text": "V q = σ( h q f ⊤ v √ d )(9)" }, { "formula_coordinates": [ 5, 306.14, 235.72, 218.27, 39.23 ], "formula_id": "formula_11", "formula_text": "f a = f 1 a ; • • • ; f T a and f v = f 1 v ; • • • ; f T v ; h q ∈ R 1×dq , f a ∈ R T ×da , f v ∈ R T ×dv ;" }, { "formula_coordinates": [ 5, 319.34, 473.06, 205.8, 64.85 ], "formula_id": "formula_12", "formula_text": "L csl = 1 2 D KL (A q ∥M ) + 1 2 D KL (V q ∥M ) (10) M = 1 2 (A q + V q )(11)" }, { "formula_coordinates": [ 5, 459.9, 540.51, 65.24, 24.43 ], "formula_id": "formula_13", "formula_text": "P (t) Q(t)(12)" }, { "formula_coordinates": [ 5, 312.83, 761.08, 207.77, 14.19 ], "formula_id": "formula_14", "formula_text": "f av = IL(f v ; f a ) = f 1 v ; f 1 a ; • • • ; f T v ; f T a (13" }, { "formula_coordinates": [ 5, 520.6, 763.92, 4.54, 9.46 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 6, 78.69, 166.09, 150.52, 33.58 ], "formula_id": "formula_16", "formula_text": "f att = MHA(h q , f av , f av ) = 2T t=1" }, { "formula_coordinates": [ 6, 87.7, 203.3, 202.16, 40.92 ], "formula_id": "formula_17", "formula_text": "w av = Softmax((h q W q )(f av W k ) ⊤ ) (15) f q av = f att + MLP(Avg(f av ))(16)" }, { "formula_coordinates": [ 6, 135.11, 387.86, 154.75, 10.77 ], "formula_id": "formula_18", "formula_text": "L csl = JS(w a ∥w v )(17)" }, { "formula_coordinates": [ 6, 134, 403.36, 155.86, 15.24 ], "formula_id": "formula_19", "formula_text": "w v = {w av 2i } 2T i=1,••• ,T(18)" }, { "formula_coordinates": [ 6, 129.44, 422.6, 160.42, 16.04 ], "formula_id": "formula_20", "formula_text": "w a = {w av 2i-1 } 2T i=1,••• ,T(19)" }, { "formula_coordinates": [ 6, 150.72, 694.81, 139.14, 14.19 ], "formula_id": "formula_21", "formula_text": "e = f q av ⊙ h q (20)" }, { "formula_coordinates": [ 6, 363.83, 740.47, 161.31, 30.73 ], "formula_id": "formula_22", "formula_text": "L s = L ce (y match , ŷt ) (21) 
ŷt = σ(MLP( f t v ; f t a ))(22)" } ]
10.1109/CVPR52688.2022.01955
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b12", "b15", "b16", "b17", "b15", "b18", "b19", "b20", "b21", "b22", "b23", "b20", "b5", "b6", "b7", "b11", "b15", "b24", "b25", "b26", "b21", "b27", "b28", "b29", "b30", "b31", "b32" ], "table_ref": [], "text": "H UMAN action recognition, one of the core tasks of video understanding, is a classification task that observes and infers agent behavior from a third-person perspective [1]. This task can be widely used in human-computer interaction [2], video surveillance [3], healthcare [4], and short entertainment videos [5]. Among them, skeleton-based action recognition [6], [7], [8], [9], [10], [11], [12], [13], [14], [15] is widespread because of its robustness to various environmental noises in the video and its high compactness that facilitates model focus. Graph convolutional network (GCN) based methods [13], [16] made revolutionary advances in this field.\nThe human musculoskeletal system allows body parts to move and thus perform different actions [17], [18]. The skeleton-based data modality conforms to the human anatomy [16], thus making the learning of GCN more interpretable. It contains only 2D or 3D coordinates of the main joints of the human body, allowing the model to recognize human movements by reasoning about the skeleton sequences. In addition, the skeleton modal is more privacy-friendly than other modalities.\nThis paper introduces a language model knowledge-assisted graph convolutional network (LA-GCN) to enhance skeletonbased action recognition. Inspired by current cognitive neuroscience research [19], [20], [21] and benefiting from the devel- opment of Large-scale Language Model (LLM) [22], [23], [24], LA-GCN uses a large-scale textual knowledge base to simulate the brain regions that the human brain uses to accomplish behavioral prediction to help GCN network make predictions.\nAs shown in the upper part of Fig. 1, when observing the arXiv:2305.12398v1 [cs.CV] 21 May 2023\nbehavior of others, the temporoparietal joint area in the brain is stimulated to activate the corresponding brain area associated with the current action, and the prediction of that action is accomplished by reasoning based on a priori knowledge and goals [21]. Much research [6], [7], [8], [12], [16], [25], [26], [27] has tried to model essential joint information to enhance topology learning, but it still needs action-related a priori information to assist. Therefore, the proposed LA-GCN has two parts, the global prior relationship topology (GPR Graph) between nodes and the category prior relationship topology (CPR Graph) between nodes obtained from the LLM to assist the GCN in action reasoning. First, the GPR Graph is used to guide the generation of a new skeleton modal to assume the function of modeling critical information from the data level and using this modal as the input to the \"neuronal cluster\" GCN for feature aggregation. Then, the CPR Graph is used to compose an a priori consistency-assisted classification (PC-AC) module to help model learning based on features with enhanced semantic relationships. The framework of our proposed approach is shown at the bottom of Fig. 1 for the \"make a phone call\" action.\nOur GPR-Graph in LA-GCN contains relatively wellestablished meaningful inter-node relationships, even if two nodes are spatially distant. 
Specifically, we use the text encoder of LLM to extract node features for each joint and establish correlations by finding class centers for all joints. We borrow the BERT [22] widely used for feature extraction in language text as our LLM model. The GPR-Graph generates new skeleton data for the GCN network by preserving the key \"bones\" according to the rules. The global prior connection of the new skeleton representation can reduce the difficulty of topology modeling and get the differentiated feature representation.\nCPR Graph is a mapping of LLM's prior knowledge for action classes. Meanwhile, our PC-AC module aims to simulate how humans think and reason with a priori knowledge. Therefore, the PC-AC module encodes the CPR Graph as a category template topology to add additional supervision for GCN. It is suitable for solving some challenging process-like action classification problems, such as \"reading\" and \"writing\" which are similar in node relationships.\nIn addition, we propose a new feature aggregation method, a multi-hop attention graph convolution (MHA-GC) block, for improving the efficiency of message passing between nodes. When GC performs normal messaging, feature aggregation of a node is contributed by the directly connected nodes, which are called one-hop neighbors. However, as the depth of the GCN increases, the limitations of one-hop messaging increase layer by layer, leading to message delays and semantic over-smoothing that are detrimental to constructing inter-node contexts [28], [29], [30]. For this reason, we use multi-hop attention in a single GC layer to establish remote node relationships. Specifically, we spread the computation of attention from each node to the nodes indirectly connected to it, and the attention is represented using the distance between node features. As a result, MHA-GC can strengthen the semantic relationships between nodes with connectivity and accelerate model convergence.\nWe evaluated three skeleton-based action recognition datasets NTU RGB+D 60 [31], NTU RGB+D 120 [32], and NW-UCLA [33]. The performance on cross-subjects split for the first two benchmarks is 93.5% and 90.7%. On NW-UCLA, we have 97.6% of top1 accuracy. The experiments show that the proposed LA-GCN outperforms the state-of-the-art techniques. Our main contributions are summarized as follows:\n• An LA-GCN is proposed to use the prior knowledge of LLM to assist the GCN for skeleton-based action recognition.\n• A new skeleton representations method is proposed for GCN models ensemble. GPR Graph with global information is performed in this method to reduce the topological modeling difficulty.\n• An auxiliary supervised module PC-AC with class information encoding is proposed to improve the recognition rate of similar actions.\n• A new multi-hop attention feature aggregation method, MHA-GC, is proposed to improve the model's information transfer efficiency and accelerate model convergence." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Topology Construction", "publication_ref": [ "b33", "b34", "b35", "b24", "b36", "b37", "b38", "b5", "b6", "b7", "b11", "b15", "b24", "b26", "b15", "b10", "b5", "b8", "b12", "b11", "b7", "b39" ], "table_ref": [], "text": "Early skeleton-based action recognition methods contain CNNbased [34], [35], [36] and RNN-based [25], [37], [38], [39] methods. However, it is still necessary to explore the skeletal data structure adequately. 
Topology modeling is the design focus of GCN-based methods [6], [7], [8], [12], [16], [25], [27]. The pioneer of graph convolution, ST-GCN [16], predefines the topology based on the human structure as the input to GCNs. The topology is fixed in both the training and testing phases. Based on this, multi-scale graph building is introduced to GCNs for multilevel joint relationship modeling [11]. There are limitations in the inference performance of static methods due to their inability to follow the network co-optimization. Some works [6], [9], [13] augment topology learning using self-attention mechanisms to model the correlation between two joints given the corresponding features. Topology learned using local embeddings still needs to satisfy the need of GCN for high-quality node relationship modeling. Dynamic GCN [12] uses contextual features of all joints learned together to obtain global correlations. To make the features used for modeling more differentiated, CTR-GCN [8] designs channel-specific topology maps to explore more possibilities for feature learning in different channels. Shift-GCN [40] removes the limitations of predefined topological graphs and uses shifting operators for inter-joint feature fusion, learning node relationships implicitly. Since the action changes in real time, the dynamic approach is more advantageous and generalizes better than the static approach. In this paper, our LA-GCN topology modeling belongs to the dynamic mode." }, { "figure_ref": [], "heading": "Language Model in Skeleton-Based Action Recognition", "publication_ref": [ "b21", "b23", "b40", "b41", "b42", "b22" ], "table_ref": [], "text": "The development of natural language processing (NLP) tasks gave birth to the pre-trained representation model BERT [22] from transformers with a bi-directional encoder. It is a solution to solve NLP tasks using pre-trained LLMs to finetune. However, this solution could be more efficient and can only be adapted to one task at a time. In order to use pre-trained LLMs more efficiently, prompt learning (PL) emerged as a technique capable of adapting different tasks to large models. Specifically, the PL technique adapts the model to a new downstream task by adding specific textual parameters to the input of LLMs based on the definition of new tasks, significantly increasing the efficiency of knowledge utilization in LLMs. Meanwhile, related approaches such as CLIP [24] and [41] have successfully applied PL to learning computer vision (CV) downstream tasks and demonstrated powerful graphical and textual representation capabilities. This brings light to the CV field to step into a new phase of visual text.\nFor the action recognition task, ActionCLIP [42] uses the CLIP training scheme for video action recognition and adds a transformer layer to guarantee the temporal modeling of video data. In the construction of PL templates, ActionCLIP directly uses class labels as input text to construct cues such as \"[action] of video,\" \"carry out [action] of person,\" and so on. LST [43] uses LLM for skeleton-text multimodal representation learning in the skeletonbased action recognition task. The PL technique in LST is mainly used as a skeleton text pair building, i.e., using the prompt to allow detailed descriptions generated by GPT3 [23] for each class of skeleton actions. Inspired by human cognitive reasoning processes in cognitive neuroscience, our approach uses LLM knowledge to model human knowledge of brain regions in action reasoning. 
PL is used to construct topological maps for assisted learning, which contain fine-grained semantic relationships between human actions and joints." }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [ "b5", "b7", "b10", "b11", "b12", "b15", "b15" ], "table_ref": [], "text": "The GCN family of methods [6], [8], [11], [12], [13], [16] models the human skeleton in space and time, respectively. Precisely, these methods follow the work of [16] by first converting the skeleton data into a spatiotemporal graph G = (V, E), where V denotes joints, and E is edges, i.e., the physical connection between joints, represented by the graph's adjacency matrix A. Spatial modeling aggregates node features based on the adjacency matrix and use them to represent the relationships between joints in the skeleton graph. Temporal modeling is then used to model the motion patterns of the joints.\nIf the skeleton-based action recognition task is symbolized can be expressed as follows: R N ×T ×d → R p , where it has N joints and T frames, and the dimension of each joint coordinate is d. The GCN finally outputs the probability value that the action belongs to each class, and there are p classes in total. For each frame, the graph convolution can be expressed as:\nF l+1 = s∈S Ãs F l W s ,(1)\nwhere F l ∈ R N ×C1 is denoted as the input feature of channel C 1 and F l+1 ∈ R N ×C2 is denoted as the output feature of channel C 2 . S defines the neighborhood of each joint node, and S = {root, centripetal, centrifugal}, where the root is itself, the centripetal group is nodes near the skeleton center, and the centrifugal group is nodes far from the center.\nÃs = Λ -1 2 s A s Λ -1 2 s\nis normalized matrix A ∈ {0, 1} N ×N , and the diagonal matrix Λ s in Ãs is define as Λ ii s = j (A ij s ) + α to prevent empty row, α is a small positive number, say 0.001. W s ∈ R 1×1×C1×C2 is the weight of every s. The graph convolution of most GCN-based methods on the T dimension acts as the rule 1D convolution." }, { "figure_ref": [], "heading": "LLM GUIDED TOPOLOGY ASSISTANT GRAPH CONVOLUTION NETWORK", "publication_ref": [], "table_ref": [], "text": "LA-GCN is a novel learning framework that utilizes LLM model knowledge for reasoning about the actions of a given skeleton sequence. We first integrate a priori knowledge into a multimodal skeleton representation (Sec. 4.1 and 4.2). For better feature aggregation of the prior representation, we also introduce a neural architecture (Sec. 4.3) and a loss of category-based prior knowledge for multi-task learning (Sec. 4.4). Finally, the overall learning scheme is given. 1 " }, { "figure_ref": [ "fig_1", "fig_1", "fig_2" ], "heading": "A Global Prior Relation Graph Generated by LLM", "publication_ref": [ "b21" ], "table_ref": [], "text": "We will first extract text features for each class of action labels and all the joints using a large-scale pre-trained network Bert. The loss of BERT [22] during pre-training consists of completing and entering two sentences to predict whether the latter one is the following sentence of the previous one. The output of the last layer of the model is used for the completion of the blank, and the output of the Pooler part is used for the next sentence prediction. As shown in Fig. 2(a), Bert first tokenizes the input sentences and then adds [CLS] and [SEP] tokens at the end of the sentences. The tokens in the sentence are the indexes of the corresponding words in the vocab. 
After getting these indexes, we transform them into continuous vectors by the embedding layer. The embedding layer can be regarded as a lexicon-size vector library, and the embedding process looks up the vectors according to these indexes. The output of the last layer of the model can be obtained after N transformer layers: the features of each token and the features of [CLS] after the Pooler. Since the features after the Pooler contain the semantics of the whole sentence, they can be used directly for tasks such as text classification in general. Therefore, in this paper, the features after the Pooler are also selected as the features of our skeleton nodes. Given M action categories and N human nodes, our specific process of extracting features for each node is shown in Fig. 2(b). We feed text containing the class name and joint name, e.g., [joint] function in [class], into the text encoder of the LLM to get the corresponding output features in R N ×C for all joint tokens of each action category, where C = 256 is the feature dimension of the node feature J.\nThen, we want to obtain a global inter-node relationship topology graph, the GPR Graph, containing the semantic knowledge in the LLM to guide the skeleton representation generation. As shown in the left part of Fig. 3, to obtain the GPR Graph, we first have to find the centroids (CoCLS) of the text features J ∈ R 1×C of each node on the class dimension. Specifically, the features J i ∈ R cls×C , i ∈ (1, N ), and cls = M of the same node on different categories are averaged to obtain the category centroid vector J CoCLS ∈ R N ×C , where J^CoCLS_i = (1/M) Σ_{j=1}^{M} J^{C_j}_i. Then correlations are calculated between nodes, and Euclidean distance is chosen as the similarity measure. The final GPR Graph ∈ R N ×N with global prior information is obtained, corresponding to the distances between the node class centers. (Note that all symbols used in this section are summarized in Table 11 of the Appendix.)" }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "A Priori Skeleton Modal Representation", "publication_ref": [ "b5", "b7", "b10", "b12", "b15", "b30", "b31", "b30", "b15" ], "table_ref": [], "text": "In this section, we introduce a form of skeleton representation generated using the GPR Graph, called the a priori skeleton modal representation. Consistent with previous work [6], [8], [11], [13], we train our model to complete inference using multiple modal representations. The prior representation primarily utilizes the relative positions of joints for complementary learning, i.e., representing the data as joints and bones. In this case, the bone feature is a transformation of the joint feature: it is obtained by subtracting the starting node from the ending node according to the physical connection [16]. In detail, the joint-bone relationship at the moment t can be expressed as:\nX̃_t = (I - B)X_t, (2)\nwhere B ∈ R N ×N denotes the bone matrix containing the position relationship between the source and target joints. It is a binary matrix with B_ij = 1 if the i-th joint is the source of the j-th joint and B_ij = 0 otherwise, and the row corresponding to the base point is a zero vector. This processing significantly improves the performance of action recognition, which means that bone representations with considerable differences from the joints can learn more differentiated, complementary features.
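A compact sketch of the two transformations just described, building the GPR Graph from the class-center text embeddings via pairwise Euclidean distances and forming bone features with Eq. 2, is given below; the array names, toy shapes, and the (target, source) bone convention are assumptions for illustration, not the released code.

```python
import torch

def build_gpr_graph(joint_text_feat: torch.Tensor) -> torch.Tensor:
    """GPR Graph from LLM text features.

    joint_text_feat: (M, N, C) text features of N joints under M action classes.
    Returns: (N, N) pairwise Euclidean distances between per-joint class centers.
    """
    centers = joint_text_feat.mean(dim=0)            # (N, C) class centroids J^CoCLS
    return torch.cdist(centers, centers, p=2)        # (N, N) GPR Graph

def bone_features(x: torch.Tensor, bones: list[tuple[int, int]], n_joints: int) -> torch.Tensor:
    """Bone modality via X_bone = (I - B) X  (Eq. 2).

    x: (..., T, V, 3) joint coordinates; bones: (target, source) index pairs (an assumed layout).
    """
    B = torch.zeros(n_joints, n_joints, dtype=x.dtype, device=x.device)
    for tgt, src in bones:                           # the row of the base point stays zero
        B[tgt, src] = 1.0
    transform = torch.eye(n_joints, dtype=x.dtype, device=x.device) - B
    # out[..., v, :] = x[..., v, :] - x[..., source_of(v), :]
    return torch.einsum("uv,...vc->...uc", transform, x)
```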
In addition, because of the high weight of physical bone topology in relationship modeling, reasonable skeleton construction is critical to learning inter-node relationships.\nWe consider the difference between nodes as \"bone\" data for an additional representation. For instance, the NTU datasets [31], [32] include 25 nodes, and if we regard the difference vector between any two nodes as a \"bone\", there are C(25, 2) = 300 \"bones.\" According to the nature of the bone matrix B, if we want to construct a matrix that is different from the bone matrix but has the same characteristics, we need to transform the problem into an \"alignment problem\": pick for each joint except the base point exactly one \"bone\" connected to it, with no \"bone\" picked twice. In other words, we bind the in-degree of each joint.\nWe extracted the \"bone\" data of all samples in the NTU RGB+D 60 [31] training set (the average standard deviation of all 300 \"bones\" extracted from the training set is 0.1073). After calculation, the bone links in the physical bone matrix [16] have a small sum of standard deviations. Based on this, we define the \"bone\" selection function as g(•), and the new \"bones\" are represented as B̂ = g_min(B_std), where B_std is the sum of the standard deviations of the selected bones and g_min means that the links with the smallest B_std are chosen as the new \"bone\" representation. Specifically, given a bone vector x̃_t as represented in Eq. 2, its skeleton standard deviation (std) is the average of the std of the three coordinates in the time dimension: b_std = mean(σ_x(x̃_t), σ_y(x̃_t), σ_z(x̃_t)). We visualize the selected B̂ in Fig. 4, while other designs are given for comparison. Nearly half of the bones in B̂ in Fig. 4(b) are consistent with the original bone matrix but focus more on some complicated relationships, such as fingertips, toe tips, and hip joints, and have symmetry (the summation of the standard deviations from (a) to (d) in Fig. 4 is 0.8265, 0.6977, 2.2312, and 4.1997, respectively). Further, we use the inter-node distances of the GPR Graph in Sec. 4.1 to weight the bone vectors and extract the bones with the minimum std summation as the new skeleton representation. Our a priori modal representation has two main advantages: (1) It captures interactions between remote nodes that are not directly connected. These interactions may help recognize some actions. (2) It contains context-dependence on class properties." }, { "figure_ref": [ "fig_5" ], "heading": "Feature Aggregation", "publication_ref": [], "table_ref": [], "text": "We introduce a multi-hop attention mechanism to model the context dependence of joints, adding further information about neighboring nodes in the topology learning process. This is intended to enhance the learning of the long-distance relationships present in the a priori skeleton modal representation. Fig. 5 provides an overview of the encoder-classifier structure of LA-GCN." }, { "figure_ref": [], "heading": "More Information for Learnable Topology", "publication_ref": [ "b15", "b5", "b7", "b43" ], "table_ref": [], "text": "When relying solely on the physical topology of the skeleton to aggregate features, information learning suffers from latency and dilution effects [16]. As the relationship between joints changes with the execution of actions, relationship learning between unnaturally connected nodes is affected by node spacing. These two issues also affect each other in the aggregation process of raw GC.
For illustration, suppose two remote nodes have just established a connection at some moment; the corresponding messages may be diluted or lost after the averaging and non-linear transformation operations in the neighborhood (see Eq. 1), and the information delay will be further aggravated.\n(Fig. 5 overview: the Encoder stacks L basic blocks of MHA-GC and MS-TC with Add & Norm on the GPR-weighted input plus positional embedding; the Classifier and the PC-AC Auxiliary Classifier produce L_pri and L_aug; the MS-TC sub-block combines dilated temporal convolutions with max pooling; the MHA-GC sub-block computes cross-joint correlations and diffuses them over k hops.)\nSome approaches [6], [8] use an attention mechanism to guide the learning of the internal topology to complement the node relations adaptively. However, this is not optimal because it contains only first-order neighbor information in the semantic space. The neighborhood information indicated by the semantic space needs to be sufficiently learned, which is especially true for our a priori modal representation with contextual information. As in video understanding tasks, the network is allowed to focus first on extracting intra-frame features and then on the fusion of inter-frame features [44]. Here we want to allow the nodes to learn more fully before integrating. Therefore, we propose an architecture that uses a multi-hop attention mechanism to capture each node's neighborhood to improve the efficiency of information exchange and speed up the convergence of the GCN." }, { "figure_ref": [], "heading": "Pre-preparation of the Input Data", "publication_ref": [], "table_ref": [], "text": "Before the skeleton sequence X is fed into the encoder for feature aggregation, we need to make two preparations: one is to weight X using the GPR Graph as mentioned in Sec. 4.2, and the other is to add the position embedding (PE) after X has been linearly transformed to obtain the embedding representation of the nodes. The PE is the embedding that contains the abstract position of each node among all nodes V. This gives us the initial feature representation:\nF (0) = X GPR W 0 + PE, (3)\nwhere W 0 is the parameter matrix, F (0) ∈ R N ×C×T ×V , and PE ∈ R C×V . If using the raw skeleton modal, the X GPR in Eq. 3 is changed to X." }, { "figure_ref": [ "fig_5" ], "heading": "Encoder Structure", "publication_ref": [], "table_ref": [], "text": "The base block of the encoder consists of two sub-blocks, MHA-GC and MS-TC, responsible for spatial and temporal modeling, respectively. As shown in Fig. 5, the output features F l of the base block are obtained by BN-normalizing the hidden features output by the two sub-modules and adding them to the upper-layer features F l-1 through a residual connection." }, { "figure_ref": [ "fig_5" ], "heading": "MS-TC", "publication_ref": [ "b5", "b7", "b10", "b44", "b45" ], "table_ref": [], "text": "We use the commonly used multi-scale temporal encoding module [6], [8], [11] to model skeletal sequences with different time lengths.
The module contains two dilated convolutions with different kernel settings and a max pooling layer. The output is obtained by adding the skip connections with the 1 × 1 convolution.\nMHA-GC We propose a new multi-hop attention graph convolution module for inter-node relationship modeling called MHA-GC.\nComputing multi-hop attention [45] for complementary neighborhood node information, MHA-GC first computes the first-order attention on all nodes Āl for multi-hop diffusion.\nAs shown in Fig. 5, at each layer l, the feature F l is passed through two branches containing the transformation equation P, consisting of a pooling layer and a 1 × 1 convolution, respectively. Feature vectors M, N ∈ R N ×R×V are obtained after P, where R is the reduced-to-feature dimension. Any pair of vertices (v i , v j ) in M and N is computed separately to obtain the first-order neighborhood information Ãl = v i,j=1 σ(M i -N j ), where σ is the activation function, v indicates all nodes, and Ãl ij denotes the messages aggregation from node j to node i.\nWe combine the shared topology Ȧl ∈ R V ×V with the linear mapping of learnable attention Ãl ∈ R R×V ×V to obtain the refined attention:\nĀl = Ȧl + γ Ãl W 3 ,(4)\nwhere γ and W 3 are the refinement weight parameters and mapping layer weights, respectively. The refinement method of Āl and the generation method of the GPR Graph are based on calculating feature distance between nodes to ensure the consistency of feature semantics.\nThen, the attention diffusion module calculates the attention between indirectly connected node pairs through a diffusion process based on the first-order attention matrix. Our multi-hop attention is computed as follows:\nĀ = k i=0 ω i Āi ,(5)\nwhere\nω i = β(1 -β) i , β ∈ (0, 1]\nis the decay factor of Ā, and ω i > ω i+1 ; Āi is the power of the matrix Ā. The original GC layer passes information in a one-hop attention weighting pattern (see Eq. 1). The power matrix Āi gives node information paths of length i, increasing the receptive field for associating relations between nodes. The implementation of ω is based on inductive bias, which determines the weight of the contribution of node j acting on each node i on the path, and the larger the number of hops, the smaller the weight. The overlap of node aggregation on the same path also alleviates the information dilution effect of the original GC layer aggregating two remote nodes. Finally, we define the feature aggregation function of the Ābased MHA-GC as\nF l+1 = σ( Āl F l W l 4 ),(6)\nwhere σ denotes the activation function ReLU [46] and W 4 is the weights of the output layer." }, { "figure_ref": [ "fig_2", "fig_5", "fig_5", "fig_6" ], "heading": "A Priori Consistency-Assisted Classification Module", "publication_ref": [ "b46", "b27", "b28", "b29", "b44" ], "table_ref": [], "text": "The goal of our a priori consistency-assisted classification (PC-AC) module is to design a class relationship topology graph containing a priori node relationships related to the predicted actions to help the GCN perform node feature aggregation. Specifically, the PC-AC module adds additional branch co-supervision to the GCN at training time, forcing each feature that passes through a branch containing a specific category topology to be predicted for that class.\nAs shown in Fig. 3, we generate the category topology graph T-C using the text features constructed in Sec. 4.1. For each action, the text features of N nodes are combined two by two to calculate the similarity. 
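The following PyTorch sketch illustrates the two constructions described in this section: the multi-hop attention diffusion of MHA-GC (Eqs. 4-6) and the pairwise-similarity class exemplar topologies T-C built from the per-class joint text features. Function names, the choice of cosine similarity for T-C, and the shapes are assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def multi_hop_attention(A1: torch.Tensor, k: int = 4, beta: float = 0.5) -> torch.Tensor:
    """Diffuse one-hop attention into multi-hop attention (Eq. 5).

    A1: (V, V) refined one-hop attention; returns sum_i beta * (1 - beta)**i * A1**i.
    """
    out = torch.zeros_like(A1)
    power = torch.eye(A1.size(0), dtype=A1.dtype, device=A1.device)  # A1^0
    for i in range(k + 1):
        out = out + beta * (1.0 - beta) ** i * power
        power = power @ A1                                           # next power of A1
    return out

def mha_gc_layer(x: torch.Tensor, A_multi: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """Feature aggregation F^{l+1} = ReLU(A_multi F^l W) (Eq. 6).

    x: (B, T, V, C_in), A_multi: (V, V), W: (C_in, C_out).
    """
    return F.relu(torch.einsum("uv,btvc,cd->btud", A_multi, x, W))

def class_exemplar_topology(joint_text_feat: torch.Tensor) -> torch.Tensor:
    """Class topology exemplars T-C from per-class joint text features.

    joint_text_feat: (M, N, C); returns (M, N, N) pairwise similarity per class.
    Cosine similarity is an assumption; the text only states pairwise similarity.
    """
    z = F.normalize(joint_text_feat, dim=-1)
    return torch.einsum("mnc,mkc->mnk", z, z)
```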
We refer to T-C as a class topology exemplar and assume that they contain the node relationships that should be present to identify an action.\nNext, to achieve our goal, LA-GCN will train the main and PC-AC auxiliary branches in a standard multi-task learning setup. We represent the entire multi-task model with a parameter θ as a function f θ (x) and the input is X. This parameter θ will be updated by the common loss of the primary and auxiliary branches. The input features are obtained using a hard parametersharing approach [47]. Specifically, the category prediction of the auxiliary branch uses a set of features shared by the main branch θ n , as shown in Fig. 5, where n is the selected main branch layer. After the final feature layer, we applied a task-specific layer to output the corresponding predictions, with the softmax classifier used in both branches. In this case, the classification head of the main branch of the model consists of an average global pool, and the classification head of the secondary branch consists of a fully connected layer and an average pooling layer.\nWe denote the main prediction as f θ pri (x), the auxiliary prediction as f θ aux (x), and the ground truth label as ŷ. Given the corresponding layer features F l n , and the category topology T-C, the cross-entropy loss in terms of the auxiliary branch is defined as:\nL aug = - k y k • log( ŷk ),(7)\nwhere y k is the one-hot vector containing the accurate category c of the sample X, and ŷk = e z i k e z k , logit z is the output of feature Z p containing the category prior after our auxiliary classification head, while Z p ∈ R N ×C×cls×V is obtained by multiplying F l n ∈ R N ×C×1×V and T-C ∈ R cls×V ×V in Fig. 5. The PC-AC module aims to maximize the likelihood that the output logit z i belongs to category c. For the primary supervision of the predicted action labels, logit is then the output of the main branch classification head. Ultimately, to update the parameters θ of LA-GCN, we define the multi-task objective as:\narg min θ (L(f θ pri (x), ŷ) + λL(f θn aux (x), ŷ)),(8)\nwhere λ is the hyper-parameter used to adjust the L aug weight. When testing, LA-GCN will drop the auxiliary branch to use the main branch for prediction.\nThe PC-AC module forces the features to be correctly classified is a relatively complex task, and it plays a role in regularization to some extent. During testing, the PC-AC module was removed without affecting the inference speed of the model. Meanwhile, T-C belongs to the fully connected graph, which contains relatively well-established inter-node relationships in each class. This property of T-C allows the PC-AC module to alleviate the over-smoothing problem [28], [29], [30], [45] common to GCN networks. Class-specific semantic information in T-C also improved the recognition rate of similar process actions, as shown in Fig. 6: \"reading\" and \"writing\" improved by 9.16% and 8.46%, respectively." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b30", "b31", "b32", "b32" ], "table_ref": [], "text": "NTU RGB+D. The dataset [31] contains 56,880 skeleton action sequences and has 60 classes, which can be categorized into daily, healthy, and interactive behaviors. All action data were captured simultaneously by three Microsoft Kinect v2 cameras from different angles. 
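Returning to the PC-AC branch above, the sketch below shows one way its auxiliary supervision (Eqs. 7-8) can be wired: features shared from a main-branch layer are projected through every class exemplar T-C, pooled into per-class logits, and trained with cross-entropy alongside the main loss. The pooling order, the linear head, and the value of λ are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pc_ac_logits(feat: torch.Tensor, t_c: torch.Tensor, fc_weight: torch.Tensor) -> torch.Tensor:
    """Auxiliary class scores from shared features and class exemplars T-C.

    feat: (B, C, T, V) features shared from a main-branch layer.
    t_c:  (cls, V, V) class exemplar topologies.
    fc_weight: (C,) per-channel weights of the auxiliary linear head (an assumption).
    """
    pooled = feat.mean(dim=2)                         # (B, C, V) average over time
    z = torch.einsum("bcv,kvu->bcku", pooled, t_c)    # (B, C, cls, V) per-class features
    z = torch.einsum("bcku,c->bku", z, fc_weight)     # (B, cls, V) linear over channels
    return z.mean(dim=-1)                             # (B, cls) logits after average pooling

def la_gcn_loss(main_logits, aux_logits, labels, lam: float = 0.1):
    """Multi-task objective of Eq. 8: L_pri + lambda * L_aug (lambda value is a placeholder)."""
    return F.cross_entropy(main_logits, labels) + lam * F.cross_entropy(aux_logits, labels)
```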
Two evaluation schemes are presented in this paper:1) cross-subject (X-sub), where the training set is the data collected from 20 subjects and the data from the remaining 20 is the test set. 2) cross-view (X-view), using the data captured by the number 2 and 3 cameras as the training set and the data captured by the number 1 camera as the test set. NTU RGB+D 120. The most commonly used is the incremental dataset NTU RGB+D 120 [32] of NTU RGB+D. It has 120 classes, 106 subjects, and 114,480 skeleton sequences. All of these actions were acquired by three cameras. Two benchmarks were introduced to evaluate model performance in NTU RGB+ D120: 1) crossover subjects (X-sub), which, as in NTU RGB+D, requires differentiation between two groups of subjects, and each group consists of 53 volunteers. 2) crossover setup (X-setup), in which data are acquired in different configurations. The training set is the even configuration data, and the test set is the odd configuration data. NW-UCLA. The dataset [33] contains ten basic human actions and 1494 video clips from 10 actors. All data were obtained from three Kinect cameras captured simultaneously. Following the evaluation protocol introduced in [33], the data from the first two cameras are used as the training set, and the data from the last camera as the test set." }, { "figure_ref": [], "heading": "Training", "publication_ref": [ "b47", "b30" ], "table_ref": [], "text": "All experiments used the PyTorch deep learning framework [48] on 2× NVIDIA RTX 3090 GPUs. We used SGD with Nesterov momentum (0.9) as an optimizer with a weight decay of 0.0004. The entire training epoch was 110. The first five epochs were used TABLE 1: The comparison of Top-1 accuracy (%) on the NTU RGB+D [31] benchmark." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b24", "b15", "b50", "b12", "b38", "b51", "b52", "b39", "b53", "b11", "b10", "b54", "b55", "b6", "b7", "b5" ], "table_ref": [], "text": "X-Sub X-View VA-LSTM [25] 79.4 87.6 ST-GCN [16] 81.5 88.3 AS-GCN [51] 86.8 94.2 2s-AGCN [13] 88.5 95.1 AGC-LSTM [39] 89.2 95.0 Directed-GNN [52] 89.9 96.1 ST-TR [53] 90.3 96.3 Shift-GCN [40] 90.7 96.5 DC-GCN+ADG [54] 90.8 96.6 Dynamic-GCN [12] 91.5 96.0 MS-G3D [11] 91.5 96.2 DDGCN [55] 91.1 97.1 MST-GCN [56] 91.5 96.6 EfficientGCN [7] 92.1 96.1 CTR-GCN [8] 92.4 96.8 Info-GCN [6] 93.0 97.1 LA-GCN (ours) 93.5 97.2" }, { "figure_ref": [], "heading": "TABLE 2:", "publication_ref": [ "b31", "b30", "b36", "b35", "b15", "b12", "b49", "b52", "b39", "b10", "b11", "b55", "b6", "b7", "b5", "b48", "b7", "b49", "b39" ], "table_ref": [], "text": "The comparison of Top-1 accuracy (%) on the NTU RGB+D 120 [32] benchmark.\nMethod X-Sub X-Set Part-Aware LSTM [31] 26.3 25.5 ST-LSTM [37] 55.7 57.9 RotClips+MTCNN [36] 62.2 61.8 ST-GCN [16] 70.7 73.2 2s-AGCN [13] 82.9 84.9 SGN [50] 82.9 84.9 ST-TR [53] 85.1 87.1 Shift-GCN [40] 85.9 87.6 MS-G3D [11] 86.9 88.4 Dynamic-GCN [12] 87.3 88.6 MST-GCN [56] 87.5 88.8 EfficientGCN [7] 88.7 88.9 CTR-GCN [8] 88.9 90.6 Info-GCN [6] 89. with a warm-up strategy [49] to stabilize the training process. The primary learning rate is 0.1 and is reduced by 0.1 at 90 and 100 epochs. On NTU RGB+D and NTU RGB+D 120, the batch size of the experiment is 200, and the data are processed similarly [8], [50]. For NW-UCLA, the batch size was 64, and the same data preprocessing method as in [40] was used. The above configuration was used for all subsequent experiments if not explicitly stated. 
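The training recipe above maps to a few lines of PyTorch; the sketch below mirrors the stated settings (SGD with Nesterov momentum 0.9, weight decay 0.0004, base learning rate 0.1 decayed by 0.1 at epochs 90 and 100 over 110 epochs, and a 5-epoch warm-up). The linear warm-up shape is an assumption, since only its length is stated.

```python
import torch

def build_optimizer_and_scheduler(model: torch.nn.Module, warmup_epochs: int = 5):
    optimizer = torch.optim.SGD(
        model.parameters(), lr=0.1, momentum=0.9, nesterov=True, weight_decay=0.0004
    )

    def lr_lambda(epoch: int) -> float:
        if epoch < warmup_epochs:          # linear warm-up, an assumed schedule
            return (epoch + 1) / warmup_epochs
        if epoch < 90:
            return 1.0
        if epoch < 100:
            return 0.1                     # first decay at epoch 90
        return 0.01                        # second decay at epoch 100

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```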
The source code of LA-GCN is publicly available on https://github.com/damNull/LAGCN. Overall Architecture. The encoder consists of nine basic blocks with 64-64-64-128-128-128-256-256-256 channels. The time dimension is reduced to half-time in blocks four and six. The skeleton sequence is first subjected to a feature transformation to obtain an embedding representation of the nodes, and the transformation uses a fully connected layer. This representation is then passed through the spatial and temporal modules to extract the sequence features." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b32" ], "table_ref": [], "text": "Many state-of-the-art methods use a multi-stream fusion strategy. Specifically, the final results of the experiments fuse four modes (4s), namely, joint, bone, joint motion, and bone motion. They are the original joint coordinates, the vector obtained by differencing TABLE 3: The comparison of Top-1 accuracy (%) on the NW-UCLA [33] benchmark." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b56", "b57", "b58", "b37", "b38", "b39", "b53", "b7", "b5", "b5", "b7", "b11", "b39", "b30", "b31", "b32", "b5" ], "table_ref": [], "text": "Top1 Lie Group [57] 74.2 Actionlet ensemble [58] 76.0 HBRNN-L [59] 78.5 Ensemble TS-LSTM [38] 89.2 AGC-LSTM [39] 93.3 4s Shift-GCN [40] 94.6 DC-GCN+ADG [54] 95.3 CTR-GCN [8] 96.5 InfoGCN [6] 97.0 LA-GCN (ours) 97.6 the coordinates of the joints with physical connections, the differentiation of the joint in the time dimension, and the differentiation of the bone in the time dimension, respectively. After training a model for each data stream, the softmax scores of each data stream are summed up as the final score during inference. For a fair comparison, we used the same setup as in [6], [8], [12], [40].\nOur models are compared with state-of-the-art methods on the datasets NTU RGB+D [31], NTU RGB+D 120 [32], and NW-UCLA [33], respectively, and the experimental results are displayed in Table 1, 2, and 3. Our approach achieves state-ofthe-art performance on three datasets under almost all evaluation benchmarks. We use the proposed new \"bone\" representation for 6s integration. The bones used for training 5s and 6s were selected from the proposed prompt 2 (p2) and prompt 5 (p5), respectively (see Appendix). Compared to the joint stream (86.5%), our method improved by 3.2%, 3.4%, and 4.2% for 2s, 4s, and 6s, respectively. On NTU-RGB+D 120, according to the same settings, our 4s fusion model is 0.6% and 1.0% higher than CTR-GCN 4s [8] and 0.5% higher than both InfoGCN 6s [6]. Our 6s ensemble is 1.1% and 1.3% higher than InfoGCN 6s. Notably, our method is the first to effectively aid topological modeling using the knowledge of large-scale pre-trained models." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b31" ], "table_ref": [], "text": "In this section, we analyze different configurations and ablation studies for each component of LA-GCN on the X-sub benchmark of the NTU RGB + D120 [32] dataset." }, { "figure_ref": [ "fig_6" ], "heading": "Effectiveness of PC-AC", "publication_ref": [ "b59" ], "table_ref": [ "tab_1" ], "text": "The validation of the PC-AC module in Sec. 4.4 includes the overall improvement due to L aug and the performance of different generated prompts for the class exemplar topology T-C graphs. The model accuracy steadily improves by adding L aug shown in Table 4. 
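As a reference for the multi-stream ensembles reported above (2s, 4s, and 6s), the short sketch below sums per-stream softmax scores at inference time; equal stream weights are an assumption.

```python
import torch

def ensemble_scores(stream_logits: list[torch.Tensor], weights=None) -> torch.Tensor:
    """Fuse per-stream predictions (joint, bone, joint motion, bone motion, ...).

    stream_logits: list of (B, num_classes) logits, one per trained stream.
    """
    if weights is None:
        weights = [1.0] * len(stream_logits)          # equal weights are an assumption
    score = torch.zeros_like(stream_logits[0])
    for w, logits in zip(weights, stream_logits):
        score = score + w * torch.softmax(logits, dim=-1)
    return score.argmax(dim=-1)                       # predicted class per sample
```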
There are differences between the T-C graphs obtained from different textual cues, and we design the prompt text p as We compare action representations based on trained models with and without PC-AC by principal component analysis (PCA [60]), as shown in Fig. 6. Compared to the case without L aug loss, the potential representations learned with the aid of the category prior present a class-conditional distribution with a more apparent separation and perform better in the subspace differentiation of process-similar actions. In addition, it makes the intra-class features more convergent." }, { "figure_ref": [ "fig_7", "fig_5" ], "heading": "Improved Interaction Between Neighbor Nodes.", "publication_ref": [ "b5", "b10", "b53", "b31" ], "table_ref": [ "tab_2", "tab_3" ], "text": "Based on the above baseline, we design two schemes in Table 5 to replace the original 1-hop GC. The first one uses MHA-GC for all blocks, and the other replaces only the 1-hop GC in the first layer. Then, we select k on top of that. We observe that the message passing performance of multi-hop GC for both schemes is strictly better than that of 1-hop GC. However, option I suffer from memory overflow at 3-hop. Due to the relatively simple topology of the skeleton, complete replacement is somewhat oversupplied. The best performance in scheme II is 86.5% at k = 4. We visualize the MHA in Fig. 7. The multi-hop learnable topology greatly improves the description of the node relationships. The loss curve on the right side shows that MHA-GC can accelerate the model convergence.\nComparison of Complexity with Other Models As shown in the Fig. 5, ω 0 = β and Ā0 = I, ĀF l is an approximation of below implementation when the condition k-hop → ∞ is satisfied: where 0 ≤ k < K. If k → ∞ , then lim k→∞ k i=0 ω i = 1 and ω i > 0, with the proposition lim K→∞ F K = ĀF l holds. The proof of this is given in the Appendix. This approximation indicates that the model complexity is in the same magnitude order as the previous attention-based SOTA approach [6], [11], [54]. We compare the model and computational complexity with these methods on NTU-RGB+D 120 [32]. As shown in Table 6, our model balances complexity and final accuracy well. Our model is 1.4% and 1.6% more accurate than the previous SOTA InfoGCN and CTR-GCN while having the same level GFLOPs.\nF k+1 = (1 -β) ĀF k + βF l ,(9)" }, { "figure_ref": [], "heading": "LIMITATIONS", "publication_ref": [], "table_ref": [], "text": "Despite the good results of the proposed LA-GCN in experiments, the attempts to acquire and apply LLM prior knowledge information are more oriented towards manual design. Further integration of language model assistance with self-supervised learning should be exciting. In addition, our skeleton modal representation is obtained based on the statistics of the existing dataset. It would also be interesting to see if new observations and conclusions can be made on more categories of datasets. Finally, expanding LA-GCN to in-the-wild settings is also worth thinking about." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We present LA-GCN, a representation learning framework aided by prior knowledge of language models. It builds on the cognitive neuroscience of human action recognition, i.e., the recognition process requires multiple brain regions to work together. LA-GCN accomplishes processes of encoding critical information and prediction based on the category a priori knowledge assistance. 
We introduce a new \"bone\" multimodal representation guided by global a priori information for model integration. Further, we propose a new multi-hop attention-based graph convolution module, MHA-GC, and demonstrate that it can effectively improve information interaction efficiency for modeling learnable topologies. The \"bone\" representation and MHA-GC are jointly used to ensure crucial information encoding. Ultimately, our model exhibits state-of-the-art performance on three popular datasets for skeleton-based action recognition." }, { "figure_ref": [ "fig_5" ], "heading": "APPENDIX A STRUCTURE APPROXIMATION PROPOSITION", "publication_ref": [], "table_ref": [], "text": "As described in Sec. 5.4.2, the structure in the dashed box in Fig. 5 MHA-GC sub-block can be approximated by our k-hop structure under certain conditions. The equation of the dash box implementation is\nF k+1 = (1 -β) ĀF k + βF l ,\nwhere k is the iterations number of attention diffusion, F k and F k+1 is the state of F l at the k-th and k+1-th hop. Proof. Let E 0 = F l , then Eq. 9 becomes:\nProposition 1. lim K→∞ F K = ĀF l\nE k+1 = (1 -β) ĀE k + βE 0 . (10\n)\nLet K > 0 be the hop number, we approximate F l by E K . By recursion, we can obtain\nE K = (1 -β) ĀE K-1 + βE 0 = (1 -β) Ā((1 -β) ĀE K-2 + βE 0 ) + βE 0 = (1 -β) 2 Ā2 E K-2 + β(1 -β) ĀE 0 + βE 0 = (1 -β) 3 Ā3 E K-3 + β(1 -β) 2 Ā2 E 0 + β(1 -β) ĀE 0 + βE 0 • • • = ((1 -β) K ĀK + β K-1 i=0 (1 -β) i Āi )E 0 .(11)\nThat is\nE K = ((1 -β) K ĀK + β K-1 i=0 (1 -β) i Āi )F l . As β ∈ (0, 1] and ĀK i,j ∈ (0, 1], when K → ∞, the term (1 - β) K ĀK → 0. Thus, lim K→∞ E K = ( K-1 i=0 β(1 -β) i Āi )F l = ĀF l .\nThe skeleton sequence contains a fixed number of edges ε, and by the above approximation, we conclude that the complexity of multi-hop attention does not differ much from one-hop attention.\nThe complexity can all be expressed as O(|ε|), where the number of hops K is the factor that determines the complexity of multiple hops. A better representation can be obtained for skeletal data 1 ≤ K ≤ 4 in Sec. 5.4.2. Also, if the structural complexity of the graph increases, then K needs to increase as well." }, { "figure_ref": [ "fig_8" ], "heading": "APPENDIX B MATHEMATICAL EXPLANATION OF EFFECTIVENESS FOR MHA", "publication_ref": [ "b60", "b61", "b62", "b63" ], "table_ref": [ "tab_2" ], "text": "In addition to the intuitive conclusion from Table 5 that the expressiveness of multiple hops is better than that of one hop, we can also perform a spectral analysis [61] of Ā and Ā to obtain relevant proof.\nThe relationship between our multi-hop and one-hop attention is given in Eq. 5: Ā = k i=0 ω i Āi , ω i = β(1 -β) i , and Āi is the power matrix. In spectral analysis, the Jordan decomposition of graph attention Ā is Ā = U ΛU -1 , where the i-th column of U is the eigenvector u i of Ā, Λ is the diagonal matrix, and each element in Λ is the eigenvalue corresponding to u i , Λii = λ i . Then we have:\nĀ = K l=0 ω l Āl = K l=0 ω l (U ΛU -1 ) l(12)\nThe eigenvectors of the power matrix Ān are same as Ā, we can get Ān = (U ΛU\n-1 )(U ΛU -1 ) • • • (U ΛU -1 ) = U Λn U -1 .\nIn addition, the eigenvectors of Ā + Ā2 are same as Ā. By recursion, the summation of the Āl , l ∈ [0, K] has the same eigenvectors as Ā, we have:\nĀ = U ( K l=0 ω l Λl )U -1 .(13)\nTherefore, we have an analogy that the eigenvectors of Ā and Ā are also the same.\nLemma 1. Let λi and λ i be the i-th eigenvalues of Ā and Ā, respectively. 
Then, the eigenvalue relationship function is\nλi = ∞ l=0 ω l λ i l = ∞ l=0 β(1 -β) l λ i l = β 1 -(1 -β)λ i(14)\nProof.\nL = I -Q -1 2 ĀQ -1 2\nis the symmetric normalized Laplacian of skeleton graph G, where Q = diag([q 1 , q 2 , • • • , q N ]), q i = Āij j=0 is the degree matrix. Since Ā is the one-hop attention matrix of G, if Ā is generated by softmax, then q i = 1 and thus Q = I. Therefore, L = I -Ā, and the eigenvalue of L is λi = 1 -λ i . Also, the work [62] proves that the value domain of eigenvalues λi of symmetric normalized Laplace matrix is [0, 2]. Thus, -1 ≤ λ i ≤ 1 and β ∈ (0, 1), we have\n|(1 -β)λ i | ≤ (1 -β) < 1. When K → ∞, ((1 -β)λ i ) K → 0 and λi = lim K→∞ K l=0 β(1 -β) l λ i l . Let R be K l=0 β(1 -β) l λ i l is calculated as follow R = β + β(1 -β)λ i + β(1 -β) 2 λ i 2 + • • • + β(1 -β) K λ i K = β(1 + (1 -β)λ i + ((1 -β)λ i ) 2 + • • • + ((1 -β)λ i ) K ) = β 1 -((1 -β)λ i ) K 1 -(1 -β)λ i .(15)\nWe\nget λi = lim K→∞ R = lim K→∞ β(1-((1-β)λi) K ) 1-(1-β)λi = β 1-(1-β)λi .\nThe above proof is under the condition that Ā is obtained by softmax, but here, we use the feature distance to compute Ā to maintain semantic consistency with the GPR Graph (Sec. 4.3). For this purpose, we visualize the degree matrix Q with value domain of eigenvalue less than 0.3. Thus, the eigenvalue λ Qi of Q -1/2 has a value domain greater than 1, and\n-1 ≤ λ Qi λ i λ Qi ≤ 1. Since λi ∈ [0, 2], |λ Qi λ i λ Qi | ≤ 1. And λ Qi ≥ 1, we have |λ i | ≤ 1. The condition |(1 -β)λ i | ≤ (1 -β) < 1 still holds.\nWe further obtain the eigenvalue relations for the normalized laplacian graph of Ā and Ā according to Eq. 14:\nλG i λ G i = 1 -λi λ G i = 1 - β 1-(1-β)λi λ G i = 1 - β 1-(1-β)(1-λ G i ) λ G i = 1 β 1-β + λ G i .(16)\nIf λ G i is large, the multi-hop Laplacian eigenvalues λG i less than one-hop λ G i . Eq. 16 shows that multi-hop can add smaller eigenvalues. The smaller the beta, the more pronounced the effect. It has been shown in the spectral analysis that low-frequency signals correspond to topological information of the graph, while higher eigenvalues correspond more to noise [63], [64]. Therefore, multi-hop learned representations can better describe the internode relationships. In addition, we visualize the joint attention of MHA on more actions in Fig. 8, including \"drinking,\" \"playing with phone table,\" \"touching the head,\" \"writing,\" \"brushing teeth,\" \"reach into pocket,\" \"jump up,\" \"pick up,\" \"sitting down,\" and \"staggering.\" Some actions focus on the upper body, such as the head and hands. Some actions focus on the lower body, such as the hip, knee, and foot. Our MHA can correctly show the strength of the relationship between these critical parts in different actions. The relationship between the left hand and the head was stronger for \"brushing\" than for \"drinking.\" In the two actions of \"writing\" and \"playing with phone tablet,\" MHA paid more vital attention to the \"spine\" during writing. In addition, MHA can effectively describe the remote interaction between hands and legs in \"jump up\" and between neck, shoulders, and feet in \"staggering.\"" }, { "figure_ref": [], "heading": "APPENDIX C SUPPLEMENTAL EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1 Ablations on NTU RGB+D 120", "publication_ref": [], "table_ref": [ "tab_4", "tab_5", "tab_5" ], "text": "Weighting Factors for L aug in Total Loss. 
We perform a grid search for the coefficients λ of L aug and the prompt that generates T-C in PC-AC. As shown in Table 7, 8 left. Our model consists of nine base blocks. These nine basis blocks are divided into three stages based on the feature dimensional descent process. θ 3 , θ 2 , and θ 1 corresponds to the output features of the ninth, sixth, and third blocks. Except for the input feature selection, we further compared the different pooling methods used to reduce the feature dimensionality and get classification logits. Table 8 right shows that the average pooling performs the best. The average relationship between joints is more meaningful as the category-by-category representation. Multi-stream Ensemble. We integrated the skeletal modal representations generated by different prompts to observe the final TABLE 9: The accuracy (%) of the five-stream (5s) and six-stream (6s) ensemble on the NTU RGB+D 120 X-Sub split between the traditional four-stream (4s) and the new \"bone\" representatives. The basis is the joint acc: 86.5%. The model accuracy (%) of the new \"bone\" representations {B p1 , B p2 , B p3 , B p4 , B p5 , B p6 } on the NTU RGB+D 120 X-Sub split are 85.9, 86.0, 84.9, 85.9, 85. performance of the multi-stream integration. We give the classification accuracy of the skeleton representation B p * generated by each prompt and the ensemble results for 2s, 4s, 5s, and 6s in Table 9. The 2s in Table 9 represent the integrated joint and bone models, the 4s represent the integrated 2s and their motion modalities, the 5s represent the integrated 4s and arbitrary B p * modalities, and the 6s represent the integrated 4s and the two best-performing prompts B p2 and B p5 in the 5s. In Table 9, we can see that the models trained with B p * have a corresponding improvement in accuracy. Choosing the two best-performing B p * for integration can lead to higher performance." }, { "figure_ref": [], "heading": "C.2 Multi-stream Ensemble on All Datasets", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "The experimental results of the multi-stream integration of our method on different validation classifications of the three benchmark datasets are shown in Table 10. All 6s ensembles were done by the best-performing skeleton modalities B p2 and B p5 on NTU RGB+D 120 X-Sub. It can be observed that the integration performance of the skeleton modalities generated by these two prompts is robust on all datasets. Our method uses the sum of standard deviations as the criterion for selecting a new skeleton representation and requires that the newly selected \"bone\" be similar to the original skeleton representation. Moreover, the preserved bones are guided by a priori information. Although the model learned from the new skeleton representation does not perform as well as the original bone model, its good performance in ensemble experiments demonstrates that the variability of the new \"bone\" can effectively complement feature learning. We give all the symbols in Sec. 4 in Table 11 to make it more accessible. " } ]
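As a concrete illustration of the multi-stream ensembles reported in Tables 9 and 10, the sketch below fuses per-stream predictions by summing their softmax scores at inference time, which is how the 2s/4s/5s/6s results are obtained; the function and variable names are illustrative assumptions, not part of the released code.

```python
import torch
import torch.nn.functional as F

def ensemble_scores(stream_logits, weights=None):
    """Fuse per-stream predictions by summing (optionally weighted) softmax scores.

    stream_logits: list of (N, num_classes) tensors, one per trained stream,
                   e.g. joint, bone, joint motion, bone motion, B_p2, B_p5.
    weights:       optional per-stream weights; defaults to uniform.
    """
    if weights is None:
        weights = [1.0] * len(stream_logits)
    fused = sum(w * F.softmax(logits, dim=-1)
                for w, logits in zip(weights, stream_logits))
    return fused.argmax(dim=-1)  # final class prediction per sample

# Toy usage: six random "streams" standing in for a 6s ensemble.
if __name__ == "__main__":
    n, num_classes = 4, 120
    streams = [torch.randn(n, num_classes) for _ in range(6)]
    print(ensemble_scores(streams))
```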
How humans understand and recognize the actions of others is a complex neuroscientific problem that involves a combination of cognitive mechanisms and neural networks. Research has shown that humans have brain areas for action recognition that process top-down attentional information, such as the temporoparietal association area. Humans also have brain regions dedicated to understanding the minds of others and analyzing their intentions, such as the medial prefrontal cortex of the temporal lobe. Skeleton-based action recognition creates mappings for the complex connections between the human skeleton movement patterns and behaviors. Although existing studies encoded meaningful node relationships and synthesized action representations for classification with good results, few of them considered incorporating a priori knowledge to aid potential representation learning for better performance. LA-GCN proposes a graph convolution network using large-scale language model (LLM) knowledge assistance. First, the LLM knowledge is mapped into a priori global relationship (GPR) topology and a priori category relationship (CPR) topology between nodes. The GPR guides the generation of new "bone" representations, aiming to emphasize essential node information from the data level. The CPR mapping simulates category prior knowledge in human brain regions, encoded by the PC-AC module and used to add additional supervision, forcing the model to learn class-distinguishable features. In addition, to improve information transfer efficiency in topology modeling, we propose multi-hop attention graph convolution. It aggregates each node's k-order neighbors simultaneously to speed up model convergence. LA-GCN reaches state-of-the-art performance on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.
Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition
[ { "figure_caption": "Stage 1 :Fig. 1 :11Fig. 1: Schematic of LA-GCN concept. The top half of this figure shows two brain activity processes when humans perform action recognition. The bottom half shows the proposed multi-task learning process. The knowledge of the language model is divided into global information and category information to simulate the a priori knowledge used in human reasoning to aid the model. The encoder infers the correlation between joints and thus refines the topology using contextual information.", "figure_data": "", "figure_id": "fig_0", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Extraction of text features. Subfigure (a) is Bert's architecture. (b) Our method uses the learned text encoder to extract text features by embedding the names of classes [C] and the names of all joints [J] of the target dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig.3: Summarize our approach to generate prior topologies. GPR Graph is obtained by computing the class centers of the joints and computing correlations between the node feature of each action to obtain the CPR Graph.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Multimodal representation of the skeleton. The arrows depict the \"bone\" vector from the source to the target joint. (a) is the bone matrix [16]. (b), (c) and (d) is our minimum, mean, and maximum std summation matrix, respectively. It was found that nearly half of the skeletons in (b) are consistent with (a) but contain more detailed relationships, and (c) implicitly contains information about the connections between body parts, such as hands, torso and hands, torso and legs, and legs. Where (b) is used as our new \"bone\" representation.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: LA-GCN architecture. The model consists of a main branch and an auxiliary branch containing an encoder and a classifier. The details of each sub-block structure are shown in the gray box. The symbols N , C, T , and V in this sub-block indicate the batch size, the number of input/output channels, the number of video frames, and the number of joints, respectively. We use the GPR Graph to generate a skeleton representation X GP R as input for learning the topology. This allows the encoder to capture the inter-node context directly with global prior semantic information. The PC-AC guides our neural structure to learn conditional representations based on the class prior node relations.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: The left and middle subplots show the PCA projections of potential representations with and without the auxiliary loss L aug of the PC-AC. The five action classes visualized were randomly selected from NTU RGB+D 120. The right subplot shows the visualization of actions with a greater than 3% change in accuracy.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig.7: The left and middle subplots visualize the attention and its matrices of two actions, \"clapping\" and \"kicking,\" with and without MHA-GC. The action selection focuses on the upper and lower body, respectively. 
Each action has a sampling interval of 20 frames. The larger the attention value, the larger the circle's radius centered on the joint. The right subplot shows the visualization of the loss function before and after adding MHA-GC.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :8Fig. 8: More action visualization for MHA, using different colors for different actions. Each action has a sampling interval of 20 frames. The larger the attention value, the larger the circle's radius centered on the joint.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "λ = 0.2 performs best for the class exemplars T-C generated using p3. Prompt p3 searches for possible state features of each node as the action is performed and calculates the correlation of features between nodes in T-C. The design has the same goal as the skeleton-based action recognition task and validates the importance of node correlation modeling for correct classification. Input and Output Configuration of The PC-AC Module. PC-AC has the best results when the feature θ 3 of the last stage of the main branch is selected as an input, and the experiments are shown in Table", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(i) The comparison of accuracy without and with PC-AC, where the λ of L aug is 0.2. (ii) Contribution of different text prompts.", "figure_data": "MethodsTop1w/o Laug84.9L total86.1 ↑1.2p1: [J] function in [C].85.6 ↑0.7p2: What happens to [J] when a person is [C]?85.8 ↑0.9p3: What will [J] act like when [C]?86.1 ↑1.2p4: When [C][J] of human body.85.5 ↑0.6p5: When [C] what will [J] act like?85.7 ↑0.8p6: When a person is [C], [J] is in motion.85.5 ↑0.6", "figure_id": "tab_1", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "(i) The comparison of accuracy without and with PC-AC, where the λ of L aug is 0.2. (ii) Contribution of different text prompts. complete sentence containing action names [C] and joints [J]. For sample, p1 emphasizes \"nodes function,\" and p2 describes the change of nodes state, as demonstrated in Appendix Table 7. The best result was obtained by p3: \"What will [J] act like when [C]\" with a 1.2% improvement.", "figure_data": "MethodsTop1Ā86.1Ā86.51-hop : Ā86.12-hop86.33-hopOOM2-hop 1st86.13-hop 1st86.24-hop 1st86.55-hop 1st85.9", "figure_id": "tab_2", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of computational and model complexity of the state-of-the-arts on NTU RGB+D 120 dataset.", "figure_data": "MethodsX-SubGFLOPsParam (M)DC-GCN [54]84.01.833.37MS-G3D [11]84.95.223.22CTR-GCN [8]84.91.971.46InfoGCN [6]85.11.841.57Ours86.51.761.46", "figure_id": "tab_3", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "λ selection of PC-AC with different text prompts T-C.", "figure_data": "λ0.10.20.30.5p1: [J] function in [C].85.085.685.384.8p2: What happens to [J] when a person is [C]?85.185.885.585.1p3: What will [J] act like when [C]?85.386.185.685.3p4: When [C][J] of human body.85.085.585.084.8p5: When [C] what will [J] act like?85.285.785.284.9p6: When a person is [C], [J] is in motion.85.185.585.285.0", "figure_id": "tab_4", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The PC-AC input and output configuration by Top1 Acc (%). On the left is the result of selecting the input features theta n . 
On the right is the pooling method's output logit R N ×1×cls×V → R N ×cls .", "figure_data": "θnAccPoolAccθ 184.7Avg86.1θ 285.3Max85.9θ 386.1Avg ⊕ Max85.4θ 2 + θ 384.2Weighted sum 85.7", "figure_id": "tab_5", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "8, and 85.8, respectively.", "figure_data": "ModesComponentsAcc2sJoint + Bone89.7 ↑3.24sJ + B + JM + BM89.9 ↑3.44s + B p190.1 ↑3.64s + B p290.4 ↑3.95s4s + B p3 4s + B p490.3 ↑3.8 90.1 ↑3.64s + B p590.4 ↑3.94s + B p690.2 ↑3.76s4s + B p2 + B p590.7 ↑4.2", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Accuracies (%) of the six streams ensemble on NTU RGB+D 120 X-Set split, NTU RGB+D 60 X-Sub and X-view split, and NW-UCLA", "figure_data": "ModesComponentsNTU RGB+D 120 X-SetNTU RGB+D 60 X-Sub X-viewNW-UCLA Top-1Joint88.090.595.595.71sBone B p288.6 87.190.9 89.695.2 93.893.1 90.7B p587.089.493.992.52sJoint + Bone91.092.396.696.34sJ + B + JM + BM91.393.097.196.86s4s + B p2 + B p591.893.597.297.6", "figure_id": "tab_7", "figure_label": "10", "figure_type": "table" } ]
Haojun Xu; Yan Gao; Zheng Hui; Jie Li; Xinbo Gao
[ { "authors": "Z Sun; Q Ke; H Rahmani; M Bennamoun; G Wang; J Liu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b0", "title": "Human action recognition from various data modalities: A review", "year": "2023" }, { "authors": "Y Li; J Huang; F Tian; H.-A Wang; G.-Z Dai", "journal": "Virtual Reality & Intelligent Hardware", "ref_id": "b1", "title": "Gesture interaction in virtual reality", "year": "2019" }, { "authors": "C I Nwakanma; F B Islam; M P Maharani; D.-S Kim; J.-M Lee", "journal": "", "ref_id": "b2", "title": "Iot-based vibration sensor data collection and emergency detection classification using long short term memory (lstm)", "year": "2021" }, { "authors": "B X B Yu; Y Liu; X Zhang; S Zhong; K C C Chan", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b3", "title": "Mmnet: A model-based multimodal network for human action recognition in RGB-D videos", "year": "2023" }, { "authors": "R Ying; R He; K Chen; P Eksombatchai; W L Hamilton; J Leskovec", "journal": "", "ref_id": "b4", "title": "Graph convolutional neural networks for web-scale recommender systems", "year": "2018" }, { "authors": "H Chi; M H Ha; S Chi; S W Lee; Q Huang; K Ramani", "journal": "", "ref_id": "b5", "title": "Infogcn: Representation learning for human skeletonbased action recognition", "year": "2022" }, { "authors": "Y Song; Z Zhang; C Shan; L Wang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b6", "title": "Constructing stronger and faster baselines for skeleton-based action recognition", "year": "2023" }, { "authors": "Y Chen; Z Zhang; C Yuan; B Li; Y Deng; W Hu", "journal": "", "ref_id": "b7", "title": "Channelwise topology refinement graph convolution for skeleton-based action recognition", "year": "2021" }, { "authors": "P Zhang; C Lan; W Zeng; J Xing; J Xue; N Zheng", "journal": "", "ref_id": "b8", "title": "Semanticsguided neural networks for efficient skeleton-based human action recognition", "year": "2020" }, { "authors": "Z Chen; S Li; B Yang; Q Li; H Liu", "journal": "", "ref_id": "b9", "title": "Multi-scale spatial temporal graph convolutional network for skeleton-based action recognition", "year": "2021" }, { "authors": "Z Liu; H Zhang; Z Chen; Z Wang; W Ouyang", "journal": "", "ref_id": "b10", "title": "Disentangling and unifying graph convolutions for skeleton-based action recognition", "year": "2020" }, { "authors": "F Ye; S Pu; Q Zhong; C Li; D Xie; H Tang", "journal": "", "ref_id": "b11", "title": "Dynamic GCN: Context-enriched topology learning for skeleton-based action recognition", "year": "2020" }, { "authors": "L Shi; Y Zhang; J Cheng; H Lu", "journal": "", "ref_id": "b12", "title": "Two-stream adaptive graph convolutional networks for skeleton-based action recognition", "year": "2019" }, { "authors": "F M Thoker; H Doughty; C G M Snoek", "journal": "", "ref_id": "b13", "title": "Skeleton-contrastive 3d action representation learning", "year": "2021" }, { "authors": "Y Su; G Lin; R Sun; Y Hao; Q Wu", "journal": "", "ref_id": "b14", "title": "Modeling the uncertainty for self-supervised 3d skeleton action representation learning", "year": "2021" }, { "authors": "S Yan; Y Xiong; D Lin", "journal": "", "ref_id": "b15", "title": "Spatial temporal graph convolutional networks for skeleton-based action recognition", "year": "2018" }, { "authors": "A M Agur; A F Dalley", "journal": "", "ref_id": "b16", "title": "Grant's atlas of anatomy", "year": "2009" }, { "authors": 
"M Nordin; V H Frankel", "journal": "", "ref_id": "b17", "title": "Basic biomechanics of the musculoskeletal system", "year": "2001" }, { "authors": "A Finisguerra; R Borgatti; C Urgesi", "journal": "Frontiers in psychology", "ref_id": "b18", "title": "Non-invasive brain stimulation for the rehabilitation of children and adolescents with neurodevelopmental disorders: a systematic review", "year": "2019" }, { "authors": "M F Wurm; R I Schubotz", "journal": "NeuroImage", "ref_id": "b19", "title": "The role of the temporoparietal junction (tpj) in action observation: agent detection rather than visuospatial transformation", "year": "2018" }, { "authors": "D C Hyde; C E Simon; F Ting; J I Nikolaeva", "journal": "The Journal of neuroscience : the official journal of the Society for Neuroscience", "ref_id": "b20", "title": "Functional organization of the temporal-parietal junction for theory of mind in preverbal infants: A near-infrared spectroscopy study", "year": "2018" }, { "authors": "J Devlin; M Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b21", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "T B Brown; B Mann; N Ryder; M Subbiah; J Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell; S Agarwal; A Herbert-Voss; G Krueger; T Henighan; R Child; A Ramesh; D M Ziegler; J Wu; C Winter; C Hesse; M Chen; E Sigler; M Litwin; S Gray; B Chess; J Clark; C Berner; S Mccandlish; A Radford; I Sutskever; D Amodei", "journal": "", "ref_id": "b22", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark; G Krueger; I Sutskever", "journal": "", "ref_id": "b23", "title": "Learning transferable visual models from natural language supervision", "year": "2021-07" }, { "authors": "P Zhang; C Lan; J Xing; W Zeng; J Xue; N Zheng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b24", "title": "View adaptive neural networks for high performance skeleton-based human action recognition", "year": "2019" }, { "authors": "X Shu; B Xu; L Zhang; J Tang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b25", "title": "Multi-granularity anchorcontrastive representation learning for semi-supervised skeleton-based action recognition", "year": "2023" }, { "authors": "Y Wen; L Gao; H Fu; F Zhang; S Xia; Y Liu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b26", "title": "Motif-gcns with local and non-local temporal blocks for skeleton-based action recognition", "year": "2023" }, { "authors": "Q Li; Z Han; X.-M Wu", "journal": "", "ref_id": "b27", "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "year": "2018" }, { "authors": "Z Liu; C Chen; L Li; J Zhou; X Li; L Song; Y Qi", "journal": "", "ref_id": "b28", "title": "Geniepath: Graph neural networks with adaptive receptive paths", "year": "2019-01-27" }, { "authors": "L A C Xhonneux; M Qu; J Tang", "journal": "", "ref_id": "b29", "title": "Continuous graph neural networks", "year": "2020-07" }, { "authors": "A Shahroudy; J Liu; T Ng; G Wang", "journal": "", "ref_id": "b30", "title": "NTU RGB+D: A large scale dataset for 3d human activity analysis", "year": "2016" }, { "authors": "J Liu; A Shahroudy; M Perez; G Wang; L.-Y Duan; A C Kot", "journal": "IEEE Transactions on Pattern Analysis and 
Machine Intelligence", "ref_id": "b31", "title": "NTU RGB+d 120: A large-scale benchmark for 3d human activity understanding", "year": "2020" }, { "authors": "J Wang; X Nie; Y Xia; Y Wu; S Zhu", "journal": "", "ref_id": "b32", "title": "Cross-view action modeling, learning, and recognition", "year": "2014" }, { "authors": "T S Kim; A Reiter", "journal": "", "ref_id": "b33", "title": "Interpretable 3d human action analysis with temporal convolutional networks", "year": "2017-07" }, { "authors": "C Li; Q Zhong; D Xie; S Pu", "journal": "", "ref_id": "b34", "title": "Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation", "year": "2018-07" }, { "authors": "Q Ke; M Bennamoun; S An; F Sohel; F Boussaid", "journal": "IEEE Transactions on Image Processing", "ref_id": "b35", "title": "Learning clip representations for skeleton-based 3d action recognition", "year": "2018" }, { "authors": "J Liu; A Shahroudy; D Xu; G Wang", "journal": "", "ref_id": "b36", "title": "Spatio-temporal LSTM with trust gates for 3d human action recognition", "year": "2016" }, { "authors": "I Lee; D Kim; S Kang; S Lee", "journal": "", "ref_id": "b37", "title": "Ensemble deep learning for skeleton-based action recognition using temporal sliding LSTM networks", "year": "2017" }, { "authors": "C Si; W Chen; W Wang; L Wang; T Tan", "journal": "", "ref_id": "b38", "title": "An attention enhanced graph convolutional LSTM network for skeleton-based action recognition", "year": "2019" }, { "authors": "K Cheng; Y Zhang; X He; W Chen; J Cheng; H Lu", "journal": "", "ref_id": "b39", "title": "Skeletonbased action with shift graph convolutional network", "year": "2020" }, { "authors": "C Jia; Y Yang; Y Xia; Y.-T Chen; Z Parekh; H Pham; Q Le; Y.-H Sung; Z Li; T Duerig", "journal": "", "ref_id": "b40", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "M Wang; J Xing; Y Liu", "journal": "", "ref_id": "b41", "title": "Actionclip: A new paradigm for video action recognition", "year": "2021" }, { "authors": "W Xiang; C Li; Y Zhou; B Wang; L Zhang", "journal": "", "ref_id": "b42", "title": "Language supervised training for skeleton-based action recognition", "year": "2022" }, { "authors": "C Snoek; M Worring; A W M Smeulders", "journal": "", "ref_id": "b43", "title": "Early versus late fusion in semantic video analysis", "year": "2005" }, { "authors": "G Wang; R Ying; J Huang; J Leskovec", "journal": "", "ref_id": "b44", "title": "Multi-hop attention graph neural networks", "year": "2021-08-27" }, { "authors": "V Nair; G E Hinton", "journal": "", "ref_id": "b45", "title": "Rectified linear units improve restricted boltzmann machines", "year": "2010" }, { "authors": "Y Zhang; H Tang; K Jia", "journal": "", "ref_id": "b46", "title": "Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data", "year": "2018" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Köpf; E Z Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chintala", "journal": "", "ref_id": "b47", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b48", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "P Zhang; C Lan; W Zeng; 
J Xing; J Xue; N Zheng", "journal": "", "ref_id": "b49", "title": "Semanticsguided neural networks for efficient skeleton-based human action recognition", "year": "2020" }, { "authors": "M Li; S Chen; X Chen; Y Zhang; Y Wang; Q Tian", "journal": "", "ref_id": "b50", "title": "Actionalstructural graph convolutional networks for skeleton-based action recognition", "year": "2019" }, { "authors": "L Shi; Y Zhang; J Cheng; H Lu", "journal": "", "ref_id": "b51", "title": "Skeleton-based action recognition with directed graph neural networks", "year": "2019" }, { "authors": "C Plizzari; M Cannici; M Matteucci", "journal": "Computer Vision and Image Understanding", "ref_id": "b52", "title": "Skeleton-based action recognition via spatial and temporal transformer networks", "year": "2021" }, { "authors": "K Cheng; Y Zhang; C Cao; L Shi; J Cheng; H Lu", "journal": "", "ref_id": "b53", "title": "Decoupling GCN with dropgraph module for skeleton-based action recognition", "year": "2020" }, { "authors": "M Korban; X Li", "journal": "", "ref_id": "b54", "title": "DDGCN: A dynamic directed graph convolutional network for action recognition", "year": "2020" }, { "authors": "Z Chen; S Li; B Yang; Q Li; H Liu", "journal": "", "ref_id": "b55", "title": "Multi-scale spatial temporal graph convolutional network for skeleton-based action recognition", "year": "2021" }, { "authors": "V Veeriah; N Zhuang; G Qi", "journal": "", "ref_id": "b56", "title": "Differential recurrent neural networks for action recognition", "year": "2015" }, { "authors": "J Wang; Z Liu; Y Wu; J Yuan", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b57", "title": "Learning actionlet ensemble for 3d human action recognition", "year": "2014" }, { "authors": "Y Du; W Wang; L Wang", "journal": "", "ref_id": "b58", "title": "Hierarchical recurrent neural network for skeleton based action recognition", "year": "2015" }, { "authors": "L Van Der Maaten; G E Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b59", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "A Sandryhaila; J M F Moura", "journal": "", "ref_id": "b60", "title": "Discrete signal processing on graphs: Graph fourier transform", "year": "2013" }, { "authors": "B Mohar; Y Alavi; G Chartrand; O Oellermann; A Schwenk", "journal": "Graph Theory, Combinatorics and Applications", "ref_id": "b61", "title": "The laplacian spectrum of graphs", "year": "1991" }, { "authors": "A Y Ng; M I Jordan; Y Weiss", "journal": "", "ref_id": "b62", "title": "On spectral clustering: Analysis and an algorithm", "year": "2001" }, { "authors": "J Gasteiger; S Weißenberger; S Günnemann", "journal": "", "ref_id": "b63", "title": "Diffusion Improves Graph Learning", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 129.35, 485.26, 170.65, 22.13 ], "formula_id": "formula_0", "formula_text": "F l+1 = s∈S Ãs F l W s ,(1)" }, { "formula_coordinates": [ 3, 227.78, 570.73, 70.52, 14.74 ], "formula_id": "formula_1", "formula_text": "Ãs = Λ -1 2 s A s Λ -1 2 s" }, { "formula_coordinates": [ 3, 437.71, 680.56, 102.84, 14.95 ], "formula_id": "formula_2", "formula_text": "J CoCLS i = 1 M M j=1 J Cj i ." }, { "formula_coordinates": [ 4, 137.64, 480.93, 162.36, 9.65 ], "formula_id": "formula_3", "formula_text": "X t = (I -B)X t ,(2)" }, { "formula_coordinates": [ 5, 70.59, 45.66, 205.59, 306.12 ], "formula_id": "formula_4", "formula_text": "Max Pool 3 × 1 Conv 1 × 1 Conv 5 × 1 𝑟𝑑𝑖𝑙𝑎𝑡𝑒𝑑 = 1 Conv 1 × 1 Conv 5 × 1 𝑟𝑑𝑖𝑙𝑎𝑡𝑒𝑑 = 2 Conv 1 × 1 Input 𝐹 𝑙+1 ∈ ℝ 𝑁×𝐶2×𝑇×𝑉 C Conv 1 × 1 ⨁ Output MS-TC 𝐿 × MHA-GC MS-TC Add & Norm X 𝐺𝑃𝑅 PC-AC ⨁ PE 𝜃 𝑛 GAP Softmax Linear Softmax 𝐿𝑝𝑟𝑖 𝐿𝑎𝑢𝑔 Encoder Classifier Auxiliary Classifier Sub Block PC-AC ⨂ Input 𝐹𝑛 𝑙 ∈ ℝ 𝑁×𝐶×𝑇×𝑉 Avg Pool 𝐹𝑛 𝑙 ∈ ℝ 𝑁×𝐶×1×𝑉 Reading 𝐹𝑛 𝑙 ∈ ℝ 𝑁×𝐶×𝑐𝑙𝑠×𝑉 Linear → 𝐹𝑛 𝑙 ∈ ℝ 𝑁×1×𝑐𝑙𝑠×𝑉 Avg Pool Writing Drink Water Pushing T -C1 T -C2 T -CM-1 T -CM CPR Graph Output 𝐹𝑛 𝑙 ∈ ℝ 𝑁×𝑐𝑙𝑠 MHA-GC Input 𝐹 𝑙 ∈ ℝ 𝑁×𝐶1×𝑇×𝑉 Topology 𝐴 𝑙 ∈ ℝ 𝑉×𝑉 Pool & Conv 1 × 1 𝑀 = 𝑃 𝐹𝑙𝑊1 ∈ ℝ𝑁×𝑅×𝑉 Pool & Conv 1 × 1 𝑁 = 𝑃 𝐹𝑙𝑊2 ∈ ℝ𝑁×𝑅×𝑉 Cross joint correlation 𝜎(𝑀𝑖 -𝑁𝑗 ) 𝑣 𝑖,𝑗 =1 𝐴 𝑙 ∈ ℝ 𝑁×𝑅×𝑉×𝑉 Conv 1 × 1 𝐴𝑙𝑊3 ∈ ℝ𝑁×𝐶2×𝑉×𝑉 Conv 1 × 1 𝐹 𝑙 𝑊4 ∈ ℝ 𝑁×𝐶2×𝑇×𝑉 Output 𝐹 𝑙+1 ∈ ℝ 𝑁×𝐶2×𝑇×𝑉 0 𝜔0 ⋅ 𝑘 + ⋯ + 𝜔𝑘 ⋅ × +𝛽 ⋅ 1 -𝛽 ⋅ Loop 𝑘 times × 𝐹 𝑘 𝑘 → ∞" }, { "formula_coordinates": [ 5, 386.98, 158.71, 177.02, 11.72 ], "formula_id": "formula_5", "formula_text": "F (0) = X GP R W 0 + P E,(3)" }, { "formula_coordinates": [ 5, 400.93, 587.34, 163.08, 12.17 ], "formula_id": "formula_6", "formula_text": "Āl = Ȧl + γ Ãl W 3 ,(4)" }, { "formula_coordinates": [ 5, 411.58, 720.93, 152.42, 29.41 ], "formula_id": "formula_7", "formula_text": "Ā = k i=0 ω i Āi ,(5)" }, { "formula_coordinates": [ 6, 74.24, 43.4, 110.88, 11.23 ], "formula_id": "formula_8", "formula_text": "ω i = β(1 -β) i , β ∈ (0, 1]" }, { "formula_coordinates": [ 6, 130.74, 199.15, 169.26, 13.14 ], "formula_id": "formula_9", "formula_text": "F l+1 = σ( Āl F l W l 4 ),(6)" }, { "formula_coordinates": [ 6, 119.8, 676.33, 180.2, 19.69 ], "formula_id": "formula_10", "formula_text": "L aug = - k y k • log( ŷk ),(7)" }, { "formula_coordinates": [ 6, 345.61, 118.35, 218.39, 19.06 ], "formula_id": "formula_11", "formula_text": "arg min θ (L(f θ pri (x), ŷ) + λL(f θn aux (x), ŷ)),(8)" }, { "formula_coordinates": [ 8, 114.38, 734.85, 185.62, 11.37 ], "formula_id": "formula_12", "formula_text": "F k+1 = (1 -β) ĀF k + βF l ,(9)" }, { "formula_coordinates": [ 8, 384.02, 694.51, 121.24, 11.26 ], "formula_id": "formula_13", "formula_text": "F k+1 = (1 -β) ĀF k + βF l ," }, { "formula_coordinates": [ 8, 312, 734.85, 147.74, 12.17 ], "formula_id": "formula_14", "formula_text": "Proposition 1. lim K→∞ F K = ĀF l" }, { "formula_coordinates": [ 9, 113.46, 259.01, 182.58, 11.38 ], "formula_id": "formula_15", "formula_text": "E k+1 = (1 -β) ĀE k + βE 0 . 
(10" }, { "formula_coordinates": [ 9, 296.04, 262.14, 3.96, 8.24 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 9, 64.99, 307.71, 235.01, 121.09 ], "formula_id": "formula_17", "formula_text": "E K = (1 -β) ĀE K-1 + βE 0 = (1 -β) Ā((1 -β) ĀE K-2 + βE 0 ) + βE 0 = (1 -β) 2 Ā2 E K-2 + β(1 -β) ĀE 0 + βE 0 = (1 -β) 3 Ā3 E K-3 + β(1 -β) 2 Ā2 E 0 + β(1 -β) ĀE 0 + βE 0 • • • = ((1 -β) K ĀK + β K-1 i=0 (1 -β) i Āi )E 0 .(11)" }, { "formula_coordinates": [ 9, 48, 435.6, 252, 49.58 ], "formula_id": "formula_18", "formula_text": "E K = ((1 -β) K ĀK + β K-1 i=0 (1 -β) i Āi )F l . As β ∈ (0, 1] and ĀK i,j ∈ (0, 1], when K → ∞, the term (1 - β) K ĀK → 0. Thus, lim K→∞ E K = ( K-1 i=0 β(1 -β) i Āi )F l = ĀF l ." }, { "formula_coordinates": [ 9, 372.16, 281.21, 191.84, 29.64 ], "formula_id": "formula_19", "formula_text": "Ā = K l=0 ω l Āl = K l=0 ω l (U ΛU -1 ) l(12)" }, { "formula_coordinates": [ 9, 383.09, 328.21, 169.34, 11.37 ], "formula_id": "formula_20", "formula_text": "-1 )(U ΛU -1 ) • • • (U ΛU -1 ) = U Λn U -1 ." }, { "formula_coordinates": [ 9, 395, 375.13, 169, 29.64 ], "formula_id": "formula_21", "formula_text": "Ā = U ( K l=0 ω l Λl )U -1 .(13)" }, { "formula_coordinates": [ 9, 323.8, 468.97, 240.2, 29.64 ], "formula_id": "formula_22", "formula_text": "λi = ∞ l=0 ω l λ i l = ∞ l=0 β(1 -β) l λ i l = β 1 -(1 -β)λ i(14)" }, { "formula_coordinates": [ 9, 341.51, 505.55, 110.56, 11.61 ], "formula_id": "formula_23", "formula_text": "L = I -Q -1 2 ĀQ -1 2" }, { "formula_coordinates": [ 9, 312, 603.12, 252, 106.63 ], "formula_id": "formula_24", "formula_text": "|(1 -β)λ i | ≤ (1 -β) < 1. When K → ∞, ((1 -β)λ i ) K → 0 and λi = lim K→∞ K l=0 β(1 -β) l λ i l . Let R be K l=0 β(1 -β) l λ i l is calculated as follow R = β + β(1 -β)λ i + β(1 -β) 2 λ i 2 + • • • + β(1 -β) K λ i K = β(1 + (1 -β)λ i + ((1 -β)λ i ) 2 + • • • + ((1 -β)λ i ) K ) = β 1 -((1 -β)λ i ) K 1 -(1 -β)λ i .(15)" }, { "formula_coordinates": [ 9, 313.2, 718.52, 250.8, 30.44 ], "formula_id": "formula_25", "formula_text": "get λi = lim K→∞ R = lim K→∞ β(1-((1-β)λi) K ) 1-(1-β)λi = β 1-(1-β)λi ." }, { "formula_coordinates": [ 10, 48, 102.67, 252, 33.32 ], "formula_id": "formula_26", "formula_text": "-1 ≤ λ Qi λ i λ Qi ≤ 1. Since λi ∈ [0, 2], |λ Qi λ i λ Qi | ≤ 1. And λ Qi ≥ 1, we have |λ i | ≤ 1. The condition |(1 -β)λ i | ≤ (1 -β) < 1 still holds." }, { "formula_coordinates": [ 10, 54.71, 171.76, 245.29, 57.77 ], "formula_id": "formula_27", "formula_text": "λG i λ G i = 1 -λi λ G i = 1 - β 1-(1-β)λi λ G i = 1 - β 1-(1-β)(1-λ G i ) λ G i = 1 β 1-β + λ G i .(16)" } ]
10.1145/3583780.3615109
2023-11-22
[ { "figure_ref": [], "heading": "Politics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b3", "b10", "b18", "b4", "b7", "b19", "b27", "b8", "b12", "b23", "b25", "b29", "b16", "b24", "b27" ], "table_ref": [], "text": "Weakly supervised text classification methods, including zero-shot prompting, can build competent classifiers from raw texts by only asking humans to provide (1) a few examples per class [8,16] or (2) class names [5,17,25]. All these methods require that the humanprovided known classes cover all the classes of interest, however, it can be very difficult especially in the dynamic and ever-changing real world. For example, the human expert could be exploring a new, large corpus without a complete picture.\nIn this paper, we work on a novel yet important problem of weakly supervised open-world text classification as shown in Figure 1. Specifically, the human is only asked to provide a few examples for every known class; the machine is tasked to dive into the raw texts, discover possible unknown classes, and classify all the raw texts into corresponding classes, including both known and unknown. The open-world setting here releases the all-class requirement, further reducing the required human effort in weakly supervised text classification. We argue that this problem is feasible because one could expect that unknown classes follow a similar taste as the known classes, i.e., the classes should follow certain underlying high-level semantic meanings and the same granularity level. For example, if the known classes are \"Awesome\" and \"Good\", one would expect to see classes like \"Terrible\" and \"Bad\"; in Figure 1, \"Politics\" can be a reasonable choice for unknown class.\nOpen-world classification [6,10,21,23,27] has been studied, mostly in image classification; however, existing methods typically assume the availability of sufficient known-class supervision and strong unknown-class prior knowledge (e.g., the number and/or data distribution). Text classification has its uniqueness -The text is composed of words, some of which reflect the semantics of the Given the corpus and a part of labels, we first estimate document representations and construct the initial clusters. And then, we perform an iterative cluster refinement to remove redundant clusters. At the end of each iteration, we will update the document representations and recluster them.\nclasses. These class-indicative words (abbreviated as class-words) are the key source of weak supervision signals [14,22,25]. As shown in Figure 1, class-words such as \"Governor\" can help discover the unknown class Politics and its related documents. Such a feature motivates us to design a new open-world framework particularly for weakly supervised text classification.\nWe propose a novel, practical framework WOT-Class 1 , which lifts those strong assumptions of existing methods. Figure 2 illustrates the general idea of WOT-Class. It leverages the class-words in text to iteratively refine the text clustering and ranking of classwords. Specifically, we first make an overestimation of the number of classes and construct initial clusters of documents based on the names of known classes. Then, we employ an iterative process to refine these clusters. We first select a set of candidate class-words for them through statistics and semantics. 
Then we learn a classifier to provide each cluster a ranking of class-words based on the limited known-class supervision. When there is redundancy among these clusters, the high-ranked class-words for the clusters will overlap, in which case we know at least one cluster is redundant. The refined set of class-words will help re-cluster documents, and we repeat this process till the number of classes no longer decreases.\nWe conduct our experiments by fixing the most infrequent half of classes as unseen, which emphasizes the imbalanced and emerging nature of real-world scenarios. And our extensive experiments on 7 popular text classification datasets have shown the strong performance of WOT-Class. By leveraging merely a few knownclass examples and the names of known classes, WOT-Class gains a 23.33% greater average absolute macro-F 1 over the current best method across all datasets. When given our prediction of classes as an extra input, WOT-Class still achieves 21.53% higher average absolute macro-F 1 . While precisely discovering unseen classes identical to the ground truth remains challenging, our method can provide predictions closest to the actual classes more stably than existing approaches. And considering WOT-Class provides classwords for each discovered unknown class, it shall only require a reasonable amount of effort for humans to digest the discovered ones to something similar to the ground truth. Finally, WOT-Class " }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [ "b32", "b27", "b21" ], "table_ref": [], "text": "In this section, we formally define the problem of weakly supervised open-world text classification. And then, we brief on some preliminaries about CGExpan and X-Class, two building blocks that we will use in our method. Problem Formulation. In an open-world setting, there exists a not-fully-known set of classes C, which follows the same hyperconcept and a set of documents D, each uniquely assigned to a class. A weakly supervised open-world model can observe partial information of C. In this work, we assume that partial information is given as a labeled few-shot dataset D 𝑠 = {𝑥 𝑖 , 𝑦 𝑖 } 𝑛 𝑖=1 , 𝑦 𝑖 ∈ C 𝑠 , where C 𝑠 ⊂ C where C 𝑠 is the known subset of classes and 𝑛 is rather small (e.g., a ten-shot dataset would mean 𝑛 = 10 * |C 𝑠 |). The goal of the model is to classify the remainder of the dataset, D 𝑢 = D\\D 𝑠 , where some of the labels in C 𝑢 = C\\C 𝑠 is completely unknown to the model. We emphasize that different from extremely weakly supervised or zero-shot prompting based text classifiers, the names of the unknown classes are also not given to the model. CGExpan. Entity set expansion aims to expand a set of seed keywords (e.g., United Sates, China) to new keywords (e.g., Japan) following the same hyper-concept (i.e., Country). This is the exact technique to help us discover potential class words that are highly suggestive of the hidden class names. In our method, we employ CGExpan [30], one of the current state-of-the-art methods for set expansion. CGExpan selects automatically generated hyper-concept words by probing a pre-trained language model (e.g., BERT), and further ranks all possible words guided by selected hyper-concept. However, a common problem of such set expansion method is that they typically give duplicated and semantically-shifted entities even at the top of the rank list. In our work, we utilize CGExpan to find semantically related words to the user-given class names as candidates for the class-words. 
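To give a sense of what this candidate-generation step produces, the following is a simplified stand-in for set expansion that ranks vocabulary words by embedding similarity to the seed class names; it only sketches the idea (CGExpan's actual hyper-concept probing procedure is more involved), and every name in the snippet is a hypothetical placeholder rather than CGExpan's real interface.

```python
import numpy as np

def expand_class_names(seed_vecs, vocab, vocab_vecs, top_k):
    """Rank vocabulary words by cosine similarity to the mean seed embedding.

    seed_vecs:  (S, d) embeddings of the user-given class names.
    vocab:      list of candidate words.
    vocab_vecs: (V, d) embeddings of those words (e.g., static BERT vectors).
    top_k:      number of candidate class-words to return.
    """
    center = seed_vecs.mean(axis=0)
    center = center / np.linalg.norm(center)
    normed = vocab_vecs / np.linalg.norm(vocab_vecs, axis=1, keepdims=True)
    sims = normed @ center
    order = np.argsort(-sims)[:top_k]
    return [vocab[i] for i in order]

# Toy usage with random embeddings; real seeds would be the known class names.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["politics", "soccer", "market", "election", "the", "of"]
    print(expand_class_names(rng.normal(size=(2, 16)),
                             vocab, rng.normal(size=(6, 16)), top_k=3))
```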
Our method resolves this imperfect set of candidates problem by ranking them based on a scoring metric learned by few-shot supervision. X-Class. X-Class is an extremely weakly supervised text classification method that works with only names of the classes [25]. It proposes a framework that learns representations of classes and text, and utilizes clustering methods, such as a Gaussian Mixture Model [19] to assign text to classes. While X-Class showed promising performance in close-world classification settings with minimal supervision, it does not work in open-world settings. Our method for the open-world classification problem reduces open-world text classification to close-world classification by iterative refinement of class-words. Therefore, a strong performing (and efficient) closeworld text classifier X-Class is employed. Static Representation. For each unique word 𝑤 in the input corpus, we obtain its static representation s 𝑤 by averaging BERT's contextualized representations of all its appearances. That is,\ns 𝑤 = 𝑤 ′ =𝑤 t 𝑤 ′ 𝑤 ′ =𝑤 1\n, where 𝑤 ′ are occurrences of the word in the corpus and t 𝑤 ′ is its contextualized word representation 3 . A static representation is useful in determining the similarity and relatedness of two words." }, { "figure_ref": [ "fig_2" ], "heading": "OUR WOT-CLASS METHOD", "publication_ref": [ "b8" ], "table_ref": [], "text": "In this section, we introduce our WOT-Class framework. To be able to accompany unknown classes, a common approach [6] is to first overestimate the number of classes and then reduce them after observing the data. We follow this approach and integrate it with class-words, with the goal of reducing the problem into an existing weakly supervised text classification (WS-TC) problem, where there are solutions to classify text in a close-world setting when class-words are given for each class and all classes are known.\nIn simple words, WOT-Class first proposes a set of high-potential words from which class-words can be found, and a list of classwords for an over-estimated number of classes. At each iteration, we start with clusters of documents obtained by WS-TC methods on the proposed class-words, and rerank the high-potential words to find the new list of class-words for each class. During this, classes of similar class-words are removed, and a new cluster of documents can be obtained by WS-TC methods. The iteration stops when no class removal happens. We summarize our iterative refinement framework in Algorithm 1. And Figure 3 shows an overview of the sub-components of this method. 3 For a word that can be split into multiple tokens, its contextualized word representation is obtained by taking the average of the contextualized representations of all its constituent tokens. Find class-indicative words W 3:\nfor each cluster C 𝑖 in C do 4:\nTrain MLP and rank W 5:\nSelect possible names S 𝑖 from W 6:\nCompute cluster coherence 𝜂 𝑖 (Eq. 1)\n7:\nend for 8:\nfor each pair S 𝑖 , S 𝑗 do update R, C based on S 15: end while" }, { "figure_ref": [], "heading": "Initial Overestimation and Clustering", "publication_ref": [], "table_ref": [], "text": "In WOT-Class, we take the approach where we make an initial overestimation of classes, and then try to refine the class-words and clusters of text. We fix a number 𝐾 (𝐾 = 100 in all experiments) and ask CGExpan (refer to Section 2) to suggest 𝐾 -|C 𝑠 | similar words as the given |C 𝑠 | class names. 
We consider these as a rough approximation of classes in the corpus, and employee X-Class (refer to Section 2) to construct the initial clusters of text, that may contain many duplicates and different granularity clusters. From now on, our iterative method is completed by two processes. In the first process, we obtain a list of class-words for each cluster and remove duplicate ones; in the second process, we simply apply X-Class to refine the clusters based on the new class-words. We mainly elaborate on the first process." }, { "figure_ref": [], "heading": "Cluster → Class-words", "publication_ref": [ "b20", "b16", "b14" ], "table_ref": [], "text": "In the first process, we start with clusters of text, the initially suggested words by CGExpan, the class-words in the last iteration (for the first iteration, the class-words are the CGExpan proposed words) and the few-shot supervision, and aim to reduce the number of clusters and assign class-words to each cluster. Proposing Potential Class-words. Class-words are words that are related to and highly indicative of the class. Words from CGExpan qualify for the relativeness, but we also wish to find words that are rather exclusive to all except one (or a few, because of the noise in the clusters) cluster. The indicativeness of a word can be expressed by the statistical outstandness of it to its cluster of text, compared to other clusters. Such statistical measures are well-researched in information retrieval, the representative would be the tf-idf [18] score. We apply a more recent measure that has been used in text classification [14] to find statistically representative words within cluster 𝑖: frequency of word 𝑤, and 𝑠𝑖𝑧𝑒 𝑖 indicates how many documents are in cluster 𝑖. In the measurement, the first term tells how indicative a word is to a cluster, the second term measures how frequent this word is, and the third is a normalization based on the inverse document frequency. We find the top such statistically representative words for each cluster and merge these statistically representative words with words from CGExpan as the set of potential class-words. Class-word Ranking. We utilized CGExpan and statistical representativeness to approximately retrieve a list of potential classwords, and now we precisely rank them by learning a metric that defines the similarity between a cluster and a word.\n𝑠𝑐𝑜𝑟𝑒 𝑖 (𝑤) = 𝑠 𝑖 (𝑤) 𝑠𝑖𝑧𝑒 𝑖 • tanh 𝑡 𝑖 (𝑤) 𝑠𝑖𝑧𝑒 𝑖 • log |D | 𝑠 D (𝑤)\nSpecifically, we construct features for a potential class-word to a cluster as the mean and variance of Euclidean distance and cosine similarity of static representations between the class-word and (a large list of 𝑊 = 50) statistically representative words for the cluster 4 . Since we know a few labeled examples, they serve as virtual clusters where we treat the features of their class names to the respective virtual clusters as positive signals of closeness. The negative signals are derived through an intuitive heuristic where we find the most dissimilar word (i.e., the word with the furthest static representation from the known class name) from the set of potential class-words. With these two signals, we train a Multilayer Perceptron (MLP) logistic regression on the features to predict the signal. We assign a score 𝑝 (𝑤, 𝑖) for each cluster 𝑖 and each word from the high potential words.\nWe also propose a post-processing step to remove generic words from the ranking. 
We follow previous work [12] and design a penalty coefficient 𝜇 (𝑤, 𝑖) for each candidate class name 𝑤 in cluster 𝑖 based on inter-class statistics:\n𝜇 (𝑤, 𝑖) = log M{𝑟𝑎𝑛𝑘 𝑗 (𝑤) | 1 ≤ 𝑗 ≤ 𝐶} 1 + 𝑟𝑎𝑛𝑘 𝑖 (𝑤)\n, 4 A larger list of statistically representative words is further employed for each cluster to detect the closeness with potential class-words. When proposing class-words, note we only select the top statistically representative words in each cluster.\nwhere 𝑟𝑎𝑛𝑘 𝑖 (𝑤) is the absolute rank number of 𝑤 in cluster 𝑖 based on MLP's prediction, M{•} indicates the median value of the set, and 𝐶 is the size of the clusters in the current iteration.\nThe main idea of this formula is to obtain a coefficient to penalize those generic words (e.g., life, which might rank high in most clusters) from being selected as class-words. The numerator of the fraction shows how the word behaves across all clusters while the denominator shows how it behaves in a specific cluster. The median rank of a generic word will be very close to the specific rank. Note that we allow one word as the class name of several clusters because of the initial overestimation, but if a word ranks high in more than half of the clusters, it is considered a generic word that must be filtered. Such penalization and normalization are similar to the idf term in information retrieval. Therefore, we follow the design and choose to divide the two values and take the logarithm. Similar to the idf, this penalty coefficient lowers the chance of selecting a generic word but will not harm proper words.\nThe final indicativeness ranking is based on the product of two scores:\n𝐼 (𝑤, 𝑖) = 𝑝 (𝑤, 𝑖) × 𝜇 (𝑤, 𝑖).\nRemoval of Clusters. We finally discuss how we remove the clusters based on the class-words found. In simple terms, we remove clusters that have non-empty intersections in the class-words. While the cluster is removed, all its documents are discarded until they are reassigned to new clusters in the second text clustering process.\nPrecisely, we pick the 𝑇 highest ranked class-words for a cluster to compare, where 𝑇 is the number of iterations in the removal process, a simple way to inject the prior that the cluster quality is better and better after each iteration. We noticed that in certain clusters, there might not be enough good class-words, so we introduce a cutoff threshold 𝛽 such that we do not pick words that have a low ratio of indicativeness score to the highest indicativeness score in the cluster. Then, when two list of class words overlap, we would like to retain one cluster (or in other words, the list of class-words) and remove the other. We remove the cluster with a low coherent \n𝜂 = 1 |R| ∑︁ r∈R cos r, R ,(1)\nwhere R is the list of text representations belonging to the cluster. When overlap happens, we remove the cluster that has lower coherence 𝜂.\nWe also rerank the class words after removing the duplicated clusters and continue until no clusters require removal." 
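To make the overlap test and the coherence criterion of Eq. 1 concrete, the sketch below drops the less coherent cluster whenever two clusters share a high-ranked class-word; the variable names are illustrative, and the ranked class-word lists are assumed to be given by the steps above.

```python
import numpy as np

def coherence(doc_reps):
    """Eq. 1: mean cosine similarity between each document and the cluster mean."""
    center = doc_reps.mean(axis=0)
    center = center / np.linalg.norm(center)
    normed = doc_reps / np.linalg.norm(doc_reps, axis=1, keepdims=True)
    return float((normed @ center).mean())

def deduplicate(clusters, top_words, reps, top_t):
    """Remove clusters whose top-T class-words overlap, keeping the more coherent one.

    clusters:  list of cluster ids.
    top_words: dict mapping cluster id -> ranked list of class-words.
    reps:      dict mapping cluster id -> (n_i, d) array of document representations.
    top_t:     how many top-ranked words to compare (grows with the iteration count).
    """
    alive = set(clusters)
    for i in clusters:
        for j in clusters:
            if i < j and i in alive and j in alive:
                if set(top_words[i][:top_t]) & set(top_words[j][:top_t]):
                    # keep the cluster with higher coherence, drop the other
                    drop = i if coherence(reps[i]) < coherence(reps[j]) else j
                    alive.discard(drop)
    return sorted(alive)

# Toy usage: clusters 0 and 1 share a top class-word, cluster 2 is distinct.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reps = {c: rng.normal(size=(5, 8)) for c in range(3)}
    words = {0: ["politics", "senate"], 1: ["politics", "vote"], 2: ["soccer", "league"]}
    print(deduplicate([0, 1, 2], words, reps, top_t=1))
```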
}, { "figure_ref": [], "heading": "Iterative Framework and Final Classifier", "publication_ref": [ "b16", "b18", "b19", "b27" ], "table_ref": [], "text": "The whole iterative framework is simply applying the first classword suggesting and cluster deduplicating process and the second class-word-based text clustering process iteratively, until the first process no longer removes clusters.\nAfter exiting the iterative loop and obtaining the stable set of clusters, we follow the common approach in weakly supervised text classification [14,16,17,25] and train a final text classifier based on the pseudo-labels assigned to each text. This usually improves the performance as the fine-tuned classifier mitigates some noisy in the pseudo-labels." }, { "figure_ref": [], "heading": "EXPERIMENTS 4.1 Datasets", "publication_ref": [ "b15", "b18", "b17", "b17", "b30", "b31", "b30", "b19", "b27" ], "table_ref": [ "tab_4" ], "text": "We evaluate WOT-Class on 7 popular datasets of different textual sources and criteria of classes, including news article datasets 20News [13], NYT-Small [16], NYT-Topics [15], NYT-Locations [15] and AGNews [28], an internet question-answering dataset Yahoo [29], and an ontology categorization dataset DBpedia [28] based on 14 ontology classes in DBpedia. Table 1 contains the detailed statistics of the 7 datasets.\nSentiment analysis is also popular in text classification. However, many explored sentiment analysis settings with weak supervision are on the coarse-grained setting [17,25] with too less classes (e.g., positive and negative), which is not practical for open-world class detection." }, { "figure_ref": [], "heading": "Compared Methods", "publication_ref": [ "b12", "b25", "b8", "b25", "b11", "b3", "b26", "b13" ], "table_ref": [], "text": "We compare our method with 3 open-world classification methods, Rankstats+, ORCA and GCD.\nRankstats [10] (aka AutoNovel) is the first approach crafted for open-world classification without relying on any human annotation of the unseen classes. It tackles the task by joint learning to transfer knowledge from labeled to unlabeled data with ranking statistics. Since its original setting requires the labeled and unlabeled classes to be disjoint, we follow the process in GCD paper [23] to adapt it to our setting and named it Rankstats+. ORCA [6] is a general method for open-world semi-supervised classification, which further reduces the supervision of the seen classes. It utilizes an uncertainty adaptive margin to reduce the learning gap between seen and unseen classes. GCD [23] is also semi-supervised, utilizing contrastive representation learning and clustering to directly provide class labels, and improve Rankstats' method of estimating the number of unseen classes. To adapt these three methods to the text domain, we use BERT on top of their frameworks to obtain the document representations as the training feature. And since these three methods utilize self-supervised learning grounded upon visual data, within the text domain, we harness the SimCSE [9] approach.\nWe also propose other two baselines. BERT is known to capture the domain information of a document well [1,24]. So we design BERT+GMM, which utilizes the CLS token representations after fine-tuning on the labeled dataset to fit a Gaussian Mixture Model for all classes. We also propose BERT+SVM. 
We first utilize a Support Vector Machine [11] to find all outlier documents based on CLS token representations and then classify documents belonging to seen classes with a BERT classifier and cluster outlier documents." }, { "figure_ref": [ "fig_3" ], "heading": "Experimental Settings", "publication_ref": [ "b28" ], "table_ref": [ "tab_7" ], "text": "For the basic experiments, we split the classes into half seen and half unseen. We set the most infrequent half of classes (i.e., classes with Table 2: Evaluations of compared methods and WOT-Class. The overall mean micro-/macro-F 1 scores over three runs are reported. We also report performances for seen and unseen classes separately in Table 5 fewer documents) as unseen, which emphasizes the imbalanced and emerging nature of real-world scenarios. Among the seen classes, we give 10-shot supervision, that is 10 documents for each seen class containing labels and the rest are unlabeled (Figure 4). For WOT-Class and all compared methods in our experiments, we utilize the pre-trained bert-base-uncased model provided in Huggingface's Transformers library [26]. All experiments are conducted using a 32-core processor and a single NVIDIA RTX A6000 GPU. For WOT-Class, the default hyper-parameter settings are 𝐾 = 100, 𝑊 = 50, and 𝛽 = 0.7. The analysis of hyper-parameter sensitivity is presented in Section 4.4.\nSince all compared methods require the total number of classes as input, we evaluate them in two ways." }, { "figure_ref": [], "heading": "• Our Estimation (OE):", "publication_ref": [ "b8", "b12", "b25" ], "table_ref": [], "text": "To ensure the comparison is fair to the baseline methods, we also run all the baselines based on (3 runs) of our prediction of classes. • Baselines' Estimation: Since Rankstats+, ORCA, and GCD can also work started with an overestimated number of the initial classes and provide the prediction of the classes. So we also test these three methods starting with 𝐾 = 100 classes as same as our method. Since further experiments show their predictions don't work well in most of our datasets, we do not test other baselines with their estimations. Evaluation. There are several evaluation criteria like accuracy, or NMI-based clustering metrics have been used in previous work [6,10,23]. However, they were proposed in a balanced setting and would be biased toward the popular classes when the classes are imbalanced (i.e., the low penalty for misclassification of infrequent unseen classes). Since we argue that in open-world classification, the new, emerging classes are naturally the minority classes, these metrics are not suitable.\nTherefore, we propose a new evaluation criterion based on F 1 Score to better demonstrate results. Since the final number of classes produced by a method may not equal the ground truth, a mapping from the prediction to the actual classes is required. Given the clusters of documents provided by a method and the ground-truth classes of documents, we first perform a maximum bipartite matching between the method-provided clusters and the ground-truth classes, where the edge weights are the number of overlapping documents between the clusters and the ground-truth classes. 5 The matched clusters are assigned to the corresponding classes. This step is to guarantee that all classes have some predictions. For each remaining cluster, we simply assign it to the class with which it exhibits the maximal matches. This is the equation. 
Consider a matrix M, where M 𝑖,𝑗 denotes the number of text in cluster 𝑖 that belongs to class 𝑗. We use 𝑟 𝑖 to denote the assignment of each cluster: After assigning all clusters to the classes, the F 1 score can be computed instance-wise on the text as the evaluation criterion for classification performance." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [ "tab_7", "tab_6" ], "text": "WOT-Class Performance. We assess the weakly supervised openworld performance of WOT-Class versus other baselines. Table 2 contains overall comparisons, Table 5 and 6 further provide performances on seen and unseen classes. Specifically, WOT-Class outperforms BERT+GMM and BERT+SVM across all 7 datasets for both seen and unseen classes, even though they are given the matching number of classes as input. This strengthens the need for our iterative refinement process since merely applying few-shot tuning does not bring as good performance as ours. Moreover, WOT-Class performs noticeably better than the general methods Rankstats+, ORCA, and GCD under the same circumstances. Even when the matching number is given as input to them, WOT-Class consistently outperforms them in all cases on all datasets, except for the seen part of DBpedia. Imbalance Tolerance. As generic solutions, ORCA, and GCD with our prediction number of classes only have a little performance margin with WOT-Class on balanced dataset DBpedia, while underperforming more severely on other imbalanced datasets. To gain further insights, we conduct experiments on the tolerance of imbalance for WOT-Class and these three compared methods. As shown in Table 4, we construct three imbalanced DBpedia datasets with different degrees of imbalance. This is achieved by removing the number of samples in each class by a linearly increasing ratio Δ. For example, when Δ = 4%, the classes have 100%, 96%, 92%, . . . of its original documents. We choose the ordering of classes randomly but fixed across the Low, Medium, and High experiments, Table 7: Predictions of the number of classes. The average and standard deviation over three runs are reported. The average offset refers to the average of the absolute discrepancies between each prediction and the ground truth value." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Method", "publication_ref": [], "table_ref": [ "tab_5", "tab_9" ], "text": "AGNews and by design, the classes with a larger number of documents are seen classes. Figure 5 and Table 3 shows the result of WOT-Class and compared methods on the constructed DBpedia. ORCA and GCD are sensitive to imbalanced classes, especially for the unseen part of the data. Even after reporting the Pareto optimal results based on their own predictions and our estimation for these two methods, their overall performance still dropped by 11.31% and 8.03% respectively as the data distribution became more imbalanced, while our method experienced a relative drop of only 5.52%. This experiment shows that WOT-Class is more robust to imbalanced classes of text datasets which are common in the real world (e.g., the imbalance ratio of NYT collected from NYT News is 16.65). Prediction of the Number of Classes. WOT-Class starts with an initial guess on the number of classes and removes redundant ones iteratively. The number of the remaining classes is its prediction of the total number of classes. 
As shown in Table 7, in most cases, WOT-Class's final predicted number of classes is around 2 to 4 times larger than the ground truth, which is affordable for human inspection. And the estimation turns out to be reasonable as shown in Table 8, WOT-Class overestimates because its predicted classes are the fine-grained version of the ground truth classes. For example, DBpedia's artist class can be split into two classes which respectively related to painting and performing. Moreover, our class-words are highly related to (or even as same as) ground truth class names and human-understandable. So based on our prediction of classes with class-words, users can simply find some underlying sub-classes and decide whether to merge them.\nAs baselines, Rankstats+, ORCA, and GCD can also start with an overestimated number of classes and get a prediction of classes. However, given the same initial number of classes, Rankstats+ struggles to discover any classes beyond the seen classes, and ORCA hardly eliminates enough classes to provide the user with an intuitive understanding of the dataset's text distribution. GCD can provide more reasonable predictions, but compared to our approach, its prediction still deviates substantially from the ground truth and is much more unstable. This indicates these methods' ability to estimate the number of classes in the few-shot setting is not reliable. Hyper-parameter Sensitivity. WOT-Class has 3 hyper-parameters: 𝐾, 𝑊 , 𝛽, and we show their default values in Sec. 4.3. To further explore the stability and robustness of WOT-Class, we conduct a hyper-parameter sensitivity study on three datasets with different imbalance rates: 20News, NYT-Small and Yahoo, to study how fluctuations in hyper-parameters influence the performance of our method. The experiment is performed using a fixed random seed (42) for reproducibility. We report the overall macro-F 1 scores for 5 distinct values of each hyperparameter. As illustrated in Figure 6, the performance fluctuations remain within reasonable margins, basically under 5%. Our method does not need to fine-tune these hyper-parameters." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b5", "b6", "b22", "b12", "b8", "b25", "b3", "b27", "b4", "b9" ], "table_ref": [], "text": "Open-world Learning. Traditional open-world recognition methods [3,4,20] aim to incrementally extend the set of seen classes with new unseen classes. These methods require human involvement to label new classes.\nRecently, Rankstats [10] first defined open-world classification tasks with no supervision on unseen classes in the image domain and proposed a method with three stages. The model begins with self-supervised pre-training on all data to learn low-level representations. It is then trained under full supervision on labeled data to glean higher-level semantic insights. Finally, joint learning with ranking statistics is performed to transfer knowledge from labeled to unlabeled data. However, Rankstats still require full supervision on seen classes to get high performance.\nFollowing that, ORCA [6] and GCD [23] defined open-world semi-supervised classification and proposed general solutions which further improved the framework of Rankstats and alleviated the burden of manual annotation. However, these methods' performance is not robust enough for the few-shot task and the imbalanced data distribution in the text domain. 
In contrast, our work is applicable to infrequent classes and exploits the fact that the input is words which are class-indicative. Extremely Weak Supervision in NLP. Aharoni and Goldberg [1] showed that the average of BERT token representations can preserve documents' domain information. X-Class [25] followed this idea to propose the extremely weak supervision setting where text classification only relies on the name of the class as supervision. However, such methods can not transfer to open-world classification naively as they cannot detect unseen classes. Our method leverages such extremely weak supervision methods as a subroutine to help the clustering of documents. But importantly, we note that such methods cannot be applied straightforwardly as they also are sensitive to noise and too similar classes. We show that our general idea of using class-words can further help an extremely weak supervision method to obtain stable performance. Joint Clustering with Downstream Tasks. To some sense, our method leverages partly an idea called joint clustering, which some recent works [2,7] in the image domain achieved high performance through jointly performing clustering and image classification. Their main idea is to utilize clustering to extract the hidden information of image representations and generate pseudo-labels, which in turn provide supervision for classification training and ultimately guide the co-improvement of representation and clustering. However, the crucial difference is that their methods already know the predefined classes and highly depend on strong assumptions like all classes share the same size to obtain excellent performance. Conversely, WOT-Class utilizes the general idea of joint clustering in an open-world setting where the classes may be too fine-grained and noisy. We address these unique challenges via the class-words we propose and show that our methodology can not only estimate the precise number of classes but also tolerate imbalanced data distribution." }, { "figure_ref": [], "heading": "CONCLUSIONS AND FUTURE WORK", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce the challenging yet promising weakly supervised open-world text classification task. We have identified the key challenges and unique opportunities of this task and proposed WOT-Class that achieves quite decent performance with minimal human effort. Specifically, WOT-Class starts with an overestimated number of classes and constructs an iterative refinement framework that jointly performs the class-word ranking and document clustering, leading to iterative mutual enhancement. Consequently, WOT-Class can progressively extract the most informative classes and assemble similar documents, resulting in an effective and stable open-world classification system, which is validated by comprehensive experiments. In the future, we envision that open-world text classification can be conducted with even less manual annotation, for example, by only requiring user-provided hyper-concept (e.g., Topics, Locations) or custom instructions. This will further reduce the cost of classification systems and extend their applicability.\nIn summary, this paper presents an initial exploration of openworld text classification, including problem formulation, methodology, and empirical results. We hope this work can inspire more research in open-world learning for NLP. 
As an emerging field, open-world classification demands more algorithms, datasets, and evaluation metrics to truly unleash its potential." } ]
State-of-the-art weakly supervised text classification methods, while having significantly reduced the required human supervision, still require that supervision to cover all the classes of interest. This requirement is rarely met in practice when humans explore new, large corpora without a complete picture of their contents. In this paper, we work on a novel yet important problem of weakly supervised open-world text classification, where supervision is only needed for a few examples from a few known classes and the machine should handle both known and unknown classes at test time. General open-world classification has been studied mostly in image classification; however, existing methods typically assume the availability of sufficient known-class supervision and strong unknown-class prior knowledge (e.g., the number of classes and/or their data distribution). We propose a novel framework, WOT-Class, that lifts those strong assumptions. Specifically, it follows an iterative process of (a) clustering text into new classes, (b) mining and ranking indicative words for each class, and (c) merging redundant classes by using the overlapping indicative words as a bridge. Extensive experiments on 7 popular text classification datasets demonstrate that WOT-Class outperforms strong baselines consistently and by a large margin, attaining 23.33% greater average absolute macro-F1 than existing approaches across all datasets. Such strong accuracy highlights the practical potential of further reducing human effort for text classification.
WOT-Class: Weakly Supervised Open-world Text Classification
[ { "figure_caption": "ArtFigure 1 :1Figure 1: Weakly Supervised Open-World Text Classification. We aim to cluster text in a corpus, where only a few classes have few-shot supervision and class names known.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An overview of WOT-Class framework. Given the corpus and a part of labels, we first estimate document representations and construct the initial clusters. And then, we perform an iterative cluster refinement to remove redundant clusters. At the end of each iteration, we will update the document representations and recluster them.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": ",Figure 3 :3Figure 3: An overview of Cluster → Class-words.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Schematic diagram of the corpus split. Only 10 samples in popular classes are provided as training labels.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Overall performance of compared methods and WOT-Class on different imbalance degrees.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Hyper-parameter sensitivity study on 20News, NYT-Small and Yahoo. The overall macro-F 1 scores using a fixed random seed are reported.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "𝐷 5 finance The house is expensive. 𝐷 6 finance The stock increases.", "figure_data": "𝐷 1-He passed the driving test.𝐷 2-The car is missing.𝐷 3-The car is stolen.𝐷 4-I get stolen wallet from police.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "is less sensitive to class imbalances, making it more suitable in real-world scenarios.Our contributions are as follows.• We introduce the novel yet important problem of weakly supervised open-world text classification. We release the code and datasets on Github 2 .", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "𝑖 ∩ S 𝑗 ≠ ∅ and 𝜂 𝑖 ≤ 𝜂 𝑗 then", "figure_data": "10:Remove C 𝑖11:end if12:end for13:Re-estimate class names S14:", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "An overview of our datasets. The imbalance factor refers to the ratio of sample sizes between the most frequent class and least frequent one in the dataset. 𝜂, which is the closeness of the text in the cluster. This can be obtained from X-Class which provides representations of each text, and we can simply compute", "figure_data": "AGNews20NewsNYT-SmallNYT-TopicsNYT-LocationsYahooDBpediaCorpus DomainNewsNewsNewsNewsNewsQAWikipediaClass CriterionTopicsTopicsTopicsTopicsLocationsTopicsOntology# of Classes4559101014# of Documents12,00017,87113,08131,99731,99718,00022,400Imbalance1.002.0216.6527.0915.841.001.00", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": ", 6. Performance of compared methods and WOT-Class on different imbalance degrees. We report macro-F 1 scores to more effectively demonstrate the results under an imbalanced data distribution. 
For all compared methods, we report their Pareto optimal started with 100 classes and our estimation.", "figure_data": "MethodExtra InfoAGNews20NewsNYT-SNYT-TopNYT-LocYahooDBpediaAverageRankstats+39.53/28.5524.94/13.8852.01/23.1342.23/19.9839.68/23.1329.66/20.4448.20/39.1539.47/24.04ORCA72.44/72.2748.92/39.8374.34/42.2262.23/39.0258.71/44.8135.57/32.7169.27/67.9260.21/48.40GCD66.37/66.5151.75/42.9682.59/63.3566.36/39.6970.25/53.4136.73/35.3975.81/72.9764.27/53.47WOT-Class79.42/79.75 79.07/79.29 94.78/88.46 78.67/69.48 80.94/79.55 54.46/56.23 85.15/84.87 78.93/76.80Rankstats+ (OE)61.44/57.5053.65/38.1240.82/31.6719.93/15.0721.96/16.8132.79/26.9450.03/44.3140.09/32.92ORCA (OE)64.38/64.5051.85/40.0470.44/46.2159.42/38.2942.99/33.0843.87/41.4382.54/81.3059.35/49.26GCD (OE)# of Classes65.42/65.4461.27/56.4278.82/56.5970.51/42.4455.37/44.8639.01/37.5884.14/83.6064.93/55.28BERT+GMM (OE)38.25/37.1429.32/25.2158.79/24.7926.88/14.0811.64/9.4714.11/13.6414.74/14.2027.68/19.79BERT+SVM (OE)45.20/44.1539.07/34.9651.97/22.3424.95/12.8313.91/7.4515.25/13.3916.28/14.4129.52/21.36MethodDBpedia All Seen Unseen All Seen Unseen All Seen Unseen All Seen Unseen DBpedia-Low DBpedia-Medium DBpedia-HighORCA81.30 95.1967.4276.07 95.5156.6472.21 97.2047.2369.99 97.6342.34GCD83.60 93.4873.7182.92 94.3271.5181.78 94.4269.1575.57 92.5358.61WOT-Class 84.87 87.1682.5985.81 91.8279.9784.97 93.3176.6379.35 88.8169.90", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "An overview of the imbalanced DBpedia datasets with 14 classes.", "figure_data": "LowMediumHighΔ2%4%6%# of Documents19,48016,56513,652Imbalance1.352.094.56", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of seen classes. The mean micro/macro-F 1 scores over three runs are reported.", "figure_data": "MethodExtra InfoAGNews20NewsNYT-SNYT-TopNYT-LocYahooDBpediaAverage", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Examples of the class-words. We use '[]' to split classwords belonging to different clusters in WOT-Class.", "figure_data": "DatasetGround TruthWOT-ClassRussia[Ukraine, Russia]Germany[Germany]NYT-LocCanada[Canada]France[France]Italy[Italy]athlete[footballer], [Olympics]artist[painting, painter, art], [tv, theatre, television]DBpediacompany school[retail, company, business] [school, education, academic]politics[politician]transportation[aircraft, locomotive]building[architecture, tower, church]", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" } ]
Tianle Wang; Zihan Wang; Weitang Liu; Jingbo Shang
[ { "authors": "", "journal": "Rankstats+", "ref_id": "b0", "title": "", "year": "" }, { "authors": "", "journal": "Method Extra Info AGNews 20News NYT-S NYT-Top NYT-Loc Yahoo DBpedia Average Rankstats+", "ref_id": "b1", "title": "Table 6: Performance of unseen classes. The mean micro/macro-F 1 scores over three runs are reported", "year": "" }, { "authors": "= 𝑟; 𝑗; Where", "journal": "", "ref_id": "b2", "title": "𝑖, 𝑗 are obtained by maximum matching on M; arg max 𝑗", "year": "" }, { "authors": "Roee Aharoni; Yoav Goldberg", "journal": "", "ref_id": "b3", "title": "Unsupervised domain clusters in pretrained language models", "year": "2020" }, { "authors": "Yuki M Asano; Christian Rupprecht; Andrea Vedaldi", "journal": "", "ref_id": "b4", "title": "Self-labelling via simultaneous clustering and representation learning", "year": "2020" }, { "authors": "Abhijit Bendale; Terrance Boult", "journal": "", "ref_id": "b5", "title": "Towards open world recognition", "year": "2015" }, { "authors": "Terrance E Boult; Steve Cruz; Raj Akshay; Manuel Dhamija; James Gunther; Walter J Henrydoss; Scheirer", "journal": "", "ref_id": "b6", "title": "Learning and the unknown: Surveying steps toward open world recognition", "year": "2019" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b7", "title": "Language Models are Few-Shot Learners", "year": "2020" }, { "authors": "Kaidi Cao; Maria Brbic; Jure Leskovec", "journal": "", "ref_id": "b8", "title": "Open-world Semi-supervised Learning", "year": "2022" }, { "authors": "Mathilde Caron; Piotr Bojanowski; Armand Joulin; Matthijs Douze", "journal": "", "ref_id": "b9", "title": "Deep clustering for unsupervised learning of visual features", "year": "2018" }, { "authors": "Tianyu Gao; Adam Fisch; Danqi Chen", "journal": "", "ref_id": "b10", "title": "Making Pre-trained Language Models Better Few-shot Learners", "year": "2021-08-01" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "", "ref_id": "b11", "title": "Simcse: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "Kai Han; Sylvestre-Alvise Rebuffi; Sebastien Ehrhardt; Andrea Vedaldi; Andrew Zisserman", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b12", "title": "Autonovel: Automatically discovering and learning novel visual categories", "year": "2021" }, { "authors": "Marti A Hearst; Susan T Dumais; Edgar Osuna; John Platt; Bernhard Scholkopf", "journal": "IEEE Intelligent Systems and their applications", "ref_id": "b13", "title": "Support vector machines", "year": "1998" }, { "authors": "Karen Spärck; Jones ", "journal": "Journal of Documentation", "ref_id": "b14", "title": "A statistical interpretation of term specificity and its application in retrieval", "year": "1972" }, { "authors": "Ken Lang", "journal": "Elsevier", "ref_id": "b15", "title": "Newsweeder: Learning to filter netnews", "year": "1995" }, { "authors": "Dheeraj Mekala; Jingbo Shang", "journal": "", "ref_id": "b16", "title": "Contextualized Weak Supervision for Text Classification", 
"year": "2020" }, { "authors": "Yu Meng; Jiaxin Huang; Guangyuan Wang; Zihan Wang; Chao Zhang; Yu Zhang; Jiawei Han", "journal": "Association for Computing Machinery", "ref_id": "b17", "title": "Discriminative Topic Mining via Category-Name Guided Text Embedding", "year": "2020" }, { "authors": "Yu Meng; Jiaming Shen; Chao Zhang; Jiawei Han", "journal": "ACM", "ref_id": "b18", "title": "Weakly-Supervised Neural Text Classification", "year": "2018-10-22" }, { "authors": "Yu Meng; Yunyi Zhang; Jiaxin Huang; Chenyan Xiong; Heng Ji; Chao Zhang; Jiawei Han", "journal": "", "ref_id": "b19", "title": "Text Classification Using Label Names Only: A Language Model Self-Training Approach", "year": "2020-11-16" }, { "authors": "Juan Ramos", "journal": "Citeseer", "ref_id": "b20", "title": "Using tf-idf to determine word relevance in document queries", "year": "2003" }, { "authors": "Reynolds Douglas", "journal": "Encyclopedia of biometrics", "ref_id": "b21", "title": "Gaussian mixture models", "year": "2009" }, { "authors": "Ethan M Rudd; Lalit P Jain; Walter J Scheirer; Terrance E Boult", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b22", "title": "The extreme value machine", "year": "2017" }, { "authors": "Lei Shu; Hu Xu; Bing Liu", "journal": "", "ref_id": "b23", "title": "Unseen class discovery in open-world classification", "year": "2018" }, { "authors": "Fangbo Tao; Chao Zhang; Xiusi Chen; Meng Jiang; Tim Hanratty; Lance Kaplan; Jiawei Han", "journal": "", "ref_id": "b24", "title": "Doc2Cube: Allocating Documents to Text Cube Without Labeled Data", "year": "2018" }, { "authors": "Sagar Vaze; Kai Han; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b25", "title": "Generalized Category Discovery", "year": "2022" }, { "authors": "Zihan Wang; Chengyu Dong; Jingbo Shang", "journal": "", "ref_id": "b26", "title": "Average\" Approximates \"First Principal Component\"? An Empirical Analysis on Representations from Neural Language Models", "year": "2021-07-11" }, { "authors": "Zihan Wang; Dheeraj Mekala; Jingbo Shang", "journal": "", "ref_id": "b27", "title": "X-Class: Text Classification with Extremely Weak Supervision", "year": "2021" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b28", "title": "Huggingface's transformers: State-of-the-art natural language processing", "year": "2019" }, { "authors": "Hu Xu; Bing Liu; Lei Shu; Yu", "journal": "", "ref_id": "b29", "title": "Open-world learning and application to product classification", "year": "2019" }, { "authors": "Xiang Zhang; Junbo Zhao; Yann Lecun", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Character-level convolutional networks for text classification", "year": "2015" }, { "authors": "Xiang Zhang; Junbo Zhao; Yann Lecun", "journal": "Curran Associates, Inc", "ref_id": "b31", "title": "Character-level Convolutional Networks for Text Classification", "year": "2015" }, { "authors": "Yunyi Zhang; Jiaming Shen; Jingbo Shang; Jiawei Han", "journal": "", "ref_id": "b32", "title": "Empower Entity Set Expansion via Language Model Probing", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 141.99, 373.34, 59.85, 21.52 ], "formula_id": "formula_0", "formula_text": "s 𝑤 = 𝑤 ′ =𝑤 t 𝑤 ′ 𝑤 ′ =𝑤 1" }, { "formula_coordinates": [ 3, 323.83, 134.37, 122.12, 16.83 ], "formula_id": "formula_1", "formula_text": "for each cluster C 𝑖 in C do 4:" }, { "formula_coordinates": [ 3, 347.14, 652.82, 172.39, 21.41 ], "formula_id": "formula_2", "formula_text": "𝑠𝑐𝑜𝑟𝑒 𝑖 (𝑤) = 𝑠 𝑖 (𝑤) 𝑠𝑖𝑧𝑒 𝑖 • tanh 𝑡 𝑖 (𝑤) 𝑠𝑖𝑧𝑒 𝑖 • log |D | 𝑠 D (𝑤)" }, { "formula_coordinates": [ 4, 96.07, 642.69, 146.93, 20.75 ], "formula_id": "formula_3", "formula_text": "𝜇 (𝑤, 𝑖) = log M{𝑟𝑎𝑛𝑘 𝑗 (𝑤) | 1 ≤ 𝑗 ≤ 𝐶} 1 + 𝑟𝑎𝑛𝑘 𝑖 (𝑤)" }, { "formula_coordinates": [ 4, 390.61, 520.38, 94.64, 7.93 ], "formula_id": "formula_4", "formula_text": "𝐼 (𝑤, 𝑖) = 𝑝 (𝑤, 𝑖) × 𝜇 (𝑤, 𝑖)." }, { "formula_coordinates": [ 5, 133.46, 257.86, 161.12, 22.63 ], "formula_id": "formula_5", "formula_text": "𝜂 = 1 |R| ∑︁ r∈R cos r, R ,(1)" } ]
10.1145/3580305.3599511
2023-06-24
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b2", "b35", "b33", "b39", "b28", "b53", "b60", "b61", "b57", "b65", "b1", "b2", "b6", "b12", "b44", "b58", "b66", "b4", "b13", "b22", "b6", "b12", "b44", "b22", "b16", "b20", "b55", "b2", "b21", "b64", "b2", "b64", "b54", "b2", "b4", "b1", "b41", "b48" ], "table_ref": [], "text": "Spatio-temporal point process (STPP) is a stochastic collection of points, where each point denotes an event 𝑥 = (𝑡, 𝑠) associated with time 𝑡 and location 𝑠. STPP is a principled framework for modeling sequences consisting of spatio-temporal events, and has been applied in a wide range of fields, such as earthquakes and aftershocks [3,36], disease spread [34,40], urban mobility [29,54,61,62], and emergencies [58,66].\nSpatio-temporal point processes have been widely studied in the literature [2,3,7,13,45,59,67] with rich theoretical foundations [5,14,23]. Due to computational complexities, a general approach for STPPs is to characterize the event time and space with distinct models. Conventional STPP models [7,13,45] mainly capture relatively simple patterns of spatio-temporal dynamics, where the temporal domain is modeled by temporal point process models, such as Poisson process [23], Hawkes process [17], and Self-correcting process [21], and the spatial domain is usually fitted by kernel density estimators (KDE) [56]. With the advance of neural networks, a series of neural architectures are proposed to improve the fitting accuracy [3,22,65]. However, they still adopt the approach of separate modeling. For example, Chen et al. [3] use neural ODEs and continuous-time normalizing flows (CNFs) to learn the temporal distribution and spatial distribution, respectively. Zhou et al. [65] apply two independent kernel functions for time and space, whose parameters are obtained from neural networks, to build the density function.\nHowever, for STPPs, the time and space where an event occurs are highly dependent and entangled with each other. For example, in seismology, earthquakes are spatio-temporal correlated due to crust movements [55], which occur with a higher probability close in time and space to previous earthquakes. Take urban mobility as another example, people are more likely to go to work during the day, while tend to go for entertainment at night. Therefore, it is crucial to learn models that can address the spatio-temporal joint distribution conditioned on the event history. However, it is non-trivial due to the following two challenges: (1) Spatio-temporal joint distributions for STPPs usually have tremendous sample spaces, which are highly intractable. Directly fitting requires huge training samples, which is prohibitive in practice. The general approach is to decompose the target distribution into conditionally dependent distributions [3,5], fitting the temporal density 𝑝 * (𝑡) 1 and conditional density 𝑝 * (𝑠 |𝑡) separately. However, the characterization of 𝑝 * (𝑠 |𝑡) is largely limited to certain model structures, such as KDEs and CNFs, which are less expressive. (2) The occurrence of events is usually associated with complex coupling correlations between time and space. Driven by different generation mechanisms, the occurrence of events exhibits distinct spatio-temporal dependencies across various fields. How to effectively capture the underlying dependence for an event still remains an open problem.\nSolving the above two challenges calls for a new modeling paradigm for STPPs. 
In this paper, we propose a novel parameterization framework, Spatio-Temporal Diffusion Point Processes (DSTPP), which is capable of learning spatio-temporal joint distributions effectively. By leveraging denoising diffusion probabilistic modeling, we manage to decompose the original complex distribution into a Markov chain of multiple steps, where each step corresponds to a minor distribution change and can be modeled faithfully by a Gaussian distribution [42, 49]. The target distribution is learned through the combination of all steps, where the predicted joint distribution obtained from the previous step acts as the condition for the next-step learning. In this way, conditioned on the already predicted results, the modeling of time and space becomes independent at the current step, i.e., 𝑝*(𝑡_current | 𝑡_last, 𝑠_last) and 𝑝*(𝑠_current | 𝑡_last, 𝑠_last), which successfully resolves the intractability of the conditional density 𝑝*(𝑠 |𝑡). This novel learning paradigm completely removes the constraints of model structure parameterization in existing solutions, allowing accurate and flexible modeling of STPPs.

The multi-step learning process simulates the generation of the spatio-temporal joint distribution; however, the underlying mechanism of each step is still unclear. To further facilitate the learning at each step, we design a spatio-temporal co-attention module to characterize the spatio-temporal interactions that contribute to the target joint distribution. Specifically, we simultaneously learn spatial attention and temporal attention to capture their fine-grained interactions adaptively, which characterizes the underlying mechanisms of the joint distribution." }, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b16" ], "table_ref": [ "tab_0" ], "text": "Table 1. Comparison of STPP models along four criteria: No Asmp. (1), No Restr. (2), Flexible (3), Closed-form sampling (4).
Hawkes [17]: ✗ ✗ ✗ ✗ | Self-correcting [21]: ✗ ✗ ✗ ✗ | KDE [2]: - - ✗ ✓ | CNF [3]: - - ✗ ✓ | ST Hawkes [45]: ✗ ✗ ✗ ✗ | RMTPP [9]: ✗ ✗ ✓ ✗ | NHP [32]: ✗ ✗ ✓ ✗ | THP [68]: ✗ ✗ ✓ ✗ | SNAP [63]: ✗ ✗ ✓ ✗ | LogNormMix [47]: ✗ ✗ ✗ ✓ | NJSDE [22]: ✗ ✗ ✓ ✗ | Neural STPP [3]: ✓ ✗ ✓ ✗ | DeepSTPP [65]: ✗ ✗ ✓ ✗ | DSTPP (ours): ✓ ✓ ✓ ✓
(1) Without assumptions of conditional spatio-temporal independence. (2) Without dependence restrictions between time and space. (3) Any powerful network architecture can be employed during the calculation. (4) Sampling without any approximation.

Table 1 compares the advantages of our framework with existing solutions. DSTPP can learn spatio-temporal joint distributions without any dependence restrictions. As no integrals or Monte Carlo approximations are required, it is flexible and can perform sampling in closed form. It can also be utilized to model a variety of STPPs, where events are accompanied by either a vector of real-valued spatial coordinates or a discrete value, e.g., a class label of the location; thus it is broadly applicable in real-world scenarios. We summarize our contributions as follows:
• To the best of our knowledge, we are the first to model STPPs within the diffusion model paradigm. By removing integrals and overcoming the structural design limitations of existing solutions, it achieves flexible and accurate modeling of STPPs.
• We propose a novel spatio-temporal point process model, DSTPP. On the one hand, the diffusion-based approach decomposes the complex spatio-temporal joint distribution into tractable distributions. On the other hand, the elaborated co-attention module captures the spatio-temporal interdependence adaptively. 
• Extensive experiments demonstrate the superior performance of our approach for modeling STPPs using both synthetic and realworld datasets. Further in-depth analyses validate that our model successfully captures spatio-temporal interactions for different scenarios in an adaptive manner." }, { "figure_ref": [], "heading": "PRELIMINARIES 2.1 Spatio-temporal Point Process", "publication_ref": [ "b34", "b8", "b27", "b31", "b44", "b64", "b67", "b2", "b4" ], "table_ref": [ "tab_0" ], "text": "A spatio-temporal point process is a stochastic process composed of events with time and space that occur over a domain [35]. These spatio-temporal events are described in continuous time with spatial information. The spatial domain of the event can be recorded in different ways. For example, in earthquakes, it is usually recorded as longitude-latitude coordinates in continuous space. It can also be associated with discrete labels, such as the neighborhoods of crime events. Let 𝑥 𝑖 = (𝑡 𝑖 , 𝑠 𝑖 ) denotes the 𝑖 𝑡ℎ spatio-temporal event written as the pair of occurrence time 𝑡 ∈ T and location 𝑠 ∈ S, where T × S ∈ R × R 𝑑 . Then a spatio-temporal point process can be defined as a sequence 𝑆 = {𝑥 1 , 𝑥 2 , ..., 𝑥 𝐿 }, and the number of events 𝐿 is also stochastic. Let 𝐻 𝑡 = {𝑥 𝑖 |𝑡 𝑖 < 𝑡, 𝑥 𝑖 ∈ 𝑆 } denote the event history before time 𝑡, modeling STPPs is concerned with parameterizing the conditional probability density function 𝑝 (𝑡, 𝑠 |𝐻 𝑡 ), which denotes the conditional probability density of the next event happening at time 𝑡 and space 𝑠 given the history 𝐻 𝑡 . Discussion on shortcomings. In existing methods for STPPs, given the event history, space and time are assumed to be conditionally independent [9,28,32,45,65,68] or unilaterally dependent [3,5] i.e., the space is dependent on the time by 𝑝 (𝑥 |𝑡). These dependence restrictions destroy the model's predictive performance on entangled space and time interactions conditioned on history. Besides, most approaches require integration operations when calculating the likelihood, or limit intensity functions to integrable forms, leading to a trade-off between accuracy and efficiency. We compare the shortcomings of existing approaches in Table 1 2 , which motivate us to design a more flexible and effective model." }, { "figure_ref": [], "heading": "Denoising Diffusion Probabilistic Models", "publication_ref": [ "b18" ], "table_ref": [], "text": "Diffusion models [19] generate samples by learning a distribution that approximates a data distribution. The distribution is learned by a gradual reverse process of adding noise, which recovers the actual value starting from Gaussian noise. At each step of the denoising process, the model learns to predict a slightly less noisy value.\nLet 𝑥 0 ∼ 𝑞(𝑥 0 ) denote a multivariate variable from specific input space 𝑋 ∈ R 𝐷 , and we consider a probability density function 𝑝 𝜃 (𝑥 0 ), which aims to approximate 𝑞(𝑥 0 ). Diffusion models are latent variable models, which are defined by two processes: the forward diffusion process and the reverse denoising process. Let 𝑋 𝑘 for 𝑡 = 1, 2, ..., 𝐾 denote a sequence of latent variables of dimension ∈ R 𝐷 , the forward diffusion process is defined by a Markov chain:\n𝑞(𝑥 1:𝐾 |𝑥 0 ) = 𝐾 𝑘=1 𝑞(𝑥 𝑘 |𝑥 𝑘 -1 ) ,(1)\nwhere 𝑞(𝑥 𝑘 |𝑥 𝑘 -1 ) N (𝑥 𝑘 ; √︁ 1 -𝛽 𝑘 𝑥 𝑘 and 𝛽 𝑘 𝑰 ), 𝛽 1 , ..., 𝛽 𝐾 ∈ (0, 1) is a given increasing variance schedule, representing a particular noise level. 
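To make the forward process just defined concrete, a minimal sketch follows; the linear 𝛽 schedule and the step count are illustrative assumptions, not values taken from the paper, and the closed-form corruption at the end is the one stated formally in the next paragraph.

```python
import torch

K = 200                                   # number of diffusion steps (illustrative)
betas = torch.linspace(1e-4, 0.02, K)     # increasing variance schedule beta_1..beta_K (assumed linear)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)  # \bar{alpha}_k = prod_{s<=k} alpha_s

def q_sample_stepwise(x0, k):
    """Apply q(x_k | x_{k-1}) k times, adding a little Gaussian noise at every step."""
    x = x0
    for step in range(k):
        noise = torch.randn_like(x)
        x = torch.sqrt(1.0 - betas[step]) * x + torch.sqrt(betas[step]) * noise
    return x

def q_sample_closed_form(x0, k):
    """Equivalent single-shot corruption: x_k = sqrt(abar_k) * x_0 + sqrt(1 - abar_k) * eps."""
    eps = torch.randn_like(x0)
    return torch.sqrt(alpha_bar[k - 1]) * x0 + torch.sqrt(1.0 - alpha_bar[k - 1]) * eps
```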
𝑥 𝑘 can be sampled in a closed form as 𝑞(𝑥 𝑘 |𝑥 0 ) = (𝑥 𝑘 ; √︁ 𝛼 𝑘 𝑥 0 , (1-𝛼 𝑘 )𝑰 ), where 𝛼 𝑘 1-𝛽 𝑘 and 𝛼 𝑘 = 𝐾 𝑘=1 𝛼 𝑘 . Then a noisy observation at the 𝑘 𝑡ℎ step can be expressed as 𝑥 𝑘 = √︁ 𝛼 𝑘 𝑥 0 + (1 -𝛼 𝑘 )𝜖, where 𝜖 ∼ N (0, 𝑰 ) and 𝑥 0 is the clean observation.\nOn the contrary, the reverse denoising process recovers 𝑥 0 starting from 𝑥 𝐾 , where 𝑥 𝐾 ∼ N (𝑥 𝐾 ; 0, 𝑰 ). It is defined by the following Markov chain with learned Gaussian transitions:\n𝑝 𝜃 (𝑥 0:𝐾 ) 𝑝 (𝑥 𝐾 ) 𝐾 𝑘=1 𝑝 𝜃 (𝑥 𝑘 -1 |𝑥 𝑘 ) , 𝑝 𝜃 (𝑥 𝑘 -1 |𝑥 𝑘 ) N (𝑥 𝑘 -1 ; 𝜇 𝜃 (𝑥 𝑘 , 𝑘), 𝜎 𝜃 (𝑥 𝑘 , 𝑘)𝑰 ) ,(2)\n2 TPP models can be used for STPPs where the space acts as the marker. 𝑝 𝜃 (𝑥 𝑘 -1 |𝑥 𝑘 ) aims to remove the Gaussian noise added in the forward diffusion process. The parameter 𝜃 can be optimized by minimizing the negative log-likelihood via a variational bound:" }, { "figure_ref": [], "heading": "History", "publication_ref": [ "b18" ], "table_ref": [], "text": "min 𝜃 E 𝑞 (𝑥 0 ) ≤ min 𝜃 E 𝑞 (𝑥 0:𝐾 ) [-log𝑝 (𝑥 𝐾 ) - 𝐾 ∑︁ 𝑘=1 log 𝑝 𝜃 (𝑥 𝑘 -1 |𝑥 𝑘 ) 𝑞(𝑥 𝑘 |𝑥 𝑘 -1 )\n] .\n(3) Ho et al. [19] show that the denoising parameterization can be trained by the simplified objective:\nE 𝑥 0 ∼𝑞 (𝑥 0 ),𝜖∼N (0,𝑰 ) [∥𝜖 -𝜖 𝜃 (𝑥 𝑘 , 𝑘)∥ 2 ] ,(4)\nwhere 𝑥 𝑘 = √︁ 𝛼 𝑘 𝑥 0 + (1 -𝛼 𝑘 )𝜖. 𝜖 𝜃 needs to estimate Gaussian noise added to the input 𝑥 𝑘 , which is trained by MSE loss between the real noise and predicted noise. Therefore, 𝜖 𝜃 acts as the denoising network to transform 𝑥 𝑘 to 𝑥 𝑘 -1 . Once trained, we can sample 𝑥 𝑘 -1 from 𝑝 𝜃 (𝑥 𝑘 -1 |𝑥 𝑘 ) and progressively obtain 𝑥 0 according to Equation (2)." }, { "figure_ref": [ "fig_1" ], "heading": "SPATIO-TEMPORAL DIFFUSION POINT PROCESSES", "publication_ref": [], "table_ref": [], "text": "Figure 2 illustrates the overall framework of DSTPP, which consists of two key modules, the spatio-temporal self-attention encoder, and the spatio-temporal diffusion model. The spatio-temporal encoder learns an effective representation of the event history, then it acts as the condition to support the spatio-temporal denoising diffusion process. We first present the spatio-temporal encoder in Section 3.1.\nThen we formulate the learning of the spatio-temporal joint distribution as a denoising diffusion process, and introduce the diffusion process and inverse denoising process in Section 3.2. We describe how to train this model and perform sampling in Section 3.3. Finally, We demonstrate the detailed architecture of the denoising network parametrization in Section 3.4." }, { "figure_ref": [], "heading": "Algorithm 1", "publication_ref": [], "table_ref": [], "text": "Training for each spatio-temporal event 𝑥 𝑖 = (𝜏 𝑖 , 𝑠 𝑖 )\nInput: ℎ 𝑖 -1 Repeat: 𝑥 0 𝑖 ∼ 𝑞(𝑥 0 𝑖 ), 𝑘 ∼ Uniform(1, 2, ..., 𝐾) 𝜖 ∼ N (0, 𝐼 ) Take gradient descent step on ∇ 𝜙,𝜃 ∥𝜖 -𝜖 𝜃 ( √︁ 𝛼 𝑘 𝑥 0 𝑖 + √︁ 1 -𝛼 𝑘 𝜖, ℎ 𝑖 -1 , 𝑘)∥ 2\nUntil: Converged" }, { "figure_ref": [ "fig_1" ], "heading": "Spatio-temporal Encoder", "publication_ref": [ "b67", "b52" ], "table_ref": [], "text": "To model the spatio-temporal dynamics of events and obtain effective sequence representations, we design a self-attention-based spatio-temporal encoder. The input of the encoder is made up of events 𝑥 = (𝑡, 𝑠). 
To obtain a unique representation for each event, we use two embedding layers for the time and space separately.\nFor the space 𝑠 ∈ R 𝑛 , we utilize a linear embedding layer; for the timestamp, we apply a positional encoding method following [68]:\n[𝑒 𝑡 ] 𝑗 = 𝑐𝑜𝑠 (𝑡/10000 𝑗 -1 𝑀 ) if 𝑗 is odd 𝑠𝑖𝑛(𝑡/10000 𝑗 -1 𝑀 ) if 𝑗 is even ,(5)\nwhere 𝑒 𝑡 denotes the temporal embedding and 𝑀 is the embedding dimension. For the spatial domain, we use linear projection to convert continuous or discrete space into embeddings as follows:\n𝑒 𝑠 = 𝑊 𝑒 𝑠(6)\nwhere 𝑊 𝑒 contains learnable parameters. We use 𝑊 𝑒 ∈ R 𝑀 ×𝐷 if the space 𝑠 is defined in the continuous domain R 𝐷 , 𝐷 ∈ {1, 2, 3}. We use 𝑊 𝑒 ∈ R 𝑀 ×𝑁 if the spatial information is associated with discrete locations represented by one-hot ID encoding 𝑠 ∈ R 𝑁 , where 𝑁 is the number of discrete locations. In this way, we obtain real-value vectors 𝑒 𝑠 for both continuous and discrete spatial domains. For each event 𝑥 = (𝑡, 𝑠), we obtain the spatio-temporal embedding 𝑒 𝑠𝑡 by adding the positional encoding 𝑒 𝑡 and spatial embedding 𝑒 𝑠 . The embedding of the 𝑆 = {(𝑡 𝑖 , 𝑠 𝑖 )} 𝐿 𝑖=1 is then specified by 𝐸 𝑠𝑡 = {𝑒 𝑠𝑡,1 , 𝑒 𝑠𝑡,2 , ..., 𝑒 𝑠𝑡,𝐿 } ∈ R 𝐿×𝑀 , where 𝑒 𝑠𝑡,𝑖 = 𝑒 𝑠,𝑖 + 𝑒 𝑡,𝑖 . In the meantime, we also keep the temporal embedding 𝐸 𝑡 = {𝑒 𝑡,1 , 𝑒 𝑡,2 , ..., 𝑒 𝑡,𝐿 } and spatial embedding 𝐸 𝑠 = {𝑒 𝑠,1 , 𝑒 𝑠,2 , ..., 𝑒 𝑠,𝐿 }, respectively, with the goal of capturing characteristics of different aspects. If only spatio-temporal representation is available, the model may fail when dealing with cases where the temporal and spatial domains are not entangled. With learned representations from different aspects, we did not simply sum them together. Instead, we concatenate them and enable the model to leverage representations adaptively.\nAfter the initial spatial embedding and temporal encoding layers, we pass 𝐸 𝑠𝑡 , 𝐸 𝑠 , and 𝐸 𝑡 through three self-attention modules. Specifically, the scaled dot-product attention [53] is defined as:\nAttention(𝑄, 𝐾, 𝑉 ) = Softmax( 𝑄𝐾 𝑇 √ 𝑑 ) , 𝑆 = Attention(𝑄, 𝐾, 𝑉 )𝑉 ,(7)\nwhere 𝑄, 𝐾, and 𝑉 represent queries, keys, and values. In our case, the self-attention operation takes the embedding 𝐸 as input, and\nAlgorithm 2 Sampling 𝑠 0 𝑖 and 𝜏 0 𝑖 Input: Noise 𝑠 𝐾 𝑖 ∼ N (0, 𝐼 ), 𝜏 𝐾 𝑖 ∼ N (0, 𝐼 ) and ℎ 𝑖 -1 for k = K to 1 do 𝑧 𝑠 ∼ N (0, 𝐼 ), 𝑧 𝑡 ∼ N (0, 𝐼 ) if k>1 else 𝑧 𝑠 = 0, 𝑧 𝑡 = 0 𝑠 𝑘 -1 𝑖 = 1 √ 𝛼 𝑘 (𝑠 𝑘 𝑖 - 𝛽 𝑘 √ 1-𝛼 𝑘 𝜖 𝜃 (𝑠 𝑘 𝑖 , 𝜏 𝑘 𝑖 , ℎ 𝑖 -1 , 𝑘)) + √︁ 𝛽 𝑘 𝑧 𝑠 𝜏 𝑘 -1 𝑖 = 1 √ 𝛼 𝑘 (𝜏 𝑘 𝑖 - 𝛽 𝑘 √ 1-𝛼 𝑘 𝜖 𝜃 (𝑠 𝑘 𝑖 , 𝜏 𝑘 𝑖 , ℎ 𝑖 -1 , 𝑘)) + √︁ 𝛽 𝑘 𝑧 𝑡\nend for Return: 𝑠 0 𝑖 , 𝜏 0 𝑖 then converts it into three matrices by linear projections:\n𝑄 = 𝐸𝑊 𝑄 , 𝐾 = 𝐸𝑊 𝐾 , 𝑉 = 𝐸𝑊 𝑉 ,(8)\nwhere 𝑊 𝑄 ,𝑊 𝐾 , and 𝑊 𝑉 are weights of linear projections. Finally, we use a position-wise feed-forward network to transform the attention output 𝑆 into the hidden representation ℎ(𝑡).\nFor three embeddings 𝐸 𝑠 , 𝐸 𝑡 and 𝐸 𝑠𝑡 containing information of different aspects, we all employ the above self-attentive operation to generate hidden spatial representation ℎ 𝑠 (𝑡), temporal representation ℎ 𝑡 (𝑡), and spatial-temporal representation ℎ 𝑠𝑡 (𝑡). As a result, the hidden representation ℎ 𝑖 -1 in Figure 2 is a collection of the three representations." }, { "figure_ref": [], "heading": "Spatio-temporal Diffusion and Denoising Processes", "publication_ref": [ "b18", "b18" ], "table_ref": [], "text": "Conditioned on the hidden representation ℎ 𝑖 -1 generated by the encoder, we aim to learn a model of the spatio-temporal joint distribution of the future event. 
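Before moving to the diffusion component, a compact sketch of how the history condition ℎ_{𝑖−1} described above could be assembled is shown below; the module choices (e.g., a standard TransformerEncoderLayer) and sizes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class STEncoder(nn.Module):
    """Embeds events (t, s), runs separate self-attention over the temporal, spatial and
    spatio-temporal embeddings, and returns the three hidden states forming h_{i-1}."""
    def __init__(self, space_dim=2, d_model=64, n_heads=4):
        super().__init__()
        self.d_model = d_model
        self.space_emb = nn.Linear(space_dim, d_model)          # Eq. (6)
        self.attn = nn.ModuleDict({k: nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
                                   for k in ("t", "s", "st")})

    def time_encoding(self, t):
        # Sinusoidal encoding of the continuous event time, Eq. (5)
        j = torch.arange(self.d_model, device=t.device)
        angle = t.unsqueeze(-1) / torch.pow(10000.0, (j - 1) / self.d_model)
        return torch.where(j % 2 == 1, torch.cos(angle), torch.sin(angle))

    def forward(self, t, s):
        # t: (B, L) event times, s: (B, L, space_dim) event locations of the observed history
        e_t = self.time_encoding(t)
        e_s = self.space_emb(s)
        e_st = e_t + e_s
        h_t = self.attn["t"](e_t)[:, -1]      # keep the hidden state after the last observed event
        h_s = self.attn["s"](e_s)[:, -1]
        h_st = self.attn["st"](e_st)[:, -1]
        return h_t, h_s, h_st                 # together they act as the condition h_{i-1}
```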
The learning of such distribution is built on the diffusion model [19], and the values of space and time are diffused and denoised at each event. Specifically, for each event 𝑥 𝑖 = (𝜏 𝑖 , 𝑠 𝑖 ) in the sequence, where 𝜏 𝑖 denotes the time interval since the last event, we model the diffusion process as a Markov process over the spatial and temporal domains as (𝑥 0 𝑖 , 𝑥 1 𝑖 , ..., 𝑥 𝐾 𝑖 ), where 𝐾 is the number of diffusion steps. From 𝑥 0 𝑖 to 𝑥 𝐾 𝑖 , we add a little Gaussian noise step by step to the space and time values until they are corrupted into pure Gaussian noise. The process of adding noise is similar to image scenarios, where the noise is applied independently on each pixel [19]. We diffuse separately on the spatial and temporal domains by the following probabilities:\n𝑞 𝑠𝑡 (𝒙 𝑘 𝑖 |𝒙 𝑘 -1 𝑖 ) (𝑞(𝜏 𝑘 𝑖 |𝜏 𝑘 -1 𝑖 ), 𝑞(𝑠 𝑘 𝑖 |𝑠 𝑘 -1 𝑖 )) , 𝑞(𝑥 𝑘 |𝑥 𝑘 -1 ) N (𝑥 𝑘 ; √︁ 1 -𝛽 𝑘 𝑥 𝑘 , 𝛽 𝑘 𝑰 ) ,(9)\nwhere 𝛼 𝑘 = 1 -𝛽 𝑘 and 𝛼 𝑘 = 𝑘 𝑠=1 𝛼 𝑘 . On the contrary, we formulate the reconstruction of the point 𝑥 𝑖 = (𝜏 𝑖 , 𝑠 𝑖 ) as reverse denoising iterations from 𝑥 𝐾 𝑖 to 𝑥 0 𝑖 given the event history. In addition to the history representation ℎ 𝑖 -1 , the denoising processes of time and space are also dependent on each other obtained in the previous step. The predicted values of the next step are modeled in a conditionally independent manner, which is formulated as follows:\n𝑝 𝜃 (𝑥 𝑘 -1 𝑖 |𝑥 𝑘 𝑖 , ℎ 𝑖 -1 ) = 𝑝 𝜃 (𝜏 𝑘 -1 𝑖 |𝜏 𝑘 𝑖 , 𝑠 𝑘 𝑖 , ℎ 𝑖 -1 )𝑝 𝜃 (𝑠 𝑘 -1 𝑖 |𝜏 𝑘 𝑖 , 𝑠 𝑘 𝑖 , ℎ 𝑖 -1 ) ,(10)\nIn this way, we manage to disentangle the modeling of spatiotemporal joint distribution into conditionally independent modeling, which enables effective and efficient modeling of the observed spatio-temporal distribution. The overall reverse denoising process is formulated as follows:\n𝑝 𝜃 (𝑥 0:𝐾 𝑖 |ℎ 𝑖 -1 ) 𝑝 (𝑥 𝐾 𝑖 ) 𝐾 𝑘=1 𝑝 𝜃 (𝑥 𝑘 -1 𝑖 |𝑥 𝑘 𝑖 , ℎ 𝑖 -1 ) . (11\n)\nFor the continuous-space domain, the spatio-temporal distribution can be predicted by Equation 11. For the discrete-space domain, we add a rounding step at the end of the reverse process, 𝑝 𝜃 (𝑠 𝑖 |𝑠 0 𝑖 ), to convert the real-valued embedding 𝑠 0 𝑖 to discrete location ID 𝑠 𝑖 ." }, { "figure_ref": [], "heading": "Training and Inference", "publication_ref": [ "b18" ], "table_ref": [], "text": "Training. For a spatio-temporal point process, the training should optimize the parameters 𝜃 that maximize the log-likelihood:\n𝐿 ∑︁ 𝑖=1 log𝑝 𝜃 (𝑥 0 𝑖 |ℎ 𝑖 -1 ) , (12\n)\nwhere 𝐿 is the number of events in the sequence. Based on a similar derivation in the preliminary section, we train the model by a simplified loss function for the 𝑖 𝑡ℎ event and diffusion step 𝑘 as follows [19]:\nL = E 𝑥 0 𝑖 ,𝜖,𝑘 [∥𝜖 -𝜖 𝜃 ( √︁ 𝛼 𝑘 𝑥 0 𝑖 + √︁ 1 -𝛼 𝑘 𝜖, ℎ 𝑖 -1 , 𝑘)∥ 2 ] ,(13)\nwhere 𝜖 ∼ N (0, 𝐼 ). Samples at each diffusion step k for each event are included in the training set. We train the overall framework consisting of ST encoder and ST diffusion in an end-to-end manner.\nThe pseudocode of the training procedure is shown in Algorithm 1.\nInference. To predict future spatio-temporal events with trained DSTPP. We first obtain the hidden representation ℎ 𝑖 by employing the spatio-temporal self-attention encoder given past 𝑖 -1 events. Then, we can predict the next event starting from Gaussian noise 𝑠 𝐾 𝑖 , 𝜏 𝐾 𝑖 ∼ N (0, 𝐼 ) conditioned on ℎ 𝑖 . 
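In code, this reverse pass could look like the following sketch, where eps_net is only a stand-in for the co-attention denoising network 𝜖_θ of Section 3.4 and the per-step update mirrors the one written out in Equation (14) below.

```python
import torch

@torch.no_grad()
def sample_event(eps_net, h_prev, betas, alpha_bar, space_dim=2):
    """Reverse denoising pass (cf. Algorithm 2): start from Gaussian noise and iteratively
    denoise the inter-event time tau and location s, conditioned on the history h_prev."""
    K = betas.shape[0]
    tau = torch.randn(1)                     # tau_i^K
    s = torch.randn(space_dim)               # s_i^K
    for k in range(K, 0, -1):
        a, abar, b = 1 - betas[k - 1], alpha_bar[k - 1], betas[k - 1]
        pred_t, pred_s = eps_net(tau, s, h_prev, k)        # predicted noise for time and space
        tau = (tau - b / torch.sqrt(1 - abar) * pred_t) / torch.sqrt(a)
        s = (s - b / torch.sqrt(1 - abar) * pred_s) / torch.sqrt(a)
        if k > 1:                                          # no extra noise at the final step
            tau = tau + torch.sqrt(b) * torch.randn_like(tau)
            s = s + torch.sqrt(b) * torch.randn_like(s)
    return tau, s                            # tau_i^0, s_i^0
```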
Specifically, the reconstruction of 𝑥 0 𝑖 from 𝑥 𝐾 𝑖 = (𝑠 𝐾 𝑖 , 𝜏 𝐾 𝑖 ) is formulated as follows:\n𝑠 𝑘 -1 𝑖 = 1 √ 𝛼 𝑘 (𝑠 𝑘 𝑖 - 𝛽 𝑘 √︁ 1 -𝛼 𝑘 𝜖 𝜃 (𝑥 𝑘 𝑖 , ℎ 𝑖 -1 , 𝑘)) + √︁ 𝛽 𝑘 𝑧 𝑠 , 𝜏 𝑘 -1 𝑖 = 1 √ 𝛼 𝑘 (𝜏 𝑘 𝑖 - 𝛽 𝑘 √︁ 1 -𝛼 𝑘 𝜖 𝜃 (𝑥 𝑘 𝑖 , ℎ 𝑖 -1 , 𝑘)) + √︁ 𝛽 𝑘 𝑧 𝑡 ,(14)\nwhere 𝑧 𝑠 and 𝑧 𝑡 are both stochastic variables sampled from a standard Gaussian distribution. 𝜖 𝜃 is the trained reverse denoising network, which takes in the previous denoising result 𝑥 𝑘 𝑖 , the hidden representation of the sequence history ℎ 𝑖 -1 and the diffusion step 𝑘. Algorithm 2 presents the pseudocode of the sampling procedure." }, { "figure_ref": [ "fig_2" ], "heading": "Co-attention Denoising Network", "publication_ref": [ "b13" ], "table_ref": [], "text": "We design a co-attention denoising network to capture the interdependence between spatial and temporal domains, which facilitates the learning of spatio-temporal joint distributions. Specifically, it performs spatial and temporal attention simultaneously at each denoising step to capture fine-grained interactions. Figure 3 illustrates the detailed network architecture. Each step of the denoising process shares the same structure, which takes in the previously predicted values 𝑠 𝑘+1 𝑖 and 𝜏 𝑘+1 𝑖 , and the denoising step 𝑘 with positional encoding. Meanwhile, the network also integrates the hidden representation ℎ 𝑖 -1 to achieve conditional denoising.\nTemporal attention aims to generate a context vector by attending to certain parts of the temporal input and certain parts of the spatial input, and so does spatial attention. We calculate the mutual attention weights, i.e., 𝛼 𝑠 and 𝛼 𝑡 , for space and time based on the condition ℎ 𝑖 -1 and current denoising step 𝑘 as follows:\n𝑒 𝑘 = SinusoidalPosEmb(𝑘) , 𝛼 𝑠 = Softmax(𝑊 𝑠𝑎 Concat(ℎ 𝑖 -1 , 𝑒 𝑘 ) + 𝑏 𝑠𝑎 ) , 𝛼 𝑡 = Softmax(𝑊 𝑡𝑎 Concat(ℎ 𝑖 -1 , 𝑒 𝑘 ) + 𝑏 𝑡𝑎 ) ,(15)\nwhere 𝑊 𝑠𝑎 ,𝑊 𝑡𝑎 , 𝑏 𝑠𝑎 , 𝑏 𝑡𝑎 are learnable parameters. 𝛼 𝑠 and 𝛼 𝑡 measure the mutual dependence between time and space, which are influenced by the event history and current denoising step.\nThen we integrate the spatio-temporal condition ℎ 𝑖 -1 = {ℎ 𝑠,𝑖 -1 , ℎ 𝑡,𝑖 -1 } into previously predicted values 𝑠 𝑘+1 𝑖 and 𝜏 𝑘+1 𝑖 by feedforward neural networks, and each layer is formulated as follows:\n𝑥 𝑠,𝑖 = 𝜎 (𝑊 𝑠 𝑠 𝑘+1 𝑖 + 𝑏 𝑠 + 𝑊 𝑠ℎ ℎ 𝑠,𝑖 -1 + 𝑏 𝑠ℎ + 𝑒 𝑘 ) , 𝑥 𝑡,𝑖 = 𝜎 (𝑊 𝑡 𝜏 𝑘+1 𝑖 + 𝑏 𝑡 + 𝑊 𝑡ℎ ℎ 𝑡,𝑖 -1 + 𝑏 𝑡ℎ + 𝑒 𝑘 ) ,(16)\nwhere 𝑊 𝑠 ∈ R 𝑀 ×𝐷 ,𝑊 𝑡 ∈ R 𝑀 ×1 ,𝑊 𝑠ℎ ,𝑊 𝑡ℎ ∈ R 𝑀 ×𝑀 , and 𝑏 𝑠 , 𝑏 𝑡 , 𝑏 𝑠ℎ , 𝑏 𝑡ℎ ∈ R 𝑀 ×1 are learnable parameters of the linear projection, and 𝜎 denotes the ReLU activation function. Finally, the outputs of spatial attention and temporal attention are calculated as follows:\n𝑥 𝑖 = [𝑥 𝑠,𝑖 , 𝑥 𝑡,𝑖 ] , 𝜖 𝑘 𝑠,𝑖 = ∑︁ 𝛼 𝑠 𝑥 𝑖 , 𝜖 𝑘 𝑡,𝑖 = ∑︁ 𝛼 𝑡 𝑥 𝑖 ,(17)\nwhere 𝜖 𝑘 𝑠,𝑖 and 𝜖 𝑘 𝑡,𝑖 are the predicted noise at step 𝑘 for the 𝑖 𝑡ℎ event. We can obtain the predicted values 𝑠 𝑘 𝑖 and 𝜏 𝑘 𝑖 at step 𝑘 according to Equation (14). Then the predicted values 𝑠 𝑘 𝑖 and 𝜏 𝑘 𝑖 are fed into the denoising network again to iteratively predict the results towards the clean values of space and time. In this way, the interdependence between time and space is captured adaptively and dynamically, facilitating the learning of the spatio-temporal joint distribution." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we perform experiments to answer the following research questions:\n• RQ1: How does the proposed model perform compared with existing baseline approaches? 
• RQ2: Is the joint modeling of spatial and temporal dimensions effective for STPPs, and what's the spatio-temporal interdependence like during the denoising process? • RQ3: How does the total number of diffusion steps affect the performance? • RQ4: How to gain a deeper understanding of the reverse denoising diffusion process?" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b2" ], "table_ref": [], "text": "4.1.1 Datasets. We perform extensive experiments on synthetic datasets and real-world datasets in the STPP literature. All datasets are obtained from open sources, which contain up to thousands of spatio-temporal events. Varying across a wide range of fields, we use one synthetic dataset and three real-world datasets, including earthquakes in Japan, COVID-19 spread, bike sharing in New York City, and simulated Hawkes Gaussian Mixture Model process [3].\nBesides, we use a real-world dataset, Atlanta Crime Data, the spatial locations of which are discrete neighborhoods. We briefly introduce them here, and further details can be found in Appendix A.\n(1) Earthquakes. Earthquakes in Japan with a magnitude of at least 2.5 from 1990 to 2020 recorded by the U.S. Geological Survey3 .\n(2) COVID-19. Publicly released by The New York Times (2020), which records daily infected cases of COVID-19 in New Jersey state 4 . We aggregate the data at the county level. (3) Citibike. Bike sharing in New York City collected by a bike sharing service. The start of each trip is considered as an event. (4)HawkesGMM 5 . This synthetic data uses Gaussian Mixture Model to generate spatial locations. Events are sampled from a multivariate Hawkes process. (6) Crime 6 . It is provided by the Atlanta Police Department, recording robbery crime events. Each event is associated with the time and the neighborhood." }, { "figure_ref": [], "heading": "Baselines.", "publication_ref": [ "b3", "b3", "b3", "b42", "b17", "b20", "b8", "b31", "b67", "b62", "b46", "b56", "b46", "b2", "b64" ], "table_ref": [], "text": "To evaluate the performance of our proposed model, we compare it with commonly-used methods and state-ofthe-art models. The baselines can be divided into three groups: spatial baselines, temporal baselines, and spatio-temporal baselines. It is common for previous methods to model the spatial domain and temporal domain separately, so spatial baselines and temporal baselines can be combined freely for STPPs. We summarize the three groups as follows 7 :\n• Spatial baselines: We use conditional kernel density estimation (Condition KDE) [4], Continuous normalizing flow (CNF), and Time-varying CNF [4] (TVCNF) [4]. The three methods all model continuous spatial distributions. • Temporal baselines: We include commonly used TPP models.\nClassical TPP models include the Poisson process [43], Hawkes Process [18], and Self-correcting process [21]. We also incorporate neural TPP models, including Recurrent Marked Temporal Point Process (RMTPP) [9], Neural Hawkes Process (NHP) [32], Transformer Hawkes Process (THP) [68], Self-attentive Hawkes Process (SAHP) [63]. Besides, we also compare with intensityfree approaches: Log Normal Mixture model (LogNormMix) [47], and Wasserstein GAN (WGAN) [57]. • Spatio-temporal baselines. We include state-of-the-art spatiotemporal baselines, including Neural Jump Stochastic Differential Equations (NJSDE) [47], Neural Spatio-temporal Point Process (NSTPP) [3], and DeepSTPP [65]." 
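Despite their different domains, all of these datasets reduce to the same interface the model consumes: ordered sequences of (time, location) events. A small preprocessing sketch is given below; the normalization choices are assumptions made for illustration, not the datasets' actual pipeline.

```python
import numpy as np

def to_model_sequence(timestamps, locations):
    """Convert one raw event stream into (inter-event times, normalized coordinates).

    timestamps: 1-D array of event times, assumed sorted in ascending order.
    locations:  (L, D) array of coordinates (e.g., lon/lat); discrete neighborhoods
                would instead be mapped to integer IDs and one-hot encoded.
    """
    timestamps = np.asarray(timestamps, dtype=float)
    locations = np.asarray(locations, dtype=float)
    tau = np.diff(timestamps, prepend=timestamps[0])       # inter-event times, tau_1 = 0
    loc = (locations - locations.mean(axis=0)) / (locations.std(axis=0) + 1e-8)
    return tau, loc
```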
}, { "figure_ref": [], "heading": "Evaluation Metrics.", "publication_ref": [ "b36" ], "table_ref": [], "text": "We evaluate the performance of models from two perspectives: likelihood comparison and event prediction comparison. We use negative log-loglikelihood (NLL) as metrics, and the time and space are evaluated, respectively. Although the exact likelihood cannot be obtained, we can write the variational lower bound (VLB) according to Equation ( 3) and utilize it as the NLL metric instead. Thus, the performance on exact likelihood is even better than the reported variational lower bound. The models' predictive ability for time and space is also important in practical applications [37]. Since time intervals are real values, we use a common metric, Root Mean Square Error (RMSE), to evaluate time prediction. The spatial location can be defined in 𝐷-dimensional space, so we use Euclidean distance to measure the spatial prediction error. We refer the readers to Appendix C.1 for more details of the used evaluation metrics." }, { "figure_ref": [], "heading": "Overall performance", "publication_ref": [], "table_ref": [], "text": "Table 2 and Table 3 show the overall performance of models on NLL and prediction, respectively. Figure 4 shows the prediction performance of models in discrete-space scenarios. From these results, we have the following observations:\n• Unreasonable parametric assumptions for point processes destroy the performance severely. The worst performance of the self-correcting process indicates the assumption that the occurrence of past events inhibits the occurrence of future events, does not match realities. On the contrary, the Hawkes process, which assumes the occurrence of an event increases the probability of event occurrence in the future, outperforms other classical models (Poisson and Self-correcting), with an obvious reduction of temporal NLL. Nevertheless, the self-exciting assumption can still fail when faced with cases where previous events prevent subsequent events. Therefore, classical models that require certain assumptions, cannot cover all situations with different dynamics. • It is necessary to capture the spatio-temporal interdependence. NSTPP models the dependence of space on time by 𝑝 (𝑠 |𝑡), 7 Appendix B provides more details of the used baselines. also achieves remarkably significant improvement across various datasets. In terms of models' predictive power, our model also achieves optimal performance, with remarkable improvements compared to the second-best model. In addition, as Figure 4 shows, DSTPP delivers better predictive performance compared with other solutions in modeling discrete-space scenarios. The flexible framework that requires no parameter assumptions and MC estimations enables DSTPP to achieve superior performance." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_4" ], "heading": "Analysis of Spatio-temporal Interdependence", "publication_ref": [], "table_ref": [], "text": "To gain a deeper understanding of the spatio-temporal interdependence in the denoising process, we perform an in-depth analysis of co-attention weights. Specifically, the analysis is conducted on two representative datasets: Earthquake and Synthetic-Independent, where the Earthquake dataset is highly spatio-temporal entangled, and the Synthetic-Independent dataset is totally spatio-temporal independent. Appendix A provides the generation details of the synthetic dataset. 
We use these two datasets to validate whether the designed co-attention mechanism can learn different interdependence between time and space. At each step of the denoising process, we calculate attention weights of the temporal and spatial dimensions on themselves and each other. Figure 6 shows how attention weights change as denoising proceeds.\nAs shown in Figure 6(a), at the early stage, temporal and spatial domains do not assign attention weights to each other, and the attention weights on themselves are close to one. At the final stage (step ≥ 150), the two domains start to assign attention weights to each other. At last, for the temporal domain, the attention weights on time and space are approximately 0.83 and 0.17; for the spatial domain, the attention weights are close to evenly divided (0.52 and 0.48), suggesting that the spatial domain is more dependent on the temporal domain. In the later stage of the denoising iterations, the model learns a distribution closer to the real case; thus, it is reasonable that the spatial and temporal domains assign more attention weights to each other. Figure 6(b) displays different results: the two domains share almost no attention weights to each other, indicating that the model has successfully learned the independent relationship. Figure 6(a) and (b) together validate the effectiveness of the co-attention mechanism, which can adaptively learn various interaction mechanisms between time and space." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Co-attention Mechanism. In order to examine the effectiveness of the co-attention mechanism, we degrade our DSTPP into a base framework, DSTPP-Ind, which models the distributions of space and time independently in the denoising process. To be specific, we replace 𝑝 𝜃 (𝑡 𝑘 -1\n𝑖 |𝑡 𝑘 𝑖 , 𝑠 𝑘 𝑖 , ℎ 𝑖 -1 ) and 𝑝 𝜃 (𝑠 𝑘 -1 𝑖 |𝑡 𝑘 𝑖 , 𝑠 𝑘 𝑖 , ℎ 𝑖 -1 ) in Equation (10) with 𝑝 𝜃 (𝑡 𝑘 -1 𝑖 |𝑡 𝑘 𝑖 , ℎ 𝑖 -1 ), 𝑝 𝜃 (𝑠 𝑘 -1 𝑖 |𝑠 𝑘 𝑖 , ℎ 𝑖 -1 )\n, where space and time are not conditionally dependent on each other. Figure 5 shows the performance comparison of DSTPP and DSTPP-Ind in continuous-space settings. We can observe that DSTPP trained by incorporating the joint modeling of time and space performs consistently better than DSTPP-Ind with independent modeling. These results indicate the necessity to capture the interdependence between time and space, and meanwhile, validate the effectiveness of the spatio-temporal co-attention design. Due to the space limit, we leave other results in Appendix D." }, { "figure_ref": [], "heading": "Analysis of Reverse Diffusion Processes", "publication_ref": [], "table_ref": [], "text": "To gain a deeper understanding of the denoising process, We visualize the spatial distribution during the reverse denoising iterations in Figure 7. As we can observe, at the beginning of the denoising process, the spatial distribution displays a Gaussian noise. With progressive denoising iterations, the data distribution deforms gradually and becomes more concentrated. Finally, at the last step, the spatial distribution fits perfectly with the ground truth distribution. It indicates that our DSTPP is able to learn the generative process of spatial distribution successfully. Besides, the denoising process is not a linear change, where the distribution changes during the last 50 steps are more significant than the previous steps. 
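The per-step attention analysis above reduces to averaging, over events, the 2x2 co-attention weights (time and space attending to themselves and to each other) recorded at every denoising step. A minimal sketch of that aggregation; the recorded array and its shape are assumptions about how a sampler could expose these weights, and the random data below only stands in for real recordings:

import numpy as np

# Assumed record produced while sampling: for each denoising step k we store,
# per event, a 2x2 matrix of co-attention weights whose first index is the
# querying domain (0 = time, 1 = space) and whose last index is the attended
# domain; each 2-vector over the attended domain sums to 1.
# Shape: (num_steps, num_events, 2, 2). Dirichlet samples are dummy stand-ins.
recorded = np.random.dirichlet([1, 1], size=(200, 1000, 2))

per_step = recorded.mean(axis=1)      # average over events -> (num_steps, 2, 2)

time_on_time   = per_step[:, 0, 0]
time_on_space  = per_step[:, 0, 1]
space_on_time  = per_step[:, 1, 0]
space_on_space = per_step[:, 1, 1]

# Inspect the final denoising step: the averages of this kind are what the
# reported values (roughly 0.83/0.17 and 0.52/0.48 on Earthquake) refer to.
print(time_on_time[-1], time_on_space[-1], space_on_time[-1], space_on_space[-1])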
Combined with results in Section 4.3, where the interdependence between spatial and temporal domains is effectively captured in the latter stage, it is reasonable that the denoising effect is improved significantly during this period." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b8", "b27", "b31", "b62", "b67", "b1", "b2", "b21", "b34", "b44", "b64", "b37", "b46", "b37", "b46", "b7", "b56", "b25", "b51", "b30", "b38", "b15", "b32", "b18", "b48", "b49", "b0", "b5", "b45", "b47", "b43", "b50", "b10", "b19", "b23", "b9", "b11", "b26", "b29", "b63", "b14", "b40" ], "table_ref": [], "text": "Spatio-temporal Point Processes. Temporal point process models [9,28,32,63,68] can be directly used for STPPs, where the space is considered as the event marker. Kernel density estimation methods are also used to model continuous-space distributions in STPP models [2,3,22,35,45,65]. Most existing solutions follow an intensity-based paradigm, and their main challenge is how to choose a good parametric form for the intensity function. There exists a trade-off between the modeling capability of the intensity function and the cost to compute the log-likelihood. Some intensityfree models [38,47] are proposed to tackle this problem; however, the probability density function either is unavailable [38] or still has certain model restrictions [47]. Another drawback of existing models is that they can only model either the continuous-space domain or the discrete-space domain, which largely limits their usability in real-world scenarios.\nRecently, a line of advances have been developed for the generative modeling of point processes. For example, generative adversarial networks [8,57] are used to learn to generate point processes in a likelihood-free manner. Reinforcement learning [26,52] approaches and variational autoencoders [31,39] are also included to explore the generative performance of TPPs. Some works also use noise contrastive learning [16,33] instead of MLE. We are the first to learn point processes within the paradigm of diffusion models, which successfully address limitations in previous existing solutions.\nDenoising Diffusion Probabilistic Models. Denoising diffusion probabilistic models (DDPM) [19,49,50], are a class of deep generative models, which are inspired by non-equilibrium thermodynamics. Due to their powerful generative capabilities, diffusion models have been used in a wide range of applications including image generation [1,6,46,48], time series prediction and imputation [44,51], audio generation [11,20,24], text generation [10,12,27], 3D point cloud generation [30,64], and trajectory generation [15,41]. In this paper, we first introduce the diffusion model to the domain of spatio-temporal point processes." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [ "b24", "b59" ], "table_ref": [], "text": "In this paper, we propose a novel framework to directly learn spatiotemporal joint distributions with no requirement for independence assumption and Monte Carlo sampling, which has addressed the structural shortcomings of existing solutions. The framework also poses desired properties like easy training and closed-form sampling. Extensive experiments on diverse datasets highlight the impact of our framework against state-of-the-art STPP models. 
As for future work, it is promising to apply our model in urban systems [25,60] as well as large-scale natural systems, such as climate change and ocean currents, which are concerned with highly complex spatio-temporal data. validation set, and 100 for the testing set. The sequence lengths range between 5 and 287.
Citibike: This dataset is collected by a bike-sharing service, which records the demand for bike sharing in New York City. We use the records from April 2019 to August 2019. The start of each trip is considered as an event. We split the record sequence of each bike into one-day subsequences starting at 5:00 am. Therefore, each sequence covers at most 19 hours. We randomly split the dataset into a training set, a validation set, and a testing set. Finally, we have 2440 sequences for the training set, 300 for the validation set, and 320 for the testing set. The sequence lengths range from 14 to 204.
Crime: It is provided by the Atlanta Police Department, recording robbery crime events from the end of 2015 to 2017. Each robbery report is associated with the time and the neighborhood. Each sequence is within the length of one day. We randomly split the dataset into a training set, a validation set, and a testing set. Synthetic-Independent: The temporal domain is generated by a Hawkes process, whose intensity function is defined as follows:
\lambda(t \mid H_t) = 0.2 + \sum_{t_i < t} \left( 0.2\, e^{-0.2 (t - t_i)} + 4\, e^{-10 (t - t_i)} \right) \quad (18)
The spatial distribution follows a two-dimensional Gaussian distribution:
f(s_1, s_2) = \frac{1}{2\pi \sigma_{s_1} \sigma_{s_2} \sqrt{1-\rho^2}} \exp\!\left( -\frac{1}{2(1-\rho^2)} \left[ \left(\tfrac{s_1-\mu_1}{\sigma_{s_1}}\right)^2 - 2\rho \left(\tfrac{s_1-\mu_1}{\sigma_{s_1}}\right)\!\left(\tfrac{s_2-\mu_2}{\sigma_{s_2}}\right) + \left(\tfrac{s_2-\mu_2}{\sigma_{s_2}}\right)^2 \right] \right) \quad (19)-(20)
where \rho = \frac{\sqrt{2}}{4}, \mu_1 = 4.0, \mu_2 = 7.0, \sigma_{s_1} = \sqrt{2}, \sigma_{s_2} = 2 (a simulation sketch based on these equations appears further below)." }, { "figure_ref": [], "heading": "B BASELINE", "publication_ref": [ "b3", "b42", "b17", "b20" ], "table_ref": [], "text": "We provide detailed descriptions of the baselines as follows:
• Conditional KDE: Conditional kernel density estimation. We utilize a history-dependent Gaussian mixture model to model the spatial distribution. • CNF and Time-varying CNF [4]: We use a continuous normalizing flow to model the spatial distribution; Time-varying CNF denotes dependence on the timestamps. • Poisson [43]: The homogeneous Poisson process is the simplest point process, in which the numbers of events occurring in disjoint time intervals are independent and the probability of a single event occurrence is proportional to the length of the interval. • Hawkes [18]: Its essential property is that the occurrence of any event increases the probability of further events occurring by a certain amount. The triggering kernel, which captures temporal dependencies, can be chosen in advance or learned directly from data.
• Self-correcting [21]: In contrast to the Hawkes process, this point process follows the pattern that the occurrence of past events inhibits the occurrence of future events. Every time a new event appears, the intensity is decreased by multiplying it by a constant less than 1. " }, { "figure_ref": [], "heading": "C.2 Parameter Settings", "publication_ref": [], "table_ref": [], "text": "We use three-layer MLPs with ReLU activations and a hidden size of 64 throughout. The training process is performed in batches. After training the model for ten epochs (each epoch covering all training instances), we examine the model's performance on the validation set. The model that delivers the best performance on the validation set is used to report performance on the test set. 
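Given Eqs. (18)-(20), the Synthetic-Independent data can be generated with Ogata's thinning algorithm for the temporal Hawkes part and a single correlated bivariate Gaussian for the spatial part. A minimal NumPy sketch; the time horizon and random seed are arbitrary choices, not settings reported in the text:

import numpy as np

rng = np.random.default_rng(0)

MU, A1, B1, A2, B2 = 0.2, 0.2, 0.2, 4.0, 10.0   # parameters of Eq. (18)

def intensity(t, events):
    past = np.asarray(events)
    past = past[past < t]
    return MU + np.sum(A1 * np.exp(-B1 * (t - past)) + A2 * np.exp(-B2 * (t - past)))

def simulate_hawkes(t_max=100.0):
    # Ogata's thinning: propose with an upper bound on the intensity, then
    # accept each proposal with probability lambda(t) / bound.
    events, t = [], 0.0
    while t < t_max:
        lam_bar = intensity(t, events) + A1 + A2   # valid bound right after t
        t += rng.exponential(1.0 / lam_bar)
        if t < t_max and rng.uniform() <= intensity(t, events) / lam_bar:
            events.append(t)
    return np.asarray(events)

def sample_space(n):
    # Correlated bivariate Gaussian of Eqs. (19)-(20).
    mu = np.array([4.0, 7.0])
    s1, s2, rho = np.sqrt(2.0), 2.0, np.sqrt(2.0) / 4.0
    cov = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
    return rng.multivariate_normal(mu, cov, size=n)

times = simulate_hawkes()
locs = sample_space(len(times))   # space is independent of time by construction
print(len(times), locs.shape)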
We set the learning rate as 3e-4 via searching in a set of {1𝑒 -4, 3𝑒 -4, 1𝑒 -3}. The proposed framework is implemented with Pytorch. We train it on a Linux server with eight GPUs (NVIDIA RTX 2080 Ti * 8). In practice, our framework can be effectively trained within 6 hours on a single GPU." }, { "figure_ref": [ "fig_8" ], "heading": "D ADDITIONAL RESULTS", "publication_ref": [ "b48" ], "table_ref": [], "text": "Diffusion Steps. The number of total steps 𝐾 in the diffusion process is a crucial hyperparameter. With the increase of diffusion steps, the denoising network approximates more minimal changes between steps. A bigger 𝐾 allows the reverse denoising process to be adequately approximated by Gaussian distribution [49]. However, too many diffusion steps will vastly reduce the efficiency of training and sampling. Therefore, it is essential to explore to what extent, a larger diffusion step 𝐾 improves the model's performance. Specifically, we perform ablation studies on Earthquakes and HawkesGMM datasets with varying total diffusion steps 𝐾 = {2, 5, 10, 20, 50, 100, 200, 500, 1000} and keep all other hyperparameters fixed. The results of temporal NLL and spatial NLL are plotted in Figure 8. We can observe that temporal NLL and spatial NLL both achieve the best values at 𝐾 ≈ 200, suggesting that the diffusion step 𝐾 can be reduced to 200 without significant performance loss." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the National Key Research and Development Program of China under grant 2020YFA0711403, the National Nature Science Foundation of China under U22B2057, 61971267, and U1936217, and BNRist." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure 4: The performance of models on discrete-space datasets for both time and space of the next event.\nand the performance regarding spatial metrics is improved compared with independent modeling, including Conditional KDE, CNF, and Time-varying CNF. However, it does not outperform other TPP models in the temporal domain, suggesting that modeling the distribution 𝑝 (𝑡 |𝐻 𝑡 ) without conditioning on the space 𝑠 fails to learn the temporal domain sufficiently. • DSTPP achieves the best performance across multiple datasets.\nIn the continuous-space scenarios regarding NLL, our model performs the best on both temporal and spatial domains. Compared with the second-best model, our model reduces the spatial NLL by over 20% on average. The performance on temporal NLL 9 from March 2020 to July 2020. We aggregate the data at the county level. Sequences are generated by sliding windows with a window size of 7 days and a gap of three days. Therefore, each sequence is within the length of 7 days. We split the dataset into the training set, validation set, and testing set and ensure that there is no overlap between them. Finally, We have 1450 sequences for the training set, 100 for the" } ]
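The Diffusion Steps ablation above varies only the total number of steps K; the rest follows from the variance schedule. A small sketch of a linear beta-schedule and the resulting cumulative \bar{\alpha}_K for several K; the linear schedule and its endpoints are common defaults assumed here, not values taken from the text:

import numpy as np

def linear_schedule(K, beta_start=1e-4, beta_end=0.02):
    betas = np.linspace(beta_start, beta_end, K)
    alphas_bar = np.cumprod(1.0 - betas)
    return betas, alphas_bar

for K in (2, 10, 200, 1000):
    _, alphas_bar = linear_schedule(K)
    # alphas_bar[-1] tells how much signal survives at the final forward step;
    # with very small K the forward process never reaches (near-)pure noise.
    print(K, alphas_bar[-1])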
A spatio-temporal point process (STPP) is a stochastic collection of events, each accompanied by a time and a location. Due to computational complexity, existing solutions for STPPs settle for conditional independence between time and space and model the temporal and spatial distributions separately. The failure to model the joint distribution limits their ability to characterize the spatio-temporally entangled interactions given past events. In this work, we propose a novel parameterization framework for STPPs that leverages diffusion models to learn complex spatio-temporal joint distributions. We decompose the learning of the target joint distribution into multiple steps, each of which can be faithfully described by a Gaussian distribution. To enhance the learning of each step, an elaborate spatio-temporal co-attention module is proposed to adaptively capture the interdependence between event time and space. For the first time, we remove the restrictions on spatio-temporal dependencies imposed by existing solutions and enable a flexible and accurate modeling paradigm for STPPs. Extensive experiments from a wide range of fields, such as epidemiology, seismology, crime, and urban mobility, demonstrate that our framework remarkably outperforms state-of-the-art baselines. Further in-depth analyses validate its ability to capture spatio-temporal interactions and adapt to different scenarios. The datasets and source code are available online: https://github.com/tsinghua-fiblab/Spatio-temporal-Diffusion-Point-Processes.
Spatio-temporal Diffusion Point Processes
[ { "figure_caption": "Figure 1 :1Figure 1: High-level comparison between our proposed framework and conditionally independent solutions for modeling STPPs. Our framework can directly learn the spatiotemporal joint distribution without any model restrictions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The overview of the proposed DSTPP framework.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Network architecture of the spatio-temporal coattention mechanism. Each step in the denoising process shares the same network structure, with spatio-temporal hidden representations as conditions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Ablation study on the joint spatio-temporal modeling. DSTPP-Ind denotes the degraded version of DSTPP, where spatial and temporal domains are independent.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Spatial and temporal attention weights in the denoising iterations for two datasets with different spatiotemporal interdependence. Best viewed in color.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Earthquakes COVID- 19 HawkesGMMFigure 7 :197Figure 7: Visualization of the spatial distribution at different stages in the denoising process (the first five columns in blue color). The last column in red color presents the real distribution. Starting from Gaussian noise, our DSTPP model gradually fits the spatial distribution of ground truth. Best viewed in color.", "figure_data": "", "figure_id": "fig_5", "figure_label": "197", "figure_type": "figure" }, { "figure_caption": "Finally, We have 2000 sequences for the training set, 200 for the validation set, and 2000 for the testing set. The sequence lengths range between 26 to 144.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "•Recurrent Marked Temporal Point Process (RMTPP)[9]: This neural temporal point process model applies a nonlinear function of past events to model the intensity function and leverages RNNs to learn a representation of the influence from event history, where time intervals act as explicit inputs.• Neural Hawkes Process (NHP)[32]: With the goal of capturing the temporal evolution of event sequences, it uses continuoustime LSTMs to model a marked TPP. The modeling of future event intensities is conditioned on the RNN's hidden state. • Transformer Hawkes Process (THP)[68]: It is an extension to the transformer by modeling the conditional intensity. The selfattention mechanism is leveraged to capture long-term dependencies. • Self-attentive Hawkes Process (SAHP)[63]: It learns the temporal dynamics by leveraging a self-attention mechanism to aggregate historical events. In order to take the time intervals between events into consideration, it modifies the conventional positional encoding by converting time intervals into phase shifts of sinusoidal functions. • Log Normal Mixture model (LogNormMix) [47]: It adopts intensityfree learning of TPPs, which models the PDF by a log-normal mixture model. Additionally, a simple mixture model is proposed to match the flexibility of flow-based models. 
The loglikelihood for training and density for sampling are in closed form. • Wasserstein GAN (WGAN) [57]: This intensity-free approach transforms nuisance processes to target one. And the Wasserstein distance is used to train the model, which is a likelihood-free method. Loglikelihood cannot be obtained for this approach. • Neural Jump Stochastic Differential Equations (NJSDE) [47]: It models TPPs with a piecewise-continuous latent representation, where the discontinuities are brought by stochastic events. The spatial distribution is modeled with a Gaussian mixture model. • Neural Spatio-temporal Point Process (NSTPP) [3]: It applies Neural ODEs as the backbone, which parameterizes the temporal intensity with Neural Jump SDEs and the spatial PDF with continuous-time normalizing flows. • Deep Spatiotemporal Point Process (DeepSTPP) [65]: It is the state-of-the-art STPP model, which suggests using a non-parametric space-time intensity function governed by a latent process. Amortized variational inference is leveraged to deduce the latent process.C IMPLEMENTATION DETAILS C.1 Evaluation MetricsSuppose 𝒚 = 𝑦 1 , ..., 𝑦 𝑀 represents the ground truth for real values, ŷ = ŷ1 , ..., ŷ𝑁 represents the predicted real values, 𝒌 = 𝑘 1 , ..., 𝑘 𝑀 represents the ground truth for real values, k = k1 , ..., k𝑁 represents the predicted discrete labels, and 𝑁 denotes the number of test samples, we can formulate these metrics as follows:", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Ablation studies on the total number of diffusion steps for Earthquake and HawkesGMM data. We observe similar results with other datasets.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Comparison of the proposed model with other point process approaches regarding important properties.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
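The metric definitions that should follow the sentence ending in "as follows:" above do not survive in this dump. Standard forms consistent with the surrounding description (RMSE over predicted time intervals, mean Euclidean distance over predicted D-dimensional locations, and accuracy over predicted discrete labels) would be, as an assumption:

\mathrm{RMSE}(\boldsymbol{y}, \hat{\boldsymbol{y}}) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}
\mathrm{Dist}(\boldsymbol{s}, \hat{\boldsymbol{s}}) = \frac{1}{N}\sum_{i=1}^{N}\left\lVert s_i - \hat{s}_i \right\rVert_2
\mathrm{Acc}(\boldsymbol{k}, \hat{\boldsymbol{k}}) = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\!\left[k_i = \hat{k}_i\right]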
Yuan Yuan; Jingtao Ding; Chenyang Shao; Depeng Jin; Yong Li
[ { "authors": "Jacob Austin; Jonathan Daniel D Johnson; Daniel Ho; Rianne Tarlow; Van Den; Berg", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Structured denoising diffusion models in discrete state-spaces", "year": "2021" }, { "authors": "Adrian Baddeley; Imre Bárány; Rolf Schneider", "journal": "", "ref_id": "b1", "title": "Spatial point processes and their applications", "year": "2004" }, { "authors": "Brandon Ricky Tq Chen; Maximilian Amos; Nickel", "journal": "", "ref_id": "b2", "title": "Neural spatiotemporal point processes", "year": "2021" }, { "authors": "Yulia Ricky Tq Chen; Jesse Rubanova; David K Bettencourt; Duvenaud", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Neural ordinary differential equations", "year": "2018" }, { "authors": "Daryl J Daley; David Vere-Jones", "journal": "Springer", "ref_id": "b4", "title": "An introduction to the theory of point processes: volume I: elementary theory and methods", "year": "2003" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "J Peter; Diggle", "journal": "Monographs on Statistics and Applied Probability", "ref_id": "b6", "title": "Spatio-temporal point processes: methods and applications", "year": "2006" }, { "authors": "S Haleh; Saeid Dizaji; Javad Pashazadeh; Niya Musevi", "journal": "The Journal of Supercomputing", "ref_id": "b7", "title": "Wasserstein generative adversarial networks for modeling marked events", "year": "2022" }, { "authors": "Nan Du; Hanjun Dai; Rakshit Trivedi; Utkarsh Upadhyay; Manuel Gomez-Rodriguez; Le Song", "journal": "", "ref_id": "b8", "title": "Recurrent marked temporal point processes: Embedding event history to vector", "year": "2016" }, { "authors": "Zhujin Gao; Junliang Guo; Xu Tan; Yongxin Zhu; Fang Zhang; Jiang Bian; Linli Xu", "journal": "", "ref_id": "b9", "title": "Difformer: Empowering Diffusion Model on Embedding Space for Text Generation", "year": "2022" }, { "authors": "Karan Goel; Albert Gu; Chris Donahue; Christopher Ré", "journal": "PMLR", "ref_id": "b10", "title": "It's raw! 
audio generation with state-space models", "year": "2022" }, { "authors": "Shansan Gong; Mukai Li; Jiangtao Feng; Zhiyong Wu; Lingpeng Kong", "journal": "", "ref_id": "b11", "title": "Diffuseq: Sequence to sequence text generation with diffusion models", "year": "2022" }, { "authors": "A Jonatan; Francisco J González; Ottmar Rodríguez-Cortés; Jorge Cronie; Mateu", "journal": "Spatial Statistics", "ref_id": "b12", "title": "Spatio-temporal point process statistics: a review", "year": "2016" }, { "authors": "Jan Grandell", "journal": "Springer Science & Business Media", "ref_id": "b13", "title": "Aspects of risk theory", "year": "2012" }, { "authors": "Tianpei Gu; Guangyi Chen; Junlong Li; Chunze Lin; Yongming Rao; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b14", "title": "Stochastic trajectory prediction via motion indeterminacy diffusion", "year": "2022" }, { "authors": "Ruocheng Guo; Jundong Li; Huan Liu", "journal": "", "ref_id": "b15", "title": "INITIATOR: Noise-contrastive Estimation for Marked Temporal Point Process", "year": "2018" }, { "authors": "Alan G Hawkes", "journal": "Journal of the Royal Statistical Society: Series B (Methodological)", "ref_id": "b16", "title": "Point spectra of some mutually exciting point processes", "year": "1971" }, { "authors": "Alan G Hawkes", "journal": "Quantitative Finance", "ref_id": "b17", "title": "Hawkes processes and their applications to finance: a review", "year": "2018" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b18", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Tim Salimans; Alexey Gritsenko; William Chan; Mohammad Norouzi; David J Fleet", "journal": "", "ref_id": "b19", "title": "Video diffusion models", "year": "2022" }, { "authors": "Valerie Isham; Mark Westcott", "journal": "Stochastic processes and their applications", "ref_id": "b20", "title": "A self-correcting point process", "year": "1979" }, { "authors": "Junteng Jia; Austin R Benson", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Neural jump stochastic differential equations", "year": "2019" }, { "authors": "John Frank; Charles Kingman", "journal": "Clarendon Press", "ref_id": "b22", "title": "Poisson processes", "year": "1992" }, { "authors": "Zhifeng Kong; Wei Ping; Jiaji Huang; Kexin Zhao; Bryan Catanzaro", "journal": "", "ref_id": "b23", "title": "Diffwave: A versatile diffusion model for audio synthesis", "year": "2020" }, { "authors": "Fuxian Li; Huan Yan; Guangyin Jin; Yue Liu; Yong Li; Depeng Jin", "journal": "", "ref_id": "b24", "title": "Automated Spatio-Temporal Synchronous Modeling with Multiple Graphs for Traffic Prediction", "year": "2022" }, { "authors": "Shuang Li; Shuai Xiao; Shixiang Zhu; Nan Du; Yao Xie; Le Song", "journal": "Advances in neural information processing systems", "ref_id": "b25", "title": "Learning temporal point processes via reinforcement learning", "year": "2018" }, { "authors": "Lisa Xiang; John Li; Ishaan Thickstun; Percy Gulrajani; Tatsunori B Liang; Hashimoto", "journal": "", "ref_id": "b26", "title": "Diffusion-lm improves controllable text generation", "year": "2022" }, { "authors": "Haitao Lin; Lirong Wu; Guojiang Zhao; Pai Liu; Stan Z Li", "journal": "JMLR", "ref_id": "b27", "title": "Exploring Generative Neural Temporal Point Process", "year": "2022" }, { "authors": "Qingyue Long; Huandong Wang; Tong Li; Lisi Huang; Kun Wang; Qiong Wu; Guangyu Li; 
Yanping Liang; Li Yu; Yong Li", "journal": "", "ref_id": "b28", "title": "Practical Synthetic Human Trajectories Generation Based on Variational Point Processes", "year": "2023" }, { "authors": "Shitong Luo; Wei Hu", "journal": "", "ref_id": "b29", "title": "Diffusion probabilistic models for 3d point cloud generation", "year": "2021" }, { "authors": "Nazanin Mehrasa; Abdu Akash; Thibaut Jyothi; Jiawei Durand; Leonid He; Greg Sigal; Mori", "journal": "", "ref_id": "b30", "title": "A variational auto-encoder model for stochastic point processes", "year": "2019" }, { "authors": "Hongyuan Mei; Jason M Eisner", "journal": "Advances in neural information processing systems", "ref_id": "b31", "title": "The neural hawkes process: A neurally self-modulating multivariate point process", "year": "2017" }, { "authors": "Hongyuan Mei; Tom Wan; Jason Eisner", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Noise-contrastive estimation for multivariate point processes", "year": "2020" }, { "authors": "Sebastian Meyer; Johannes Elias; Michael Höhle", "journal": "Biometrics", "ref_id": "b33", "title": "A space-time conditional intensity model for invasive meningococcal disease occurrence", "year": "2012" }, { "authors": "Jesper Moller; Rasmus Plenge Waagepetersen", "journal": "CRC press", "ref_id": "b34", "title": "Statistical inference and simulation for spatial point processes", "year": "2003" }, { "authors": "Yosihiko Ogata", "journal": "Journal of the American Statistical association", "ref_id": "b35", "title": "Statistical models for earthquake occurrences and residual analysis for point processes", "year": "1988" }, { "authors": "Maya Okawa; Tomoharu Iwata; Takeshi Kurashima; Yusuke Tanaka; Hiroyuki Toda; Naonori Ueda", "journal": "", "ref_id": "b36", "title": "Deep mixture point processes: Spatio-temporal event prediction with rich contextual information", "year": "2019" }, { "authors": "Takahiro Omi; Kazuyuki Aihara", "journal": "Advances in neural information processing systems", "ref_id": "b37", "title": "Fully neural network based model for general temporal point processes", "year": "2019" }, { "authors": "Zhen Pan; Zhenya Huang; Defu Lian; Enhong Chen", "journal": "", "ref_id": "b38", "title": "A variational point process model for social event sequences", "year": "2020" }, { "authors": "Junhyung Park; Adam W Chaffee; Ryan J Harrigan; Frederic Paik Schoenberg", "journal": "Journal of Applied Statistics", "ref_id": "b39", "title": "A non-parametric hawkes model of the spread of ebola in west africa", "year": "2022" }, { "authors": "Jami Pekkanen; Oscar Terence Giles; Yee Mun Lee; Ruth Madigan; Tatsuru Daimon; Natasha Merat; Gustav Markkula", "journal": "Computational Brain & Behavior", "ref_id": "b40", "title": "Variable-drift diffusion models of pedestrian road-crossing decisions", "year": "2021" }, { "authors": "Rüdiger Rackwitz; Bernd Flessler", "journal": "Computers & structures", "ref_id": "b41", "title": "Structural reliability under combined random load sequences", "year": "1978" }, { "authors": "Jakob Gulddahl; Rasmussen ", "journal": "", "ref_id": "b42", "title": "Lecture notes: Temporal point processes and the conditional intensity function", "year": "2018" }, { "authors": "Kashif Rasul; Calvin Seward; Ingmar Schuster; Roland Vollgraf", "journal": "PMLR", "ref_id": "b43", "title": "Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting", "year": "2021" }, { "authors": "Alex Reinhart", "journal": 
"Statist. Sci", "ref_id": "b44", "title": "A review of self-exciting spatio-temporal point processes and their applications", "year": "2018" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b45", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Oleksandr Shchur; Marin Biloš; Stephan Günnemann", "journal": "", "ref_id": "b46", "title": "Intensity-free learning of temporal point processes", "year": "2019" }, { "authors": "Abhishek Sinha; Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b47", "title": "D2c: Diffusion-decoding models for few-shot conditional generation", "year": "2021" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b48", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b49", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Yusuke Tashiro; Jiaming Song; Yang Song; Stefano Ermon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b50", "title": "CSDI: Conditional score-based diffusion models for probabilistic time series imputation", "year": "2021" }, { "authors": "Utkarsh Upadhyay; Abir De; Manuel Gomez Rodriguez", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b51", "title": "Deep reinforcement learning of marked temporal point processes", "year": "2018" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b52", "title": "Attention is all you need", "year": "2017" }, { "authors": "Huandong Wang; Qiaohong Yu; Yu Liu; Depeng Jin; Yong Li", "journal": "Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies", "ref_id": "b53", "title": "Spatiotemporal urban knowledge graph enabled mobility prediction", "year": "2021" }, { "authors": "Qianlong Wang; Yifan Guo; Lixing Yu; Pan Li", "journal": "IEEE Transactions on Emerging Topics in Computing", "ref_id": "b54", "title": "Earthquake prediction based on spatio-temporal data mining: an LSTM network approach", "year": "2017" }, { "authors": "Stanisław Węglarczyk", "journal": "EDP Sciences", "ref_id": "b55", "title": "Kernel density estimation and its application", "year": "2018" }, { "authors": "Shuai Xiao; Mehrdad Farajtabar; Xiaojing Ye; Junchi Yan; Le Song; Hongyuan Zha", "journal": "Advances in neural information processing systems", "ref_id": "b56", "title": "Wasserstein learning of deep generative point process models", "year": "2017" }, { "authors": "Zheng Xu; Yunhuai Liu; Lin Neil Y Yen; Xiangfeng Mei; Xiao Luo; Chuanping Wei; Hu", "journal": "IEEE Transactions on Cloud Computing", "ref_id": "b57", "title": "Crowdsourcing based description of urban emergency events using social media big data", "year": "2016" }, { "authors": "Guolei Yang; Ying Cai; Chandan K Reddy", "journal": "", "ref_id": "b58", "title": "Recurrent spatio-temporal point process for check-in time prediction", "year": "2018" }, { "authors": "Fudan Yu; Wenxuan Ao; Huan Yan; Guozhen Zhang; Wei Wu; Yong Li", "journal": "", "ref_id": "b59", "title": "Spatio-Temporal Vehicle Trajectory Recovery on Road 
Network Based on Traffic Camera Video Data", "year": "2022" }, { "authors": "Yuan Yuan; Jingtao Ding; Huandong Wang; Depeng Jin; Yong Li", "journal": "", "ref_id": "b60", "title": "Activity Trajectory Generation via Modeling Spatiotemporal Dynamics", "year": "2022" }, { "authors": "Yuan Yuan; Huandong Wang; Jingtao Ding; Depeng Jin; Yong Li", "journal": "", "ref_id": "b61", "title": "Learning to Simulate Daily Activities via Modeling Dynamic Human Needs", "year": "2023" }, { "authors": "Qiang Zhang; Aldo Lipani; Omer Kirnap; Emine Yilmaz", "journal": "", "ref_id": "b62", "title": "Self-attentive Hawkes process", "year": "2020" }, { "authors": "Linqi Zhou; Yilun Du; Jiajun Wu", "journal": "", "ref_id": "b63", "title": "3d shape generation and completion through point-voxel diffusion", "year": "2021" }, { "authors": "Zihao Zhou; Xingyi Yang; Ryan Rossi; Handong Zhao; Rose Yu", "journal": "PMLR", "ref_id": "b64", "title": "Neural Point Process for Learning Spatiotemporal Event Dynamics", "year": "2022" }, { "authors": "Shixiang Zhu; Ruyi Ding; Minghe Zhang; Pascal Van Hentenryck; Yao Xie", "journal": "IEEE TITS", "ref_id": "b65", "title": "Spatio-temporal point processes with attention for traffic congestion event modeling", "year": "2021" }, { "authors": "Jiancang Zhuang", "journal": "Journal of the Royal Statistical Society: Series B (Statistical Methodology)", "ref_id": "b66", "title": "Second-order residual analysis of spatiotemporal point processes and applications in model evaluation", "year": "2006" }, { "authors": "Simiao Zuo; Haoming Jiang; Zichong Li; Tuo Zhao; Hongyuan Zha", "journal": "PMLR", "ref_id": "b67", "title": "Transformer hawkes process", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 324.46, 139.86, 211.05, 144.38 ], "formula_id": "formula_0", "formula_text": "✗ ✗ ✗ Self-correcting [21] ✗ ✗ ✗ ✗ KDE [2] - - ✗ ✓ CNF [3] - - ✗ ✓ ST Hawkes [45] ✗ ✗ ✗ ✗ RMTPP [9] ✗ ✗ ✓ ✗ NHP [32] ✗ ✗ ✓ ✗ THP [68] ✗ ✗ ✓ ✗ SNAP [63] ✗ ✗ ✓ ✗ LogNormMix [47] ✗ ✗ ✗ ✓ NJSDE [22] ✗ ✗ ✓ ✗ Neural STPP [3] ✓ ✗ ✓ ✗ DeepSTPP [65] ✗ ✗ ✓ ✗ DSTPP (ours) ✓ ✓ ✓ ✓(" }, { "formula_coordinates": [ 3, 118.4, 498.03, 176.19, 25.4 ], "formula_id": "formula_1", "formula_text": "𝑞(𝑥 1:𝐾 |𝑥 0 ) = 𝐾 𝑘=1 𝑞(𝑥 𝑘 |𝑥 𝑘 -1 ) ,(1)" }, { "formula_coordinates": [ 3, 84.96, 646.23, 209.62, 40.91 ], "formula_id": "formula_2", "formula_text": "𝑝 𝜃 (𝑥 0:𝐾 ) 𝑝 (𝑥 𝐾 ) 𝐾 𝑘=1 𝑝 𝜃 (𝑥 𝑘 -1 |𝑥 𝑘 ) , 𝑝 𝜃 (𝑥 𝑘 -1 |𝑥 𝑘 ) N (𝑥 𝑘 -1 ; 𝜇 𝜃 (𝑥 𝑘 , 𝑘), 𝜎 𝜃 (𝑥 𝑘 , 𝑘)𝑰 ) ,(2)" }, { "formula_coordinates": [ 3, 321.2, 374.97, 221.84, 25.82 ], "formula_id": "formula_3", "formula_text": "min 𝜃 E 𝑞 (𝑥 0 ) ≤ min 𝜃 E 𝑞 (𝑥 0:𝐾 ) [-log𝑝 (𝑥 𝐾 ) - 𝐾 ∑︁ 𝑘=1 log 𝑝 𝜃 (𝑥 𝑘 -1 |𝑥 𝑘 ) 𝑞(𝑥 𝑘 |𝑥 𝑘 -1 )" }, { "formula_coordinates": [ 3, 367.76, 449.76, 190.98, 12.58 ], "formula_id": "formula_4", "formula_text": "E 𝑥 0 ∼𝑞 (𝑥 0 ),𝜖∼N (0,𝑰 ) [∥𝜖 -𝜖 𝜃 (𝑥 𝑘 , 𝑘)∥ 2 ] ,(4)" }, { "formula_coordinates": [ 4, 53.8, 101.88, 196.98, 68.46 ], "formula_id": "formula_5", "formula_text": "Input: ℎ 𝑖 -1 Repeat: 𝑥 0 𝑖 ∼ 𝑞(𝑥 0 𝑖 ), 𝑘 ∼ Uniform(1, 2, ..., 𝐾) 𝜖 ∼ N (0, 𝐼 ) Take gradient descent step on ∇ 𝜙,𝜃 ∥𝜖 -𝜖 𝜃 ( √︁ 𝛼 𝑘 𝑥 0 𝑖 + √︁ 1 -𝛼 𝑘 𝜖, ℎ 𝑖 -1 , 𝑘)∥ 2" }, { "formula_coordinates": [ 4, 100.07, 310.61, 194.51, 27.45 ], "formula_id": "formula_6", "formula_text": "[𝑒 𝑡 ] 𝑗 = 𝑐𝑜𝑠 (𝑡/10000 𝑗 -1 𝑀 ) if 𝑗 is odd 𝑠𝑖𝑛(𝑡/10000 𝑗 -1 𝑀 ) if 𝑗 is even ,(5)" }, { "formula_coordinates": [ 4, 157.71, 383, 136.88, 8.24 ], "formula_id": "formula_7", "formula_text": "𝑒 𝑠 = 𝑊 𝑒 𝑠(6)" }, { "formula_coordinates": [ 4, 102.55, 651.28, 192.03, 33.57 ], "formula_id": "formula_8", "formula_text": "Attention(𝑄, 𝐾, 𝑉 ) = Softmax( 𝑄𝐾 𝑇 √ 𝑑 ) , 𝑆 = Attention(𝑄, 𝐾, 𝑉 )𝑉 ,(7)" }, { "formula_coordinates": [ 4, 317.62, 86.2, 206.64, 86.51 ], "formula_id": "formula_9", "formula_text": "Algorithm 2 Sampling 𝑠 0 𝑖 and 𝜏 0 𝑖 Input: Noise 𝑠 𝐾 𝑖 ∼ N (0, 𝐼 ), 𝜏 𝐾 𝑖 ∼ N (0, 𝐼 ) and ℎ 𝑖 -1 for k = K to 1 do 𝑧 𝑠 ∼ N (0, 𝐼 ), 𝑧 𝑡 ∼ N (0, 𝐼 ) if k>1 else 𝑧 𝑠 = 0, 𝑧 𝑡 = 0 𝑠 𝑘 -1 𝑖 = 1 √ 𝛼 𝑘 (𝑠 𝑘 𝑖 - 𝛽 𝑘 √ 1-𝛼 𝑘 𝜖 𝜃 (𝑠 𝑘 𝑖 , 𝜏 𝑘 𝑖 , ℎ 𝑖 -1 , 𝑘)) + √︁ 𝛽 𝑘 𝑧 𝑠 𝜏 𝑘 -1 𝑖 = 1 √ 𝛼 𝑘 (𝜏 𝑘 𝑖 - 𝛽 𝑘 √ 1-𝛼 𝑘 𝜖 𝜃 (𝑠 𝑘 𝑖 , 𝜏 𝑘 𝑖 , ℎ 𝑖 -1 , 𝑘)) + √︁ 𝛽 𝑘 𝑧 𝑡" }, { "formula_coordinates": [ 4, 379.26, 235.91, 179.48, 8.96 ], "formula_id": "formula_10", "formula_text": "𝑄 = 𝐸𝑊 𝑄 , 𝐾 = 𝐸𝑊 𝐾 , 𝑉 = 𝐸𝑊 𝑉 ,(8)" }, { "formula_coordinates": [ 4, 359.08, 548.82, 199.66, 27.49 ], "formula_id": "formula_11", "formula_text": "𝑞 𝑠𝑡 (𝒙 𝑘 𝑖 |𝒙 𝑘 -1 𝑖 ) (𝑞(𝜏 𝑘 𝑖 |𝜏 𝑘 -1 𝑖 ), 𝑞(𝑠 𝑘 𝑖 |𝑠 𝑘 -1 𝑖 )) , 𝑞(𝑥 𝑘 |𝑥 𝑘 -1 ) N (𝑥 𝑘 ; √︁ 1 -𝛽 𝑘 𝑥 𝑘 , 𝛽 𝑘 𝑰 ) ,(9)" }, { "formula_coordinates": [ 4, 321.71, 687.54, 237.03, 21.89 ], "formula_id": "formula_12", "formula_text": "𝑝 𝜃 (𝑥 𝑘 -1 𝑖 |𝑥 𝑘 𝑖 , ℎ 𝑖 -1 ) = 𝑝 𝜃 (𝜏 𝑘 -1 𝑖 |𝜏 𝑘 𝑖 , 𝑠 𝑘 𝑖 , ℎ 𝑖 -1 )𝑝 𝜃 (𝑠 𝑘 -1 𝑖 |𝜏 𝑘 𝑖 , 𝑠 𝑘 𝑖 , ℎ 𝑖 -1 ) ,(10)" }, { "formula_coordinates": [ 5, 86.55, 147.08, 204.61, 25.4 ], "formula_id": "formula_13", "formula_text": "𝑝 𝜃 (𝑥 0:𝐾 𝑖 |ℎ 𝑖 -1 ) 𝑝 (𝑥 𝐾 𝑖 ) 𝐾 𝑘=1 𝑝 𝜃 (𝑥 𝑘 -1 𝑖 |𝑥 𝑘 𝑖 , ℎ 𝑖 -1 ) . 
(11" }, { "formula_coordinates": [ 5, 291.16, 155.35, 3.42, 7.94 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 5, 136.57, 287.85, 154.6, 24.75 ], "formula_id": "formula_15", "formula_text": "𝐿 ∑︁ 𝑖=1 log𝑝 𝜃 (𝑥 0 𝑖 |ℎ 𝑖 -1 ) , (12" }, { "formula_coordinates": [ 5, 291.16, 296.13, 3.42, 7.94 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 5, 71.08, 378.76, 223.5, 13.95 ], "formula_id": "formula_17", "formula_text": "L = E 𝑥 0 𝑖 ,𝜖,𝑘 [∥𝜖 -𝜖 𝜃 ( √︁ 𝛼 𝑘 𝑥 0 𝑖 + √︁ 1 -𝛼 𝑘 𝜖, ℎ 𝑖 -1 , 𝑘)∥ 2 ] ,(13)" }, { "formula_coordinates": [ 5, 69.87, 531.26, 224.72, 49.15 ], "formula_id": "formula_18", "formula_text": "𝑠 𝑘 -1 𝑖 = 1 √ 𝛼 𝑘 (𝑠 𝑘 𝑖 - 𝛽 𝑘 √︁ 1 -𝛼 𝑘 𝜖 𝜃 (𝑥 𝑘 𝑖 , ℎ 𝑖 -1 , 𝑘)) + √︁ 𝛽 𝑘 𝑧 𝑠 , 𝜏 𝑘 -1 𝑖 = 1 √ 𝛼 𝑘 (𝜏 𝑘 𝑖 - 𝛽 𝑘 √︁ 1 -𝛼 𝑘 𝜖 𝜃 (𝑥 𝑘 𝑖 , ℎ 𝑖 -1 , 𝑘)) + √︁ 𝛽 𝑘 𝑧 𝑡 ,(14)" }, { "formula_coordinates": [ 5, 358.72, 478.66, 200.02, 36.33 ], "formula_id": "formula_19", "formula_text": "𝑒 𝑘 = SinusoidalPosEmb(𝑘) , 𝛼 𝑠 = Softmax(𝑊 𝑠𝑎 Concat(ℎ 𝑖 -1 , 𝑒 𝑘 ) + 𝑏 𝑠𝑎 ) , 𝛼 𝑡 = Softmax(𝑊 𝑡𝑎 Concat(ℎ 𝑖 -1 , 𝑒 𝑘 ) + 𝑏 𝑡𝑎 ) ,(15)" }, { "formula_coordinates": [ 5, 354.58, 597.95, 204.16, 26.1 ], "formula_id": "formula_20", "formula_text": "𝑥 𝑠,𝑖 = 𝜎 (𝑊 𝑠 𝑠 𝑘+1 𝑖 + 𝑏 𝑠 + 𝑊 𝑠ℎ ℎ 𝑠,𝑖 -1 + 𝑏 𝑠ℎ + 𝑒 𝑘 ) , 𝑥 𝑡,𝑖 = 𝜎 (𝑊 𝑡 𝜏 𝑘+1 𝑖 + 𝑏 𝑡 + 𝑊 𝑡ℎ ℎ 𝑡,𝑖 -1 + 𝑏 𝑡ℎ + 𝑒 𝑘 ) ,(16)" }, { "formula_coordinates": [ 5, 382.22, 683.73, 176.52, 25.92 ], "formula_id": "formula_21", "formula_text": "𝑥 𝑖 = [𝑥 𝑠,𝑖 , 𝑥 𝑡,𝑖 ] , 𝜖 𝑘 𝑠,𝑖 = ∑︁ 𝛼 𝑠 𝑥 𝑖 , 𝜖 𝑘 𝑡,𝑖 = ∑︁ 𝛼 𝑡 𝑥 𝑖 ,(17)" }, { "formula_coordinates": [ 8, 317.96, 511.19, 239.8, 24.4 ], "formula_id": "formula_22", "formula_text": "𝑖 |𝑡 𝑘 𝑖 , 𝑠 𝑘 𝑖 , ℎ 𝑖 -1 ) and 𝑝 𝜃 (𝑠 𝑘 -1 𝑖 |𝑡 𝑘 𝑖 , 𝑠 𝑘 𝑖 , ℎ 𝑖 -1 ) in Equation (10) with 𝑝 𝜃 (𝑡 𝑘 -1 𝑖 |𝑡 𝑘 𝑖 , ℎ 𝑖 -1 ), 𝑝 𝜃 (𝑠 𝑘 -1 𝑖 |𝑠 𝑘 𝑖 , ℎ 𝑖 -1 )" }, { "formula_coordinates": [ 11, 343.72, 343.09, 215.02, 20.93 ], "formula_id": "formula_23", "formula_text": "𝜆(𝑡, |𝐻 𝑡 ) = 0.2 + ∑︁ 𝑡 <𝑡 𝑖 (0.2𝑒 -0.2(𝑡 -𝑡 -𝑡 𝑖 ) + 4𝑒 -10(𝑡 𝑖 -𝑡 ) )(18)" }, { "formula_coordinates": [ 11, 321.96, 412.72, 236.78, 47.33 ], "formula_id": "formula_24", "formula_text": "𝑓 (𝑠1, 𝑠2) = (19) 1 2𝜋𝜎 𝑠1 𝜎 𝑠2 √︁ (1 -𝜌 2 ) 𝑒 -1 2(1-𝜌 2 ) [ ( 𝑠1-𝜇 1 𝜎 𝑠1 ) 2 -2𝜌 ( 𝑠1-𝜇 1 𝜎 𝑠1 ) ( 𝑠2-𝜇 2 𝜎 𝑠2 )+( 𝑠2-𝜇 2 𝜎 𝑠2 ) 2 ](20)" }, { "formula_coordinates": [ 11, 342.73, 465.48, 158.27, 19.85 ], "formula_id": "formula_25", "formula_text": "𝜌 = √ 2 4 , 𝜇 1 = 4.0, 𝜇 2 = 7.0, 𝜎 𝑠1 = √ 2, 𝜎 𝑠2 = 2." } ]
2023-05-21
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b9", "b14", "b15" ], "table_ref": [], "text": "Change detection (CD) involves using remote sensing technologies to compare and analyze images taken at different times in the same area, detecting changes in ground objects between two or more images [1]. Hyperspectral data provides continuous spectral information, making it ideal for detecting subtle changes on the Earth's surface. As such, hyperspectral image change detection (HSI-CD) has become an important Xiangrong Zhang, Shunli Tian, Guanchun Wang, and Licheng Jiao are with the Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi Province 710071, China.\nHuiyu Zhou is with the School of Informatics, University of Leicester, Leicester LE1 7RH.U.K.\nThis work was supported in part by the National Natural Science Foundation of China under Grant 61871306, Grant 62171332. research focus in remote sensing [2], with applications in land use and land cover change [3], ecosystem monitoring, natural disaster damage assessment [4], and more.\nBroadly speaking, HSI-CD can be achieved using supervised and unsupervised method. Most current methods rely on supervised deep learning networks trained with high-quality labeled samples [5], [6]. However, obtaining high-quality labeled training samples is costly and time-consuming. Thus, reducing or eliminating the reliance on labeled data is critical to addressing the challenge of HSI-CD.\nAlthough deep learning based supervised HSI-CD methods have shown promising results, they still face several challenges: 1) There are often insufficient labeled samples for HSI-CD, necessitating the need to effectively leverage labeled and unlabeled data to train deep learning networks. 2) HSI-CD involves spatiotemporal data, where changes occur over time and exhibit spatial correlations. While existing approaches primarily focus on extracting features, they often overlook the importance of considering spectral-spatial semantic correlations. Incorporating such correlations is essential for accurate CD.\n3) Objects with the same semantic concept at the same spatial location can exhibit different spectral features at different times due to changes in imaging conditions and environments (i.e., the same objects with different spectra). While most deep learning-based CD methods focus on fully extracting spectral features, none of them has investigated extracting spectral difference invariant features caused by environmental changes.\nRecently, many unsupervised HSI-CD methods [7], [8] have been proposed. Unlike supervised methods, unsupervised methods do not require pre-labeled data and can learn features of changed regions using only two HSIs. This confers a significant advantage over supervised methods, as it avoids the need for labor-intensive and time-consuming labeling and mitigates issues such as inaccurate and inconsistent labeling. However, the accuracy of unsupervised methods is often lower than that of supervised methods, despite their ability to function without any annotation information.\nDiffusion models have recently demonstrated remarkable successes in image generation and synthesis [9], [10]. 
Thanks to their excellent generative capabilities, researchers have begun exploring the application of diffusion models in visual understanding tasks such as semantic segmentation [11], [12], object detection [13], image colorization [14], super-resolution [10], [15], and more. However, their potential for HSI-CD remains largely unexplored. As such, how to apply diffusion models to HSI-CD remains an open problem.\nTo address the challenges faced by HSI-CD, we propose an unsupervised approach based on semantic correlation diffusion model (SCDM) that leverages its strong denoising generation ability. This method consists of two main steps. Firstly, the denoising process of the SCDM can utilize many unlabeled samples, fully consider the semantic correlation of spectralspatial features, and retrieve the features of the original image semantic correlation. Secondly, we propose a cross-temporal contrastive learning (CTCL) mechanism to address the problem of spectral variations caused by environmental changes. This method aligns the spectral feature representations of unchanged samples cross-temporally, enabling the network to learn features that are invariant to these spectral differences.\nThe main contributions of this paper are:\n• We propose DiffUCD, the first diffusion model designed explicitly for HSI-CD, which can fully consider the semantic correlation of spectral-spatial features and retrieve semantically related features in the original image. Through experiments on three publicly available datasets (Santa Barbara, Bay Area, and Hermiston), we demonstrate that DiffUCD outperforms state-of-the-art methods by a significant margin. Specifically, our method achieves OA values of 96.87%, 96.35%, and 95.47% on the three datasets, respectively, which are 5.73%, 5.56%, and 0.57% higher than those achieved by the state-of-the-art unsupervised method. Even when trained with the same number of human-labeled training samples, our method exhibits competitive performance compared to supervised methods. When compared to ML-EDAN [16], our method achieves slightly better or similar performance, with OA values changing by -1.13%, -0.12%, and +0.89%, respectively. In summary, our approach extends the application of diffusion models to HSI-CD, achieving superior results compared to previous methods.\nThe rest of this article is organized as follows. Section II introduces the related work of this paper. Section III introduces the proposed framework for HSI-CD in detail. Section IV introduces the experiments. Finally, the conclusion of this paper is drawn in Section V." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Unsupervised HSI-CD", "publication_ref": [ "b16", "b17", "b17", "b18", "b19", "b20", "b20" ], "table_ref": [], "text": "There has been a growing interest in unsupervised HSI-CD methods based on deep learning in recent years. Recent studies have focused on mitigating the impact of noisy labels in pseudo-labels [17], [18]. Li et al. [18] proposed an unsupervised fully convolutional HSI-CD framework based on noise modeling. This framework uses parallel Siamese fully convolutional networks (FCNs) to extract features from bitemporal images separately. The unsupervised noise modeling module can alleviate the accuracy limitation caused by pseudolabels. An unsupervised method [19] that self-generates trusted labels has been proposed to improve pseudo-labels' quality. 
This method combines two model-driven methods, CVA and SSIM, to generate trusted pseudo-training sets, and the trusted pseudo-labels can improve the performance of deep learning networks. While recent advances in unsupervised HSI-CD methods have shown promise, the efficient extraction of changing features remains challenging [20], [21]. UTBANet [21] aims to reconstruct HSIs and adds a decoding branch to reconstruct edge information. Unlike previous methods, this paper utilizes many unlabeled HSI-CD samples to train SCDM to extract semantically relevant spectral-spatial information." }, { "figure_ref": [], "heading": "B. Diffusion models", "publication_ref": [ "b13", "b21", "b22", "b9", "b23", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b32", "b33" ], "table_ref": [], "text": "Diffusion models [14], [22], [23] are Markov chains that reconstruct data samples through a step-by-step denoising process, beginning with randomly distributed samples. Recently, methods based on diffusion models have been brilliant in various fields, such as computer vision [10], [24]- [26], natural language processing [27], [28], multimodal learning [29], [30], time series modeling [31], [32], etc. Diffusion models have been gradually explored in terms of visual representation, and Baranchuk et al. [33] demonstrated that diffusion models could also be used as a tool for semantic segmentation, especially when labeled data is scarce. Gu et al. [34] proposed a new framework, DiffusionInst, which represents instances as instance-aware filters and instance segmentation as a noise-tofilter denoising process. In this paper, we propose SCDM and further explore the application of the diffusion model in the field of HSI-CD. To our knowledge, this is the first work that employs a diffusion model for HSI-CD." }, { "figure_ref": [], "heading": "C. Contrastive learning", "publication_ref": [ "b34", "b36", "b37", "b38", "b37", "b38", "b39", "b40", "b41" ], "table_ref": [], "text": "Contrastive learning [35]- [37] learns feature representations of samples by automatically constructing similar and dissimilar samples. BYOL [38] relies on the interaction of the online and target networks for learning. An online network is trained from augmented views of an image to predict target network representations of the same image under different augmented views. SimSiam [39] theoretically explained that the essence of twin network representation learning with stop-gradient is the Expectation-Maximization (EM) algorithm. BYOL [38] and SimSiam [39] still work without negative samples. Recently, contrastive learning has achieved promising results in HSI classification tasks [40], [41]. Ou et al. [42] proposed an HSI-CD framework based on a self-supervised contrastive learning pre-training model and designed a data augmentation strategy based on Gaussian noise for constructing positive and negative samples. In this paper, we design a CTCL network that can extract the invariant features of spectral differences caused by environmental changes, thereby reducing the impact of imaging conditions and environmental changes on CD results. " }, { "figure_ref": [ "fig_0" ], "heading": "III. PROPOSED METHOD", "publication_ref": [ "b21", "b42", "b43" ], "table_ref": [], "text": "This section will provide an overview of the DDPM framework [22], [43], [44] and describe the proposed DiffUCD model in detail. Fig. 1 illustrates the architecture of the Dif-fUCD model, which comprises three main parts: the SCDM, CTCL, and CD head." 
}, { "figure_ref": [], "heading": "A. Preliminaries", "publication_ref": [ "b44", "b21", "b42", "b43", "b22", "b45", "b13", "b46", "b47" ], "table_ref": [], "text": "Inspired by nonequilibrium thermodynamics [45], a series of probabilistic generative models called diffusion models have been proposed. There are currently three popular formulations based on diffusion models: denoising diffusion probabilistic models (DDPMs) [22], [43], [44], score-based generative models (SGMs) [23], [46], and stochastic differential equations (Score SDEs) [14], [47]. In this paper, we expand the application of DDPMs to the HSI-CD domain.\nDiffusion probabilistic models for denoising typically use two Markov chains: a forward chain that perturbs the image with noise and a reverse chain that denoises the noisy image. The forward chain is a process of forward diffusion, which gradually adds Gaussian noise to the input data to create interference. The reverse chain learns a denoising network that reverses the forward diffusion process. In the forward diffusion process of noise injection, Gaussian noise is gradually added to the clean data x 0 ∼ p (x 0 ) until the data is entirely degraded, resulting in a Gaussian distribution N (0, I). Formally, the operation at each time step t in the forward diffusion process is defined as:\nq (x t | x t-1 ) = N x t ; 1 -β t x t-1 , β t I(1)\nHere\n(x 0 , x 1 , • • • , x T ) represents a T -step Markov chain. β t ∈ (0, 1) represent the noise Schedule.\nImportantly, given a clean data sample x 0 , we can obtain a noisy sample x t by sampling the Gaussian vector ∼ N (0, I) and applying the transformation directly to x 0 :\nq (x t | x 0 ) = N x t | x 0 √ ᾱt , (1 -ᾱt ) I(2)\nx t = x 0 √ ᾱt + t √ 1 -ᾱt , t ∼ N (0, I)(3)\nTo add noise to x 0 , we use Eq. 3 to transform the data into x t for each time step t ∈ {0, 1, . . . , T }. Here ᾱt =\nt i=0 α i = t i=0 (1 -β i ).\nDuring the training phase, a U-ViT [48] like structure for θ (x t , t) is trained to predict by minimizing the training objective using L2 loss.\nL = -θ (x t , t) 2 = -θ √ α t x t-1 + √ 1 -α t , t 2(4\n) During the inference stage, given a noisy input x t , the trained model θ (x t , t) is used to denoise and obtain x t-1 . This process can be mathematically represented as follows:\nx t-1 = 1 √ α t x t - 1 -α t √ 1 -ᾱt θ (x t , t) + σ t z(5)\nwhere z ∼ N (0, I) and\nσ t = 1-ᾱt-1 1-ᾱt β t . x t obtains x 0 through continuous iteration, i.e., x t → x t-1 → x t-2 → . . . → x 0 .\nIn this work, we aim to address the task of unsupervised HSI-CD using a diffusion model. Specifically, we consider data sample x 0 as a patch from the HSI at either T 1 or T 2. We begin by corrupting x 0 with Gaussian noise using Eq. 3 to obtain the noisy input x t for the noise predictor θ (x t , t, c). We define θ (x t , t, c) as a noise predictor that can extract spectral-spatial features that are useful for downstream HSI-CD tasks." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "B. DiffUCD", "publication_ref": [ "b21", "b47" ], "table_ref": [], "text": "The proposed DiffUCD framework comprises a SCDM, a CTCL, and a CD head, as illustrated in Fig. 1. SCDM can use a large number of unlabeled samples to fully consider the semantic correlation of spectral-spatial features and retrieve the features of the original image semantic correlation. 
CTCL aligns the spectral sequence information of unchanged pixels, guiding the network to extract features that are insensitive to spectral differences resulting from variations in imaging conditions and environments.\n1) Semantic Correlation Diffusion Model: We utilize the forward diffusion process proposed by SCDM [22] in Eq. ( 6), which corrupts the input HSI H 0 to obtain H t at a random time step t. Fig. 1 illustrates that the SCDM takes a patch x t ∈ R C×K×K from the H t at time T 1 or T 2 as input. Our SCDM is structured similarly to U-ViT [48], with the time step t, condition c, and noise image x t all used as tokens for input into the SCDM. In contrast to the U-ViT long skip connections method, we employ a multi-head cross-attention (MCA) approach for feature fusion between the shallow and deep layers. The noise image x t is fed into θ (x t , t, c), parameterized by the SCDM. The pixel-level representation x0 of x 0 is obtained through the θ (x t , t, c) network, and the corresponding formula is given as follows:\nH t (H 0 , t ) = H 0 √ ᾱt + t √ 1 -ᾱt(6)\nwhere ᾱt =\nt i=0 α i = t i=0 (1 -β i ), t ∼ N (0, I). x0 = 1 √ ᾱt x t - √ 1 -ᾱt θ (x t , t, c)(7)\n2) Cross-Temporal Contrastive Learning: The proposed CTCL module aims to learn more discriminative features for HSI-CD by emphasizing spectral difference invariant features between unchanged samples at T 1 and T 2 moments. The architecture consists of two parts: a spectral transformer encoder and an MLP. To construct positive and negative sample pairs, unchanged pixels at the same location but different phases are used as positive samples, while the rest are negative samples. The CTCL network takes X 1 and X 2 as input and produces contrastive feature representations z i and z j , which are then aligned through a contrastive loss function. This architecture aims to shorten the distance between the feature representations of unchanged pixel samples in different phases, which helps the network extract more robust and invariant features that are less affected by environmental changes." }, { "figure_ref": [ "fig_0" ], "heading": "C. Change Detection Head", "publication_ref": [], "table_ref": [], "text": "We employ a fusion module to fuse the semantic correlation of spectral-spatial features obtained by the SCDM with the spectral difference-invariant features extracted by CTCL. The module is formulated as follows:\nX = 1/3(Conv(Sub( X1 , X2 )) + Concat( X1 , X2 ) + Concat(x 1 0 , x2 0 ))(8)\nHere, X1 and X2 represent the encoder output features obtained through CTCL, while x1 0 and x2 0 denote the spectralspatial features extracted by the SCDM. The Concat(•) function is used to superimpose features along the channel dimension, while Sub(•) calculates the features' differences. The resulting fused features, X, are then passed to the CD head to generate the final CD map. The structure of the CD head used in this paper is consistent with the spatial transformer in Fig. 1." }, { "figure_ref": [], "heading": "D. Training", "publication_ref": [], "table_ref": [], "text": "The training process comprises two stages: 1) The SCDM is pre-trained using a large number of unlabeled HSI-CD samples to fully consider the semantic correlation of spectralspatial features and retrieve the features of the original image semantic correlation. 2) A small set of pseudo-label samples are used to train the CTCL network. 
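A hedged PyTorch sketch of the fusion module of Eq. (8) above. The input shapes (B x C x K x K patch features for both the CTCL outputs and the SCDM features), channel-wise concatenation, and a 3x3 convolution mapping C to 2C channels so that the three terms can be summed are all assumptions; the dump does not specify these dimensions:

import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    # Sketch of Eq. (8): X = 1/3 * (Conv(Sub(X1, X2)) + Concat(X1, X2) + Concat(x1_0, x2_0)),
    # under the shape assumptions stated in the lead-in.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 2 * channels, kernel_size=3, padding=1)

    def forward(self, x1_ctcl, x2_ctcl, x1_diff, x2_diff):
        diff = self.conv(x1_ctcl - x2_ctcl)                  # Conv(Sub(.))
        cat_ctcl = torch.cat([x1_ctcl, x2_ctcl], dim=1)      # Concat of CTCL features
        cat_diff = torch.cat([x1_diff, x2_diff], dim=1)      # Concat of SCDM features
        return (diff + cat_ctcl + cat_diff) / 3.0

fuse = FeatureFusion(channels=64)
x = [torch.randn(8, 64, 7, 7) for _ in range(4)]
print(fuse(*x).shape)   # torch.Size([8, 128, 7, 7])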
The spectral-spatial features extracted by the SCDM are fused with the spectrally invariant features learned by the CTCL network and then passed through the CD head to generate the final CD map." }, { "figure_ref": [], "heading": "1) Pretrained Semantic Correlation Diffusion Model:", "publication_ref": [ "b50", "b51", "b50", "b57", "b58" ], "table_ref": [], "text": "To pre-train the SCDM, we selected the Santa Barbara, Bay Area, and Hermiston datasets (https://citius.usc.es/investigacion/datasets/hyperspectral-change-detection), which contain large amounts of unlabeled data. For the input $x_0$, we randomly initialized the time step $t$ and added noise using Eq. (3) to obtain $x_t$. The pre-trained SCDM predicted the noise in $x_t$ and then calculated the estimated features $\hat{x}_0$ of the input data using Eq. (7). The noise loss for the SCDM is defined as follows:\n$L_{noise} = \mathbb{E}_{t, x_0, c, \epsilon} \sum_{i=1}^{N} \big\|\epsilon_i - \epsilon_\theta\big(x_t^i, t, c\big)\big\|^2 = \mathbb{E}_{t, x_0, c, \epsilon} \sum_{i=1}^{N} \big\|\epsilon_i - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\,x_0^i + \sqrt{1-\bar{\alpha}_t}\,\epsilon_i,\ t,\ c\big)\big\|^2$ (9)\nwhere $\epsilon_i$ represents the noise added to the $i$-th sample using Eq. (3), and $N$ represents the number of samples.\n2) Training the Cross-Temporal Contrastive Learning and Change Detection Head: In the second stage, we keep the pretrained SCDM parameters fixed and only focus on training the CTCL and CD head networks. Our goal is to learn features that are invariant to spectral differences caused by environmental changes. We use CTCL to align spectral feature representations of unchanged samples to achieve this. First, we obtain pseudo-labels using the traditional unsupervised method PCA [51] and then use them to train the entire network. We feed the original samples $X_1$ and $X_2$ into the CTCL to obtain contrastive feature representations $z_i$ and $z_j$. The loss function of the CTCL architecture, based on SimCLR [52], is defined as follows:\n$\ell_{i,j} = -\log \frac{\exp(\mathrm{sim}(z_i, z_j)/\tau)}{\sum_{k=1}^{2Q} \mathbb{1}_{[k \neq i]} \exp(\mathrm{sim}(z_i, z_k)/\tau)}$ (10)\nwhere $\mathbb{1}_{[k \neq i]} \in \{0, 1\}$ is an indicator function evaluating to 1 iff $k \neq i$.\n$L_{con} = \frac{1}{2Q} \sum_{k=1}^{Q} \big[\ell(2k-1, 2k) + \ell(2k, 2k-1)\big]$ (11)\nwhere $\ell_{i,j}$ represents the loss of a pair of positive samples $(i, j)$, and $L_{con}$ represents the total loss of contrastive learning. $\mathrm{sim}(z_i, z_j)$ is the cosine similarity between feature representations $z_i$ and $z_j$. $Q$ represents the number of unchanged samples in a sample set with a batch size of $N$. $\tau$ denotes a temperature parameter.\nThe CD task involves pixel-wise evaluation of changes at each location, and we use the cross-entropy loss to measure the change loss. The change loss is defined as follows:\n$L_{change} = -\frac{1}{N} \sum_{i=1}^{N} \big(y_i \log \hat{y}_i + (1-y_i) \log(1-\hat{y}_i)\big)$ (12)\nwhere $y_i \in \{0, 1\}$ represents the actual label, 0 represents no change, 1 represents a change, and $\hat{y}_i$ represents the label predicted by the network. Therefore, the overall training of the proposed DiffUCD framework combines the noise loss $L_{noise}$ in the first stage with the contrastive loss $L_{con}$ and the change loss $L_{change}$ in the second stage.\nIV. EXPERIMENTS\nA. Datasets\nWe demonstrate the effectiveness of our proposed method on three publicly available HSI-CD datasets: Santa Barbara, Bay Area, and Hermiston. The Santa Barbara dataset comprises imagery captured by the AVIRIS sensor over the Santa Barbara region in California. The dataset includes images from 2013 and 2014, with spatial dimensions of 984 × 740 pixels and 224 spectral bands. Similarly, the Bay Area dataset consists of AVIRIS sensor imagery surrounding the city of Patterson, California. 
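For reference, a minimal sketch of the cross-temporal contrastive loss of Eqs. (10)-(11) follows. It assumes the CTCL embeddings of the Q unchanged pixels at T1 and T2 have already been computed, applies the temperature inside the exponential as in SimCLR, and uses placeholder values for Q, the embedding width, and the temperature.

```python
import torch
import torch.nn.functional as F

def ctcl_loss(z1, z2, tau=0.5):
    """NT-Xent loss over Q unchanged pixels (Eqs. 10-11). z1[q] and z2[q] are the
    CTCL embeddings of the same unchanged location at T1 and T2 and form the
    positive pair; the other 2Q-2 embeddings in the batch act as negatives."""
    Q = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2Q, D)
    sim = z @ z.t() / tau                                     # cosine similarities / tau
    sim.fill_diagonal_(float('-inf'))                         # drop the k == i terms
    # positive of sample i is i + Q (and vice versa), matching the (2k-1, 2k) pairing
    targets = torch.cat([torch.arange(Q) + Q, torch.arange(Q)])
    return F.cross_entropy(sim, targets)                      # mean of l(i, j) over 2Q samples

# Toy usage: 128 unchanged pixels with 64-dim projections from the two dates.
z_t1, z_t2 = torch.randn(128, 64), torch.randn(128, 64)
print(ctcl_loss(z_t1, z_t2).item())
```

Writing the loss as a cross-entropy over the masked similarity rows is equivalent to averaging the pair losses of Eq. (11) over both orderings of each unchanged pair.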
The dataset includes images captured in 2013 and 2015, with spatial dimensions of 600 × 500 pixels and 224 spectral bands.\nThe Hermiston dataset focuses on an irrigated agricultural field in Hermiston, Umatilla County, Oregon. The imagery was acquired on May 1, 2004, and May 8, 2007. The image size is 307 × 241 pixels, consisting of 57,311 unchanged pixels and 16,676 changed pixels. After removing noise, 154 spectral bands were selected for the experiments. The changes observed in this dataset primarily pertain to land cover types and the presence of rivers.\nSanta Barbara and Bay Area unlabeled pixels make up approximately 80% of all pixels. To train the CTCL and CD heads, we use the full-pixel pre-trained SCDM and select 500 changed and 500 unchanged pixels from the PCA-generated pseudo-labels [51]. \nF1 = 2 recall -1 + precision -1(17)\n2) Implementation Details: We perform all experiments using the PyTorch platform, running on an NVIDIA GTX 2080Ti GPU with 11GB of memory. The batch size is 128, and a patch size of 7 is used to process the input data. In the first stage, the pre-training SCDM trains for 1000 epochs using the AdamW optimizer [58] with an initial learning rate of 1e-5. The timestep for the SCDM was set to 200. In the second stage, we fix the parameters of the SCDM and use the Adadelta optimizer [59] to optimize the CTCL and CD head network over time. The initial learning rate is set to 1 and linearly decreases to 0 at 200 epochs. Through experiments, we choose the spectral-spatial features produced by the SCDM t = 5, 10, 100 as the input features of the CD head." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "C. Comparison to State-of-the-art Methods", "publication_ref": [ "b52", "b50", "b53", "b48", "b54", "b49", "b56", "b55", "b15" ], "table_ref": [], "text": "We conduct a comprehensive comparison of our method with recent unsupervised and supervised HSI-CD methods, including CVA [53], PCA [51], ISFA [54], DSFA [49], MSCD [55], HyperNet [50], BCG-Net [57], BCNNs [56], and ML-EDAN [16]. Fig. 2 presents a visual comparison of these methods on the three datasets.\nFrom the visual observations in Fig. 2, it is evident that our proposed method, DiffUCD, exhibits the smallest regions of red and green. This compelling visualization underscores the superior performance of DiffUCD compared to all other methods. Table II, Table III, and Table IV provides the quantitative results of DiffUCD alongside various state-of-the-art methods across the three datasets. Remarkably, our proposed method substantially improves performance over the state-of-the-art unsupervised methods, as evidenced by significant margins in OA, KC, and F1-score. Specifically, DiffUCD surpasses the unsupervised methods on the Santa Barbara dataset by remarkable margins of 5.73%, 11.93%, and 7.17% in terms of OA, KC, and F1-score, respectively. Furthermore, compared to supervised methods trained on an equivalent number of human-annotated training examples, our method demonstrates comparable or superior performance." }, { "figure_ref": [ "fig_3" ], "heading": "D. Ablation Study 1) Effectiveness of the module:", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We conduct a comprehensive ablation study to verify the effectiveness of the proposed SCDM and CTCL. The results are shown in Table V. After adding the pre-training of the SCDM, the results of the network on the three datasets have been significantly improved. 
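For completeness, the reported metrics (OA, KC, and F1; cf. Eqs. (16)-(17)) can be computed from the binary confusion matrix as sketched below. Masking of unlabeled pixels before the computation is assumed, and the random inputs only illustrate the call.

```python
import numpy as np

def cd_metrics(pred, gt):
    """Compute OA, Kappa (KC), precision, recall and F1 from binary change maps,
    where 1 = changed and 0 = unchanged. Unlabeled pixels are assumed to have
    been removed beforehand."""
    pred, gt = pred.astype(bool).ravel(), gt.astype(bool).ravel()
    tp = np.sum(pred & gt); tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt); fn = np.sum(~pred & gt)
    n = tp + tn + fp + fn
    oa = (tp + tn) / n
    pre = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # expected agreement (Eq. 16)
    kappa = (oa - pre) / (1 - pre)
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    f1 = 2 / (1 / recall + 1 / precision)                          # Eq. 17
    return dict(OA=oa, KC=kappa, precision=precision, recall=recall, F1=f1)

# Toy usage on a random 100 x 100 change map with 10% of pixels flipped.
rng = np.random.default_rng(0)
gt = rng.integers(0, 2, (100, 100))
pred = gt.copy(); pred[rng.random(gt.shape) < 0.1] ^= 1
print(cd_metrics(pred, gt))
```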
We argue that the SCDM pre-training process utilizes many unlabeled samples, which can extract the semantic correlation of spectral-spatial features of the CD dataset. The third row of Table V is based on the base model, which adds a CTCL module, improving CD accuracy on the three datasets by aligning the spectral features of unchanged samples. The fourth row is the experimental results of the DiffUCD model we proposed, and the OA values on the three data sets have been increased by 6.39%, 4.58%, and 2.64%, respectively. Experiments fully prove the effectiveness of our proposed DiffUCD and sub-modules.\n2) Comparison of feature extraction ability: Fig. 3 visually demonstrates the effectiveness of the SCDM in extracting compact intra-class features compared to the base model. Notably, the feature distances obtained through the CTCL mechanism are significantly larger on the Santa Barbara and Hermiston datasets. The t-SNE visualization further reinforces the discriminative nature of our model. The t-SNE plot vividly illustrates that the features extracted by DiffUCD are well-separated, allowing for distinct clusters corresponding to different classes. This enhanced feature separability plays a crucial role in boosting CD accuracy.\n3) The influence of timestamp t on the reconstruction effect: Fig. 4 and Fig. 5 provides qualitative evidence of the effectiveness of DiffUCD in both noise removal and feature reconstruction of the original HSI. The visualization results clearly illustrate how the denoising process of DiffUCD fully incorporates the semantic correlation of spectral-spatial features, enabling the extraction of essential features that preserve the original image's semantic correlation.\nV. CONCLUSION This work presents a novel diffusion framework, called DiffUCD, designed explicitly for HSI-CD. To our knowledge, this is the first diffusion model developed for this particular task. DiffUCD leverages many unlabeled samples to fully consider the semantic correlation of spectral-spatial features and retrieve the features of the original image semantic correlation. Additionally, we employ CTCL to align the spectral feature representations of unchanged samples. This alignment facilitates learning invariant spectral difference features essential for capturing environmental changes. We evaluate the performance of our proposed method on three publicly available datasets and demonstrate that it achieves significant improvements over state-of-the-art unsupervised methods in terms of OA, KC, and F1 metrics. Furthermore, the diffusion model holds great potential as a novel solution for the HSI-CD task. Our work will inspire the development of new approaches and foster advancements in this field." } ]
Hyperspectral image change detection (HSI-CD) has emerged as a crucial research area in remote sensing due to its ability to detect subtle changes on the Earth's surface. Recently, denoising diffusion probabilistic models (DDPM) have demonstrated remarkable performance in the generative domain. Apart from their image generation capability, the denoising process in diffusion models can comprehensively account for the semantic correlation of spectral-spatial features in HSI, resulting in the retrieval of semantically relevant features in the original image. In this work, we extend the diffusion model's application to the HSI-CD field and propose a novel unsupervised HSI-CD with semantic correlation diffusion model (DiffUCD). Specifically, the semantic correlation diffusion model (SCDM) leverages abundant unlabeled samples and fully accounts for the semantic correlation of spectral-spatial features, which mitigates pseudo change between multi-temporal images arising from inconsistent imaging conditions. Moreover, objects with the same semantic concept at the same spatial location may exhibit inconsistent spectral signatures at different times, resulting in pseudo change. To address this problem, we propose a cross-temporal contrastive learning (CTCL) mechanism that aligns the spectral feature representations of unchanged samples. By doing so, features that are invariant to the spectral differences caused by environmental changes can be obtained. Experiments conducted on three publicly available datasets demonstrate that the proposed method outperforms other state-of-the-art unsupervised methods in terms of Overall Accuracy (OA), Kappa Coefficient (KC), and F1 scores, achieving improvements of approximately 3.95%, 8.13%, and 4.45%, respectively. Notably, our method can achieve comparable results to fully supervised methods that require numerous annotated samples.
DiffUCD: Unsupervised Hyperspectral Image Change Detection with Semantic Correlation Diffusion Model
[ { "figure_caption": "Fig. 1 .1Fig. 1. The proposed DiffUCD framework consists of two main modules: SCDM and CTCL. SCDM can fully consider the semantic correlation of spectral-spatial features and reconstruct the essential features of the original image semantic correlation. CTCL can deal with the problem of the same object with different spectra and constrain the network to learn the invariant characteristics of spectral differences caused by environmental changes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Visualizations of the proposed method and state-of-the-art unsupervised methods on three datasets. From top to bottom are Santa Barbara, Bay Area, and Hermiston datasets.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The t-SNE visualization of features extracted on three datasets. From top to bottom are Santa Barbara, Bay Area, and Hermiston datasets.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .Fig. 5 .45Fig. 4. SCDM denoising process reconstructs pseudo-color images of different timestamps of the Santa Barbara dataset. Image visualization at time T1 and T2 from top to bottom.", "figure_data": "", "figure_id": "fig_4", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "EXPERIMENTS ON MODULE EFFECTIVENESS ON THREE DATASETS", "figure_data": "Santa BarbaraBay AreaHermistonBaseSCDMCTCLOAKCF1OAKCF1OAKCF1√ √ √ √√ √√ √90.48 95.64 95.38 96.8780.51 90.92 90.33 93.4188.67 94.57 94.15 95.9791.77 94.74 94.38 96.3583.35 89.49 88.74 92.6792.65 94.90 94.59 96.5792.83 94.62 93.62 95.4777.24 84.65 82.71 86.6981.55 88.12 86.89 89.58KC =OA -PRE 1 -PRE(16)PRE =(TP + FP)(TP + FN) (TP + TN + FP + FN) 2 +(FN + TN)(FP + TN) (TP + TN + FP + FN) 2", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" } ]
Xiangrong Zhang; Shunli Tian; Guanchun Wang; Huiyu Zhou; Licheng Jiao
[ { "authors": "D Lu; P Mausel; E Brondizio; E Moran", "journal": "International journal of remote sensing", "ref_id": "b0", "title": "Change detection techniques", "year": "2004" }, { "authors": "S Liu; D Marinelli; L Bruzzone; F Bovolo", "journal": "IEEE Geoscience and Remote Sensing Magazine", "ref_id": "b1", "title": "A review of change detection in multitemporal hyperspectral images: Current techniques, applications, and challenges", "year": "2019" }, { "authors": "F Aslami; A Ghorbani", "journal": "Environmental monitoring and assessment", "ref_id": "b2", "title": "Object-based land-use/land-cover change detection using landsat imagery: a case study of ardabil, namin, and nir counties in northwest iran", "year": "2018" }, { "authors": "T Rumpf; A.-K Mahlein; U Steiner; E.-C Oerke; H.-W Dehne; L Plümer", "journal": "Computers and electronics in agriculture", "ref_id": "b3", "title": "Early detection and classification of plant diseases with support vector machines based on hyperspectral reflectance", "year": "2010" }, { "authors": "Y Wang; D Hong; J Sha; L Gao; L Liu; Y Zhang; X Rong", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b4", "title": "Spectral-spatial-temporal transformers for hyperspectral image change detection", "year": "2022" }, { "authors": "W Dong; J Zhao; J Qu; S Xiao; N Li; S Hou; Y Li", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b5", "title": "Abundance matrix correlation analysis network based on hierarchical multi-head self-cross-hybrid attention for hyperspectral change detection", "year": "2023" }, { "authors": "D Chakraborty; A Ghosh", "journal": "", "ref_id": "b6", "title": "Unsupervised change detection in hyperspectral images using feature fusion deep convolutional autoencoders", "year": "2021" }, { "authors": "J Lei; M Li; W Xie; Y Li; X Jia", "journal": "Neurocomputing", "ref_id": "b7", "title": "Spectral mapping with adversarial learning for unsupervised hyperspectral change detection", "year": "2021" }, { "authors": "G Kim; T Kwon; J C Ye", "journal": "", "ref_id": "b8", "title": "Diffusionclip: Text-guided diffusion models for robust image manipulation", "year": "2022" }, { "authors": "C Saharia; J Ho; W Chan; T Salimans; D J Fleet; M Norouzi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "Image super-resolution via iterative refinement", "year": "2022" }, { "authors": "H Tan; S Wu; J Pi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b10", "title": "Semantic diffusion network for semantic segmentation", "year": "2022" }, { "authors": "J Wu; H Fang; Y Zhang; Y Yang; Y Xu", "journal": "", "ref_id": "b11", "title": "Medsegdiff: Medical image segmentation with diffusion probabilistic model", "year": "2022" }, { "authors": "S Chen; P Sun; Y Song; P Luo", "journal": "", "ref_id": "b12", "title": "Diffusiondet: Diffusion model for object detection", "year": "2022" }, { "authors": "Y Song; J Sohl-Dickstein; D P Kingma; A Kumar; S Ermon; B Poole", "journal": "", "ref_id": "b13", "title": "Score-based generative modeling through stochastic differential equations", "year": "2020" }, { "authors": "H Li; Y Yang; M Chang; S Chen; H Feng; Z Xu; Q Li; Y Chen", "journal": "Neurocomputing", "ref_id": "b14", "title": "Srdiff: Single image super-resolution with diffusion probabilistic models", "year": "2022" }, { "authors": "J Qu; S Hou; W Dong; Y Li; W Xie", "journal": "IEEE Transactions on Geoscience and Remote Sensing", 
"ref_id": "b15", "title": "A multilevel encoderdecoder attention network for change detection in hyperspectral images", "year": "2021" }, { "authors": "H Zhao; K Feng; Y Wu; M Gong", "journal": "Remote Sensing", "ref_id": "b16", "title": "An efficient feature extraction network for unsupervised hyperspectral change detection", "year": "2022" }, { "authors": "X Li; Z Yuan; Q Wang", "journal": "Remote Sensing", "ref_id": "b17", "title": "Unsupervised deep noise modeling for hyperspectral image change detection", "year": "2019" }, { "authors": "Q Li; H Gong; H Dai; C Li; Z He; W Wang; Y Feng; F Han; A Tuniyazi; H Li", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b18", "title": "Unsupervised hyperspectral image change detection via deep learning self-generated credible labels", "year": "2021" }, { "authors": "Z Hou; W Li; R Tao; Q Du", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b19", "title": "Three-order tucker decomposition and reconstruction detector for unsupervised hyperspectral change detection", "year": "2021" }, { "authors": "S Liu; H Li; F Wang; J Chen; G Zhang; L Song; B Hu", "journal": "Remote Sensing", "ref_id": "b20", "title": "Unsupervised transformer boundary autoencoder network for hyperspectral image change detection", "year": "2023" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Y Song; S Ermon", "journal": "Advances in neural information processing systems", "ref_id": "b22", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b23", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "E A Brempong; S Kornblith; T Chen; N Parmar; M Minderer; M Norouzi", "journal": "", "ref_id": "b24", "title": "Denoising pretraining for semantic segmentation", "year": "2022" }, { "authors": "J Wyatt; A Leach; S M Schmon; C G Willcocks", "journal": "", "ref_id": "b25", "title": "Anoddpm: Anomaly detection with denoising diffusion probabilistic models using simplex noise", "year": "2022" }, { "authors": "J Austin; D D Johnson; J Ho; D Tarlow; R Van Den; Berg", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Structured denoising diffusion models in discrete state-spaces", "year": "2021" }, { "authors": "X Li; J Thickstun; I Gulrajani; P S Liang; T B Hashimoto", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Diffusion-lm improves controllable text generation", "year": "2022" }, { "authors": "O Avrahami; D Lischinski; O Fried", "journal": "", "ref_id": "b28", "title": "Blended diffusion for textdriven editing of natural images", "year": "2022" }, { "authors": "D Yang; J Yu; H Wang; W Wang; C Weng; Y Zou; D Yu", "journal": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "ref_id": "b29", "title": "Diffsound: Discrete diffusion model for text-to-sound generation", "year": "2023" }, { "authors": "Y Tashiro; J Song; Y Song; S Ermon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Csdi: Conditional scorebased diffusion models for probabilistic time series imputation", "year": "2021" }, { 
"authors": "K Rasul; C Seward; I Schuster; R Vollgraf", "journal": "PMLR", "ref_id": "b31", "title": "Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting", "year": "2021" }, { "authors": "D Baranchuk; I Rubachev; A Voynov; V Khrulkov; A Babenko", "journal": "", "ref_id": "b32", "title": "Label-efficient semantic segmentation with diffusion models", "year": "2021" }, { "authors": "Z Gu; H Chen; Z Xu; J Lan; C Meng; W Wang", "journal": "", "ref_id": "b33", "title": "Diffusioninst: Diffusion model for instance segmentation", "year": "2022" }, { "authors": "K He; H Fan; Y Wu; S Xie; R Girshick", "journal": "", "ref_id": "b34", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "T Chen; S Kornblith; K Swersky; M Norouzi; G E Hinton", "journal": "Advances in neural information processing systems", "ref_id": "b35", "title": "Big self-supervised models are strong semi-supervised learners", "year": "2020" }, { "authors": "G Wang; X Zhang; Z Peng; X Tang; H Zhou; L Jiao", "journal": "", "ref_id": "b36", "title": "Absolute wrong makes better: Boosting weakly supervised object detection via negative deterministic information", "year": "2022" }, { "authors": "J.-B Grill; F Strub; F Altché; C Tallec; P Richemond; E Buchatskaya; C Doersch; B Avila Pires; Z Guo; M Gheshlaghi Azar", "journal": "Advances in neural information processing systems", "ref_id": "b37", "title": "Bootstrap your own latent-a new approach to self-supervised learning", "year": "2020" }, { "authors": "X Chen; K He", "journal": "", "ref_id": "b38", "title": "Exploring simple siamese representation learning", "year": "2021" }, { "authors": "X Hu; T Li; T Zhou; Y Liu; Y Peng", "journal": "Applied Sciences", "ref_id": "b39", "title": "Contrastive learning based on transformer for hyperspectral image classification", "year": "2021" }, { "authors": "P Guan; E Y Lam", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b40", "title": "Cross-domain contrastive learning for hyperspectral image classification", "year": "2022" }, { "authors": "X Ou; L Liu; S Tan; G Zhang; W Li; B Tu", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b41", "title": "A hyperspectral image change detection framework with self-supervised contrastive learning pretrained model", "year": "2022" }, { "authors": "A Q Nichol; P ", "journal": "PMLR", "ref_id": "b42", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "J Song; C Meng; S Ermon", "journal": "", "ref_id": "b43", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "J Sohl-Dickstein; E Weiss; N Maheswaranathan; S Ganguli", "journal": "PMLR", "ref_id": "b44", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Y Song; S Ermon", "journal": "Advances in neural information processing systems", "ref_id": "b45", "title": "Improved techniques for training score-based generative models", "year": "2020" }, { "authors": "Y Song; C Durkan; I Murray; S Ermon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b46", "title": "Maximum likelihood training of score-based diffusion models", "year": "2021" }, { "authors": "F Bao; C Li; Y Cao; J Zhu", "journal": "", "ref_id": "b47", "title": "All are worth words: a vit backbone for score-based diffusion models", "year": "2022" }, { "authors": "B Du; L 
Ru; C Wu; L Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b48", "title": "Unsupervised deep slow feature analysis for change detection in multi-temporal remote sensing images", "year": "2019" }, { "authors": "M Hu; C Wu; L Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b49", "title": "Hypernet: Self-supervised hyperspectral spatial-spectral feature understanding network for hyperspectral change detection", "year": "2022" }, { "authors": "J Deng; K Wang; Y Deng; G Qi", "journal": "International Journal of Remote Sensing", "ref_id": "b50", "title": "Pca-based land-use change detection and analysis using multitemporal and multisensor satellite data", "year": "2008" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "PMLR", "ref_id": "b51", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "F Bovolo; L Bruzzone", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b52", "title": "A theoretical framework for unsupervised change detection based on change vector analysis in the polar domain", "year": "2006" }, { "authors": "C Wu; B Du; L Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b53", "title": "Slow feature analysis for change detection in multispectral imagery", "year": "2013" }, { "authors": "S Saha; P Ebel; X X Zhu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b54", "title": "Self-supervised multisensor change detection", "year": "2022" }, { "authors": "Y Lin; S Li; L Fang; P Ghamisi", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b55", "title": "Multispectral change detection with bilinear convolutional neural networks", "year": "2019" }, { "authors": "M Hu; C Wu; B Du; L Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b56", "title": "Binary change guided hyperspectral multiclass change detection", "year": "2023" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b57", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "M D Zeiler", "journal": "", "ref_id": "b58", "title": "Adadelta: an adaptive learning rate method", "year": "2012" } ]
[ { "formula_coordinates": [ 3, 350.1, 665.17, 212.93, 9.68 ], "formula_id": "formula_0", "formula_text": "q (x t | x t-1 ) = N x t ; 1 -β t x t-1 , β t I(1)" }, { "formula_coordinates": [ 3, 311.98, 690.27, 251.06, 20.91 ], "formula_id": "formula_1", "formula_text": "(x 0 , x 1 , • • • , x T ) represents a T -step Markov chain. β t ∈ (0, 1) represent the noise Schedule." }, { "formula_coordinates": [ 4, 91.16, 63.35, 208.86, 17.25 ], "formula_id": "formula_2", "formula_text": "q (x t | x 0 ) = N x t | x 0 √ ᾱt , (1 -ᾱt ) I(2)" }, { "formula_coordinates": [ 4, 88.36, 96.62, 211.66, 17.63 ], "formula_id": "formula_3", "formula_text": "x t = x 0 √ ᾱt + t √ 1 -ᾱt , t ∼ N (0, I)(3)" }, { "formula_coordinates": [ 4, 58.37, 133.93, 241.65, 26.4 ], "formula_id": "formula_4", "formula_text": "t i=0 α i = t i=0 (1 -β i )." }, { "formula_coordinates": [ 4, 50.72, 209.67, 247.05, 28.89 ], "formula_id": "formula_5", "formula_text": "L = -θ (x t , t) 2 = -θ √ α t x t-1 + √ 1 -α t , t 2(4" }, { "formula_coordinates": [ 4, 79.33, 287.1, 220.69, 23.23 ], "formula_id": "formula_6", "formula_text": "x t-1 = 1 √ α t x t - 1 -α t √ 1 -ᾱt θ (x t , t) + σ t z(5)" }, { "formula_coordinates": [ 4, 48.96, 318.43, 251.06, 24.48 ], "formula_id": "formula_7", "formula_text": "σ t = 1-ᾱt-1 1-ᾱt β t . x t obtains x 0 through continuous iteration, i.e., x t → x t-1 → x t-2 → . . . → x 0 ." }, { "formula_coordinates": [ 4, 365.73, 66.4, 197.31, 17.63 ], "formula_id": "formula_8", "formula_text": "H t (H 0 , t ) = H 0 √ ᾱt + t √ 1 -ᾱt(6)" }, { "formula_coordinates": [ 4, 360.41, 94.89, 202.63, 49.53 ], "formula_id": "formula_9", "formula_text": "t i=0 α i = t i=0 (1 -β i ), t ∼ N (0, I). x0 = 1 √ ᾱt x t - √ 1 -ᾱt θ (x t , t, c)(7)" }, { "formula_coordinates": [ 4, 354.85, 447.95, 208.18, 29.15 ], "formula_id": "formula_10", "formula_text": "X = 1/3(Conv(Sub( X1 , X2 )) + Concat( X1 , X2 ) + Concat(x 1 0 , x2 0 ))(8)" }, { "formula_coordinates": [ 5, 57.41, 168.03, 238.74, 72.87 ], "formula_id": "formula_11", "formula_text": "L noise = E t,x0,c, N i=1 i -θ x i t , t, c 2 = E t,x0,c, N i=1 i -θ √ ᾱt x i t + √ 1 -ᾱt , t 2(9" }, { "formula_coordinates": [ 5, 71.77, 437.72, 228.25, 26.56 ], "formula_id": "formula_12", "formula_text": "i,j = -log exp (sim (z i , z j ) /τ ) 2Q k=1 1 k =i • (exp (sim (z i , z k )) /τ ) (10)" }, { "formula_coordinates": [ 5, 48.96, 470.44, 251.06, 69.76 ], "formula_id": "formula_13", "formula_text": "1 [k =i] ∈ {0, 1} is an indicator function evaluating to1 if k = i. L con = 1 2Q Q k=1 [ (2k -1, 2k) + (2k, 2k -1)](11)" }, { "formula_coordinates": [ 5, 57.24, 669.26, 242.78, 30.32 ], "formula_id": "formula_14", "formula_text": "L change = - 1 N N i=1 (y i log ŷi + (1 -y i ) log (1 -ŷi )) (12)" }, { "formula_coordinates": [ 5, 311.98, 187.52, 165.15, 23.23 ], "formula_id": "formula_15", "formula_text": "IV. EXPERIMENTS A. Datasets" }, { "formula_coordinates": [ 7, 107.83, 655.14, 192.19, 41.46 ], "formula_id": "formula_16", "formula_text": "F1 = 2 recall -1 + precision -1(17)" } ]
2023-08-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b12", "b44", "b60", "b61", "b64", "b64" ], "table_ref": [], "text": "Simulating how humans interact with environments plays an essential role in many applications, such as generating training data for machine learning algorithms, and simulating autonomous agents in AR/VR and computer games. Although this task is highly related to character animation in computer graphics, most existing character animation methods (e.g. [4,5,17]) focus on improving the realism and controllability of character movements. With the traditional character animation workflows, one can produce high-quality animations but can hardly generate autonomous and spontaneous natural human motions interacting with the surroundings in diverse plausible ways as real humans. Previous learning-based interaction synthesis methods [13,45,61] require simultaneously capturing human motion and scenes for supervision. However, capturing such training data is costly and challenging, resulting in a notably limited spectrum of human-scene interaction motions and difficulties in handling unseen interaction scenarios. This restriction also results in inferior motion quality of synthesized virtual humans.\nTo address this problem, we leverage reinforcement learning (RL) [48] to solve our task. By formulating goals as rewards, perception as states, and latent variables of deep generative models as actions, we can synthesize continuous, stochastic, plausible, and spontaneous motions of virtual humans to inhabit the digital world. Although existing RL-based motion synthesis approaches (e.g. [29,36,64]) can effectively generate natural motions to achieve goals, their generated virtual humans can only interact with simple scenes, rather than complex environments with functional furniture and diverse objects. For example, GAMMA [64] employs generative motion primitives and a policy network that are generalizable across diverse human body shapes, but it can only synthesize waypoint-reaching locomotions. The trained digital humans are not aware of how to perform actions like sitting on a chair or lying on a sofa, and frequently inter-penetrate with the scene geometry.\nTo overcome these limitations, we propose a novel framework to learn both scene and interaction-aware motion control policies for synthesizing realistic and diverse human-scene interaction motions. First, in order to improve the physical plausibility of the synthesized human motions, we design a new scene-aware policy to help virtual humans avoid collisions with scene objects. Specifically, we use a 2D occupancy-based local walkability map to incorporate scene information into the locomotion policy. In addition, we add features derived from the signed distance from body markers to the object surface and the gradient direction of the signed distance to encode the proximity between humans and objects for object interaction policies. Second, in order to achieve controllable object interactions, we provide fine-grained guidances based on surface mark-ers [62] of a human body performing the target interactions. Specifically, we use COINS [65] to generate human bodies interacting with scene objects given the interaction semantics, and then use the body markers as the interaction guidance for motion synthesis. 
Combined with navigation mesh-based path-finding algorithms to generate intermediate waypoints in 3D scenes, virtual humans can autonomously reach target locations in complex environments and mimic target poses in a variety of plausible ways.\nWe train the policy networks in synthetic scenes consisting of randomized objects to learn generalizable sceneaware locomotion and fine-grained object interactions. With this framework, we investigate how to synthesize diverse in-scene motions consisting of locomotion, sitting, and lying. We empirically evaluate the motion realism and expressiveness of our proposed method, and compare it with state-of-the-art methods. The results show that our approach consistently outperforms the baselines in terms of diversity, physical plausibility, and perceptual scores.\nIn summary, we aim to let virtual humans inhabit virtual environments, and present these contributions:\n1. We propose a reinforcement learning-based framework to generate realistic and diverse motions of virtual humans in complex indoor scenes.\n2. We propose to use body surface markers as detailed interaction goals for fine-grained human-object interaction synthesis and leverage COINS [65] to generate articulated 3D human bodies based on interaction semantics to make virtual humans controllable via interaction semantics and fine-grained body poses.\n3. We design scene and interaction-aware policies to enable virtual humans to navigate in 3D scenes while avoiding collisions, to interact with scene objects, and to continuously perform sequences of activities in complex scenes." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b43", "b44", "b45", "b61", "b3", "b5", "b26", "b37", "b65", "b0", "b56", "b29", "b30", "b10", "b17", "b22", "b18", "b55", "b56", "b34", "b36", "b21", "b12" ], "table_ref": [], "text": "Human motion synthesis. Generating high-quality human motion has been widely explored in computer vision and graphics. Motion graph [25] and motion matching [4, 5, 67] generate motion by searching suitable clips in datasets and blending them automatically. Starke et al. [44][45][46] use phase-conditioned neural networks to synthesize character animations and interaction with objects. Zhang et al. [62,64] model motion as sequences of surface markers on the parametric SMPL-X [34] body model, and train autoregressive networks on the large-scale mocap dataset AMASS [32] in order to produce diverse motions of bodies with various shapes. Peng et al.\n[36] use imitation learning with a skill discovery objective to learn a general motion skill space for physically simulated characters. Tang et al.\n[49] trains a motion manifold model of consecutive frames for real-time motion transition. Recent works [26,27] propose generative methods to synthesize motions from single or a few example motion sequences. Transformerbased models have been designed to predict or generate stochastic motions conditioned on action categories [38], texts [39,51], gaze [66], and others. More recently, motion diffusion models [1,2,52,53,57] achieve appealing performance on motion synthesis conditioned on various control signals and demonstrate flexible motion editing.\nMotion control and RL-based motion synthesis. Various motion control methods have been proposed to constrain body movements or guide the body to reach goals. Sampling-based motion control methods [30,31] generate multiple samples at each step and select the samples that best match the targets. 
Goal-conditional generation networks [11,18,23] are applied for motion control. However, such methods may produce invalid results when the traintest domain gap is large. Optimization-based motion control methods [19,56] leverage the learned generative motion model as regularization and optimize the motion latent variables to fit the decoded motion to the goals. Motion diffusion models [52,57] implement control via classifier-free text guidance or gradually reprojecting the generated motion onto the physically plausible space at individual denoising steps. Human motions can be formulated as a Markov decision process, and synthesized and controlled via RL. Imitation learning methods [3,33,35,37] trains policy networks to control humanoids to imitate reference motion and complete given tasks. Peng et al. [36] propose a skill discovery objective apart from the imitation objective to learn a latent space for general motion skills and train substream task policies leveraging the general motion space. Followup works [22,50] combine the character controllers with language-based selection or finite state machine to compose more complex movements. Ours versus others. Our method is most similar to GAMMA [64] and SAMP [13]. GAMMA learns generalizable motion models and policies across human bodies of diverse identities and shapes, without goal-motion paired training data. Despite producing high-fidelity motions, its results are limited to locomotion in the scene, and frequently collide with scene objects. SAMP learns conditional VAEs to produce sitting and lying actions in living environments, with object-motion paired data. Its generated motion has visible artifacts such as foot-ground skating. Our method combines their merits and eliminates their individual disadvantages. We extend the RL-based framework proposed in GAMMA by incorporating fine-grained motion controls (guided by interaction semantics) and scene interaction modules, so as to generate human-scene interactions in complex daily living environments. Compared to SAMP, our produced motion is more diverse, more physically plausible, and can be guided by fine-grained body surface markers. For systematic comparisons, please refer to Section 4." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b3", "b61", "b41", "b64", "b13" ], "table_ref": [], "text": "3.1. Preliminaries SMPL-X [34] and body representation. We use SMPL-X to represent 3D human bodies in our work. Given shape parameter β ∈ R 10 and body pose parameters θ ∈ R 63 , SMPL-X produces a posed body mesh with a fixed topology of 10475 vertices. To place the body in a scene, the root location r ∈ R 3 and the orientation ϕ ∈ so(3) w.r.t. the scene coordinates are additionally needed. Since facial expressions and hand gestures are not our focus, we leave their parameters as the default values. In addition, we follow [62,64] to represent the body in motion by the SSM 67 body surface marker placement. A motion sequence is then formulated as X = {x 1 , ..., x N }, where N is the length of motion and x i ∈ R 67×3 denotes the body marker 3D locations at frame i. The marker locations are relative to a canonical coordinate frame centered at the body pelvis in the first frame.\nGAMMA [64]. GAMMA can synthesize stochastic, perpetual, and realistic goal-reaching actions in 3D scenes. It comprises generative motion primitive models, RL-based control, and tree-based search to implement gradient-free test-time optimization. 
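To make the marker-based motion representation above concrete, the following is a minimal sketch of canonicalizing a marker sequence to a frame centered at the first-frame pelvis. How the pelvis location and facing direction are derived from the 67 markers, and the choice of +y as the canonical facing axis, are assumptions of this sketch rather than details specified here.

```python
import numpy as np

def canonicalize_markers(X, pelvis0, facing0):
    """Express a marker sequence X of shape (N, 67, 3) in a canonical frame centered
    at the first-frame pelvis, with the first-frame facing direction rotated onto +y
    and z kept vertical. pelvis0 and facing0 are assumed to be provided (e.g. derived
    from hip markers)."""
    f = np.array([facing0[0], facing0[1], 0.0])
    f /= np.linalg.norm(f)
    yaw = np.arctan2(f[0], f[1])                                 # angle of facing measured from +y
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])   # rotation about z that maps f to +y
    return (X - pelvis0.reshape(1, 1, 3)) @ R.T

# Toy usage: a 10-frame motion primitive of 67 markers.
X = np.random.randn(10, 67, 3)
X_canon = canonicalize_markers(X, pelvis0=X[0].mean(axis=0), facing0=np.array([1.0, 0.0, 0.0]))
print(X_canon.shape)   # (10, 67, 3)
```

The generative motion primitive model discussed next operates on marker sequences expressed in such a body-centered canonical frame.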
The motion primitive is formulated by a CVAE model to generate uncertain marker motions for 0.25 seconds into the future given a motion seed, followed by a MLP-based body regressor to yield SMPL-X parameters. Long-term random motion can be generated by running the motion primitive model recursively. The RL-based control is implemented by learning a policy within a simulation area. The actor-critic framework [48] and the PPO algorithm [42] are applied to update the policy network. An additional motion prior term is used to ensure the motion appears natural. During testing, the generated motion primitives are stored in a tree where only the best K primitives at each layer are preserved in order to discard low-quality sampling results.\nCOINS [65]. COINS generates physically plausible static human-scene interactions with instance-level semantic control. Given the point cloud of an object and action labels, COINS can generate static bodies interacting with the given object based on the specified action, e.g., sitting, lying, or touching. COINS leverages transformer-based generative models trained on a human-scene interaction dataset [14] to first generate a plausible body pelvis for interaction and then the posed body. The generated bodies are further optimized to improve the physical plausibility and to match the predicted action-dependent contact areas with objects. Such generated static bodies capture the characteristics of the human-scene interaction process and can be used as fine-grained interaction guidance." }, { "figure_ref": [ "fig_0" ], "heading": "RL-based Framework to Inhabit the Virtual", "publication_ref": [ "b64", "b11", "b42", "b12", "b41" ], "table_ref": [], "text": "As illustrated in Fig. 2, we propose a motion synthesis framework that enables virtual humans to navigate in complex indoor scenes and interact with various scene objects, e.g., sitting on a chair. Compared to the GAMMA framework [64], our method incorporates scene information into the states to better handle complex human-scene interactions. Also, we use body markers as goals to provide fine-grained guidance on how to drive the body for the target interactions. With modularized path-finding methods and static person-scene interaction generation methods, our framework can synthesize realistic human motions in complex 3D environments. In our work, we use COINS [65] to generate static person-scene interactions from interaction semantics given as 'action-object' pairs. The walking path can be either generated by hand, or by automatic pathfinding algorithms like A* [12,43].\nWe formulate our motion synthesis tasks with reinforcement learning. At each time step, a virtual human perceives its state s t in the environment and samples an action a t from its policy model π(a t |s t ). Based on its motion model, it advances its motion state, and obtains a new perception state s t+1 . A reward r t = r(s t , a t , s t+1 ) is calculated, tailored to different tasks.\nThe motion model and the action. We leverage the CVAE-based generative motion primitive [64] as our motion model, and use its latent variables as actions. We train the model conditioned on 1 or 2 past frames using the combination of the SAMP [13] and AMASS [32] motion capture datasets, to learn a latent motion primitive space covering motion skills for human-scene interactions. Each latent variable z in the motion primitive space is regarded as an action and can be decoded to a short clip of motion.\nThe state. 
The state is formulated by\n$s_t = (X_s, I, G)$ (1)\nwhere $X_s \in \mathbb{R}^{M \times 67 \times 3}$ is the body marker motion seed that represents a motion history of $M$ frames. $I$ and $G$ denote the person-scene interaction feature and the goal-reaching feature, respectively. The interaction feature and goal-reaching feature vary between the locomotion and object interaction tasks. We introduce the detailed formulation in Sec. 3.3 and 3.4.\nThe rewards. The rewards evaluate how well the virtual human performs locomotion and fine-grained object interaction tasks. We formulate rewards as\n$r = r_{goal} + r_{contact} + r_{pene}$ (2)\nwhere $r_{goal}$, $r_{contact}$, and $r_{pene}$ represent the rewards for goal-reaching, foot-ground contact, and penetration avoidance, respectively. Specifically, the contact reward $r_{contact}$ encourages foot-floor contact and discourages foot skating, and is defined as:\n$r_{contact} = e^{-\left(\left|\min_{x \in \mathcal{F}} x_z\right| - 0.05\right)_+} \cdot e^{-\left(\min_{x \in \mathcal{F}} \|x_{vel}\|_2 - 0.075\right)_+}$ (3)\nwhere $\mathcal{F}$ is the set of foot markers, $x_z$ is the height of a marker, $x_{vel}$ is the velocity of a marker, and $(\cdot)_+$ clips negative values to zero. The policy network is trained with the loss\n$L = L_{PPO} + \mathbb{E}\big[(r_t - V(s_t))^2\big] + \alpha\,\mathrm{KL}\big(\pi(z|s)\,\|\,\mathcal{N}(\mathbf{0}, \mathbf{I})\big)$ (4)\nwhere the first term is the PPO [42] loss, the second updates the value estimation of the critic networks, and the third Kullback-Leibler divergence term regularizes motion in the latent space [64] (see the sketch below).\nTree sampling for test-time optimization. Given the stochastic nature of our Gaussian policies, sampling motions from the generated action distributions can yield motion primitive results of various qualities. Therefore, we follow [64] to use tree-based sampling during inference to discard motion primitives with inferior goal-reaching and scene interaction scores. Specifically, we sample multiple latent actions at each time step and selectively keep the best K samples, utilizing the same rewards used to train the policies as the selection criteria. This tree-sampling technique yields improved synthesis results of higher quality.\nIn the following, we elaborate on the design of states and rewards tailored to different actions, namely locomotion and fine-grained object interaction. By combining the learned policies, long-term and coherent motions can be composed by rolling out the initial body and switching between the locomotion and object interaction stages. Figure 3: Illustration of the scene-aware locomotion policy network. The locomotion policy state consists of the body markers, the goal-reaching feature of normalized direction vectors from markers to the goal pelvis, and the interaction feature of a 2D binary map indicating the walkability (red: non-walkable area, blue: walkable area) of the local 1.6m × 1.6m square area. The locomotion policy network employs the actor-critic architecture and shares the state encoder." }, { "figure_ref": [], "heading": "Scene-Aware Locomotion Synthesis", "publication_ref": [], "table_ref": [], "text": "Navigating in cluttered scenes here means the human body moving to a target while avoiding collisions with scene objects. Our key idea is to incorporate the walkability information of the surrounding environment into the states and use collision rewards to train the locomotion policy in order to avoid scene collisions. Specifically, we represent the walkability of the environment surrounding the human agent using a 2D binary map $\mathcal{M} \in \{0, 1\}^{16 \times 16}$ as illustrated in Fig. 3.\nThe walkability map is defined in the human's local coordinates and covers a 1.6m × 1.6m area centered at the body pelvis and aligned with the body facing orientation. 
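As referenced above, a minimal sketch of the policy training objective in Eq. (4) follows. The clipped PPO surrogate, the use of empirical returns in place of $r_t$ in the value term, and the hyper-parameter values are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def policy_loss(new_logp, old_logp, values, returns, mu, logvar, advantages,
                clip_eps=0.2, alpha=0.1):
    """Sketch of Eq. (4): a PPO clipped surrogate, a value regression term (here with
    empirical returns), and a KL term pulling the Gaussian action distribution
    pi(z|s) = N(mu, diag(exp(logvar))) towards N(0, I)."""
    ratio = torch.exp(new_logp - old_logp)
    surrogate = -torch.min(ratio * advantages,
                           torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages).mean()
    value_loss = F.mse_loss(values, returns)
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
    return surrogate + value_loss + alpha * kl

# Toy usage with a batch of 32 transitions and a 128-dim latent action space.
B, D = 32, 128
loss = policy_loss(new_logp=torch.randn(B), old_logp=torch.randn(B),
                   values=torch.randn(B), returns=torch.randn(B),
                   mu=torch.randn(B, D), logvar=torch.zeros(B, D),
                   advantages=torch.randn(B))
print(loss.item())
```

The walkability map itself is structured as follows.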
It consists of a 16×16 cell grid where each cell stores a binary value indicating whether this cell is walkable or not. This local walkability map enables the policy to sense surrounding obstacles.\nReferring to Eq. 1, the person-scene interaction feature is specified by\nI = vec(M),(5)\nin which vec(•) denotes vectorization. The goal-reaching feature is specified by\nG = (g p -X s ) n ,(6)\nwhere X s and gp ∈ R M ×67×3 are the body marker seed representing M frames history of motion, and the broadcasted target pelvis location relative to the body-centered canonical coordinate, respectively. (g p -X s ) n denotes the normalized vectors pointing from each marker to the goal pelvis.\nThe rewards contributing to Eq. 2 are defined as\nr pene = e -|M0∩Bxy(X)| ,(7)\nwhere M 0 denotes the non-walkable cells in the walkability map, B xy (•) denotes the 2D bounding box of the body markers X, ∩ denotes their intersection, and | • | denotes the number of non-walkable cells overlapping with the human bounding box.\nr goal = r dist + r ori ,(8)\nr dist = 1 -(∥p -g p ∥ 2 -0.05) + ,(9)\nr ori = ⟨o, g p -p⟩ 2 , (10\n)\nwhere r dist encourages the body pelvis p to be close to the pelvis goal g p and r ori encourages the body facing direction o to be aligned with the direction from the current body pelvis p to the pelvis goal g p ." }, { "figure_ref": [], "heading": "Fine-grained Object Interaction Synthesis", "publication_ref": [ "b14", "b59", "b64", "b64" ], "table_ref": [], "text": "To synthesize fine-grained human-object interactions, e.g. sitting on a chair or lying on a sofa, we use body marker goals as guidance, and model the proximity between the body surface and the scene object in a compact way. The goal marker sets can be generated by static person-scene interaction methods such as [15,60,65]. We use COINS [65] to generate the static goal interaction body for its performance and controllability of interaction semantics.\nIn addition to the marker-based goal guidance, we incorporate the proximity relations between humans and objects into the states. Specifically, we use the signed distance from each marker to the surface of the scene object, as well as the gradient direction of the signed distance to represent the proximity relationship, as illustrated in Fig. 4. Both the signed distance and its gradient direction are calculated using the object's signed distance field (SDF) Ψ O .\nReferring to Eq. 1, the person-scene interaction feature is formulated as\nI = [Ψ O (X s ), ∇Ψ O (X s )],(11)\nin which\nΨ O ∈ R M ×67 and ∇Ψ O ∈ R M ×201\ndenote the SDF values and the gradient at each marker location in the M frames, respectively, and [•, •] denotes feature concatenation. The goal-reaching feature is formulated as\nG = [(g m -X s ) n , ∥g m -X s ∥ 2 ],(12)\nin which X s ∈ R M ×67×3 denotes the body markers seed, gm ∈ R M ×67×3 denotes the broadcasted goal body markers, (g m -X s ) n denotes the normalized vector representing the direction from each marker to the corresponding target marker, ∥g m -X s ∥ 2 denotes the distance from each marker to target marker.\nFigure 4: Illustration of the object interaction policy network. The interaction policy state consists of the body markers, the goal-reaching features of both distance and direction from current markers to the goal markers, and the interaction features of the signed distances from each marker to the object surfaces and the signed distance gradient at each marker location. 
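A minimal sketch of assembling these SDF-based state features (Eqs. (11)-(12)) is given below. An analytic sphere SDF stands in for the precomputed object SDF; the gradient of a true SDF is unit-norm, so it directly serves as the direction feature, and the feature shapes follow the marker formulation above.

```python
import torch

def sphere_sdf(p, center, radius):
    """Analytic SDF of a sphere, standing in for a precomputed object SDF grid."""
    return (p - center).norm(dim=-1) - radius

def interaction_state_features(X_s, goal_markers, sdf_fn):
    """Assemble the object-interaction features of Eqs. (11)-(12) for a marker seed
    X_s of shape (M, 67, 3): per-marker signed distances, SDF gradients, and the
    distances/directions from each marker to its goal marker."""
    X = X_s.clone().requires_grad_(True)
    d = sdf_fn(X)                                              # (M, 67) signed distances
    grad = torch.autograd.grad(d.sum(), X)[0]                  # (M, 67, 3) SDF gradients
    I = torch.cat([d, grad.reshape(d.shape[0], -1)], dim=-1)   # (M, 67 + 201)
    diff = goal_markers - X_s
    dist = diff.norm(dim=-1, keepdim=True)                     # (M, 67, 1)
    G = torch.cat([(diff / dist.clamp_min(1e-8)).reshape(d.shape[0], -1),
                   dist.reshape(d.shape[0], -1)], dim=-1)      # (M, 201 + 67)
    return I.detach(), G

# Toy usage: a 2-frame marker seed and a static goal body around a unit-sphere "object".
X_s = torch.randn(2, 67, 3)
goal = torch.randn(2, 67, 3)
I, G = interaction_state_features(X_s, goal, lambda p: sphere_sdf(p, torch.zeros(3), 1.0))
print(I.shape, G.shape)   # torch.Size([2, 268]) torch.Size([2, 268])
```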
Such interaction features encode the human-object proximity relationship.\nThe interaction policy is trained using the reward defined in Eq. 2 with the following interaction-specific goal reward and penetration reward:\nr goal = 1 -(∥x -g m ∥ 2 -0.05) + ,(13)\nr pene = e -1 |V | 1 T T t=1 |V | i=1 |(Ψ O (vti))-| ,(14)\nwith |V | being the SMPL-X mesh vertices, T denotes the number of frames in each motion primitive (equals to 10 in our study). The distance reward encourages the final frame body markers x to be close to the goal body markers g m . The penetration reward penalizes all the body vertices within a motion primitive that have negative SDF values. We use body vertices instead of joints because humanobject contact happens on the body surface and can be better detected using vertices. Moreover, we train the interaction policy with a mixture of 'sit/lie down' and 'stand up' tasks. This training scheme enables the human agent to also learn how to stand up and transit from object interaction back to locomotion, which enables the synthesis of a sequence of interaction activities as in Fig. 1." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b12", "b39", "b13" ], "table_ref": [], "text": "Motion Datasets. We combine the large-scale motion capture dataset AMASS [32] with SAMP [13] motion data to train the motion primitive models. Each sequence is first subsampled to 40 FPS and then split into 10 frames and 100 frames clips. Each motion clip is canonicalized using the first frame body. Specifically, we select AMASS sequences annotated with 'sit' or 'lie' in BABEL [40] and all motion data of SAMP to train the motion primitive model. We train separate motion primitive models for locomotion and interactions using task-related data. We observe that extending SAMP motion data with AMASS dataset is the key to making interaction policies work.\nPolicy Training Environments. To train the scene-aware locomotion policy, we randomly generate synthetic cluttered scenes consisting of random objects from ShapeNet [6]. Random initial and target location pairs are sampled in the walkable areas using navigation meshes to train the locomotion policy. To train the interaction policy, we use the static person-object interaction data from PROX [14] and retargeted to ShapeNet objects. We first randomly sample a furniture and interaction body goal from the retargeted PROX data. Then we sample the initial body with random poses and locations in front of the object to train the interaction policy. We also randomly swap the initial and goal body to learn both 'sit/lie down' and 'stand up' motions.\nPlease refer to Supp. Mat. for more details." }, { "figure_ref": [ "fig_1" ], "heading": "Locomotion in 3D Scenes", "publication_ref": [ "b12" ], "table_ref": [ "tab_3" ], "text": "We randomly generate test scenes for locomotion in the same way as the training scenes. The virtual human is instructed to move from the random start point to the random target point while avoiding penetration with scene objects.\nBaselines and metrics. We compare our method with SAMP [13] and GAMMA [64] for locomotion. The SAMP results are recorded by running the released Unity demo. The start and termination are manually determined so the reported completion time may be slightly higher than the actual time due to human response time. The evaluation metrics for locomotion include: 1) time from start point to target point or reaching the time limit, measured in seconds. 
2) the average distance from the final body to the targets, measured in meters. 3) foot contact score encouraging the lowest feet joints on the floor and discouraging foot skating as defined in Eq. 3. We use body joints instead of markers to calculate the contact score because the marker set annotation for the SAMP body is missing. 4) locomotion penetration score indicating the percentage of body vertices that are inside the walkable areas according to the navigation mesh. Results. Table 1 shows the empirical evaluation results. Our method achieves a higher contact score (0.99) than both GAMMA (0.94) and SAMP (0.84) which indicates better foot-floor contact and less foot skating. Moreover, our method achieves the highest penetration score which indicates our scene-aware policy can better avoid scene collisions. Fig. 5 shows examples of locomotion tasks where GAMMA collides into the scenes while our sceneaware policy successfully avoids penetration. Compared to GAMMA trained in the same environments, we observe our scene-aware policy learns more conservative behavior with lower moving speed and spent more time (6.43s) walking to the target locations, just like a human afraid of stepping on surrounding traps. All the methods can reach the target location within a reasonable distance." }, { "figure_ref": [], "heading": "Fine-Grained Human-Object Interaction", "publication_ref": [ "b12" ], "table_ref": [], "text": "We evaluate the object interaction task on 10 unseen objects (3 armchairs, 3 straight chairs, 3 sofas, 1 L-sofa) from ShapeNet [6]. We use the object size annotation of ShapeNet and manually filter unrealistic-sized objects. The virtual human is randomly placed in front of the target object and then instructed to perform the interaction, stay for around 2 seconds, and then stand up. We evaluate two interactions of sitting and lying separately.\nBaselines and metrics. We compare our method with SAMP [13] for the object interaction task. The evaluation metrics for the interaction tasks are: 1) time of completing the object interaction task. 2) foot contact score as defined in Eq. 3. Note that the foot contact score does not always reflect the motion quality for lying tasks because the foot can often be off the floor during lying. 3) interaction penetration score for each frame is defined as:\ns inter pene = vi∈V |(Ψ O (v i )) -|,(15)\nwhere Ψ O is the object signed distance field, (•) -clips all positve distance values to zero, and V is the body vertivces. We show both the average penetration over time and the maximum penetration in one sequence.\nResults. Tab. 2 shows the evaluation results. Our method achieves a significantly higher foot contact score, indicating more natural motion. Our method also achieves lower mean and maximum penetration, which means our method generates more physically plausible results. Moreover, our method can complete the interaction tasks much faster than SAMP. This is because SAMP does not generalize well to unseen random body initialization and needs a longer time to start performing interactions. Qualitative results demonstrating the various object interactions generated by our method are shown in Fig. 6. Our object interaction policy generalizes to random initial body locations and orientations, as well as novel objects of various shapes." 
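To make the evaluation protocol concrete, the following is a minimal sketch of the foot contact score (in the spirit of Eq. (3), here evaluated on foot joints as in the locomotion evaluation) and of the per-frame interaction penetration score of Eq. (15). The thresholds follow the reward definition, and the random arrays are placeholders for real joint trajectories and per-vertex SDF values.

```python
import numpy as np

def foot_contact_score(joints_z, joints_vel, z_thresh=0.05, vel_thresh=0.075):
    """Per-frame contact score: the lowest foot joint should stay on the floor and
    move slowly; deviations beyond the thresholds are penalized exponentially."""
    h = np.maximum(np.abs(joints_z.min(axis=-1)) - z_thresh, 0.0)
    v = np.maximum(np.linalg.norm(joints_vel, axis=-1).min(axis=-1) - vel_thresh, 0.0)
    return np.exp(-h) * np.exp(-v)

def interaction_penetration(sdf_values):
    """Eq. (15): per-frame sum of penetration depths over body vertices, i.e. the
    magnitudes of the negative SDF values (positive values are clipped to zero)."""
    return np.abs(np.minimum(sdf_values, 0.0)).sum(axis=-1)

# Toy usage: 40 frames, 2 foot joints, 10475 body vertices with random SDF values.
feet_z = np.random.uniform(0.0, 0.1, (40, 2))
feet_vel = np.random.uniform(-0.05, 0.05, (40, 2, 3))
sdf = np.random.uniform(-0.01, 0.5, (40, 10475))
print(foot_contact_score(feet_z, feet_vel).mean(), interaction_penetration(sdf).max())
```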
}, { "figure_ref": [], "heading": "Walk to sit on the bus bench", "publication_ref": [], "table_ref": [], "text": "Walk to sit on the bed " }, { "figure_ref": [], "heading": "Interaction Sequences in 3D Scenes", "publication_ref": [ "b46" ], "table_ref": [], "text": "Humans often continuously perform sequences of interactions with various objects in complex real-world scenes as shown in Fig. 1. Such interaction sequences are combinations of alternating locomotions and fine-grained interactions which we evaluate in Sec. 4.1 and Sec. 4.2 respectively. We conduct the empirical evaluation in real scene scans from Replica [47] and compare our method with SAMP. We select a list of interactable objects in the scene and instruct the virtual human to walk to the objects to perform interactions one by one. The evaluation metrics include the foot contact score, locomotion penetration score, and the mean and maximum interaction penetration score. We also conducted a perceptual study where participants are shown a side-by-side comparison of results from two methods and asked to choose the one perceptually more natural. We report the rate of being chosen as the better.\nThe evaluation results are shown in Tab. 3. Our method generates high-quality results of human-scene interactions in cluttered environments. Our results achieve higher contact scores and less penetration with the scenes compared to SAMP. In addition, our method generates perceptually more natural results according to perceptual study." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "We perform ablation studies on the foot-ground contact reward and penetration avoidance reward of the interaction policies to substantiate the significance of these rewards. We train two ablation interaction policies using identical networks and environments, differing only in the exclusion of one of these rewards in each case. The quantitative metrics are reported in Tab. 4. The removal of the penetration avoidance reward yields a marked escalation in detected human-scene penetration. The removal of the footfloor contact reward yields notably both inferior contact and penetration scores, accompanied by observed erratic synthesized motions that either remain suspended in the air or penetrate the floor. These empirical findings underscore the crucial role played by the foot-floor contact and penetration avoidance rewards in the learning of interaction policies." }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [], "table_ref": [], "text": "Limitations and Future Work. Our current method has various limitations that could be improved in future works. First, our method does not fully resolve penetrations with scene objects and floors since our method only uses rewards to encourage avoiding penetration with scenes, which does not impose hard constraints for penetrations. The combination of our method with physics simulation holds the potential to effectively address and resolve penetration issues. Moreover, the lying motions generated by our method are not as natural as the sitting motions because the motion primitive model fails to learn a comprehensive action space for lying given limited training data. Specifically, the available AMASS motion capture data for lying (167 seconds) is significantly less than the data for sitting (5K seconds). 
To overcome this issue, we aim to explore more data-efficient learning methods and scalable methods to collect humanscene interaction data. Furthermore, our locomotion policy is now limited to flat-floor scenes due to its reliance on the 2D occupancy-based walkability map. However, to broaden the applicability of our approach to more complex scenes such as uneven outdoor terrains and multi-floor buildings requiring walking upstairs, it will be necessary to replace the walkability map with a more suitable environment sens-Inter-penetrations Unnatural poses ing mechanism. In addition, our method is restricted to interactions with static scenes. However, in the real world, humans are exposed to dynamic interactions with movable objects and scenes involving autonomous agents including other humans, animals, and vehicles. Extension to dynamic interactions will enable tackling a broader range of intricate and dynamic applications that mirror the complexities inherent in the real world." }, { "figure_ref": [], "heading": "Conclusion.", "publication_ref": [], "table_ref": [], "text": "In this paper, we leverage reinforcement learning to establish a framework to synthesize diverse human motions in indoor scenes, which is stochastic, realistic, and perpetual. The proposed method has large potential to improve many applications such as daily-living activity simulation, synthetic data creation, architecture design, and so on. Compared to existing methods, our method realizes fine-grained control by using body surface keypoints as targets, and achieves autonomous body-scene collision avoidance by incorporating scene information into the states and the rewards. Experiments show that our method effectively enables virtual humans to inhabit the virtual, and outperforms baselines consistently. We train the scene-aware locomotion policy using random synthetic scenes to learn generalizable locomotion skills of moving from the initial location to the goal location while avoiding collision with scenes. The initial and goal locations used for training are waypoints of collision-free paths sampled in the synthetic scenes." }, { "figure_ref": [ "fig_6" ], "heading": "Synthesizing Diverse Human", "publication_ref": [], "table_ref": [], "text": "Each synthetic scene has a random scene size, consists of random numbers and categories of objects sampled from ShapeNet [6], and has a random scene layout. The synthetic scenes are generated using the following steps:\n• Sample the initial scene shape as a rectangle with edges ranging from 2 meters to 7 meters.\n• Randomly sample furniture objects constituting the scene from ShapeNet. Specifically, we sample objects from chairs, beds, sofas, desks, and tables. We limit the number of objects belonging to categories that normally have large sizes (e.g. beds) to avoid the scenes being fully occupied, leaving no space for human movements. We use the real object size annotation of ShapNet and transform the object model to make the z-axis point up.\n• Randomly rotate and translate the objects in the scene to obtain random scene layouts.\n• Expand the scene boundary so that every object keeps a reasonable distance from the boundary and humans can potentially walk by.\nAfter synthetic scene generation, we calculate the corresponding navigation mesh as described in Sec. B, and randomly sample pairs of collision-free initial-goal locations in the walkable areas. We first randomly sample two initial and goal locations on the navigation mesh. 
Then we use navigation mesh-based pathfinding to generate a sequence of waypoints that constitute a collision-free path. Each pair of consecutive waypoints are used as one initial-goal location pairs to train the locomotion policy. One sample synthetic scene and waypoints for training the locomotion policy are shown in Fig. S1.\nWe train the locomotion policy using the synthetic scenes and corresponding initial-goal location pairs. The locomotion policy is trained to move from the initial location to the goal location while avoiding penetration with scene objects. We further randomize the initial body pose and orientation to make the policy generalize to various initial body configurations." }, { "figure_ref": [ "fig_0" ], "heading": "A.2. Object Interaction Environments", "publication_ref": [ "b13", "b64", "b44", "b44" ], "table_ref": [], "text": "We train the human-object interaction policy to reach the fine-grained body marker goals that perform the specified interaction. The static goal human-scene interaction data is the prerequisite for training the interaction policy, which we obtain from the PROX [14] dataset using the following steps:\n• We obtain the static human-object interaction estimation from PROX recordings, which consist of SMPL-X body estimation from LEMO [59], and object mesh from the instance segmentation and annotation from COINS [65]. Specifically, we use the static human-scene interactions annotated as 'sit on' and 'lie on' according to COINS.\n• To improve object diversity and augment the interaction data from PROX, we retarget the static interaction data to random ShapeNet [6] objects similar to the data augmentation in [45]. For each static human-object interaction data from PROX, we randomly sample an object from ShapeNet and fit it to the original PROX object by optimizing scale, rotation, and translation. After fitting, we replace the original PROX object with the fitted object. Then we augment the interaction data by applying slight scaling and rotation augmentation to the fitted object, and the corresponding human bodies are updated using contact points and relative vectors similar to [45].\nWith the object retargeting and augmentation, we obtain goal static human-object interaction data with increased diversity compared to the original PROX dataset. When training the object interaction policy, we randomly sample one frame of static interaction to retrieve the interaction object mesh and fine-grained goal body markers for the training environment setup. We The object interaction policy is trained to reach the goal interaction body while avoiding collision with the interaction object and the floor. The red spheres denote the body markers.\nrandomly sample the initial body location and pose in front of the interaction object. Furthermore, we randomly swap the initial body and goal body with a probability of 0.25, in order to also learn 'stand up' behaviors in addition to 'sit/ lie down' behaviors. Two example training scenes of sitting down and standing up are demonstrated in Fig. S2." }, { "figure_ref": [ "fig_8" ], "heading": "B. Implementation Details", "publication_ref": [ "b64", "b13", "b6" ], "table_ref": [], "text": "Goal static interaction synthesis using COINS. In this paper, we use a modified version of COINS [65], incorporating slightly improved object generalization, for the synthesis of goal static human-scene interactions. COINS synthesizes static human-scene interactions conditioned on interaction semantics and object geometries. 
However, the original COINS models are trained on the PROX [14] dataset which contains a very limited number of object models. This restricted training object diversity constrains the object generalization capabilities of COINS models. Empirical observations reveal that generation quality deteriorates when applied to objects with domain gaps, such as CAD models from ShapeNet [6] and in-the-wild scans from ScanNet [7]. We observed that the object generalization failures are mainly due to the pelvis generation stage. Therefore, we annotated the pelvis frame data of sitting and lying interactions on a subset of ShapeNet objects that covers more diverse objects than contained in the PROX dataset. We retrain the PelvisNet of COINS with the annotated pelvis frame data and keep the BodyNet and other sampling algorithms untouched. The goal static interactions are generated and filtered in advance of the interaction motion synthesis.\nNavigation mesh and path planning. We use the open-sourced pynavmesh [21] library for navigation mesh creation and collision-free pathfinding. We implemented utility code to adapt the library to general scenes with the z-axis pointing up. We enforce the generated navigation mesh only containing triangle faces. For each scene, we create two versions of navigation meshes with different agent radii. The navigation mesh created with a larger agent radius of 0.2 is used for collision-free pathfinding, and the other one created with a smaller radius of 0.02 gives a tight fitting of the unoccupied areas and is used for the local walkability map calculation, as described in the next paragraph.\nWalkability map. The walkability map is implemented as a 2D occupancy map centered at the human pelvis and aligns with the body's forward orientation. We leverage a 16x16 walkability map covering 1.6 meters by 1.6 meters square area. Each cell of the map has a binary value indicating whether this cell is walkable or occupied by obstacles. At each time step, we dynamically update this human-centric local walkability map. We first sample the 256 cells in the human-centric local coordinates frames, then transform the cell center to the scene coordinates, and evaluate each cell occupancy using the tightly fitted navigation mesh by querying whether the centroid is inside any triangle faces of the navigation mesh. One example walkability map is shown in Fig. S3.\nSDF-based features. We leverage the mesh-to-sdf [24] library to calculate the marker-object signed distance and gradient features, which are used by the fine-grained human-object interaction policy. For each interaction object, we precompute a 128x128x128 SDF grid and a corresponding gradient grid. At each test step of interaction synthesis, we calculate the body marker-object distance and gradient features by evaluating the SDF and gradient grids at the current marker locations using grid sampling with trilinear interpolation. We utilize this grid sampling-based SDF feature calculation to achieve a balance between accuracy and computational efficiency." }, { "figure_ref": [], "heading": "C. Comparison to More Related Works", "publication_ref": [ "b60" ], "table_ref": [], "text": "Here we discuss and compare with more related works on synthesizing human motions in 3D scenes. [54, 55] share a similar multi-stage motion synthesis framework of first placing anchor bodies in scenes and then generating in-between trajectories and poses, which requires the pre-specification of the total number of frames. 
In contrast, our method offers greater flexibility by not constraining the number of frames to generate in advance. Since [54] doesn't provide source code, we add the quantitative locomotion comparison with [55] in their test scenes as shown in Tab. S1. Our method can generate locomotion results with significantly more natural foot-floor contact and less scene collision. The generated human motion results from [55] exhibit artifacts like obvious jittering and foot skating, as evident in the video qualitative comparison in our project website. COUCH [61] autoregressively synthesize human motions sitting to chairs satisfying given hand-chair contact constraints. However, COUCH can not handle complex scenes nor generate standing-up motion, and COUCH repeats generating deterministic motion. Nevertheless, we quantitatively compare with COUCH on the task of sitting down using their test chairs. The quantitative metrics in Tab. S2 show that our method can generate sitting interaction results with faster interaction completion, more natural foot contact, and less human-object penetration. We refer to our project website for the qualitative comparison of video results. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements. We sincerely acknowledge the anonymous reviewers for their insightful suggestions. This work was supported by the SNSF project grant 200021 204840 and SDSC PhD fellowship. Lingchen Yang provided valuable suggestions on visualization." } ]
Figure 1: In this work, we propose a method to generate a sequence of natural human-scene interaction events in real-world complex scenes as illustrated here. The human first walks to sit on a stool (yellow to red), then walks to another chair to sit down (red to magenta), and finally walks to and lies on the sofa (magenta to blue).
Synthesizing Diverse Human Motions in 3D Indoor Scenes
[ { "figure_caption": "Figure 2 :2Figure2: Illustration of our proposed human-scene interaction synthesis framework, which consists of learned motion primitive (actions), alongside locomotion and interaction policies generating latent actions conditioned on scenes and interaction goals. By integrating navigation mesh-based path-finding and static human-scene interaction generation methods, we can synthesize realistic motion sequences for virtual humans with fine-grained controls.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Demonstration of locomotion tasks where GAMMA [64] (left) collides with the obstacles (red bodies) while our scene-aware locomotion policy (right) avoids collision. The yellow circles denote the specified waypoints.", "figure_data": "", "figure_id": "fig_1", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "1 Figure 6 :16Figure 6: Demonstration of various generated object interactions. Each row shows generated interactions with the same object starting from random initial body locations and orientations. Colors from yellow to red denote time.", "figure_data": "", "figure_id": "fig_2", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Our method can synthesize natural human movements in scene reconstructions that are significantly different from the training scenes. We show the synthesis results in one Paris street [8] (left figure) and one indoor room [14] (right figure). Transparent to solid colors denote time.", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Limitations. Penetrations remain observable in our results (left) since our method only encourages avoiding collision with rewards. Furthermore, the absence of sufficient training data for lying motions impedes the motion primitive model's ability to acquire a comprehensive lying motion space, leading to degraded motion (right).", "figure_data": "", "figure_id": "fig_4", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure S1 :S1FigureS1: Illustration of a synthetic scene and sampled waypoints used to train the locomotion policy. The corresponding navigation mesh is visualized as blue, denoting the walkable areas for humans. The yellow spheres denote pair-wise collisionfree waypoints found by navigation mesh-based path finding and are used to train the locomotion policy.", "figure_data": "", "figure_id": "fig_6", "figure_label": "S1", "figure_type": "figure" }, { "figure_caption": "Figure S2: Demonstration of two example training environments (left: sit down, right: stand up) for the fine-grained humanobject interaction policy. Each training scene consists of an interaction object, an initial body (gray), and a goal body (pink).The object interaction policy is trained to reach the goal interaction body while avoiding collision with the interaction object and the floor. The red spheres denote the body markers.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure S3 :S3Figure S3: Illustration of the human-centric walkability map. The walkability map is a 2D occupancy map indicating which areas surrounding the human are walkable (blue cells) or occupied by obstacles (red cells). 
The walkability map is dynamically updated at each step according to the human body's location and orientation.", "figure_data": "", "figure_id": "fig_8", "figure_label": "S3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Evaluation of the locomotion task. The up/down arrows denote the score is the higher/lower the better and the best results are in boldface.", "figure_data": "SAMP [13]5.970.140.840.94GAMMA [64]3.870.030.940.94Ours6.430.040.990.95", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation of the interaction tasks. The up/down arrows denote the score is the higher/lower the better and the best results are in boldface.", "figure_data": "SAMP [13] sit8.630.8911.9145.22Ours sit4.090.971.9110.61SAMP [13] lie12.550.7344.77238.81Ours lie4.200.789.9044.61", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation of interaction sequences synthesis.", "figure_data": "contact ↑0.870.96loco. pene. ↑0.620.72inter. pene. mean↓15.613.40inter. pene. max ↓101.2539.68perceptual. ↑0.150.85", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Rewards ablation studies results, where '-Contact' and '-Penetration' denote policies trained without the floor contact and penetration avoidance reward, respectively. time ↓ contact ↑ pene. mean ↓ pene. max ↓", "figure_data": "Ours4.090.971.9110.61-Contact3.750.8819.4960.10-Penetration4.030.9516.8045.50", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Quantitative comparison with [55] on locomotion. time ↓ avg. dist ↓ contact ↑ loco pene ↑", "figure_data": "Wang etc. [55]4.000.030.920.86Ours3.090.040.990.95", "figure_id": "tab_7", "figure_label": "S1", "figure_type": "table" }, { "figure_caption": "Quantitative comparison with[61] on sitting interactions. time ↓ contact ↑ pene. mean ↓ pene. max ↓", "figure_data": "COUCH [61]6.470.915.2414.14Ours3.250.971.505.89", "figure_id": "tab_8", "figure_label": "S2", "figure_type": "table" } ]
Kaifeng Zhao; Yan Zhang; Shaofei Wang; Thabo Beeler; Siyu Tang
[ { "authors": "Rajmund Simon Alexanderson; Jonas Nagy; Gustav Eje Beskow; Henter", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b0", "title": "Listen, denoise, action! audio-driven motion synthesis with diffusion models", "year": "2023" }, { "authors": "Tenglong Ao; Zeyi Zhang; Libin Liu", "journal": "", "ref_id": "b1", "title": "GestureDiffu-CLIP: Gesture diffusion model with CLIP latents", "year": "2023" }, { "authors": "Kevin Bergamin; Simon Clavet; Daniel Holden; James Richard; Forbes ", "journal": "ACM Transactions On Graphics (TOG)", "ref_id": "b2", "title": "DReCon: data-driven responsive control of physics-based characters", "year": "2019" }, { "authors": "Michael Buttner", "journal": "", "ref_id": "b3", "title": "Machine learning for motion synthesis and character control in games", "year": "2019" }, { "authors": "Michael Büttner; Simon Clavet", "journal": "Proc. of Nucl. ai", "ref_id": "b4", "title": "Motion matching-the road to next gen animation", "year": "2015" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b5", "title": "Shapenet: An informationrich 3d model repository", "year": "2015" }, { "authors": "Angela Dai; Angel X Chang; Manolis Savva; Maciej Halber; Thomas Funkhouser; Matthias Nießner", "journal": "IEEE", "ref_id": "b6", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Jean-Emmanuel Deschaud; David Duque; Jean Pierre Richa; Santiago Velasco-Forero; Beatriz Marcotegui; Franc ¸ois Goulette", "journal": "Remote Sensing", "ref_id": "b7", "title": "Paris-carla-3d: A real and synthetic outdoor point cloud dataset for challenging tasks in 3d mapping", "year": "2021" }, { "authors": "Helmut Grabner; Juergen Gall; Luc Van Gool", "journal": "IEEE", "ref_id": "b8", "title": "What makes a chair a chair?", "year": "2011" }, { "authors": "Abhinav Gupta; Scott Satkin; Alexei A Efros; Martial Hebert", "journal": "IEEE", "ref_id": "b9", "title": "From 3d scene geometry to human workspace", "year": "2011" }, { "authors": "Ikhsanul Habibie; Daniel Holden; Jonathan Schwarz; Joe Yearsley; Taku Komura", "journal": "", "ref_id": "b10", "title": "A recurrent variational autoencoder for human motion synthesis", "year": "2017" }, { "authors": "Nils J Peter E Hart; Bertram Nilsson; Raphael", "journal": "IEEE transactions on Systems Science and Cybernetics", "ref_id": "b11", "title": "A formal basis for the heuristic determination of minimum cost paths", "year": "1968" }, { "authors": "Mohamed Hassan; Duygu Ceylan; Ruben Villegas; Jun Saito; Jimei Yang; Yi Zhou; Michael J Black", "journal": "", "ref_id": "b12", "title": "Stochastic scene-aware motion prediction", "year": "2008" }, { "authors": "Mohamed Hassan; Vasileios Choutas; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b13", "title": "Resolving 3D human pose ambiguities with 3D scene constraints", "year": "2019" }, { "authors": "Mohamed Hassan; Partha Ghosh; Joachim Tesch; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b14", "title": "Populating 3D scenes by learning human-scene interaction", "year": "2021" }, { "authors": "Mohamed Hassan; Yunrong Guo; Tingwu Wang; Michael Black; Sanja Fidler; Xue Bin Peng", "journal": "", "ref_id": "b15", "title": "Synthesizing Physical Character-Scene Interactions", "year": "2023-02" }, { "authors": "Daniel Holden; Oussama Kanoun; Maksym Perepichka; Tiberiu Popa", 
"journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b16", "title": "Learned motion matching", "year": "2020" }, { "authors": "Daniel Holden; Taku Komura; Jun Saito", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b17", "title": "Phasefunctioned neural networks for character control", "year": "2017" }, { "authors": "Daniel Holden; Jun Saito; Taku Komura", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b18", "title": "A deep learning framework for character motion synthesis and editing", "year": "2016" }, { "authors": "Ruizhen Hu; Zihao Yan; Jingwen Zhang; Oliver Van Kaick; Ariel Shamir; Hao Zhang; Hui Huang", "journal": "", "ref_id": "b19", "title": "Predictive and generative neural networks for object functionality", "year": "2020" }, { "authors": "Shekn Itrch", "journal": "", "ref_id": "b20", "title": "pynavmesh: Python implementation of path finding algorithm in navigation meshes", "year": "" }, { "authors": "Jordan Juravsky; Yunrong Guo; Sanja Fidler; Xue Bin Peng", "journal": "", "ref_id": "b21", "title": "PADL: Language-Directed Physics-Based Character Control", "year": "2022" }, { "authors": "Kacper Kania; Marek Kowalski; Tomasz Trzciński", "journal": "", "ref_id": "b22", "title": "Tra-jeVAE: Controllable Human Motion Generation from Trajectories", "year": "2021" }, { "authors": "Marian Kleineberg", "journal": "", "ref_id": "b23", "title": "mesh-to-sdf: Calculate signed distance fields for arbitrary meshes", "year": "" }, { "authors": "Lucas Kovar; Michael Gleicher; Frédéric Pighin", "journal": "", "ref_id": "b24", "title": "Motion graphs", "year": "2008" }, { "authors": "Peizhuo Li; Kfir Aberman; Zihan Zhang; Rana Hanocka; Olga Sorkine-Hornung", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b25", "title": "Ganimator: Neural motion synthesis from a single sequence", "year": "2022" }, { "authors": "Weiyu Li; Xuelin Chen; Peizhuo Li; Olga Sorkine-Hornung; Baoquan Chen", "journal": "", "ref_id": "b26", "title": "Example-based Motion Synthesis via Generative Motion Matching", "year": "2023" }, { "authors": "Xueting Li; Sifei Liu; Kihwan Kim; Xiaolong Wang; Ming-Hsuan Yang; Jan Kautz", "journal": "", "ref_id": "b27", "title": "Putting humans in a scene: Learning affordance in 3d indoor environments", "year": "2019" }, { "authors": "Hung Yu; Ling ; Fabio Zinno; George Cheng; Michiel Van De Panne", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b28", "title": "Character controllers using motion vaes", "year": "2020" }, { "authors": "Libin Liu; Kangkang Yin; Baining Guo", "journal": "Wiley Online Library", "ref_id": "b29", "title": "Improving sampling-based motion control", "year": "2015" }, { "authors": "Libin Liu; Kangkang Yin; Michiel Van De Panne; Tianjia Shao; Weiwei Xu", "journal": "", "ref_id": "b30", "title": "Sampling-based contact-rich motion control", "year": "2010" }, { "authors": "Naureen Mahmood; Nima Ghorbani; Gerard Nikolaus F Troje; Michael J Pons-Moll; Black", "journal": "", "ref_id": "b31", "title": "AMASS: Archive of motion capture as surface shapes", "year": "2019" }, { "authors": "Josh Merel; Leonard Hasenclever; Alexandre Galashov; Arun Ahuja; Vu Pham; Greg Wayne; Yee Whye Teh; Nicolas Heess", "journal": "", "ref_id": "b32", "title": "Neural probabilistic motor primitives for humanoid control", "year": "2018" }, { "authors": "Georgios Pavlakos; Vasileios Choutas; Nima Ghorbani; Timo Bolkart; Dimitrios Ahmed Aa Osman; Michael J Tzionas; Black", "journal": "", "ref_id": "b33", "title": "Expressive body capture: 
3d hands, face, and body from a single image", "year": "2019" }, { "authors": "Xue Bin Peng; Pieter Abbeel; Sergey Levine; Michiel Van De Panne", "journal": "ACM Transactions On Graphics (TOG)", "ref_id": "b34", "title": "Deepmimic: Example-guided deep reinforcement learning of physics-based character skills", "year": "2018" }, { "authors": "Xue Bin Peng; Yunrong Guo; Lina Halper; Sergey Levine; Sanja Fidler", "journal": "ACM Transactions On Graphics (TOG)", "ref_id": "b35", "title": "Ase: Large-scale reusable adversarial skill embeddings for physically simulated characters", "year": "2022" }, { "authors": "Xue Bin Peng; Angjoo Kanazawa; Jitendra Malik; Pieter Abbeel; Sergey Levine", "journal": "ACM Transactions On Graphics (TOG)", "ref_id": "b36", "title": "Sfv: Reinforcement learning of physical skills from videos", "year": "2018" }, { "authors": "Mathis Petrovich; Michael J Black; Gül Varol", "journal": "", "ref_id": "b37", "title": "Actionconditioned 3D human motion synthesis with transformer VAE", "year": "2021" }, { "authors": "Mathis Petrovich; Michael J Black; Gül Varol", "journal": "Springer", "ref_id": "b38", "title": "TEMOS: Generating diverse human motions from textual descriptions", "year": "2022" }, { "authors": "Arjun Abhinanda R Punnakkal; Nikos Chandrasekaran; Alejandra Athanasiou; Michael J Quiros-Ramirez; Black", "journal": "", "ref_id": "b39", "title": "BABEL: bodies, action and behavior with English labels", "year": "2021" }, { "authors": "Manolis Savva; X Angel; Pat Chang; Matthew Hanrahan; Matthias Fisher; Nießner", "journal": "ACM transactions on graphics (TOG)", "ref_id": "b40", "title": "SceneGrok: Inferring action maps in 3D environments", "year": "2014" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b41", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "Greg Snook", "journal": "Game programming gems", "ref_id": "b42", "title": "Simplified 3D movement and pathfinding using navigation meshes", "year": "2000" }, { "authors": "Sebastian Starke; Ian Mason; Taku Komura", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b43", "title": "Deepphase: Periodic autoencoders for learning motion phase manifolds", "year": "2022" }, { "authors": "Sebastian Starke; He Zhang; Taku Komura; Jun Saito", "journal": "ACM Trans. 
Graph", "ref_id": "b44", "title": "Neural state machine for character-scene interactions", "year": "2019" }, { "authors": "Sebastian Starke; Yiwei Zhao; Taku Komura; Kazi Zaman", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b45", "title": "Local motion phases for learning multi-contact character movements", "year": "2020" }, { "authors": "Julian Straub; Thomas Whelan; Lingni Ma; Yufan Chen; Erik Wijmans; Simon Green; Jakob J Engel; Raul Mur-Artal; Carl Ren; Shobhit Verma", "journal": "", "ref_id": "b46", "title": "The Replica dataset: A digital replica of indoor spaces", "year": "2019" }, { "authors": "S Richard; Andrew G Sutton; Barto", "journal": "MIT press Cambridge", "ref_id": "b47", "title": "Introduction to reinforcement learning", "year": "1998" }, { "authors": "Xiangjun Tang; He Wang; Bo Hu; Xu Gong; Ruifan Yi; Qilong Kou; Xiaogang Jin", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b48", "title": "Real-time controllable motion transition for characters", "year": "2022" }, { "authors": "Chen Tessler; Yoni Kasten; Yunrong Guo; Shie Mannor; Gal Chechik; Xue Bin Peng", "journal": "", "ref_id": "b49", "title": "CALM: Conditional Adversarial Latent Models for Directable Virtual Characters", "year": "2023" }, { "authors": "Guy Tevet; Brian Gordon; Amir Hertz; H Amit; Daniel Bermano; Cohen-Or", "journal": "Springer", "ref_id": "b50", "title": "Motionclip: Exposing human motion generation to clip space", "year": "2022" }, { "authors": "Guy Tevet; Sigal Raab; Brian Gordon; Yonatan Shafir; Daniel Cohen-Or; Amit H Bermano", "journal": "", "ref_id": "b51", "title": "Human motion diffusion model", "year": "2022" }, { "authors": "Jonathan Tseng; Rodrigo Castellon; Karen Liu", "journal": "", "ref_id": "b52", "title": "EDGE: Editable Dance Generation From Music", "year": "2022" }, { "authors": "Jingbo Wang; Yu Rong; Jingyuan Liu; Sijie Yan; Dahua Lin; Bo Dai", "journal": "", "ref_id": "b53", "title": "Towards diverse and natural scene-aware 3d human motion synthesis", "year": "2022" }, { "authors": "Jiashun Wang; Huazhe Xu; Jingwei Xu; Sifei Liu; Xiaolong Wang", "journal": "", "ref_id": "b54", "title": "Synthesizing long-term 3d human motion and interaction in 3d scenes", "year": "2021" }, { "authors": "Zhiyong Wang; Jinxiang Chai; Shihong Xia", "journal": "IEEE transactions on visualization and computer graphics", "ref_id": "b55", "title": "Combining recurrent neural networks and adversarial training for human motion synthesis and control", "year": "2019" }, { "authors": "Ye Yuan; Jiaming Song; Umar Iqbal; Arash Vahdat; Jan Kautz", "journal": "", "ref_id": "b56", "title": "PhysDiff: Physics-Guided Human Motion Diffusion Model", "year": "2022" }, { "authors": "Haotian Zhang; Ye Yuan; Viktor Makoviychuk; Yunrong Guo; Sanja Fidler; Xue Bin Peng; Kayvon Fatahalian", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b57", "title": "Learning Physically Simulated Tennis Skills from Broadcast Videos", "year": "2023" }, { "authors": "Siwei Zhang; Yan Zhang; Federica Bogo; Pollefeys Marc; Siyu Tang", "journal": "", "ref_id": "b58", "title": "Learning motion priors for 4d human body capture in 3d scenes", "year": "2001" }, { "authors": "Siwei Zhang; Yan Zhang; Qianli Ma; Michael J Black; Siyu Tang", "journal": "IEEE", "ref_id": "b59", "title": "PLACE: Proximity learning of articulation and contact in 3D environments", "year": "2020" }, { "authors": "Xiaohan Zhang; Bharat Lal Bhatnagar; Sebastian Starke; Vladimir Guzov; Gerard Pons-Moll", "journal": "Springer", 
"ref_id": "b60", "title": "Couch: towards controllable human-chair interactions", "year": "2022" }, { "authors": "Yan Zhang; Michael J Black; Siyu Tang", "journal": "", "ref_id": "b61", "title": "We are more than our joints: Predicting how 3d bodies move", "year": "2021" }, { "authors": "Yan Zhang; Mohamed Hassan; Heiko Neumann; Michael J Black; Siyu Tang", "journal": "", "ref_id": "b62", "title": "Generating 3d people in scenes without people", "year": "2020" }, { "authors": "Yan Zhang; Siyu Tang", "journal": "", "ref_id": "b63", "title": "The wanderings of odysseus in 3D scenes", "year": "2007" }, { "authors": "Kaifeng Zhao; Shaofei Wang; Yan Zhang; Thabo Beeler; Siyu Tang", "journal": "Springer", "ref_id": "b64", "title": "Compositional human-scene interaction synthesis with semantic control", "year": "2022" }, { "authors": "Yang Zheng; Yanchao Yang; Kaichun Mo; Jiaman Li; Tao Yu; Yebin Liu; Karen Liu; Leonidas J Guibas", "journal": "Springer", "ref_id": "b65", "title": "Gimo: Gaze-informed human motion prediction in context", "year": "2022" }, { "authors": "Fabio Zinno", "journal": "", "ref_id": "b66", "title": "Ml tutorial day: From motion matching to motion synthesis, and all the hurdles in between", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 392.91, 610.91, 152.2, 9.68 ], "formula_id": "formula_0", "formula_text": "s t = (X s , I, G),(1)" }, { "formula_coordinates": [ 5, 109.23, 119.09, 177.13, 9.65 ], "formula_id": "formula_1", "formula_text": "r = r goal + r contact + r pene ,(2)" }, { "formula_coordinates": [ 5, 55.09, 204.89, 227.4, 11.72 ], "formula_id": "formula_2", "formula_text": "r contact = e -(| min xz|-0.05)+ • e -(min ∥x vel ∥2-0.075)+ , (3" }, { "formula_coordinates": [ 5, 282.49, 207.29, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 5, 103.06, 414.4, 183.3, 25.75 ], "formula_id": "formula_4", "formula_text": "L = L P P O + E[(r t -V (s t )) 2 ] + αKL-div(π(z|s)||N (0, I)),(4)" }, { "formula_coordinates": [ 5, 399.69, 550.6, 145.43, 8.96 ], "formula_id": "formula_5", "formula_text": "I = vec(M),(5)" }, { "formula_coordinates": [ 5, 390.51, 596.25, 154.61, 9.68 ], "formula_id": "formula_6", "formula_text": "G = (g p -X s ) n ,(6)" }, { "formula_coordinates": [ 5, 377.4, 702.12, 167.71, 11.72 ], "formula_id": "formula_7", "formula_text": "r pene = e -|M0∩Bxy(X)| ,(7)" }, { "formula_coordinates": [ 6, 126.96, 148.47, 159.4, 9.65 ], "formula_id": "formula_8", "formula_text": "r goal = r dist + r ori ,(8)" }, { "formula_coordinates": [ 6, 98.83, 180.69, 187.53, 9.68 ], "formula_id": "formula_9", "formula_text": "r dist = 1 -(∥p -g p ∥ 2 -0.05) + ,(9)" }, { "formula_coordinates": [ 6, 128.09, 209.85, 154.12, 22.34 ], "formula_id": "formula_10", "formula_text": "r ori = ⟨o, g p -p⟩ 2 , (10" }, { "formula_coordinates": [ 6, 282.21, 216.94, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 6, 112.53, 540.44, 173.83, 9.65 ], "formula_id": "formula_12", "formula_text": "I = [Ψ O (X s ), ∇Ψ O (X s )],(11)" }, { "formula_coordinates": [ 6, 88.69, 561.57, 151.97, 11.23 ], "formula_id": "formula_13", "formula_text": "Ψ O ∈ R M ×67 and ∇Ψ O ∈ R M ×201" }, { "formula_coordinates": [ 6, 98.64, 621.68, 187.72, 9.68 ], "formula_id": "formula_14", "formula_text": "G = [(g m -X s ) n , ∥g m -X s ∥ 2 ],(12)" }, { "formula_coordinates": [ 6, 355.2, 360.74, 189.91, 9.68 ], "formula_id": "formula_15", "formula_text": "r goal = 1 -(∥x -g m ∥ 2 -0.05) + ,(13)" }, { "formula_coordinates": [ 6, 347.61, 387.05, 197.5, 14.71 ], "formula_id": "formula_16", "formula_text": "r pene = e -1 |V | 1 T T t=1 |V | i=1 |(Ψ O (vti))-| ,(14)" }, { "formula_coordinates": [ 8, 103.35, 468.7, 183.02, 20.06 ], "formula_id": "formula_17", "formula_text": "s inter pene = vi∈V |(Ψ O (v i )) -|,(15)" } ]
10.18653/v1/2020.acl-main.9
2023-05-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b2", "b16", "b21", "b0", "b1", "b20", "b7", "b17", "b19", "b5", "b5", "b4", "b5", "b4" ], "table_ref": [], "text": "Inspired by the tremendous success in pre-training large language models (PLMs) in general domains (Devlin et al., 2019;Clark et al., 2020;Radford et al., 2018), efforts have been made to train PLMs for dialogue response generation (Zhang et al., 2020;Bao et al., 2020;Chen et al., 2022). However, they constrain the dialogues to be either two-party, or sequential structured (i.e. each utterance replies directly to its previous utterance). Different from them, a multi-party dialogue can involve multiple interlocutors, where each interlocutor can reply to any preceding utterances, making the response relations of the dialogue tree-structured and much more complicated (Zhang et al., 2018;Le et al., 2019;Shi and Huang, 2019;Wang et al., 2020). Besides, the speaker and addressee of a response utterance should be specified before it is generated in multi-party scenario, making the annotated data for multi-party dialogue response generation (MP-DRG) less available. Figure 1 illustrates an example of MPDRG task taken from the Ubuntu IRC benchmark (Hu et al., 2019). The upper part shows the tree-structured addressee relations of the dialogue, where the arrows point from addressees to speakers, and different colors represent different interlocutors. The middle part displays the content of the dialogue history, where U 7 is the response to be generated. The addressee (U 6 ) and the speaker (#4) of it are given, and the content of this response is the target of our model. The lower part gives the human response, which is also called the ground truth reference.\nPrevious works on MPDRG fine-tune generative PLMs on small multi-party dialogue datasets with explicit addressee annotations. They utilize the response annotations to form a tree-structured response graph, then encode the dialogue history using either homogeneous or heterogeneous Graph Neural Networks (GNNs) (Hu et al., 2019;Gu et al., 2022). Nevertheless, none of them make attempts to pre-train a response generation model for multiparty dialogues due to the lack of large-scale corpora with annotated addressee labels.\nTo solve the aforementioned problem of data scarcity, we propose an EM approach that iteratively performs the expectation steps to generate addressee labels, and the maximization steps to optimize a response generation model. Specifically, we treat the addressee of each utterance in the dialogue history as a discrete latent variable z. During the E-steps, given the current dialogue history c t and the the response utterance r t , we model the distribution of the current addressee z t as p(z t |c t , r t ; θ), where θ is the current model parameters. During the M-steps, we sample (c t , r t , z t ) triplets from distribution p(z t |c t , r t ; θ) and optimize the generative model p(r t |c t , z t ; θ) on these samples. With the iteration number increasing, the accuracy of latent variable prediction and the quality of generated responses will grow together. It is worth noting that during these iterations, annotated addressee labels are not required, which makes it possible to leverage the huge amount of multi-party dialogue corpora without addressee labels. 
We provide theoretical analyses to prove the feasibility of our EM method, and conduct experiments on the Ubuntu IRC benchmark, which is used in previous works (Hu et al., 2019;Gu et al., 2022).\nThe contributions of our work can be summarized in the following three aspects: • To the best of our knowledge, we are the first to study the pre-training of multi-party dialogue response generation, which is much more challenging and complicated than two-party dialogues. • We put forward an EM approach to alleviate the scarcity of multi-party dialogue data with addressee labels, making it possible to pre-train a model with a huge amount of unlabeled corpora. • We provide theoretical analyses to prove the feasibility of our EM pre-training method, and experimental results on the Ubuntu IRC benchmark show our pre-trained model achieves state-of-the-art performance compared with previous works.\n2 Related Works" }, { "figure_ref": [], "heading": "Pre-training for Response Generation", "publication_ref": [ "b21", "b16" ], "table_ref": [], "text": "In recent years, researchers have gradually shifted their attention from retrieval-based dialogue systems to generation-based ones. Thanks to the huge amount of two-party dialogue corpora, various PLMs for two-party dialogue response generation have been proposed. Zhang et al. (2020) propose DialoGPT, which utilizes the sequential response chains in the Reddit Corpus to pre-train an auto-regressive response generation model based on the architecture of GPT (Radford et al., 2018). Different from their work, which focuses on sequential dialogue history, our work aims to solve the case where the agent can respond to any previous utterance in a tree-structured dialogue history." }, { "figure_ref": [], "heading": "Human Response:", "publication_ref": [ "b0", "b1" ], "table_ref": [], "text": "Figure 1: An example of the multi-party dialogue response generation task, better viewed in color. Bao et al. (2020) propose PLATO, which models the conversational intents as K discrete latent variables, then utilizes response selection, bag-of-words prediction, and language modeling objectives to train the model. DialogVED (Chen et al., 2022) further extends the discrete latent variables to continuous ones, and models them with a multivariate Gaussian distribution. It utilizes KL divergence reduction to optimize the parameters of the latent distribution and applies masked language modeling, response generation, and bag-of-words prediction to train the whole model. PLATO and DialogVED focus on two-party conversations, and the conversational intents they put forward do not correspond to actual concepts or entities (e.g., intent to argue, intent to end a conversation, and so on). Distinct from their works, we lay emphasis on multi-party dialogues, and the latent variables of our method have concrete meanings: variable z_t = j indicates that the addressee of the response at the t-th turn is the j-th utterance." }, { "figure_ref": [], "heading": "Multi-party Dialog Response Generation", "publication_ref": [ "b5", "b13", "b4", "b18" ], "table_ref": [], "text": "Several previous works have studied the MPDRG task. Hu et al. (2019) extract a subset of the Ubuntu Dialogue Corpus (Lowe et al., 2015) with explicit addressee labels, and propose a Graph-Structured Neural Network (GSN) for dialogue modeling. Specifically, they first treat each utterance of a dialogue as a node, and the addressee relations as edges to construct a dialogue graph, then make use of GNNs to encode the dialogue history.
Finally, they adopt a Gated Recurrent Unit (GRU) with cross attention as the decoder to generate responses. Gu et al. (2022) put forward Het-erMPC, which models the dialogue history as a heterogeneous graph. In detail, they first design six types of edges: reply and replied-by, address and addressed-by, speak and spoken-by, among two kinds of nodes: interlocutor nodes and utterance nodes, and then encode the dialogue history using Transformers (Vaswani et al., 2017) together with heterogeneous GNNs. Finally, they utilize a Transformer Decoder to generate responses. Instead of fine-tuning models on a small dataset with annotated addressee labels as these existing work did, our work focuses on the utilization of large unlabeled corpora to pre-train a response generation model for multi-party dialogues." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "To design a model for multi-party dialogue response generation and make it compatible with the EM training algorithm, there are two important things to consider: how to model p(r t |c t , z t ; θ) in the maximization step, and how to compute p(z t |c t , r t ; θ) in the expectation step. In this section, we will first address these two problems, then mathematically derive the feasibility of our EM pre-training algorithm." }, { "figure_ref": [], "heading": "Task Formulation", "publication_ref": [], "table_ref": [], "text": "Given an input sequence of the dialogue history and the speaker of the response at time step t,\nX = {S 1 : U 1 [SEP]S 2 : U 2 [SEP] . . . S t-1 : U t-1 [SEP]S t :},\ntogether with the addressee of the response z t = j, our goal is to train a model that can generate an response Y = U t . Here each S i is the name of the speaker at time step i, which is represented as Speaker #S i like those in Figure 1. U i = {w i1 , w i2 , . . . , w in i } is the content of the i th utterance with n i words. z t = j represents that S t speaks to S j , who utters U j , and [SEP] is a special token that indicates the end of a dialogue turn." }, { "figure_ref": [], "heading": "Addressee Modeling", "publication_ref": [ "b8", "b4" ], "table_ref": [], "text": "In this section, we answer the first question: how to model p(r t |c t , z t ; θ), or in other words, how to incorporate the addressee information z t = j into the process of generating a response r t . We design a straightforward method that adds addressee embeddings to the positional encodings and word embeddings, before they are further encoded by a PLM. The left part of Figure 2 illustrates this method, where we use an embedding look-up table with 2 entries to indicate whether a word belongs to the addressee utterance or not. Specifically, if a word is in the addressee utterance, it will get its addressee embedding from entry 1, otherwise from entry 0. Since addressee modeling is not the key contribution of this work, we just adopt the most straightforward and effective way. In our experiments, we use BART (Lewis et al., 2020) as the backbone PLM, following previous works (Gu et al., 2022). Due to the page limit, the proverbial architecture of Transformer and BART are omitted here." 
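As a concrete illustration of the input format from the task formulation and the two-entry addressee embedding described above, the following PyTorch-flavored sketch builds the flattened dialogue string and adds an addressee embedding to the token embeddings before they enter the PLM encoder. The helper names (build_input, AddresseeEmbedding) and the way the per-turn flag is expanded to a per-token mask are our own simplifications, not the released implementation.

```python
import torch
import torch.nn as nn

def build_input(speakers, utterances, speaker_t, addressee_j, sep="[SEP]"):
    """Flatten the history as "Speaker #S1: U1 [SEP] ... [SEP] Speaker #St:" and
    record, per history turn, whether it is the addressee (entry 1) or not (entry 0)."""
    parts, turn_is_addressee = [], []
    for i, (s, u) in enumerate(zip(speakers, utterances), start=1):
        parts.append(f"Speaker #{s}: {u}")
        turn_is_addressee.append(1 if i == addressee_j else 0)
    text = f" {sep} ".join(parts) + f" {sep} Speaker #{speaker_t}:"
    return text, turn_is_addressee

class AddresseeEmbedding(nn.Module):
    """Two-entry look-up table added on top of word and positional embeddings."""
    def __init__(self, hidden_size):
        super().__init__()
        self.table = nn.Embedding(2, hidden_size)

    def forward(self, token_embeddings, addressee_mask):
        # token_embeddings: (batch, seq_len, hidden); addressee_mask: (batch, seq_len)
        # integer tensor with 1 for tokens inside the addressee utterance, 0 elsewhere.
        return token_embeddings + self.table(addressee_mask)

# Toy check of the shapes involved.
emb = AddresseeEmbedding(hidden_size=16)
tokens = torch.randn(1, 8, 16)
mask = torch.tensor([[0, 0, 1, 1, 1, 0, 0, 0]])
print(emb(tokens, mask).shape)   # torch.Size([1, 8, 16])
```

In the paper, the summed embeddings are consumed by the BART encoder; turning the per-turn flag returned by build_input into a per-token addressee mask depends on the tokenizer and is omitted here.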
}, { "figure_ref": [], "heading": "Latent Variable Prediction", "publication_ref": [], "table_ref": [], "text": "In this section, we answer the second question: how to compute p(z t |c t , r t ; θ) in the expectation step, or in other words, how to predict the distribution of the unlabeled addressee z t , given the current dialogue context c t , response r t , under parameters θ. The solution to this question is essentially the most important part of our method since it delicately solves the problem of data scarcity in MPDRG.\nLet's consider what humans will do to participate in a multi-party conversation. First, we will read the dialogue history c t , then choose an addressee z t to reply. Once c t and z t are determined, we will utter a response according to the content of the whole dialogue and the addressee utterance. The right part of Figure 2 gives the Bayesian Network of the above process, where the joint distribution of (c t , z t , r t ) can be factorized as:\np(c, z, r) = p(c) • p(z|c) • p(r|c, z)(1)\nHere we omit the subscript t and model parameters θ for simplicity. Given Eq. ( 1), p(z|c, r; θ) can be derived as:\np(z|c, r) = p(c, z, r) p(c, r) = p(c) • p(z|c) • p(r|c, z) p(c) • p(r|c) = p(z|c) • p(r|c, z) p(r|c)(2)\nWe assume that the probability of choosing any previous utterance as the addressee is the same given the current dialogue history, which means p(z|c) obeys a uniform distribution. Meanwhile, the denominator p(r|c) is independent of z, leaving only the term p(r|c, z). Now, we can induce that:\np(z|c, r) ∝ p(r|c, z)(3)\nTherefore, for each z i , i = 1, 2, . . . , t -1, we have:\np(z i |c, r) = p(r|c, z i ) t-1 j=1 p(r|c, z j )(4)\nIn practice, we can use the generative model p(r t |c t , z t ; θ) to compute the probability distribution of p(z t |c t , r t ; θ) by Eq. (4). the observed data (c t , r t ) and the current model parameters θ, where Eq. ( 4) gives a reasonable approximation of this value. Specifically, for a sample (c t , r t ), with the model parameters θ fixed, we first calculate the un-normalized probability of each of the i th (i < t) utterance being the addressee: p(r t |c t , z i t ; θ) using Eq. ( 3), then normalize them to get the conditional distribution of z t using Eq. ( 4). Once P (z t |c t , r t ; θ) is obtained, we sample (c t , r t , z t ) triplets from this distribution, which is further used in the maximization step. The Maximization Step is analogical to the normal training process. Given the sampled {(c k t , r k t , z k t )} N k=1 triplets, where N is the total number of samples, our goal is to minimize the auto-regressive language modeling loss:" }, { "figure_ref": [], "heading": "Expectation-Maximization Process", "publication_ref": [ "b17", "b15" ], "table_ref": [], "text": "L G = - N k=1 n k i=1 log p w k i | w k <i , c k t , z k t ; θ (5)\nwhere w k i is the i th word in the response of the k th sample:\nr k t = {w k i } n i i=1\n, and n i is the length of this response.\nCompared with the vanilla EM algorithm, there are several differences in our implementations. First of all, we do not use the initial model to generate the training data for the first round of the maximization step. Instead, we utilize the discourse parser provided by Shi and Huang (2019) to predict the addressee of each utterance in the unlabeled corpus to get a coarse initial training dataset. 
The reason for this initialization method is that the initialization of training data (or model parameters) is vital to the EM method, which helps it converge to a better point. Second, rather than sampling z_t from its conditional distribution, we adopt a hard EM approach which takes the value z_t^i with the highest probability as the predicted label, where i = \arg\max_i p(z_t^i | c_t, r_t; \theta). This hard EM approach has been shown to be more effective at boosting performance (Min et al., 2019). Finally, to ensure the quality of the generated training data in the maximization step, we set a hyper-parameter α ∈ [0, 1] to control the proportion of training data that is actually used. Specifically, we first rank the prediction confidence of each z_t^k according to the value of p(z_t^k | c_t^k, r_t^k; \theta), then pick the top α × N samples with the highest confidence scores. In our experiments, α is dynamically set to ensure that the addressee prediction accuracy of the selected samples is over 80% on an annotated validation set." }, { "figure_ref": [], "heading": "Proof of Feasibility", "publication_ref": [], "table_ref": [], "text": "In a multi-party dialogue corpus without annotated addressee labels, a usual solution to train a response generation model is to maximize the marginal log-likelihood (or incomplete log-likelihood) over all possible addressees:\n\ell(c, r; \theta) = \log p(r|c; \theta) = \log \sum_i p(r, z_i | c; \theta) (6)\nHowever, this objective is hard to optimize since the distribution of z is hard to obtain. Here, we define an expected complete log-likelihood where our estimation of p(z_t|c_t, r_t; \theta) can come to the rescue:\n\hat{\ell}(c, r; \theta) = \sum_i q(z_i) \log p(r, z_i | c; \theta), \quad q(z) = p(z_t|c_t, r_t; \theta) (7)\nOur new objective now becomes maximizing the expected complete log-likelihood. The relation between \ell and \hat{\ell} can be derived as follows:\n\ell(c, r; \theta) = \log \sum_i p(r, z_i|c; \theta) = \log \sum_i q(z_i) \cdot \frac{p(r, z_i|c; \theta)}{q(z_i)} \ge \sum_i q(z_i) \cdot \log \frac{p(r, z_i|c; \theta)}{q(z_i)} = \sum_i q(z_i) \cdot \log p(r, z_i|c; \theta) - \sum_i q(z_i) \cdot \log q(z_i) = \hat{\ell}(c, r; \theta) + H(q(z)) (8)\nwhere the inequality follows from Jensen's Inequality, and H(q(z)) is the entropy of the distribution q(z). Since H(q(z)) ≥ 0, we can derive that \hat{\ell}(c, r; \theta) ≤ \ell(c, r; \theta), which means \hat{\ell} is a lower bound of \ell. By maximizing the lower bound \hat{\ell}, we can indirectly maximize \ell, which is originally hard to optimize. Another important observation is that \hat{\ell} = \ell if and only if q(z) = p(z_t|c_t, r_t; \theta), which is exactly what we calculate during the E-steps in Eq. (7). Though the derivation of the posterior distribution of z is not exact since we assume a uniform prior in Eq. (2), it is still much closer to the real distribution than a random q(z).\nIt is worth noting that a global optimum is not guaranteed to be reached by this algorithm, and the result depends heavily on the initialization of model parameters or the training data for the first round of the maximization step. This explains why we utilize a discourse parser to get a coarse initial training dataset instead of using the expectation step at the first iteration in Section 3.4." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce the datasets to pre-train and evaluate our model, then present the experimental results and comparisons with previous methods."
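Before turning to the experimental setup, the E- and M-steps described above can be summarized in a minimal sketch of the hard-EM pre-training loop. The scoring and training routines are passed in as callables and are placeholders for the actual BART-based model: score_fn is assumed to return the generator's log-probability log p(r_t | c_t, z_t = j; θ) of the response under a candidate addressee, train_fn is assumed to minimize the auto-regressive loss of Eq. (5) on the selected triplets, and the fixed keep-ratio alpha stands in for the paper's dynamic setting that targets 80% addressee accuracy on an annotated validation set.

```python
import math

def e_step(score_fn, unlabeled, alpha):
    """Hard E-step: pick the most likely addressee for each (context, response) pair.
    Normalization follows Eq. (4): p(z_i|c, r) = p(r|c, z_i) / sum_j p(r|c, z_j)."""
    labeled = []
    for context, response in unlabeled:
        log_scores = [score_fn(context, response, j) for j in range(1, len(context) + 1)]
        m = max(log_scores)
        unnorm = [math.exp(s - m) for s in log_scores]   # stable softmax over candidates
        total = sum(unnorm)
        probs = [u / total for u in unnorm]
        best = max(range(len(probs)), key=probs.__getitem__)
        labeled.append((context, response, best + 1, probs[best]))
    # Keep only the top-alpha fraction with the highest prediction confidence.
    labeled.sort(key=lambda item: item[3], reverse=True)
    kept = labeled[: max(1, int(alpha * len(labeled)))]
    return [(c, r, z) for c, r, z, _ in kept]

def em_pretrain(train_fn, score_fn, unlabeled, parser_labels, num_iters, alpha=0.3):
    """Alternate M-steps and hard E-steps; the first M-step uses the coarse labels
    produced by the discourse parser instead of an initial E-step."""
    data = parser_labels
    for _ in range(num_iters):
        train_fn(data)                              # M-step: optimize the generator on (c, r, z)
        data = e_step(score_fn, unlabeled, alpha)   # E-step: re-label with the updated model
    return data
```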
}, { "figure_ref": [], "heading": "Datasets and Experimental Setups", "publication_ref": [ "b13", "b5", "b5", "b4", "b8" ], "table_ref": [], "text": "For pre-training, we adopt the second version of Ubuntu Dialogue Corpus (Lowe et al., 2015), which contains no annotated addressee labels. The original dataset contains 1M dialogues for training, and 0.5M dialogues for validation and testing, respectively. Dialogues that contain less than 4 turns, or have overlap with the dataset for the downstream task (the Ubuntu IRC benchmark, Hu et al. 2019), are excluded from the pre-training data. After filtering, we eventually get a pre-training dataset that contains 764,373 dialogues.\nFor fine-tuning, we follow previous works (Hu et al., 2019;Gu et al., 2022) to adopt the Ubuntu IRC benchmark, which is constructed by extracting all utterances with response addressees indicated by the \"@\" symbol in the Ubuntu Dialogue Corpus. In total, this dataset consists of 311,725 dialogues for training, and 5,000 dialogues for validation and testing, respectively. It is worth noting that this dataset contains addressee labels for every single utterance in the dialogue history, which are utilized by previous methods, yet not by ours.\nFor both pre-training and fine-tuning, BART (Lewis et al., 2020) weights from BART-base. During the process of pre-training, we evaluate our model on the validation set of the Ubuntu IRC benchmark, and the best checkpoint is saved for the fine-tuning process." }, { "figure_ref": [], "heading": "Baseline Models and Evaluation Metrics", "publication_ref": [ "b16", "b5", "b4", "b5" ], "table_ref": [ "tab_2" ], "text": "Table 1 shows the results of our method and previous models, where GPT-2, GSN, and HeterMPC (Radford et al., 2018;Hu et al., 2019;Gu et al., 2022) are introduced in section 2.1 and 2.2, respectively. BART is a sequence-to-sequence model with encoder-decoder Transformer architecture and is trained using denoising objectives. Following Hu et al. (2019), we also adopt BLEU-1 to BLEU-4, METEOR, and ROUGE-L as the automatic evaluation metrics, which can be calculated using the pycocoevalcap package. Besides automatic evaluation, human evaluation is also conducted and will be introduced in Section 4.4." }, { "figure_ref": [], "heading": "Automatic Evaluation Results", "publication_ref": [], "table_ref": [], "text": "Let's firstly focus on the upper and middle part of posed EM method with unlabeled corpus is already able to achieve comparable results with the previous state-of-the-art (SOTA) models. It is surprising since the pre-training requires no annotated addressee labels, while previous models not merely utilize the addressee information of the response utterance, but also make use of the addressee labels of the dialogue history to form a response graph. Second, fine-tuning our model on the downstream dataset with the ground truth addressee labels yields better results compared with pre-training only. Since it uses the ground truth addressee labels of responses, the results of it can be regarded as an upper bound of what the EM training can achieve. Besides, FO outperforms the previous SOTA model by large margins with even simpler architecture and fewer annotations (without addressee labels in the dialogue history), demonstrating the effectiveness of our proposed addressee embeddings. 
Finally, by further fine-tuning the pre-trained checkpoint with the ground truth addressee labels, we achieve the best performance on all metrics, which shows the transferability of our pre-trained model." }, { "figure_ref": [], "heading": "Human Evaluation Results", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "For human evaluation, we recruit a team of 8 members who have at least a Bachelor's degree in Computer Science and are familiar with Ubuntu and Linux. We randomly sample 100 examples from the testing set, then ask the team members to score each prediction and select the best one. The quality scores are considered in terms of three independent aspects: 1) relevance, 2) fluency, and 3) informativeness. Each aspect is scored from 0 to 3, and the average values are reported. The evaluation results are shown in Table 2, where our model (Pre-training + Fine-tuning) consistently outperforms vanilla BART and the previous SOTA model HeterMPC BART . We also report Fleiss's Kappa to indicate the agreement between annotators. Besides, the ratio of our predictions being the best response is the same as that of human responses, demonstrating the high quality of the generated responses of our model." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "In order to get more insights into the proposed EM pre-training method, we dive deeper into it by conducting extensive analyses." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b8" ], "table_ref": [ "tab_2" ], "text": "We conduct ablation studies to investigate the contribution of our different designs, whose results are tabulated in the lower part of Table 1. Firstly, let's focus on the first line of the lower part. To study whether other utterances that are not in the reply chain of the current addressee can help to generate a better response, we extract the reply chain by traversing from the current leaf utterance (the response) up to the root node (the first utterance), then train a model by inputting this chain only. We see a large performance drop on all metrics in this setting, demonstrating the significance of the side information provided by the whole context.
Second, let's pay attention to the second and third lines of the lower part. In order to study the effect of the EM pre-training process, which is the key contribution of our work, we remove this process and pre-train a model using only the addressee labels obtained from the discourse parser (i.e., the initial training data used in the first iteration of our EM approach). A sharp performance drop is observed compared with PO and PF with our proposed EM pre-training strategy, demonstrating the significance of our design. Without the iterative EM procedure, the noisy addressee labels obtained from the discourse parser can cause error propagation, which makes the model learn noisy features to predict a response, and hurts the performance.
Finally, to investigate whether the performance gains come from seeing more in-domain data in the pre-training process, we use the same pre-training data to train another model with the denoising objectives proposed in BART (Lewis et al., 2020), then also fine-tune it on the Ubuntu IRC benchmark. The last line of the lower part presents the results, where we observe nearly the same performance compared with FO. This observation indicates that simply performing domain adaptation using the general pre-training objectives is insufficient to benefit the MPDRG task."
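As a concrete illustration of the reply-chain extraction used in the first ablation above, the following is a minimal sketch; the parent-pointer representation of the dialogue and all names are assumptions made for illustration, not the authors' code.

```python
from typing import Dict, List

def extract_reply_chain(parents: Dict[int, int], leaf: int) -> List[int]:
    """Walk from the response (leaf) up to the first utterance (root).

    parents maps each utterance index to the index of the utterance it
    addresses; the root points to itself (or is absent from the map).
    """
    chain = [leaf]
    node = leaf
    while node in parents and parents[node] != node:
        node = parents[node]
        chain.append(node)
    return list(reversed(chain))  # root -> ... -> response

# Example: U6 replies to U5, U5 replies to U1, U1 is the root.
print(extract_reply_chain({6: 5, 5: 1, 1: 1}, leaf=6))  # [1, 5, 6]
```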
}, { "figure_ref": [ "fig_1" ], "heading": "Response Generation vs. Addressee Prediction", "publication_ref": [], "table_ref": [], "text": "In Section 3.3, we prove that p(z|c, r) ∝ p(r|c, z).\nTo verify the correctness of this equation and also to investigate the training process of our EM strategy, we draw the line chart of the BLEU-4 score and addressee prediction accuracy of the top-30% confidence samples on the validation set with the increasing of pre-training iterations. The addressees are predicted using Eq. ( 4), where we take the z i with the highest conditional probability as the predicted addressee. Figure 4 illustrates the trending of the BLEU-4 score and addressee prediction accuracy. On the one hand, we see that the trending of both metrics is consistent, which means with a more powerful response generation model comes a higher addressee prediction accuracy. This observation verifies the correctness of Eq. ( 3). On the other hand, with the increasing of iterations, both metrics grow mutually, then reach their tops at around the 6 th iteration, demonstrating the effectiveness of the EM process." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Case Studies", "publication_ref": [], "table_ref": [], "text": "To understand the effect of our method intuitively, we sample two cases from the testing set and present them in this section. Figure 5 illustrates an example whose addressee relations and dialogue history are shown in Figure 1. This conversation is about how to run the compiz or beryl in a comp with 256MB RAM. Speaker #2 points that it's the graphic card that is important, but Speaker #4 seems unsatisfied by saying that didn't tell me much. After that, Speaker #5 suggests using the rdesktop and Speaker #4 replies him/her. Our model is able to capture the key information rdesktop and terminal in the addressee utterance U 6 , and generate a proper response Well, how do I install rdesktop from the terminal, which is very close to the human answer and even better with more information from the terminal. On the contrary, the baseline model (BART) fails to capture the addressee information and just replies with a safe response I tried but it didn't work. This case shows the great significance of modeling the addressee information, and also demonstrates the effectiveness of our model design. Figure 6 presents another example sampled from the testing set, where we investigate how different addressee labels affect the generated responses. In the figure, different colors represent different utterances in the Dialogue History part, and different responses generated by giving the corresponding utterances as addressees in the Generated Responses part. This conversation is about discussing the file system in Ubuntu that can share on a network with windows machines. When the addressee is given as U 1 , our model suggests using samba, which is a solution to the question of U 1 . Responses to U 2 and U 3 are like safe responses, but they make sense in their contexts: the former expresses its confusion about a confusing utterance (U 2 ), and the latter expresses its gratitude to the suggestion in U 3 . Response to U 4 states his/her understanding towards U 4 , and questions if his/her understanding is right. Response to U 5 acknowledges the solution gentoo in U 5 by saying using gentoo on my computer too. In general, this case demonstrates the ability of our model to generate diverse responses according to the specified addressees and contexts of the dialogue history." 
}, { "figure_ref": [], "heading": "Response Parser: A Byproduct for Free", "publication_ref": [ "b9", "b6", "b11", "b14" ], "table_ref": [], "text": "Another contribution of our EM pre-training is that a response parser can be freely obtained. This byproduct comes from Eq. ( 4), where given a response generation model with addressee modeling, we can predict the addressee for each utterance in the dialogue. Previous literature has studied and proved that explicitly modeling the structural information is beneficial to understanding specific structured data. (Li et al., 2020(Li et al., , 2022a,b),b). In this context, the response parser can be used to infer the discourse structures, which contributes to boost-ing the performance of some multi-party dialogue comprehension tasks like response selection and question answering. (Jia et al., 2020;Li and Zhao, 2021;Ma et al., 2022) 6 Conclusion\nMost multi-party dialogue corpora are not annotated with addressee labels, making them unable to support the pre-training of response generation models. To solve this problem, we design a simple yet effective way to model the addressee of a response as a latent variable and propose an EM pre-training approach that iteratively performs the expectation steps to generate addressee labels, and the maximization steps to optimize a response generation model. Mathematical derivation, experimental results on the Ubuntu IRC benchmark, and extensive analyses have justified the theoretical feasibility and actual effectiveness of our method." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "First, Due to the lack of datasets to evaluate the MP-DRG task, we perform our experiments only on the Ubuntu IRC benchmark and pre-train our model only on the domain of Ubuntu chats. However, the potential of our approach goes far beyond that since it is applicable to any open-domain multi-party dialogue dataset. In the future work, we will consider applying our method in more open-domain conversational datasets, such as the transcripts of TV series or movies. Additionally, the pre-training process solely relies on the addressee information of individual turns, disregarding the reply-to relations within the dialogue history. This oversight prevents the model from benefiting from valuable contextual cues necessary for a comprehensive understanding of the multi-party dialogue. In our future work, we will explore the integration of discourse-level reply-to relations into the pre-training process to further enrich the capabilities of the model." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "* Corresponding author. This paper was partially supported bytion of China (U1836222 and 61733011)." } ]
Dialogue response generation requires an agent to generate a response according to the current dialogue history. Two-party dialogues have been well studied in this regard, while a great gap remains for multi-party dialogues. Different from two-party dialogues, where each response is a direct reply to its previous utterance, the addressee of a response utterance should be specified before it is generated in the multi-party scenario. Thanks to the huge amount of two-party conversational data, various pre-trained language models for two-party dialogue response generation have been proposed. However, due to the lack of annotated addressee labels in multi-party dialogue datasets, it is hard to use them to pre-train a response generation model for multi-party dialogues. To tackle this obstacle, we propose an Expectation-Maximization (EM) approach that iteratively performs the expectation steps to generate addressee labels, and the maximization steps to optimize a response generation model. Theoretical analyses and extensive experiments have justified the feasibility and effectiveness of our proposed method. The official implementation of this paper is available at https://github.com/EricLee8/MPDRG.
EM Pre-training for Multi-party Dialogue Response Generation
[ { "figure_caption": "Figure 3 Figure 3 :33Figure 3 illustrates the overview of our EM training process. During the E-steps, we compute the probability distribution of the latent variable (the addressee z). During the M-steps, we sample (c, r, z) triplets from this distribution and optimize the generative model by standard training algorithms. The Expectation Step is to compute the conditional distribution of the latent variable z t , given", "figure_data": "", "figure_id": "fig_0", "figure_label": "33", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Line chart of the BLEU-4 score and addressee prediction accuracy with the increase of EM iterations.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The first example of Case Studies, which shows the generated responses of our model and the baseline model.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The second example of Case Studies, which illustrates the generated response of our model given different addressee labels. Better view in color.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "The overview of our model architecture. The left part shows how we incorporate the addressee information into response generation by adding addressee embeddings. The right part illustrates a Bayesian Network of how a response is generated given the current dialogue history c t and the addressee z t .", "figure_data": "Generative Pre-trained Language Models (BART)U tBayesian Networkc tz t… …… ………r tFigure 2:withexplicit addressee labels to construct the UbuntuIRC benchmark, where they propose a Graph Struc-", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "is used as the backbone model. Before pre-training, we initialize the pre-trained Results on the Ubuntu IRC benchmark, where the upper part presents models of previous works, the middle part shows our backbone model BART together with our method under different settings, and the lower part shows the ablation studies.", "figure_data": "ModelBLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-LGPT-2 (Radford et al., 2018)10.373.601.660.934.019.53GSN (Hu et al., 2019)10.233.571.700.974.109.91HeterMPCBART (Gu et al., 2022)12.264.802.421.494.9411.20BART (Lewis et al., 2020)11.254.021.780.954.469.90Pre-training Only (PO)11.784.672.381.414.9811.19Fine-tuning Only (FO)11.475.112.982.115.2311.31Pre-training + Fine-tuning (PF)12.315.393.342.455.5211.71FO + Reply-Chain9.113.521.991.354.329.36PO w/o EM10.033.902.031.184.569.66PF w/o EM11.395.043.022.155.2711.20Denoising + Fine-tuning11.495.083.022.135.2511.28", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": ", where we present the results of previousmodels and our methods. Three settings of ourmethod based on BART are experimented with:pre-training only (PO), fine-tuning only (FO), andpre-training-fine-tuning (PF). Results of PO areobtained by directly using the pre-trained modelto generate the response for each dialogue. FOmeans the checkpoint of BART is directly fine-tuned on the Ubuntu IRC benchmark without pre-training. PF follows a pre-training-fine-tuningparadigm, where the best checkpoint of the pre-training process is further fine-tuned on the down-stream dataset.Three observations can be seen from the ta-ble. 
First of all, solely pre-training with our pro-", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Human evaluation results, where Score is the average score and Best means the ratio of each system being the best response.", "figure_data": "", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" } ]
Yiyang Li; Hai Zhao
[ { "authors": "Siqi Bao; Huang He; Fan Wang; Hua Wu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "PLATO: Pre-trained dialogue generation model with discrete latent variable", "year": "2020" }, { "authors": "Wei Chen; Yeyun Gong; Song Wang; Bolun Yao; Weizhen Qi; Zhongyu Wei; Xiaowu Hu; Bartuer Zhou; Yi Mao; Weizhu Chen; Biao Cheng; Nan Duan", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "DialogVED: A pre-trained latent variable encoder-decoder model for dialog response generation", "year": "2022" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b2", "title": "ELECTRA: pretraining text encoders as discriminators rather than generators", "year": "2020-04-26" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Jia-Chen Gu; Chao-Hong Tan; Chongyang Tao; Zhen-Hua Ling; Huang Hu; Xiubo Geng; Daxin Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "HeterMPC: A heterogeneous graph neural network for response generation in multi-party conversations", "year": "2022" }, { "authors": "Wenpeng Hu; Zhangming Chan; Bing Liu; Dongyan Zhao; Jinwen Ma; Rui Yan", "journal": "", "ref_id": "b5", "title": "GSN: A graph-structured network for multi-party dialogues", "year": "2019-08-10" }, { "authors": "Qi Jia; Yizhu Liu; Siyu Ren; Kenny Zhu; Haifeng Tang", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Multi-turn response selection using dialogue dependency relations", "year": "2020" }, { "authors": "Ran Le; Wenpeng Hu; Mingyue Shang; Zhenjun You; Lidong Bing; Dongyan Zhao; Rui Yan", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Who is speaking to whom? 
learning to identify utterance addressee in multi-party conversations", "year": "2019" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Jiaqi Li; Ming Liu; Min-Yen Kan; Zihao Zheng; Zekun Wang; Wenqiang Lei; Ting Liu; Bing Qin", "journal": "International Committee on Computational Linguistics", "ref_id": "b9", "title": "Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure", "year": "2020" }, { "authors": "Yiyang Li; Hongqiu Wu; Hai Zhao", "journal": "International Committee on Computational Linguistics", "ref_id": "b10", "title": "Semantic-preserving adversarial code comprehension", "year": "2022" }, { "authors": "Yiyang Li; Hai Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Self-and pseudo-selfsupervised prediction of speaker and key-utterance for multi-party dialogue reading comprehension", "year": "2021" }, { "authors": "Yiyang Li; Hai Zhao; Zhuosheng Zhang", "journal": "", "ref_id": "b12", "title": "Back to the future: Bidirectional information decoupling network for multi-turn dialogue modeling", "year": "2022" }, { "authors": "Ryan Lowe; Nissan Pow; Iulian Serban; Joelle Pineau", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems", "year": "2015" }, { "authors": "Zhuosheng Xinbei ; Ma; Hai Zhang; Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Structural characterization for dialogue disentanglement", "year": "2022" }, { "authors": "Sewon Min; Danqi Chen; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "A discrete hard EM approach for weakly supervised question answering", "year": "2019" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b16", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Zhouxing Shi; Minlie Huang", "journal": "AAAI Press", "ref_id": "b17", "title": "A deep sequential model for discourse parsing on multi-party dialogues", "year": "2019-01-27" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b18", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Weishi Wang; C H Steven; Shafiq Hoi; Joty", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Response selection for multi-party conversations with dynamic topic tracking", "year": "2020" }, { "authors": "Rui Zhang; Honglak Lee; Lazaros Polymenakos; Dragomir R Radev", "journal": "AAAI Press", "ref_id": "b20", "title": "Addressee and response selection in multi-party conversations with speaker interaction rnns", "year": "2018-02-02" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; Bill Dolan", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "DIALOGPT : Largescale generative 
pre-training for conversational response generation", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 70.87, 750.02, 221.64, 24.18 ], "formula_id": "formula_0", "formula_text": "X = {S 1 : U 1 [SEP]S 2 : U 2 [SEP] . . . S t-1 : U t-1 [SEP]S t :}," }, { "formula_coordinates": [ 4, 103.6, 260.79, 185.53, 9.81 ], "formula_id": "formula_1", "formula_text": "p(c, z, r) = p(c) • p(z|c) • p(r|c, z)(1)" }, { "formula_coordinates": [ 4, 103.32, 331.28, 185.82, 83.95 ], "formula_id": "formula_2", "formula_text": "p(z|c, r) = p(c, z, r) p(c, r) = p(c) • p(z|c) • p(r|c, z) p(c) • p(r|c) = p(z|c) • p(r|c, z) p(r|c)(2)" }, { "formula_coordinates": [ 4, 135.38, 518.63, 153.76, 9.81 ], "formula_id": "formula_3", "formula_text": "p(z|c, r) ∝ p(r|c, z)(3)" }, { "formula_coordinates": [ 4, 116.03, 564.28, 173.1, 30.52 ], "formula_id": "formula_4", "formula_text": "p(z i |c, r) = p(r|c, z i ) t-1 j=1 p(r|c, z j )(4)" }, { "formula_coordinates": [ 4, 312.67, 485.18, 211.74, 34.41 ], "formula_id": "formula_5", "formula_text": "L G = - N k=1 n k i=1 log p w k i | w k <i , c k t , z k t ; θ (5)" }, { "formula_coordinates": [ 4, 342.78, 543.82, 62.73, 15.22 ], "formula_id": "formula_6", "formula_text": "r k t = {w k i } n i i=1" }, { "formula_coordinates": [ 5, 80.98, 360.47, 208.16, 32.49 ], "formula_id": "formula_7", "formula_text": "(c, r; θ) = log p(r|c; θ) = log i p(r, z i |c; θ) (6)" }, { "formula_coordinates": [ 5, 99.42, 460.47, 189.72, 41.16 ], "formula_id": "formula_8", "formula_text": "ˆ (c, r; θ) = q(z i ) i log p(r, z i |c; θ) q(z) = p(z t |c t , r t ; θ) (7)" }, { "formula_coordinates": [ 5, 93.21, 558.66, 195.93, 166.7 ], "formula_id": "formula_9", "formula_text": "(c, r; θ) = log i p(r, z i |c; θ) = log i q(z i ) • p(r, z i |c; θ) q(z i ) ≥ i q(z i ) • log p(r, z i |c; θ) q(z i ) = i q(z i ) • log p(r, z i |c; θ) - i q(z i ) • log q(z i ) = ˆ (c, r; θ) + H q(z)(8)" } ]
10.18653/v1/2022.naacl-industry.24
2023-05-21
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b3", "b52", "b27", "b10", "b6", "b39", "b34", "b14", "b13", "b25", "b54", "b50", "b16", "b24", "b21" ], "table_ref": [], "text": "Knowledge graphs (KGs) (Bollacker et al., 2008;Vrandecic and Krötzsch, 2014;Lehmann et al., 2015), which consist of a set of facts represented in the form of a (head entity, relation, tail entity) triplet, can store a large amount of world knowledge. In natural language applications, language models (LMs) (Devlin et al., 2019;Brown et al., 2020) are commonly used; however, their knowledge internalized in parameters is often incomplete, inaccurate, and outdated. Therefore, several recent works suggest augmenting LMs with facts from KGs, for example, in question answering (Oguz et al., 2022;Ma et al., 2022) and dialogue generation (Galetzka et al., 2021;Kang et al., 2022b).\nHowever, despite the broad applications of the KGs, the existing mechanism for retrieving facts from them are, in many cases, unnecessarily complex. In particular, to retrieve facts from KGs, existing work (Fu et al., 2020;Lan et al., 2021;Wang et al., 2021) relies on three sequential steps, consisting of span detection, entity disambiguation, and relation classification, as illustrated in Figure 1a. For example, given an input text: \"Where was Michael Phelps born?\", they first detect a span of an entity within the input, which corresponds to \"Michael Phelps\". Then, they match the entity mention in the input to an entity id in the KG. Those two steps are often called entity linking. Finally, among 91 relations associated with the entity of Michael Phelps, they select one relation relevant to the input, namely \"place of birth\".\nThe aforementioned approach has a couple of drawbacks. First, all three sub-modules in the existing pipeline require module-specific labels in addition to query-triplet pairs for training. However, in real-world, high-quality training data is limited, and annotating them requires significant costs. Second, such a pipeline approach is prone to error propagation across steps (Singh et al., 2020;Han et al., 2020). For example, if the span detection fails, the subsequent steps, such as relation classification, are likely to make incorrect predictions as well. Third, certain modules, that match entities in queries to KGs or predict relations over KGs, are usually not generalizable to emerging entities and relations and cannot be applied to different KGs. It would be preferable to have a method that does not require KG-specific training and inference.\nTo tackle these limitations, we propose to directly retrieve the relevant triplets related to a natural language query by computing their similarities over a shared representation space (see Figure 1b). The design of our direct retrieval framework is motivated by a pioneering work of open-domain question answering with documents (Karpukhin et al., 2020), which showed the possibility of dense retrieval with simple vector similarities between the question and document embeddings. However, in contrast to the document retrieval scenario where documents have sufficient contexts to embed, it is unclear whether the LM can still effectively embed facts represented in the short triplet form for retrieval. 
Also, compared to document retrieval, which additionally requires a reader to extract only the relevant piece of knowledge, our fact retriever itself can directly provide the relevant knowledge.
To realize our fact retriever, we train it by maximizing similarities between representations of relevant pairs of input texts and triplets while minimizing similarities of irrelevant pairs, where we use LMs for encoding them. We note that this process requires only text-triplet pairs without using extra labels, unlike the conventional pipeline approach for fact retrieval. After training, we index all triplets in the KG with the trained encoder in an offline manner, and, given the input query, we return the nearest triplets over the embedding space. This procedure simplifies the conventional three steps for retrieving facts from KGs into one. To further efficiently search the relevant triplets, we approximate the similarity calculation with vector quantization and hierarchical search based on clustering (Johnson et al., 2021). We further note that, since we embed triplets using the LM, our retriever can generalize to different KGs without any modification, unlike some conventional retrieval systems that require additional training to learn a new KG schema with distinct entity and relation types. We refer to our framework as Direct Fact Retrieval (DiFaR).
We experimentally demonstrate that our direct retrieval on KGs works well; however, the fact represented in the triplet form has a limited context, since it consists of only two entities and one relation. Also, similarity calculation with the independently represented input text and triplets is arguably simple, and might be less effective. Therefore, to further improve the retriever performance, we additionally use a reranker, whose goal is to calibrate the ranks of retrieved triplets for the input text. In particular, we first retrieve the k nearest facts with the direct retriever, and then use another LM which directly measures the similarity by encoding the input text and the triplet simultaneously. Moreover, another objective of the reranker is to filter out irrelevant triplets, which are the most confusing ones in the embedding space of the direct retriever. Therefore, to effectively filter them, we train the reranker to minimize similarities between the input text and the nearest yet irrelevant triplets.
We evaluate our DiFaR framework on fact retrieval tasks across two different domains of question answering and dialogue, whose goals are to retrieve relevant triplets in response to the given query. The experimental results show that our DiFaR framework outperforms relevant baselines that use conventional pipeline approaches to retrieve facts on KGs, and also show that our reranking strategy significantly improves retrieval performances.
The detailed analyses further support the efficacy of our DiFaR framework, with its great simplicity.\nOur contributions in this work are as follows:\n• We present a novel direct fact retrieval (Di-FaR) framework from KGs, which leverages only the representational similarities between the query and triplets, simplifying the conventional three steps: entity detection, disambiguation, and relation classification, into one.\n• We further propose a reranking strategy, to tackle a limitation of little context in facts, for direct knowledge retrieval, which is trained with samples confused by the direct retriever.\n• We validate our DiFaR on fact retrieval tasks, showing that it significantly outperforms baselines on unsupervised and supervised setups." }, { "figure_ref": [], "heading": "Background and Related Work", "publication_ref": [ "b3", "b52", "b48", "b10", "b32", "b7", "b60", "b14" ], "table_ref": [], "text": "Knowledge Graphs Knowledge Graphs (KGs) are factual knowledge sources (Bollacker et al., 2008;Vrandecic and Krötzsch, 2014), containing a large number of facts, represented in a symbolic triplet form: (head entity, relation, tail entity).\nSince some natural language applications require factual knowledge (Schneider et al., 2022), existing literature proposes to use knowledge in KGs, and sometimes along with language models (LMs) (Devlin et al., 2019). To mention a few, in question answering domains, facts in KGs can directly be answers for knowledge graph question answering tasks (Lukovnikov et al., 2017;Chakraborty et al., 2019), but also they are often augmented to LMs to generate knowledge-grounded answers (Zhang et al., 2019;Kang et al., 2022a). Similarly, in dialogue generation, some existing work augments LMs with facts from KGs (Galetzka et al., 2021;Kang et al., 2022b). However, prior to utilizing facts in KGs, fact retrieval -selection of facts relevant to the input context -should be done in advance, whose results substantially affect downstream performances. In this work, we propose a conceptually simple yet effective framework for fact retrieval, motivated by information retrieval." }, { "figure_ref": [], "heading": "Information Retrieval", "publication_ref": [ "b44", "b45", "b38", "b20", "b10", "b30", "b24", "b57", "b42", "b58", "b12", "b1", "b33", "b29", "b4", "b17", "b35", "b8", "b54", "b50", "b16", "b39", "b34" ], "table_ref": [], "text": "The goal of most information retrieval work is to retrieve relevant documents in response to a query (e.g., question). Early work relies on term-based matching algorithms, which count lexical overlaps between the query and documents, such as TF-IDF and BM25 (Robertson et al., 1994;Robertson and Zaragoza, 2009). However, they are vulnerable to a vocabulary mismatch problem, where semantically relevant documents are lexically different from queries (Nogueira et al., 2019;Jeong et al., 2021). Due to such the issue, recently proposed work instead uses LMs (Devlin et al., 2019;Liu et al., 2019) to encode queries and documents, and uses their representational similarities over a latent space (Karpukhin et al., 2020;Xiong et al., 2021;Qu et al., 2021). They suggest their huge successes are due to the effectiveness of LMs in embedding documents. However, they focus on lengthy documents having extensive context, and it is unclear whether LMs can still effectively represent each fact, succinctly represented with two entities and one relation in the triplet form, for its retrieval. 
In this work, we explore this new direction by formulating the fact retrieval problem as the information retrieval problem done for documents.\nKnowledge Retrieval from KGs Since KGs have a large number of facts, it is important to bring only the relevant piece of knowledge given an input query. To do so, one traditional approach uses neural semantic parsing-based methods (Yih et al., 2015;Dong and Lapata, 2016;Bao et al., 2016;Luo et al., 2018) aiming to translate natural language inputs into logical query languages, such as SPARQL1 and λ-DCS (Liang, 2013), executable over KGs. However, they have limitations in requiring additional labels and an understanding of logical forms of queries. Another approach is to use a pipeline (Bordes et al., 2014;Hao et al., 2017;Mohammed et al., 2018;Chen et al., 2019;Wang et al., 2021) consisting of three subtasks: entity span detection, entity disambiguation, and relation classification. However, they similarly require additional labels on training each subcomponent, and this pipeline approach suffers from errors that are propagated from previous steps (Singh et al., 2020;Han et al., 2020). While recent work (Oguz et al., 2022) proposes to retrieve textual triplets from KGs based on their representational similarities to the input text with the information retrieval mechanism, they still rely on entity linking (e.g., span detection and entity disambiguation) first, thus identically having limitations of the pipeline approach. Another recent work (Ma et al., 2022) merges a set of facts associated with each entity into a document and performs document-level retrieval. However, the document retrieval itself can be regarded as entity linking, and also the overall pipeline requires an additional reader to extract only the relevant entity in retrieved documents. In contrast to them, we directly retrieve facts from the input query based on their representational similarities, which simplifies the conventional three-step approach including entity linking into one single retrieval step.\n3 DiFaR: Direct Fact Retrieval" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b39", "b34", "b14", "b4", "b54" ], "table_ref": [], "text": "We formally define a KG and introduce a conventional mechanism for retrieving facts from the KG.\nKnowledge Graphs Let E be a set of entities and R be a set of relations. Then, one particular fact is defined as a triplet: t = (e h , r, e t ) ∈ E × R × E, where e h and e t are head and tail entities, respectively, and r is a relation between them. Also, a knowledge graph (KG) G is defined as a set of fac-tual triplets: G = {(e h , r, e t )} ⊆ E × R × E. Note that this KG is widely used as a useful knowledge source for many natural language applications, including question answering and dialogue generation (Oguz et al., 2022;Ma et al., 2022;Galetzka et al., 2021;Kang et al., 2022b). However, the conventional mechanism to access facts in KGs is largely complex, which may hinder its broad applications, which we describe in the next paragraph.\nExisting Knowledge Graph Retrieval The input of most natural language tasks is represented as a sequence of tokens:\nx = [w 1 , w 2 , . . . , w |x| ].\nSuppose that, given the input x, t + is a target triplet to retrieve2 . 
Then, the objective of the conventional fact retrieval process for the KG G (Bordes et al., 2014; Wang et al., 2021) is, in many cases, formalized as the following three sequential tasks:
t^+ = \arg\max_{t \in G} \; p_\theta(t | e, x, G) \, p_\phi(e | m, x) \, p_\psi(m | x), \quad (1)
where p_ψ(m|x) is the model for mention detection, with m as the detected entity mention within the input x, p_φ(e|m, x) is the model for entity disambiguation, and p_θ(t|e, x, G) is the model for relation classification, all of which are individually parameterized by ψ, φ, and θ, respectively. However, there are a couple of limitations in such three-step approaches. First, they are vulnerable to the accumulation of errors: for example, if the first two steps, consisting of span detection and entity disambiguation, are wrong and we end up with an incorrect entity irrelevant to the given query, we cannot find the relevant triplet in the final relation prediction stage. Second, due to their decomposed structures, the three sub-modules are difficult to train in an end-to-end fashion, while requiring labels for training each sub-module. For example, to train p_ψ(m|x), which aims to predict the mention boundary of the entity within the input text, they additionally require annotated pairs of the input text and its entity mentions: {(x, m)}. Finally, certain modules are usually limited to predicting entities E and relations R specific to the particular KG schema observed during training. Therefore, they are neither directly applicable to unseen entities and relations, nor to different KGs." }, { "figure_ref": [], "heading": "Direct Knowledge Graph Retrieval", "publication_ref": [ "b24" ], "table_ref": [], "text": "To tackle the aforementioned challenges of the existing fact retrieval approaches on KGs, we present the direct knowledge retrieval framework. In particular, our objective is simply formulated with a single sentence encoder model E_θ without introducing extra variables (e.g., m and e), as follows:
t^+ = \arg\max_{t \in G} f(E_\theta(x), E_\theta(t)), \quad (2)
where f is a non-parametric scoring function that calculates the similarity between the input text representation E_θ(x) and the triplet representation E_θ(t), for example, by using the dot product. Note that, in Equation 2, we use the sentence encoder E_θ to represent the triplet t. To do so, we first symbolize the triplet as a sequence of tokens: t = [w_1, w_2, ..., w_{|t|}], which is constructed from entity and relation tokens, with the separation token (i.e., a special token, [SEP]) between them. Then, we simply forward the triplet tokens to E_θ to obtain the triplet representation. While we use a single model for encoding both input queries and triplets, we might alternatively represent them with different encoders, which we leave as future work.
Training After formalizing the goal of our direct knowledge retrieval framework in Equation 2, the next step is to construct the training samples and the optimization objective to train the model (i.e., E_θ). According to Equation 2, the goal of our model is to minimize distances between the input text and its relevant triplets over an embedding space, while maximizing distances of irrelevant pairs.
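To illustrate the direct retrieval formulation above, here is a minimal sketch of triplet verbalization and dot-product scoring with an off-the-shelf sentence encoder. It is only an illustration under assumptions: the checkpoint name follows the implementation details in Appendix A.3, the toy triplets are made up for the example query, and in the actual framework the scores of Eq. (2) are computed against an index of all KG triplets rather than a Python list.

```python
from sentence_transformers import SentenceTransformer, util

# Sentence encoder checkpoint listed in the implementation details (Appendix A.3).
encoder = SentenceTransformer("sentence-transformers/msmarco-distilbert-base-v3")

def verbalize(triplet):
    head, relation, tail = triplet
    # A triplet is symbolized as a token sequence with [SEP] between its elements.
    return f"{head} [SEP] {relation} [SEP] {tail}"

query = "Where was Michael Phelps born?"
triplets = [
    ("Michael Phelps", "place of birth", "Baltimore"),
    ("Michael Phelps", "sport", "swimming"),
]

q_emb = encoder.encode(query, convert_to_tensor=True)
t_emb = encoder.encode([verbalize(t) for t in triplets], convert_to_tensor=True)

scores = util.dot_score(q_emb, t_emb)   # f(E(x), E(t)) as a dot product, Eq. (2)
best = int(scores.argmax())
print(triplets[best])
```

In the full framework, the triplet embeddings are produced once offline and searched with an approximate nearest neighbor index, and the encoder itself is trained with the contrastive objective described next.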
Therefore, following the existing dense retrieval work for documents (Karpukhin et al., 2020), we use a contrastive loss as our objective to generate an effective representation space, formalized as follows:
\min_\theta \; -\log \frac{\exp(f(E_\theta(x), E_\theta(t^+)))}{\sum_{(x, t) \in \tau} \exp(f(E_\theta(x), E_\theta(t)))}, \quad (3)
where τ contains the set of pairs between the input text and all triplets in the same batch. In other words, (x, t^+) ∈ τ is the positive pair whose similarity is maximized, whereas the others are negative pairs whose similarities are minimized. Also, exp(·) is the exponential function.
Inference After training, we index the representations of all triplets in the KG with the trained encoder in an offline manner. Then, given an input query, we use the Faiss library (Johnson et al., 2021) for the nearest neighbor calculation, since it provides extremely efficient search logic, also known to be applicable to billions of dense vectors; therefore, it is suitable for our fact retrieval from KGs. Moreover, to further reduce the search cost, we use an approximate nearest neighbor search algorithm, namely Hierarchical Navigable Small World search with a Scalar Quantizer. This mechanism not only quantizes the dense vectors to reduce the memory footprint, but also builds hierarchical graph structures to efficiently find the nearest neighbors with few explorations. We term our Direct Fact Retrieval method DiFaR." }, { "figure_ref": [], "heading": "Reranking for Accurate Fact Retrieval", "publication_ref": [], "table_ref": [], "text": "The fact retrieval framework outlined in Section 3.2 simplifies the conventional three subtasks used to access the knowledge into a single retrieval step. However, contrary to the document retrieval case, a fact is represented in the most compact triplet form, which consists of only two entities and one relation. Therefore, it might be suboptimal to rely on the similarity calculated from the independently represented input text and triplets as in Equation 2. Also, it is particularly important to find the correct triplet within a small k (e.g., k = 1) of the top-k retrieved triplets, since, considering the scenario of augmenting LMs with facts, forwarding several triplets to LMs yields huge computational costs.
To tackle such challenges, we propose to further calibrate the ranks of the triplets retrieved by our DiFaR framework. Specifically, we first obtain the k nearest facts in response to the input query over the embedding space, by using the direct retrieval mechanism defined in Section 3.2. Then, we use another LM, E_φ, that returns the similarity score of the pair of the input text and the retrieved triplet by encoding them simultaneously, unlike the fact retrieval in Equation 2. In other words, we first concatenate the token sequences of the input text and the triplet: [x, t], where [·] is the concatenation operation, and then forward it to E_φ([x, t]). By doing so, the reranking model E_φ can effectively consider token-level relationships between the two inputs (i.e., input queries and triplets), which leads to accurate calibration of the ranks of the triplets retrieved by DiFaR, especially for the top-k ranks with small k.
For training, similar to the objective of DiFaR defined in Section 3.2, we aim to maximize the similarities of positive pairs: {(x, t^+)}, while minimizing the similarities of irrelevant pairs: {(x, t)} \ {(x, t^+)}. To do so, we use a binary cross-entropy loss. However, contrary to the previous negative sampling strategy defined in Section 3.2, where we randomly sample the negative pairs, in this reranker training we additionally manipulate them by using the initial retrieval results from our DiFaR.
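A minimal sketch of the in-batch contrastive objective of Eq. (3) in PyTorch is given below; it assumes the query and triplet embeddings have already been produced by the encoder, and the function and variable names are illustrative rather than taken from the released code. The reranker-specific negative sampling continues after this sketch.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor, triplet_emb: torch.Tensor) -> torch.Tensor:
    """Contrastive loss of Eq. (3) with in-batch negatives.

    query_emb:   (B, d) embeddings E(x) of the input texts.
    triplet_emb: (B, d) embeddings E(t+) of their paired gold triplets; row i of
                 each matrix is the positive for row i of the other, and the
                 remaining rows in the batch act as negatives.
    """
    scores = query_emb @ triplet_emb.T                      # (B, B) dot-product similarities
    labels = torch.arange(scores.size(0), device=scores.device)
    # Row-wise cross entropy equals -log softmax of the positive pair's score.
    return F.cross_entropy(scores, labels)

# Toy usage with random tensors standing in for encoder outputs.
queries, triplets = torch.randn(4, 768), torch.randn(4, 768)
print(float(in_batch_contrastive_loss(queries, triplets)))
```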
The intuition here is that the irrelevant triplets included in the k nearest neighbors to the input query are the most confusing examples, which are not yet filtered out by the DiFaR model. Here, the goal of the reranking strategy is to further filter them by refining the ranks of the k retrieved triplets; therefore, to achieve this goal, we include them as negative samples during reranker training. Formally, let τ = {(x, t)} be the set of pairs of the input query x and its k nearest facts retrieved by DiFaR. Then, the negative samples for the reranker are defined by excluding the positive pairs, formalized as follows: τ \ {(x, t^+)}. Note that constructing the negative samples with retrieval at every training iteration is costly; therefore, we create them at intervals of several epochs (e.g., ten), and we use only a subset of the triplets in the KG during this retrieval. Our framework with the reranking strategy is referred to as Direct Fact Retrieval with Reranking (DiFaR 2 )." }, { "figure_ref": [], "heading": "Experimental Setups", "publication_ref": [], "table_ref": [], "text": "We explain datasets, models, metrics, and implementations. For additional details, see Appendix A." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [], "table_ref": [], "text": "We validate our Direct Fact Retrieval (DiFaR) on fact retrieval tasks, whose goal is to retrieve relevant triplets over KGs given the query. We use four datasets on question answering and dialogue tasks." }, { "figure_ref": [], "heading": "Question Answering", "publication_ref": [ "b5", "b2", "b59", "b49", "b3", "b52" ], "table_ref": [], "text": "The goal of KG-based question answering (QA) tasks is to predict factual triplets in response to the given question, where the predicted triplets are direct answers. For this task, we use three datasets, namely SimpleQuestions (Bordes et al., 2015), WebQuestionsSP (WebQSP) (Berant et al., 2013; Yih et al., 2016), and Mintaka (Sen et al., 2022). Note that SimpleQuestions and WebQSP are designed with the Freebase KG (Bollacker et al., 2008), and Mintaka is designed with the Wikidata KG (Vrandecic and Krötzsch, 2014).
Dialogue In addition to QA, we evaluate our DiFaR on KG-based dialogue generation, whose one subtask is to retrieve relevant triplets on the KG that provide factual knowledge to respond to a given dialogue. For this task, we use the OpenDialKG dataset (Moon et al., 2019).
Table 1: Main results on the question answering domain for SimpleQuestions, WebQSP, and Mintaka datasets. We emphasize the best scores in bold, except for the incomparable model: Retrieval with Gold Entities, which uses labeled entities in inputs." }, { "figure_ref": [], "heading": "SimpleQuestions", "publication_ref": [], "table_ref": [], "text": "[Table 1 header: SimpleQuestions, WebQSP, and Mintaka, each reported with MRR, Hits@1, and Hits@10 per method type.]
Knowledge Graphs Following the experimental setups of prior work (see Appendix A.1), we use the Wikidata KG (Vrandecic and Krötzsch, 2014) for our experiments on QA, and use their dataset processing settings. For OpenDialKG, we use Freebase." }, { "figure_ref": [], "heading": "Baselines and Our Models", "publication_ref": [ "b32" ], "table_ref": [], "text": "We compare our DiFaR framework against other relevant baselines that involve subtasks such as entity detection, disambiguation, and relation prediction. Note that most existing fact retrieval work either uses labeled entities in queries, or uses additional labels for training subcomponents; therefore, they are not comparable to DiFaR, which uses only pairs of input texts and relevant triplets.
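To make the reranking step formalized above concrete, the following is a minimal sketch: the k nearest triplets from DiFaR are re-scored by a cross-encoder that reads the query and a verbalized triplet jointly. The checkpoint name follows the implementation details in Appendix A.3; the verbalization format and all other names are illustrative assumptions.

```python
from sentence_transformers import CrossEncoder

# Cross-encoder checkpoint listed in the implementation details (Appendix A.3).
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query, retrieved_triplets, top_k=10):
    """Re-score the k nearest triplets from DiFaR by encoding [x, t] jointly."""
    candidates = retrieved_triplets[:top_k]
    pairs = [(query, f"{h} [SEP] {r} [SEP] {t}") for h, r, t in candidates]
    scores = reranker.predict(pairs)  # one relevance score per (query, triplet) pair
    order = sorted(range(len(candidates)), key=lambda i: float(scores[i]), reverse=True)
    return [candidates[i] for i in order]

reranked = rerank(
    "Where was Michael Phelps born?",
    [("Michael Phelps", "sport", "swimming"),
     ("Michael Phelps", "place of birth", "Baltimore")],
)
print(reranked[0])
```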
For evaluations, we include models categorized as follows:\nRetrieval with Entity Linking: Factoid QA by Retrieval: It retrieves entities and relations independently based on their similarities with the input query (Lukovnikov et al., 2017).\nOur Models: Our Direct Knowledge Retrieval (DiFaR) directly retrieves the nearest triplets to the input text on the latent space. DiFaR with Reranking (DiFaR 2 ) is also ours, which includes a reranker to calibrate retrieved results.\nRetrieval with Gold Entities: It uses labeled entities in inputs and retrieves triplets based on their associated triplets. It is incomparable to others." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b57", "b19" ], "table_ref": [], "text": "We measure the retrieval performances of models with standard ranking metrics, which are calculated by ranks of correctly retrieved triplets. In particular, we use Hits@K which measures whether retrieved Top-K triplets include a correct answer or not, and Mean Reciprocal Rank (MRR) which measures the rank of the first correct triplet for each input text and then computes the average of reciprocal ranks of all results. Following exiting document retrieval work (Xiong et al., 2021;Jeong et al., 2022), we consider top-1000 retrieved triplets when calculating MRR, since considering ranks of all triplets in KGs are computationally prohibitive." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b37" ], "table_ref": [], "text": "We use a distilbert3 as a retriever for all models, and a lightweight MiniLM model4 as a reranker, both of which are pre-trained with the MSMARCO dataset (Nguyen et al., 2016). During reranking, we sample top-100 triplets retrieved from DiFaR. We use off-the-shelf models for unsupervised settings, and further train them for supervised settings." }, { "figure_ref": [], "heading": "Experimental Results and Analyses", "publication_ref": [], "table_ref": [ "tab_2", "tab_2", "tab_2", "tab_3" ], "text": "Main Results We first conduct experiments on question answering domains, and report the results in Table 1. As shown in Table 1, our DiFaR with Reranking (DiFaR 2 ) framework significantly outperforms all baselines on all datasets across both unsupervised and supervised experimental settings with large margins. Also, we further experiment on dialogue domain, and report results in Table 2.\nAs shown in Table 2, similar to the results on QA domains, our DiFaR 2 framework outperforms the relevant baselines substantially. These results on two different domains demonstrate that our DiFaR 2 framework is highly effective in fact retrieval tasks.\nTo see the performance gains from our reranking strategy, we compare the performances between our model variants: DiFaR and DiFaR 2 . As shown in Table 1 andTable 2, compared to DiFaR, DiFaR 2 including the reranker brings huge performance improvements, especially on the challenging datasets: Mintaka and OpenDialKG. However, we consistently observe that our DiFaR itself can also show superior performances against all baselines except for the model of Factoid QA by Retrieval on the SimpleQuestions dataset. The inferior performance of our DiFaR on this SimpleQuestions dataset is because, its samples are automatically constructed from facts in KGs; therefore, it is extremely simple to extract entities and predict relations in response to the input query. 
On the other hand, our DiFaR framework sometimes outperforms the incomparable model: Retrieval with Gold Entities, which uses the labeled entities in the input queries. This is because that model is restricted to retrieving facts associated with the entities in input queries; meanwhile, our DiFaR is not limited to query entities, thanks to the direct retrieval scheme.
Analyses on Zero-Shot Generalization Our DiFaR is generalizable not only to different datasets with the same KG, but also to ones with other KGs, without any modifications. This is because it retrieves triplets based on their text-level similarities to input queries and does not leverage a particular schema of entities and relations, unlike the existing entity linking methods.
Figure 2: Breakdown results by single and multi-hops. We report ratios of single and multi-hop samples on the left side of each subfigure, and Hits@1 of DiFaR and DiFaR 2 across single and multi-hops on the middle and right. We exclude the SimpleQuestions dataset that consists of single-hop questions.
To demonstrate this, we perform experiments on zero-shot transfer learning, where we apply the model, trained on the WebQSP dataset with the Wikidata KG, to different datasets with the same KG and also to ones with the different Freebase KG. As shown in Table 3, our DiFaR frameworks are effectively generalizable to different datasets and KGs; meanwhile, the pipeline approaches involving entity linking are not generalizable to different KGs, and are inferior to ours." }, { "figure_ref": [], "heading": "Analyses on Single-and Multi-Hops", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "To see whether our DiFaR frameworks can also perform challenging multi-hop retrieval that requires selecting triplets not directly associated with entities in input queries, we break down the performances by single- and multi-hop type queries. As shown in Figure 2, our DiFaR can directly retrieve relevant triplets regardless of whether they are associated with entities in input queries (single-hop) or not (multi-hop), since it does not rely on entities in queries for fact retrieval. Also, we observe that our reranking strategy brings huge performance gains, especially on multi-hop type queries. However, due to the intrinsic complexity of multi-hop retrieval, its performances are relatively lower than those in single-hop cases. Therefore, despite the fact that the majority of queries are answerable with single-hop retrieval and that our DiFaR can handle multi-hop queries, it is valuable to further extend the model for multi-hop retrieval, which we leave as future work.
Figure 3: Performances and efficiencies of DiFaR 2 with varying K, where we change the number of Top-K retrieved triplets when leveraging the reranking strategy. We report results with the relative improvement (%) to our DiFaR without reranking. We report the time as an average over 30 runs.
We also provide examples of facts retrieved by our DiFaR framework in Table 4. As shown in Table 4, since the LM, which is used for encoding both the question and the triplets for retrieval, might learn background knowledge about them during pre-training, our DiFaR framework can directly retrieve relevant triplets even for complex questions.
For instance, in the first example of Table 4, the LM already knows who the US president in 1963 was, and directly retrieves that person's religion.
Additionally, we provide more retrieval examples of our DiFaR framework in Appendix B.2 with Table 6, for both single- and multi-hop questions.
Analyses on Reranking with Varying K While we show huge performance improvements with our reranking strategy in Table 1 and Table 2, its performances and efficiencies depend on the number of retrieved Top-K triplets. Therefore, to further analyze it, we vary the number K and report the performances and efficiencies in Figure 3. As shown in Figure 3, the performances increase rapidly until Top-10 and saturate after that. Also, the time for reranking increases linearly as we increase K, and, at Top-10, the reranking mechanism takes less than 20% of the time required for the initial retrieval. These results suggest that it might be beneficial to set the K value to around 10." }, { "figure_ref": [], "heading": "Sensitivity Analyses on Architectures", "publication_ref": [ "b37" ], "table_ref": [ "tab_5" ], "text": "To see how much difference different architectures of retrievers and rerankers make in performance, we perform sensitivity analyses by varying their backbones.
Figure 4: Entity linking results, where we measure the performances on benchmark datasets with Wikidata and Freebase KGs. Note that entity mentions of the SimpleQuestions dataset are not available; therefore, we cannot fine-tune existing entity linkers, which additionally require mention labels, unlike ours.
We use available models in the huggingface model library. As shown in Table 5, we observe that the backbones pre-trained on the MSMARCO dataset (Nguyen et al., 2016) show superior performance compared to the naive backbones, namely DistilBERT and MiniLM, on both retrievers and rerankers. Also, performance differences between models pre-trained on the same dataset (e.g., MSMARCO-TAS-B and MSMARCO-Distil) are marginal. These two results suggest that the knowledge required for document retrieval is also beneficial to fact retrieval, and that DiFaR frameworks are robust across different backbones.
Analyses on Entity Linking While our DiFaR framework is not explicitly trained to predict entity mentions in the input query and their ids in the KG, during the training of our DiFaR it might learn the knowledge of matching the input text to its entities. To demonstrate this, we measure entity linking performances by checking whether the retrieved triplets contain the labeled entities in the input query. As shown in Figure 4, our DiFaR surprisingly outperforms entity linking models. This might be because there is no accumulation of errors across the entity linking steps, which are conventionally done with mention detection and entity disambiguation, thanks to direct retrieval with end-to-end learning; but also because the fact in the triplet form carries more useful information for retrieval than the entity alone.
In this work, we focused on the limitations of the conventional fact retrieval pipeline, usually consisting of entity mention detection, entity disambiguation, and relation classification, which not only requires additional labels for training each subcomponent but also is vulnerable to error propagation across submodules. To this end, we proposed the extremely simple Direct Fact Retrieval (DiFaR) framework. During training, it requires only pairs of input texts and relevant triplets, while, in inference, it directly retrieves relevant triplets based on their representational similarities to the given query.
Further, to calibrate the ranks of retrieved triplets, we proposed to use a reranker. We demonstrated that our DiFaR outperforms existing fact retrieval baselines despite its great simplicity, but also ours with the reranking strategy significantly improves the performances; for the first time, we revealed that fact retrieval can be easily yet effectively done. We believe our work paves new avenues for fact retrieval, which leads to various follow-up work." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b49" ], "table_ref": [], "text": "In this section, we faithfully discuss the current limitations and potential avenues for future research.\nFirst of all, while one advantage of our Direct Fact Retrieval (DiFaR) is its simplicity, this model architecture is arguably simple and might be less effective in handling very complex queries (Sen et al., 2022). For example, as shown in Figure 2, even though our DiFaR framework can handle the input queries demanding multi-hop retrieval, the performances on such queries are far from perfect. Therefore, future work may improve DiFaR by including more advanced techniques, for example, further traversing over the KG based on the retrieved facts from our DiFaR. Also, while we use only the text-based similarities between queries and triplets with LMs, it is interesting to model triplets over KGs based on their graph structures and blend their representations with representations from LMs to generate more effective search space.\nAlso, we focus on retrieval datasets in English. Here we would like to note that, in fact retrieval, most datasets are annotated in English, and, based on this, most existing work evaluates model performances on English samples. However, handling samples in various languages is an important yet challenging problem, and, as future work, one may extend our DiFaR to multilingual settings." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "For an input query, our Direct Fact Retrieval (Di-FaR) framework enables the direct retrieval of the factual knowledge from knowledge graphs (KGs), simplifying the conventional pipeline approach consisting of entity detection, entity disambiguation, and relation classification. However, the performance of our DiFaR framework is still not perfect, and it may retrieve incorrect triplets in response to given queries. Therefore, for the high-risk domains, such as biomedicine, our DiFaR should be carefully used, and it might be required to analyze retrieved facts before making the critical decision." }, { "figure_ref": [], "heading": "A Additional Experimental Setups", "publication_ref": [], "table_ref": [], "text": "Here we provide additional experimental setups." }, { "figure_ref": [], "heading": "A.1 Datasets", "publication_ref": [ "b5", "b2", "b59", "b49", "b3", "b52", "b36", "b11", "b46", "b52", "b3", "b11", "b46" ], "table_ref": [], "text": "Question Answering In KG-based question answering datasets, there exist pairs of questions and their relevant triplets, and we use them for training and evaluating models. We use the following three datasets: SimpleQuestions (Bordes et al., 2015), WebQuestionsSP (WebQSP) (Berant et al., 2013;Yih et al., 2016), and Mintaka (Sen et al., 2022), and here we describe them in details. First of all, the SimpleQuestions dataset is designed with the Freebase KG (Bollacker et al., 2008), which consists of 19,481, 2,821, and 5,622 samples on training, validation, and test sets. 
Similarly, the WebQSP dataset, which is a refined from the We-bQuestions dataset by filtering out samples with invalid annotations, is annotated with the Freebase KG, consisting of 2,612 and 1,375 samples on training and test sets, and we further sample 20% of training samples for validation. Lastly, the Mintaka dataset is recently designed for complex question answering, which is collected from crowdsourcing and annotated with the Wikidata KG (Vrandecic and Krötzsch, 2014). Among eight different languages, we use questions in English, which consist of 14,000, 2,000, and 4,000 samples for training, validation, and test sets, respectively.\nDialogue Similar to the KG-based question answering datasets, the dataset on KG-based dialogue generation domain has pairs of the input query and its relevant triplets, where the input query consists of the user's utterance and dialogue history, and the annotated triplets are the useful knowledge source to answer the query. For this dialogue domain, we use the OpenDialKG dataset (Moon et al., 2019), which is collected with two parallel corpora of open-ended dialogues and a Freebase KG. We randomly split the dataset into training, validation, and test sets with ratios of 70%, 15%, and 15%, respectively, and preprocess it following Kang et al. (2022b), which results in 31,145, 6,722, and 6,711 samples on training, validation, and test sets.\nKnowledge Graphs Following experimental setups of Diefenbach et al. (2017) and Saffari et al. (2021), we use the Wikidata KG (Vrandecic and Krötzsch, 2014) for our experiments on question answering, since the Freebase KG (Bollacker et al., 2008) is outdated, and the recently proposed entity linking models are implemented with the Wikidata, i.e., they are not suitable for the Freebase KG. Specifically, to use the Wikidata KG for datasets designed with the Freebase KG (e.g., SimpleQuestions and WebQSP), we use available mappings from the Freebase KG to the Wikidata KG (Diefenbach et al., 2017). Also, we use the wikidata dump of Mar. 07, 2022, and follow the dataset preprocessing setting from Saffari et al. (2021). For the OpenDialKG dataset, since it does not provide the Freebase entity ids, we cannot map them to the Wikidata entity ids using the available entity mappings. Therefore, for this dataset, we use original samples annotated with the Freebase KG." }, { "figure_ref": [], "heading": "A.2 Baselines and Our Model", "publication_ref": [ "b18", "b9", "b56", "b28", "b0", "b15", "b32" ], "table_ref": [], "text": "In this subsection, we provide the detailed explanation of models that we use for baselines. Note that entity linking models are further coupled with the relation classification module to predict triplets based on identified entities in input queries. We begin with the explanations of entity linkers.\nspaCy This model (Honnibal et al., 2020) sequentially predicts spans and KG ids of entities based on the named entity recognition and entity disambiguation modules. We use the spaCy v3.46 . GENRE This model (De Cao et al., 2021) first predicts the entity spans and then generates the unique entities in an autoregressive manner. Note that this model is trained for long texts; therefore, it may not be suitable for handling short queries. BLINK This model (Wu et al., 2020) retrieves the entities based on their representational similarities with the input queries, and, before that, entity mentions in the input should be provided. 
We use a model further tuned for questions (Li et al., 2020).\nReFinED This model (Ayoola et al., 2022) performs the entity mention detection and the entity disambiguation in a single forward pass. We use a model further fine-tuned for questions.\nGrailQA Unlike the above entity linkers that are trained for the Wikidata KG, this model (Gu et al., 2021) is trained to predict entities in the Freebase KG. This model performs the entity detection and the disambiguation sequentially, which is similar to the entity linking mechanism of spaCy.\nFactoid QA by Retrieval This model is a baseline (Lukovnikov et al., 2017) that individually retrieves the entities and relations based on their embedding-level similarities to input queries. Then, it merges the retrieved entities and relations with the KG-specific schema to construct the triplets.\nDiFaR This is our fact retrieval framework that directly retrieves the facts on KGs based on their representational similarities to the input queries.\nDiFaR 2 This is our fact retrieval framework with the proposed reranking strategy, where we further calibrate the retrieved results from DiFaR.\nRetrieval with Gold Entities This is an incomparble model to others, which uses labeled entities in input queries to predict relations based on them." }, { "figure_ref": [], "heading": "A.3 Implementation Details", "publication_ref": [ "b47", "b53", "b31", "b40", "b55", "b43", "b51" ], "table_ref": [], "text": "In this subsection, we provide additional implementation details that are not discussed in Section 4.4. In particular, we use the distilbert (Sanh et al., 2019) 7 as the retriever, and it consists of the 66M parameters. Also, for the reranker, we use the MiniLM model (Wang et al., 2020) 8 , which consists of the 22M parameters. For supervised learning experiments, we train all models for 30 epochs, with a batch size of 512 for question answering and 32 for dialogue, and a learning rate of 2e-5. Also, we optimize all models using an AdamW optimizer (Loshchilov and Hutter, 2019). We implement all models based on the following deep learning libraries: PyTorch (Paszke et al., 2019), Transformers (Wolf et al., 2020), Sentence-Transformers (Reimers and Gurevych, 2019), and BEIR (Thakur et al., 2021). For computing resources, we train and run all models with four GeForce RTX 2080 Ti GPUs and with Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz having 72 processors. Also, training of our DiFaR framework takes less than one day. Note that we report all results with the single run, since our DiFaR framework significantly outperforms all baselines, but also it is costly to conduct multiple run experiments in the information retrieval experiment setting.\n7 https://huggingface.co/sentence-transformers/msmarcodistilbert-base-v3 8 https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2" }, { "figure_ref": [], "heading": "B Additional Experimental Results", "publication_ref": [], "table_ref": [], "text": "Here we provide additional experimental results." }, { "figure_ref": [], "heading": "B.1 Running Time Efficiency", "publication_ref": [ "b21", "b24", "b57" ], "table_ref": [], "text": "Note that, while we provide running time comparisons between our DiFaR and DiFaR 2 in Figure 3, it might be interesting to see more detailed running costs required for our dense fact retriever. As described in the Inference paragraph of Section 3.2, we index dense vectors with the Faiss library (Johnson et al., 2021) that supports vector quantization and clustering for highly efficient search. 
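For reference, a minimal sketch of this indexing and lookup step is given below; the dimensionality, the HNSW parameters, and the random placeholder vectors are assumptions for illustration only.

```python
# Illustrative Faiss indexing of fact embeddings; dimensions, HNSW parameters,
# and the random vectors are placeholders, not the exact production setting.
import faiss
import numpy as np

d = 768                                                    # embedding size (assumed)
fact_emb = np.random.rand(100_000, d).astype("float32")    # stand-in for encoded facts
query_emb = np.random.rand(1, d).astype("float32")         # stand-in for an encoded query

index = faiss.IndexHNSWFlat(d, 32)    # HNSW graph index (L2 metric by default)
index.hnsw.efSearch = 64              # search-time speed/accuracy trade-off
index.add(fact_emb)                   # offline: add all fact vectors

dists, ids = index.search(query_emb, 1000)   # online: top-1000 facts per query
```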
Specifically, following the common vector index setting in previous document retrieval work (Karpukhin et al., 2020;Lee et al., 2021), we use the HNSW index type. Please refer to the documentation of the Faiss library 910 , if you want to further explore different index types and their benchmark performances.\nWe report running time efficiencies on the Open-DialKG dataset, which are measured on the server with Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz having 72 processors (See Section A.3). First of all, during inference, we can process about 174 queries per second where we return the top 1,000 facts for each query. Also, the average time for encoding and indexing one fact takes about 1 ms, which can be not only boosted further with more parallelization but also done in an online manner. Lastly, the performance drop of the approximation search with Faiss from the exact search is only 0.0098 on MRR." }, { "figure_ref": [], "heading": "B.2 Additional Retrieval Examples", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In this subsection, on top of the retrieval examples provided in Table 4, we provide the additional examples of our DiFaR framework in Table 6." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank the members of the End-to-End Reasoning team of Alexa AI at Amazon and the anonymous reviewers for their constructive and insightful comments. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the previous and current funding agencies of the authors. The part of Jinheon Baek's graduate study and, accordingly, this work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST) and No.2021-0-02068, Artificial Intelligence Innovation Hub), and the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korea Government (MSIT) (NRF-2018R1A5A1059921)." } ]
There has been a surge of interest in utilizing Knowledge Graphs (KGs) for various natural language processing and understanding tasks. The conventional mechanism for retrieving facts from KGs usually involves three steps: entity span detection, entity disambiguation, and relation classification. However, this approach requires additional labels for training each of the three subcomponents beyond pairs of input texts and facts, and it may also accumulate errors propagated from failures in earlier steps. To tackle these limitations, we propose a simple knowledge retrieval framework, referred to as Direct Fact Retrieval (DiFaR), which directly retrieves facts from KGs for a given input text based on their representational similarities. Specifically, we first embed all facts in the KG into a dense embedding space using a language model trained with only pairs of input texts and facts, and then return the nearest facts in response to the input text. Since a fact, consisting of only two entities and one relation, provides little context to encode, we further refine the ranks of the top-k retrieved facts with a reranker that contextualizes the input text and the fact jointly. We validate our DiFaR framework on multiple fact retrieval tasks, showing that it significantly outperforms relevant baselines that use the three-step approach.
Direct Fact Retrieval from Knowledge Graphs without Entity Linking
[ { "figure_caption": "Figure 1 :1Figure 1: (a) A conventional fact retrieval from KGs involves three sequential steps: 1) entity mention detection to identify entities in queries; 2) entity disambiguation to match entities in input texts to KGs; 3) relation classification to select relevant relations. (b) Our fact retrieval directly retrieves relevant facts with their representational similarities to input queries.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Main results on the dialogue domain for the Open-DialKG dataset. We emphasize the best scores in bold except for Retrieval with Gold Entities, which uses labeled entities.", "figure_data": "OpenDialKGTypesMethodsMRRHits@1 Hits@10Retrieval with Gold Entities 0.25110.15600.4683Retrieval with GrailQA0.20510.12710.3745UnsupervisedFactoid QA by Retrieval0.19770.08920.4231DiFaR (Ours)0.23960.13950.4424DiFaR 2 (Ours)0.26370.16030.4744Retrieval with Gold Entities 0.27500.14950.5745Retrieval with GrailQA0.22170.11980.4436SupervisedFactoid QA by Retrieval0.20420.12660.3587DiFaR (Ours)0.27550.14050.5547DiFaR 2 (Ours)0.47840.35350.7380", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Zero-shot transfer learning results. We use models trained on the WebQSP dataset with the Wikidata KG not only for SimpleQuestions and Mintaka datasets with the same KG, but also for the WebQSP dataset with the different Freebase KG. We use MRR as a metric, and N/A denotes not available.", "figure_data": "WikidataFreebaseMethodsSimpleQuestions Mintaka WebQSPRetrieval with Gold Entities0.79940.19500.6000Retrieval with BLINK0.57040.1617N/ARetrieval with ReFinED0.53890.1591N/AFactoid QA by Retrieval0.80140.14310.4239DiFaR (Ours)0.78120.20630.5913DiFaR 2 (Ours)0.82440.27690.6324All One-Hop Multi-Hops", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Retrieval examples for complex questions, on the challenging Mintaka dataset. We highlight the related phrases across the question and the triplet in yellow and green colors.", "figure_data": "Question: What religion was the us president in 1963?Retrieved Triplet: (Robert F. Kennedy, religion, Catholicism)Answer: CatholicismQuestion: Who commanded the allied invasion of western Europe at Normandyand was an American president?Retrieved Triplet: (Normandy landings, participant, Dwight D. Eisenhower)Answer: Dwight D. EisenhowerQuestion: Which former Chicago Bull shooting guard was also selected to playon the 1992 US basketball team?Retrieved Triplet: (1992 US men's basketball team, has part, Michael Jordan)Answer: Michael Jordan", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Sensitivity analyses on architectures, where we change the backbones of retriever and reranker in our DiFaR 2 . MSMARCO in the model name indicates it is pre-trained by the MSMARCO dataset, and we report results on the WebQSP.", "figure_data": "TypesModelsMRRHits@1 Hits@10DistilBERT0.59830.49630.7810RetrieverMSMARCO-TAS-B0.60510.49630.7844MSMARCO-Distil0.61020.50710.7927MiniLM0.66750.59450.7927RerankerMSMARCO-TinyBERT 0.70680.64200.8177MSMARCO-MiniLM0.71890.65280.838590 100WikidataBLINK ReFinED DiFaR (Ours)90 100Freebase GrailQA DiFaR (Ours)80807070606050SimpleQuestions WebQSPMintaka50OpenDialKG", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Jinheon Baek; Alham Fikri Aji; Jens Lehmann; Sung Ju Hwang; KAIST; MBZUAI
[ { "authors": "Tom Ayoola; Shubhi Tyagi; Joseph Fisher; Christos Christodoulopoulos; Andrea Pierleoni", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Refined: An efficient zero-shot-capable approach to end-to-end entity linking", "year": "2022-07-10" }, { "authors": "Junwei Bao; Nan Duan; Zhao Yan; Ming Zhou; Tiejun Zhao", "journal": "ACL", "ref_id": "b1", "title": "Constraint-based question answering with knowledge graph", "year": "2016-12-11" }, { "authors": "Jonathan Berant; Andrew Chou; Roy Frostig; Percy Liang", "journal": "", "ref_id": "b2", "title": "Semantic parsing on freebase from question-answer pairs", "year": "2013-10" }, { "authors": "Kurt D Bollacker; Colin Evans; Praveen K Paritosh; Tim Sturge; Jamie Taylor", "journal": "ACM", "ref_id": "b3", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "year": "2008-06-10" }, { "authors": "Antoine Bordes; Sumit Chopra; Jason Weston", "journal": "ACL", "ref_id": "b4", "title": "Question answering with subgraph embeddings", "year": "2014-10-25" }, { "authors": "Antoine Bordes; Nicolas Usunier; Sumit Chopra; Jason Weston", "journal": "", "ref_id": "b5", "title": "Large-scale simple question answering with memory networks", "year": "2015" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mc-Candlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b6", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Nilesh Chakraborty; Denis Lukovnikov; Gaurav Maheshwari; Priyansh Trivedi; Jens Lehmann; Asja Fischer", "journal": "", "ref_id": "b7", "title": "Introduction to neural network based approaches for question answering over knowledge graphs", "year": "2019" }, { "authors": "Yu Chen; Lingfei Wu; Mohammed J Zaki", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Bidirectional attentive memory networks for question answering over knowledge bases", "year": "2019-06-02" }, { "authors": "Nicola De Cao; Gautier Izacard; Sebastian Riedel; Fabio Petroni", "journal": "", "ref_id": "b9", "title": "Autoregressive entity retrieval", "year": "2021-05-03" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b10", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Dennis Diefenbach; Thomas Pellissier Tanon; Kamal Deep Singh; Pierre Maret", "journal": "", "ref_id": "b11", "title": "Question answering benchmarks for wikidata", "year": "2017-10-23" }, { "authors": "Li Dong; Mirella Lapata", "journal": "The Association for Computer Linguistics", "ref_id": "b12", "title": "Language to logical form with neural attention", "year": "2016-08-07" }, { "authors": "Bin Fu; Yunqi Qiu; Chengguang Tang; Yang Li; Haiyang Yu; Jian Sun", "journal": "", "ref_id": "b13", "title": "A survey on complex question answering over knowledge base: Recent advances and challenges", "year": "2020" }, { "authors": "Fabian Galetzka; Jewgeni Rose; David Schlangen; Jens Lehmann", "journal": "", "ref_id": "b14", "title": 
"Space efficient context encoding for non-task-oriented dialogue generation with graph attention transformer", "year": "2021-08-01" }, { "authors": "Yu Gu; Sue Kase; Michelle Vanni; Brian M Sadler; Percy Liang; Xifeng Yan; Yu Su", "journal": "ACM", "ref_id": "b15", "title": "Beyond I.I.D.: three levels of generalization for question answering on knowledge bases", "year": "2021-04-19" }, { "authors": "Namgi Han; Goran Topic; Hiroshi Noji; Hiroya Takamura; Yusuke Miyao", "journal": "", "ref_id": "b16", "title": "An empirical analysis of existing systems and datasets toward general simple question answering", "year": "2020-12-08" }, { "authors": "Yanchao Hao; Yuanzhe Zhang; Kang Liu; Shizhu He; Zhanyi Liu; Hua Wu; Jun Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "An endto-end model for question answering over knowledge base with cross-attention combining global knowledge", "year": "2017-07-30" }, { "authors": "Matthew Honnibal; Ines Montani; Sofie Van Landeghem; Adriane Boyd", "journal": "", "ref_id": "b18", "title": "spaCy: Industrial-strength Natural Language Processing in Python", "year": "2020" }, { "authors": "Soyeong Jeong; Jinheon Baek; Sukmin Cho; Sung Ju Hwang; Jong C Park", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Augmenting document representations for dense retrieval with interpolation and perturbation", "year": "2022-05-22" }, { "authors": "Soyeong Jeong; Jinheon Baek; Chaehun Park; Jong Park", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Unsupervised document expansion for information retrieval with stochastic text generation", "year": "2021" }, { "authors": "Jeff Johnson; Matthijs Douze; Hervé Jégou", "journal": "IEEE Trans. 
Big Data", "ref_id": "b21", "title": "Billion-scale similarity search with gpus", "year": "2021" }, { "authors": "Minki Kang; Jinheon Baek; Sung Ju Hwang ; A", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "KALA: knowledge-augmented language model adaptation", "year": "2022-07-10" }, { "authors": "Minki Kang; Jin ; Myung Kwak; Jinheon Baek; Sung Ju Hwang", "journal": "", "ref_id": "b23", "title": "Knowledge-consistent dialogue generation with knowledge graphs", "year": "2022" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; S H Patrick; Ledell Lewis; Sergey Wu; Danqi Edunov; Wen-Tau Chen; Yih", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Dense passage retrieval for open-domain question answering", "year": "2020-11-16" }, { "authors": "Yunshi Lan; Gaole He; Jinhao Jiang; Jing Jiang; Wayne Xin Zhao; Ji-Rong Wen", "journal": "", "ref_id": "b25", "title": "A survey on complex knowledge base question answering: Methods, challenges and solutions", "year": "2021-08" }, { "authors": "Jinhyuk Lee; Mujeen Sung; Jaewoo Kang; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Learning dense representations of phrases at scale", "year": "2021" }, { "authors": "Jens Lehmann; Robert Isele; Max Jakob; Anja Jentzsch; Dimitris Kontokostas; Pablo N Mendes; Sebastian Hellmann; Mohamed Morsey; Patrick Van Kleef; Sören Auer; Christian Bizer", "journal": "Semantic Web", "ref_id": "b27", "title": "Dbpedia -A large-scale, multilingual knowledge base extracted from wikipedia", "year": "2015" }, { "authors": "Belinda Z Li; Sewon Min; Srinivasan Iyer; Yashar Mehdad; Wen-Tau Yih", "journal": "", "ref_id": "b28", "title": "Efficient one-pass end-to-end entity linking for questions", "year": "2020-11-16" }, { "authors": "Percy Liang", "journal": "", "ref_id": "b29", "title": "Lambda dependency-based compositional semantics", "year": "2013" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b30", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b31", "title": "Decoupled weight decay regularization", "year": "2019-05-06" }, { "authors": "Denis Lukovnikov; Asja Fischer; Jens Lehmann; Sören Auer", "journal": "ACM", "ref_id": "b32", "title": "Neural network-based question answering over knowledge graphs on word and character level", "year": "2017-04-03" }, { "authors": "Kangqi Luo; Fengli Lin; Xusheng Luo; Kenny Q Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Knowledge base question answering via encoding of complex query graphs", "year": "2018-10-31" }, { "authors": "Kaixin Ma; Hao Cheng; Xiaodong Liu; Eric Nyberg; Jianfeng Gao", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Open domain question answering with A unified knowledge interface", "year": "2022-05-22" }, { "authors": "Salman Mohammed; Peng Shi; Jimmy Lin", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Strong baselines for simple question answering over knowledge graphs with and without neural networks", "year": "2018-06-01" }, { "authors": "Seungwhan Moon; Pararth Shah; Anuj Kumar; Rajen Subba", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "OpenDialKG: 
Explainable conversational reasoning with attention-based walks over knowledge graphs", "year": "2019" }, { "authors": "Tri Nguyen; Mir Rosenberg; Xia Song; Jianfeng Gao; Saurabh Tiwary; Rangan Majumder; Li Deng", "journal": "", "ref_id": "b37", "title": "MS MARCO: A human generated machine reading comprehension dataset", "year": "2016-12-09" }, { "authors": "Rodrigo Frassetto Nogueira; Wei Yang; Jimmy Lin; Kyunghyun Cho", "journal": "", "ref_id": "b38", "title": "Document expansion by query prediction", "year": "2019" }, { "authors": "Barlas Oguz; Xilun Chen; Vladimir Karpukhin; Stan Peshterliev; Dmytro Okhonko; Michael Sejr Schlichtkrull; Sonal Gupta; Yashar Mehdad; Scott Yih", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Unik-qa: Unified representations of structured and unstructured knowledge for opendomain question answering", "year": "2022-07-10" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b40", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b41", "title": "", "year": "" }, { "authors": "Yingqi Qu; Yuchen Ding; Jing Liu; Kai Liu; Ruiyang Ren; Wayne Xin Zhao; Daxiang Dong; Hua Wu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Rocketqa: An optimized training approach to dense passage retrieval for opendomain question answering", "year": "2021-06-06" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Sentencebert: Sentence embeddings using siamese bertnetworks", "year": "2019" }, { "authors": "Stephen E Robertson; Steve Walker; Susan Jones; Micheline Hancock-Beaulieu; Mike Gatford", "journal": "", "ref_id": "b44", "title": "Okapi at TREC-3", "year": "1994-11-02" }, { "authors": "Stephen E Robertson; Hugo Zaragoza", "journal": "Found. Trends Inf. Retr", "ref_id": "b45", "title": "The probabilistic relevance framework: BM25 and beyond", "year": "2009" }, { "authors": "Amir Saffari; Armin Oliya; Priyanka Sen; Tom Ayoola", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "End-to-end entity resolution and question answering using differentiable knowledge graphs", "year": "2021-07-11" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b47", "title": "Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Phillip Schneider; Tim Schopf; Juraj Vladika; Mikhail Galkin; Elena Simperl; Florian Matthes", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "A decade of knowledge graphs in natural language processing: A survey", "year": "2022" }, { "authors": "Priyanka Sen; Alham Fikri Aji; Amir Saffari", "journal": "International Committee on Computational Linguistics", "ref_id": "b49", "title": "Mintaka: A complex, natural, and multilingual dataset for end-to-end question answering", "year": "2022" }, { "authors": "Kuldeep Singh; Ioanna Lytra; Arun Sethupat Radhakrishna; Saeedeh Shekarpour; Maria-Esther Vidal; Jens Lehmann", "journal": "J. 
Web Semant", "ref_id": "b50", "title": "No one is perfect: Analysing the performance of question answering components over the dbpedia knowledge graph", "year": "2020" }, { "authors": "Nandan Thakur; Nils Reimers; Andreas Rücklé; Abhishek Srivastava; Iryna Gurevych", "journal": "", "ref_id": "b51", "title": "BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models", "year": "2021" }, { "authors": "Denny Vrandecic; Markus Krötzsch", "journal": "Commun. ACM", "ref_id": "b52", "title": "Wikidata: a free collaborative knowledgebase", "year": "2014" }, { "authors": "Wenhui Wang; Furu Wei; Li Dong; Hangbo Bao; Nan Yang; Ming Zhou", "journal": "", "ref_id": "b53", "title": "Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers", "year": "2020-12-06" }, { "authors": "Zhiguo Wang; Patrick Ng; Ramesh Nallapati; Bing Xiang", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "Retrieval, re-ranking and multi-task learning for knowledge-base question answering", "year": "2021-04-19" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander M Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b55", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Ledell Wu; Fabio Petroni; Martin Josifoski; Sebastian Riedel; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "Scalable zeroshot entity linking with dense entity retrieval", "year": "2020-11-16" }, { "authors": "Lee Xiong; Chenyan Xiong; Ye Li; Kwok-Fung Tang; Jialin Liu; Paul N Bennett; Junaid Ahmed; Arnold Overwijk", "journal": "", "ref_id": "b57", "title": "Approximate nearest neighbor negative contrastive learning for dense text retrieval", "year": "2021" }, { "authors": "Wen-Tau Yih; Ming-Wei Chang; Xiaodong He; Jianfeng Gao", "journal": "", "ref_id": "b58", "title": "Semantic parsing via staged query graph generation: Question answering with knowledge base", "year": "2015" }, { "authors": "Wen-Tau Yih; Matthew Richardson; Christopher Meek; Ming-Wei Chang; Jina Suh", "journal": "", "ref_id": "b59", "title": "The value of semantic parse labeling for knowledge base question answering", "year": "2016" }, { "authors": "Zhengyan Zhang; Xu Han; Zhiyuan Liu; Xin Jiang; Maosong Sun; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "ERNIE: enhanced language representation with informative entities", "year": "2019-07-28" } ]
[ { "formula_coordinates": [ 4, 185.32, 230.18, 105.72, 11.25 ], "formula_id": "formula_0", "formula_text": "x = [w 1 , w 2 , . . . , w |x| ]." }, { "formula_coordinates": [ 4, 76.38, 315.81, 212.75, 18.63 ], "formula_id": "formula_1", "formula_text": "t + = arg max t∈G p θ (t|e, x, G)p φ (e|m, x)p ψ (m|x),(1)" }, { "formula_coordinates": [ 4, 344.63, 139.62, 179.78, 20.88 ], "formula_id": "formula_2", "formula_text": "t + = arg max t∈G f (E θ (x), E θ (t)),(2)" }, { "formula_coordinates": [ 4, 306.14, 261.82, 97.52, 11.22 ], "formula_id": "formula_3", "formula_text": "t = [w 1 , w 2 , . . . , w |t| ]," }, { "formula_coordinates": [ 4, 313.1, 555.59, 211.31, 29.04 ], "formula_id": "formula_4", "formula_text": "min θ -log exp(f (E θ (x), E θ (t + ))) (x,t)∈τ exp(f (E θ (x), E θ (t))) ,(3)" } ]
10.1145/nnnnnnn.nnnnnnn
[ { "figure_ref": [ "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b26", "b20", "b34", "b17", "b20", "b26", "b34", "b0", "b8", "b43", "b45", "b9", "b33", "b46", "b47", "b1", "b4", "b19", "b31", "b20", "b26", "b43", "b45" ], "table_ref": [], "text": "Feed recommendation system (RS) is one type of product that recommends a sequence of items for users to browse and interact with. It has been widely applied to various online platforms, such as on the homepage of Kuaishou [27], Xiaohongshu [21], Taobao [35], and AliExpress [18]. An example of the feed recommendation is given in Figure 1. The feed allows users to continuously scroll down the item list of items in the viewing window, such that previously viewed items have a large impact on users' behaviors towards the next item. In this case, traditional methods that mainly focus on improving the accuracy of recommended items become sub-optimal for feed recommendation because they usually ignore the correlations between consecutive items. For example, if a user was shown a mobile phone item, it may be sub-optimal to put a series of more mobile phone items next to it. This mismatch is exacerbated by the fact that similar items tend to have similar click ratios. Therefore, it is of vital importance for feed recommendation methods to consider both accuracy and diversity from an item sequence perspective to attract users to browse and interact with more items in the feed [21,27,35].\nThis work focuses on two common characteristics that are closely related to optimizing accuracy and diversity in feed recommendations. First, different users may have different perceptions of diversity, such that item diversity in feed recommendations should be measured based on their personal interests. Existing works on diversified recommendations mainly focus on measuring the dissimilarity between item pairs, without considering users' personal interests. For example, the post-processing methods improve diversity by heuristically rearranging the item order based on predefined rules, which are not customized for all users [3, 6-8, 37, 41]; the learning-based methods measure the similarity of a given item pair by directly comparing the item embeddings [1,9,44,46]. Though effective, the ignorance of users' personal interests may lead to a mismatch between the model's definition of diversity and users' perceptions of diversity. For example, some female customers prefer to view more clothing than others in the feed. Directly reducing the probability of presenting user-preferred items to increase diversity may degrade their satisfaction with the recommended results.\nSecond, in feed applications, users tend to view many items in a row, such that users' interests may evolve during this continuous browsing process. In this case, the measurement of both item accuracy and diversity should consider the evolving interest due to the ever-changing context so as to accommodate the sequential browsing nature in feed scenarios. However, most existing interest models mainly focus on learning users' interests from their historical behaviors with less emphasis on the evolution of interests along with the browsing context [10,34,47,48]. Another line of research on re-ranking proposes various list-wise solutions to capture the interior correlations between items within the context [2,5,20,32]. Nevertheless, they mainly focus on improving accuracy regardless of diversity. 
Some recent works devote efforts to solve this accuracy-diversity dilemma and obtained promising results [21,27,44,46]. However, the joint optimization of accuracy and diversity still remains to be a challenging problem, especially for industrial implementation on large-scale systems.\nIn light of the above challenges, in this paper, we investigate the following research questions. 1) How to formulate and design a general framework for jointly optimizing accuracy and diversity from an item sequence perspective? 2) How to estimate accuracy and diversity with adaptation to the evolution of user interest while browsing consecutive items in feed scenarios? 3) How to implement the proposed framework in industrial systems for practical applications and how well does it perform? To this end, we propose a general Multi-factor Sequential Re-ranking with Perception-Aware Diversification (MPAD) framework to jointly optimize accuracy and diversity for practical feed recommendations. This framework consists of four main components. The bi-sequential determinantal point process (BS-DPP) algorithm provides a principled and tractable framework for sequential item selection to maximize the joint benefits of accuracy and diversity of the entire item list. The Multi-scale Interest Extraction (MIE) model extracts multi-scale user interests through graph clustering-based aggregations. The Context-aware Accuracy Estimation (CAE) model provides an estimate of context-aware accuracy from a sequence perspective by learning from both the multi-scale interests and the ever-changing browsing context. The Perception-Aware Kernel (PDK) evaluates the similarity between items with consideration of the user's perception of diversity based on personal interests. The main contributions are as follows.\n• This work formulates the feed recommendation task as a multifactor re-ranking problem and proposes a principle and tractable MPAD framework to maximize the joint benefits of accuracy and diversity of the entire recommended item list. • This work proposes a series of collaborative models to estimate the accuracy and diversity of an item list from a sequence perspective. They are able to capture the influence from both the browsing context and the evolving user interests to align with the browsing nature of the feed scenario. We also propose a tailored BS-DPP algorithm to jointly optimize the accuracy and diversity when selecting optimal items in a sequential manner. • This paper presents a general system architecture for the deployment of MPAD in industrial systems. It has now been implemented in the homepage feed to achieve 2.4% lift on user clicks, 2.0% lift on stay time, and 4.0% lift on content diversity.\nThe architecture now serves Taobao's main traffic with 120, 000 queries-per-second at peak. • This work conducts extensive experiments on both offline datasets and online A/B tests. The results show that our proposed MPAD significantly outperforms other methods. The source code has been made public 1 ." }, { "figure_ref": [], "heading": "PROBLEM SETUP", "publication_ref": [ "b20", "b43", "b11", "b20", "b28", "b43", "b8", "b20", "b26", "b43" ], "table_ref": [], "text": "A typical pipeline of industrial RS includes three stages [21,44], i.e., matching, ranking, and re-ranking. The RS first retrieves candidate items from item databases at the matching stage. Then, the ranking modules measure item accuracy in a point-wise manner. 
Finally, the top items will be sent to the re-ranking module to determine the final item list to present to users.\nIn this paper, we consider a multi-factor feed recommendation problem at the re-ranking stage, where the task is to select an item sequence\n𝑆 = {𝑖 1 , 𝑖 2 , • • • , 𝑖 𝐾 } with size 𝐾 from a candidate set 𝐼 = {𝑖 1 , 𝑖 2 , • • • , 𝑖 𝑁 }\nwith size 𝑁 ≫ 𝐾 provided by the ranking module. The selection of sequence 𝑆 depends on both the item accuracy which relates to user's preference for the items and the list-wise item diversity which influences user's intention to browse and interact in practical applications [12,21,29,44]. Formally, given a target user 𝑢 and a set of candidate items 𝐼 , our aim is to select a fixed-size subset from 𝐼 and determine their order in a page to maximize a joint utility function:\nP 0 : arg max 𝑆 ⊆𝐼 𝑓 (𝑢, 𝑆) = 𝐹 (Acc(𝑢, 𝑆), Div(𝑢, 𝑆)),(1)\nwhere the first term Acc(𝑢, 𝑆) evaluates the context-aware item accuracy based on user interests and browsing context, the second term Div(𝑢, 𝑆) evaluates the list-wise diversity of all items, and the fusion function 𝐹 (•) measures the contribution of item accuracy and diversity to the joint utility 𝑓 (𝑢, 𝑆).\nNote that this formulation extends the commonly used item-level diversity [9,21,27,44] to personalized user-item-level diversity, i.e., evolving from Div(𝑆) to Div(𝑢, 𝑆). As such, the solution needs to consider user's personalized perception of diversity on the recommended results." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "This section first gives an overview of the proposed MPAD framework in Sec. 3.1. Then, this section introduces the main building blocks of MPAD in order from Sec. 3.2 to Sec. 3.5. Finally, this section discusses the online implementation of MPAD in Sec. 3.6." }, { "figure_ref": [ "fig_2" ], "heading": "Framework Overview", "publication_ref": [], "table_ref": [], "text": "The framework consists of two layers: the selection-layer uses a sequential item selection algorithm to select items from the candidate set using the item accuracy and diversity scores evaluated by the estimation-layer. The detailed workflow is presented in Figure 2.\nSpecifically, the selection-layer operates with the BS-DPP algorithm which considers both list-wise item diversity and contextaware item accuracy during the selection of optimal items. It indeed offers a principle and tractable solution for function 𝐹 (•). The estimation-layer, on the other hand, consists of three components. The first component is MIE which groups users and items into different clusters and represents the user's multi-scale interest based on behavior sequences encoded by item/cluster embeddings. MIE can be computed offline for online complexity reduction. The second component is CAE which refines the point-wise accuracy scores from the ranking stage into context-aware accuracy scores by making use of both browsing context and multi-scale user interests. The refined scores are used in the computation of Acc(𝑢, 𝑆) in BS-DPP. The third component is PDK which computes item similarities based on both item embedding and the user's different scales of interests. The diversity kernel measures the diversity term Div(𝑢, 𝑆) in BS-DPP." }, { "figure_ref": [], "heading": "Bi-Sequential Item Selection", "publication_ref": [ "b8", "b43" ], "table_ref": [], "text": "This section presents the BS-DPP algorithm for incremental item selection at the selection-layer. 
In BS-DPP, both the item diversity scores and the item accuracy scores are considered to be sequentially updated along with the item selection process. This is different with standard Determinantal Point Process (DPP) methods [9,44] where the item accuracy scores are considered to be fixed values, regardless of the change of context. Therefore, BS-DPP is more in accordance with the browsing nature of feed products where the user's interests may evolve during reviewing consecutive items." }, { "figure_ref": [], "heading": "Task Formulation.", "publication_ref": [ "b8", "b24", "b8", "b24", "b8", "b43", "b8", "b15", "b43" ], "table_ref": [], "text": "A point process 𝑃 defined on an item set 𝐼 = {𝑖 1 , 𝑖 2 , • • • , 𝑖 𝑁 } is a probability distribution on the powerset of 𝐼 (i.e., the set of all subsets of 𝐼 ), where the probability satisfies 𝑆 ⊆𝐼 𝑃 (𝑆) = 1. The probability of choosing a specific item subset is determined by the kernel function in DPP and the item selection process is usually modeled as a MAP inference [9,25]. In this paper, we define the DPP kernel based on a combined measurement of Acc(𝑢, 𝑆) and Div(𝑢, 𝑆), such that the probability of choosing an item subset is naturally proportional to the joint optimization of item accuracy and diversity. Based on the DPP theory [9,25], the objective in (1) equals to:\nP 1 : arg max 𝑆 ∈𝐼 𝐹 (Acc(𝑢, 𝑆), Div(𝑢, 𝑆)) = log det(𝑲 𝑢 𝑆 ),(2)\nwhere 𝑲 𝑢 𝑆 is the kernel function defined with Acc(𝑢, 𝑆) and Div(𝑢, 𝑆), to be discussed later; log det(𝑲 𝑢 𝑆 ) is the log-probability function of choosing a subset 𝑆 for user 𝑢. In this way, the aim to maximize the utility function 𝑓 (𝑢, 𝑆) in ( 1) is transformed into maximizing the log-probability function ℎ(𝑢, 𝑆) = log det(𝑲 𝑢 𝑆 ). 3.2.2 Bi-Sequential DPP. Standard DPP methods [9,44] construct the kernel matrix as follows:\n𝐾 𝑢 𝑆 (𝑖, 𝑗) = 𝑔(𝑢, 𝑖) • 𝐷 (𝑖, 𝑗) • 𝑔(𝑢, 𝑗),(3)\nwhere 𝑔(𝑢, 𝑖) is the point-wise accuracy score evaluated between user 𝑢 and item 𝑖 ∈ 𝑆, regardless of the page-context, while 𝐷 (𝑖, 𝑗) measures the similarity between item 𝑖 and item 𝑗 with ∀𝑖, 𝑗 ∈ 𝑆, regardless of user's personal interests. In contrast, BS-DPP considers that 1) the accuracy scores are related to the browsing context, i.e., the previously added items in 𝑆; and 2) the diversity scores are related to the user's interests. This changes the definition in (3) into\n𝐾 𝑢 𝑆 (𝑖, 𝑗) = 𝑔(𝑢, 𝑖 |𝑆) • 𝐷 (𝑖, 𝑗 |𝐸 𝑢 ) • 𝑔(𝑢, 𝑗 |𝑆),(4)\nwhere 𝑔(𝑢, 𝑖 |𝑆) denotes the context-aware accuracy score which conditions on the previously presented items in 𝑆, while 𝐷 (𝑖, 𝑗 |𝐸 𝑢 ) measures the similarity between item 𝑖, 𝑗 ∈ 𝑆 condition on the user's interest 𝐸 𝑢 . We modify the log-probability of choosing a subset 𝑆 as\nℎ(𝑢, 𝑆) = ∑︁ 𝑖 ∈𝑆 𝑔(𝑢, 𝑖 |𝑆) + 𝛼 • log det(𝑫 𝑢 𝑆 ),(5)\nwhere 𝛼 is a tunable parameter to control the trade-off between the diversity and accuracy of the recommended results. It is useful in practical feed applications since different platforms need such a parameter to control the tendency towards accuracy or diversity to suit different business orientations, e.g., more accuracy for relevant recommendations or more diversity for discovering new interests. The objective in (2) can be solved based on the popular greedy approximation methods [9,16,44], which maximize the marginal gain when incrementally adding a new item to set 𝑆. 
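In its simplest form, this greedy procedure can be sketched as the following reference implementation, which re-evaluates the determinant term from scratch at every step; the efficient incremental update used in practice is given in Algorithm 1 below. The callable `score` plays the role of 𝑔(𝑢, 𝑖|𝑆) and `D` is the kernel matrix, both assumed to be supplied by the estimation-layer models.

```python
# Naive reference sketch of the greedy selection behind Eq. (5)/(6); `score`
# stands for g(u, i|S) and `D` is the N x N perception-aware kernel matrix.
# Both are assumed inputs provided by the estimation-layer models.
import numpy as np

def greedy_select(score, D, K, alpha):
    """Return K item indices maximizing the marginal gain of accuracy + alpha * log-det."""
    n = D.shape[0]
    S = []
    for _ in range(K):
        base = np.linalg.slogdet(D[np.ix_(S, S)])[1] if S else 0.0
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in S:
                continue
            T = S + [i]
            gain = score(i, S) + alpha * (np.linalg.slogdet(D[np.ix_(T, T)])[1] - base)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        S.append(best)
    return S
```

This naive version pays one extra determinant per candidate and is shown only to make the marginal-gain objective explicit.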
Combining with our definition of the kernel function, the greedy maximization step to choose an optimal item per iteration can be written as\n𝑗 = arg max 𝑖 ∈𝐼 \\𝑆 log det 𝑲 𝑢 𝑆∪{𝑖 } -log det 𝑲 𝑢 𝑆 (6a) = arg max 𝑖 ∈𝐼 \\𝑆 ℎ (𝑢, 𝑆 ∪ {𝑖}) -ℎ(𝑢, 𝑆)(6b)\n= arg max\n𝑖 ∈𝐼 \\𝑆 𝑔(𝑢, 𝑖 |𝑆) +𝛼 • log det 𝑫 𝑢 𝑆∪{𝑖 } -log det 𝑫 𝑢 𝑆 (6c) = arg max 𝑖 ∈𝐼 \\𝑆 𝑔(𝑢, 𝑖 |𝑆) + 𝛼 • log(𝑑 2 𝑖 ). (6d\n) Concatenation & MLP Target Item Prev Context Cand Context Macro Interest Micro Interest User Profile CAE Model c1 c2 c3 i3 i5 i2 i3 i4 i5 i6 i1 i1 i2 i3 i4 s1 s2 s3 s4 ... c1 c2 c3 c4 Find Item's Cluster i1 i2 i6 i4\nPooling Layer" }, { "figure_ref": [], "heading": "Top-K Clusters", "publication_ref": [], "table_ref": [], "text": "Position Encoding Linear Layer\nSelf-Attention Linear Layer Full Behavior Seq. Recent Behavior Seq.\nSelf-Attention for 𝑖 ∈ 𝐼 \\ 𝑆 do 6:" }, { "figure_ref": [], "heading": "MIE Model", "publication_ref": [], "table_ref": [], "text": "u I E u m acro H u m icro H T u m acro H ) ( T u m icro H ) ( T u I E ) ( Diversity kernel ) log( ) | , ( max arg 2 \\ i S I i d S i u g j      Bi-Sequential DPP\n𝑒 𝑖 = (𝑫 𝑗𝑖 -⟨c 𝑗 , c 𝑖 ⟩)/𝑑 𝑗 . 7: Update 𝑑 2 𝑖 = 𝑑 2 𝑖 -𝑒 2 𝑖 , c 𝑖 = [c 𝑖 𝑒 𝑖 ]. 8:\nUpdate 𝑔(𝑢, 𝑖 |𝑆) with the proposed preference model." }, { "figure_ref": [], "heading": "9:", "publication_ref": [], "table_ref": [], "text": "end for 10:\nObtain 𝑗 = arg max 𝑖 ∈𝐼 \\𝑆 𝑔(𝑢, 𝑖 |𝑆) + 𝛼 • log(𝑑 2 𝑖 )." }, { "figure_ref": [], "heading": "11:", "publication_ref": [], "table_ref": [], "text": "Update subset 𝑆 = 𝑆 ∪ { 𝑗 }. 12: end while\nThe complete algorithm of BS-DPP in MPAD is given in Algorithm 1. We defer more details on the derivations of ( 6) and the update of term log(𝑑 2 𝑖 ), i.e., Step. 6 and Step. 7 in Algorithm 1, to the appendix. We now introduce how to obtain 𝑔(𝑢, 𝑖 |𝑆) and log(𝑑 2 𝑖 ) in (6d) via the proposed CAE and PDK model in the sequel." }, { "figure_ref": [ "fig_2" ], "heading": "Multi-Scale Interest Extraction", "publication_ref": [ "b9", "b32", "b33", "b47", "b3", "b12" ], "table_ref": [], "text": "In this section, we propose the MIE model to extract users' multiscale interests, which are used as the input of the subsequent CAE and PDK models. Existing user interest models usually directly perform self-attention on user's behavior items [10,33,34,48]. However, directly mixing the information from a large quantity of raw item-level features may introduce redundant or noisy information to the model thus affecting learning performance. It is also hard for them to distinguish users' different aspects of interests, especially from the full behavior sequences.\nTherefore, this work proposes MIE to describe user interests from two scales, i.e., the micro-scale and the macro-scale. The micro-scale captures users' recent interests, such as their recent attention to gold necklaces and earrings. The macro-scale, on the other hand, models the user's long-term interests at a broader scope, such as fashion, clothing, or sports. For macro-scale interests, MIE groups items into clusters based on graph modularity and represents the user's macro-level interest through cluster-wise aggregated embeddings. Each cluster corresponds to one interest point at the macro level. For micro-scale interests, MIE directly uses item-level features of each item within the user's behavior sequence, such as item id sequences and feature sequences, for micro-scale interest modeling. Each item corresponds to one interest point at the micro-level. 
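As a small illustration of how a behavior sequence maps to interest points at the two scales, consider the following toy sketch; the embeddings, the cluster assignments (which come from the offline graph clustering step described next), and the recency window are placeholder assumptions.

```python
# Toy sketch of forming interest points at the two scales; the behavior
# sequence, cluster assignments, and embeddings are illustrative placeholders.
from collections import defaultdict
import numpy as np

emb = {i: np.random.rand(8) for i in ["i1", "i2", "i3", "i4", "i5", "i6"]}
item_cluster = {"i1": "c1", "i3": "c1", "i2": "c2", "i5": "c2", "i4": "c3", "i6": "c3"}
behavior_seq = ["i1", "i2", "i3", "i4", "i5", "i6"]   # full sequence (oldest -> newest)

# Macro scale: one interest point per cluster, sum-pooled item embeddings (Eq. 8).
macro_points = defaultdict(lambda: np.zeros(8))
for item in behavior_seq:
    macro_points[item_cluster[item]] += emb[item]

# Micro scale: one interest point per recent behavior item, kept at item level.
micro_points = [emb[item] for item in behavior_seq[-3:]]  # recency window (assumed)

print(list(macro_points.keys()), len(micro_points))
```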
MIE also adopts time decay encoding to distinguish the freshness of recent micro-level interests.\nGraph Clustering with Modularity. The user-item interaction data can be represented as a bipartite network. The edges (i.e., interactions) within the bipartite network only exist between user nodes and item nodes. In this paper, we partition the clusters in a user-item bipartite network based on the bipartite modularity [4], which is defined as\n𝑄 = 1 𝐸 ∑︁ 𝑖,𝑗 𝐴 𝑖 𝑗 -𝑃 𝑖 𝑗 𝛿 (𝑐 𝑖 , 𝑐 𝑗 ),(7)\nwhere 𝐸 is the total number of edges in the bipartite graph, 𝐴 𝑖 𝑗 is the adjacency matrix where the element equals one if an interaction between 𝑖 and 𝑗 exists, 𝑃 𝑖 𝑗 refers to the expected edge between 𝑖 and 𝑗 in a graph partitioned by different clusters, and 𝛿 (𝑐 𝑖 , 𝑐 𝑗 ) is the indicator function which equals one if 𝑖 and 𝑗 belongs to the same cluster, otherwise zero. A larger value of 𝑄 means that there are more edges in clusters than expected, which implies a stronger cluster structure. The graph modularity 𝑄 can be optimized in an iterative manner according to the Louvain algorithm [13]. After the algorithm converges, the items are grouped into different clusters which are used as the foundation of macro-level interest modeling.\nMacro-Level User Interest. For a given user 𝑢, we first classify its behavior items into several interest points according to their belonged clusters. Each interest point represents the user's one aspect of macro-level interest. We present an example in Figure 2.\nGiven a user 𝑢 with full behavior sequence 𝐼 𝑢 = {𝑖 1 , 𝑖 2 , 𝑖 3 , 𝑖 4 , ..., 𝑖 𝑁 }, we partition these behavior items into four interest points, i.e.,\n𝐶 𝑢 = {𝑐 1 , 𝑐 2 , 𝑐 3 , 𝑐 4 }.\nHere 𝑐 𝑢 1 = {𝑖 1 , 𝑖 3 } due to that 𝑖 1 and 𝑖 3 belong to the same cluster. We obtain the representation of one interest point by pooling over the embedding of its contained items:\nh 𝑢 𝑚 = Aggregate e 𝑖 𝑥 , ∀𝑖 𝑥 ∈ 𝑐 𝑚 ,(8)\nwhere 𝑐 𝑚 refers to the 𝑚-th interest point, e 𝑖 𝑥 denotes the embedding of behavior item 𝑖 𝑥 and Aggregate(•) is an aggregation function, which is sum pooling in this paper. Then, we perform multi-head attention among the top-M interest groups to obtain the representation of macro-level interests. Formally, the formulation of one single-head attention can be written as\nAtt(𝑸 𝑚 , 𝑲 𝑚 , 𝑽 𝑚 ) = Softmax 𝛼𝑸 𝑚 𝑲 𝑇 𝑚 𝑽 𝑚 ,(9)\nwhere 𝑸 𝑚 = h 𝑢 𝑚 𝑾 𝑄 , 𝑲 = h 𝑢 𝑚 𝑾 𝐾 , and 𝑽 = h 𝑢 𝑚 𝑾 𝑉 are the linear transformations applied to the representation of interest group h 𝑢 𝑚 . The scaling factor 𝛼 is usually set to be 1/ √ 𝑑 with 𝑑 being the dimension of the embedding vector. Then, the representation of macro-level user interest via multi-head attention is\nHead 𝑚 = Att(𝑸, 𝑲 𝑚 , 𝑽 𝑚 ),(10a)\nh 𝑢 macro = Concat(Head 1 , • • • , Head 𝑚 )𝑾 𝑂 ,(10b)\nwhere Concat(•) denotes the concatenation of embedding vectors and 𝑾 𝑶 denotes the linear projection matrix and scales with the number of used heads.\nMicro-Level User Interest. User's micro-level interests are usually more dynamic and more concrete than macro-level interests. Therefore, we directly perform multi-head attention towards the individual behavior items, instead of clusters, to obtain the representation of micro-level user interests. Noticeably, we inject the time decay corresponding to each behavior item into the embedding to describe the freshness of this aspect of interest. 
To be more specific, the expanded embedding of each behavior item can be written as\nẽ𝑖 𝑥 = Concat{𝒆 𝑖 𝑥 , 𝒕 𝑖 𝑥 }, ∀𝑖 𝑥 ∈ 𝐼 𝑢 micro ,(11)\nwhere 𝐼 𝑢 micro denotes the set of individual behavior items and 𝒕 𝑖 𝑥 is a learnable embedding that represents the time interval from the interaction time till now. Then, we obtain the representation of the user's micro-level interest in the target item as\nHead 𝑖 = Att(𝑸 𝑖 , 𝑲 𝑖 , 𝑽 𝑖 ),(12a)\nh micro = Concat(Head 1 , • • • , Head ℎ )𝑾 𝑂 ,(12b)\nwhere 𝑸, 𝑲 , and 𝑽 follows similar definition as in ( 9) but replace the embedding of interest group h 𝑢 𝑚 with the embedding of each individual behavior item ẽ𝑖 𝑥 ." }, { "figure_ref": [], "heading": "Context-Aware Accuracy Estimation", "publication_ref": [ "b18" ], "table_ref": [], "text": "In this section, we propose the CAE model to refine the pointwise accuracy scores produced by models in the ranking stage into context-aware accuracy scores for the measurement of Acc(𝑢, 𝑆). The proposed model only performs a linear transformation on the embedding vectors such that it is low-cost for online inference. A brief workflow of CAE is presented in Figure . 2. CAE maintains two embeddings to describe the context information. First, when determining the 𝑘-th item in a page, we represent the context of previous reviewed items as\nh prev = Aggregate (e 𝑖 , 𝑖 ∈ [𝑘 -1]) ,(13)\nwhere\n[𝑘 -1] = {1, 2, • • • , 𝑘 -1}.\nSecond, we represent the context of all candidate items to reflect the overall tendency from ranking models, which can be written as\nh cand = Aggregate (e 𝑖 , 𝑖 ∈ [𝑁 ]) .(14)\nNext, we model the influence from the context of previous items and candidate items towards the target item based on an excitation mechanism proposed in SENet [19]. Taking the context of previous items as an example, we obtain its excited representation as\nW prev = 𝜎 (W 2 • 𝛿 (W 1 • h prev )),(15)\nwhere 𝜎 (•) is the sigmoid activation function, 𝛿 (•) is the Relu function, W 1 and W 2 are the linear transformation matrices. This excitation operator can be understood as a low-cost attention mechanism to extract the key information embedded in the context vector h prev . Then, we multiply W prev with the target item embedding to emphasize the influence from the context of previous items:\n𝒉 𝑖 𝑡 prev = W prev ⊗ 𝒆 𝑖 𝑡 ,(16)\nwhere ⊗ denotes the dot product operation. \nŶ𝐼𝐼 (𝑢, 𝑖) = Softmax MLP h 𝑢 all ,(17b)\nThis CAE model can be trained with the commonly used crossentropy loss as in other ranking models." }, { "figure_ref": [ "fig_3" ], "heading": "Perception-Aware Diversity Kernel", "publication_ref": [ "b4", "b38", "b8", "b20", "b26", "b43", "b38", "b15" ], "table_ref": [], "text": "This section introduces the design of diversity kernel 𝑫 𝑢 𝑆 in (5). In general, the diversity kernel determines how to evaluate the similarity between any given pairs of items in set 𝑆. The elements of 𝑫 𝑢 𝑆 determines the log(𝑑 2 𝑖 ) term in (6d). Different definitions of the diversity kernel lead to disparate diversification results. In this paper, we introduce the user's multi-scale interests obtained in Sec. 3.3 into the measurement of item similarity. 
This connects diversity measurement with the user's personal perception of diversity due to distinct interests.\nSpecifically, we define an elementary kernel based on the form of SE kernel [39] for the perception on macro-level interests as\n𝐷 𝑢 macro (𝑖, 𝑗 |𝐸 𝑢 ) = 𝑎 2 𝑙 exp - 𝒉 𝑢 𝑖,macro ⊗ 𝒉 𝑢 𝑗,macro 𝑏 2 𝑙 ,(18)\nwhere 𝑎 2 𝑙 is the magnitude of the correlated components, 𝑏 𝑙 is its length scale, and 𝒉 𝑢 𝑖,macro refers to the dot product between the item embedding 𝒆 𝑖 and the macro-level interest vector 𝒉 𝑢 macro . Similar goes for 𝒉 𝑢 𝑗,macro . The kernel for the perception on micro-level interests can be defined as\n𝐷 𝑢 micro (𝑖, 𝑗 |𝐸 𝑢 ) = 𝑎 2 𝑠 exp - 𝒉 𝑢 𝑖,micro ⊗ 𝒉 𝑢 𝑗,micro 𝑏 2 𝑠 ,(19)\nwhere 𝑎 2 𝑠 and 𝑏 𝑠 are hyper-parameters for micro-level interests. We also define another elementary kernel that directly compares the similarity between items based on their embeddings. In this case, the item-level diversity used in existing literature [9,21,27,44] can be treated as a special case of PDK. In particular, this kernel can be defined as\n𝐷 item (𝑖, 𝑗 |𝐸 𝑢 ) = 𝑎 2 𝑠 exp - 𝒆 𝑢 𝑖 ⊗ 𝒆 𝑢 𝑗 𝑏 2 𝑠 ,(20)\nThese elementary kernels can be merged into a composite kernel without influencing the kernel properties via addition and multiplication operations [39]. More complicated operations such as automatic kernel learning are also worth trying for better adaptivity and full automation of a system, e.g., deep DPP [16], which can be explored in the future. In this work, we adopt the addition operation to construct this composite kernel:\n𝐷 𝑢 (𝑖,𝑗 |𝐸 𝑢 ) = 𝐷 𝑢 item (𝑖,𝑗 |𝐸 𝑢 ) + 𝛽 1 •𝐷 𝑢 macro (𝑖,𝑗 |𝐸 𝑢 ) + 𝛽 2 •𝐷 𝑢 micro (𝑖,𝑗 |𝐸 𝑢 ),(21)\nwhere 𝛽 1 and 𝛽 2 are the hyper-parameters to control the influence from macro-level and micro-level interest to the diversification results.\nWe give an example in Figure . 3 to show the change of diversity measurement when adding user interests into the kernel. Figure . 3(a) shows the item similarity in 𝐷 𝑢 item (𝑖,𝑗 |𝐸 𝑢 ) which only considering the distance of item embedding. Figure . 3(b) shows the item similarity after adding the macro-level interests 𝐷 𝑢 macro (𝑖,𝑗 |𝐸 𝑢 ) into the kernel. Figure . 3(c) shows the item similarity of the complete kernel 𝐷 𝑢 (𝑖,𝑗 |𝐸 𝑢 ). It is clear that part of dissimilar items transforms into similar items due to the consideration of user interests, and vice versa. In this way, the similarity values of the same set of items are different for distinct users, thereby leading to perception-aware diversification." }, { "figure_ref": [ "fig_5" ], "heading": "Online Implementation", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the online implementation of MPAD in the Homepage Feed of Taobao Mobile App. The presented system architecture is able to handle 120, 000 QPS at traffic peak and respond within 20 milliseconds in general. It now serves the main The architecture to implement the proposed MPAD model in Taobao is presented in Fig. 4, including the workflow of both offline training and online serving. The offline training is based on a distributed machine learning platform. The learned embedding and item clustering results are uploaded to the feature center for online serving. The re-ranking service retrieves user and item features from the feature center in real-time and feeds them into a series of models to determine the final item list. Note that the graph clustering in MIE only performs offline to reduce online complexity." 
}, { "figure_ref": [], "heading": "Online Platform", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Updates Report Top-K Final List", "publication_ref": [ "b8" ], "table_ref": [], "text": "The online inference complexity consists of three parts. First, each element in PDK requires the computation of dot product between two embedding vectors which incurs a complexity of O (𝑑) where 𝑑 is the length of embedding, such that the overall complexity of computing PDK scales as O (𝑑𝑁 2 ), where 𝑁 is the number of candidate items. Note that the dot product of embedding vectors between hot items and active users can be pre-computed offline to save a lot of computations. Second, the accuracy estimation in CAE only involves linear transformation over embedding vectors in the exciting mechanism and the final MLP layer. As such, the complexity scales linearly with the length of embedding vectors, i.e., O (𝑑𝑁 ) where we assume the length of each input embedding is 𝑑. Third, the BS-DPP runs in the same complexity as standard DPP [9], i.e., 𝑂 (𝐾 3 ) time for unconstrained MAP inference and 𝑂 (𝐾 2 𝑁 ) to return 𝐾 items." }, { "figure_ref": [], "heading": "EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct extensive experiments on both offline datasets and real-world online RS with the goal to answer the following research questions. Q1: Does MPAD outperform other SOTA methods in terms of accuracy and diversity for feed recommendation? Q2: How do different components of MPAD influence the final performance? Q3: How does MPAD perform in real-world feed recommendation platforms?" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b30", "b37", "b10", "b16", "b47", "b46", "b33", "b1", "b31", "b4", "b7", "b8", "b22", "b27" ], "table_ref": [], "text": "4.1.1 Datasets. We conduct offline experiments on three public available datasets: MovieLens dataset2 , Wechat dateset3 , and Taobao dataset 4 . Specifically, MovieLens dataset is a widely-used benchmark dataset for movie recommendations, which contains 10 million samples. Here we propose it for easy reproduction. Wechat dataset is collected from 7.3 million of video playback logs on Wechat Mobile App. It is one of the largest mobile social applications in China. The dataset involves 20, 000 users and 96, 564 videos. The label is marked as positive if the user has watched more than 90% playback progress of a video. Taobao dataset is a widely used public benchmark dataset for online advertising, which contains over 100 million ad display/click logs collected from Taobao Mobile App. It is one of the largest online merchandise applications in China. The logs involve 1 million users and 800 thousand of ads collected on Taobao Mobile App.\n4.1.2 Comparing Methods. We compare MPAD with both pointwise and list-wise mainstream methods for recommendation tasks.\nPoint-wise baselines: we compare with four commonly used point-wise baselines, i.e., the shallow model based on linear regression (LR) [31], the PNN model [38] which performs feature interaction with different product operations, the Wide & Deep learning model (WDL) [11] and DeepFM model [17] which adopt a hierarchical structure consists of linear and deep layers. 
We also compare with a few representative user interest models, i.e., DIN [48] which models short user behavior sequences with the target attention mechanism; DIEN [47] which uses an interest extraction layer based on Gated Recurrent Unit (GRU) to model users' temporal drifting interest; SIM [34] which models user's full behavior sequence based on a two-stage paradigm.\nList-wise baselines: we compare with three representative listwise baselines, i.e., DLCM [2] which applies GRU to encode the input ranking list, accompanied with a global vector to learn a powerful scoring function for list-wise re-ranking; PRM [32] which uses the self-attention mechanism to capture the mutual influence among items in the input ranking list; Seq2Slate [5] which adopts RNN and pointer network to encode the previous selected items when selecting the most appropriate item for next step. We compare with the statistical models, i.e., maximal marginal relevance (MMR) [8] and fast DPP [9]. Both of them have a tunable parameter to balance accuracy and diversity, similar to MPAD. We also compare with the generative-based models which directly generate item lists as the final results, including ListCVAE [23] and PivotCVAE [28]." }, { "figure_ref": [], "heading": "Metrics.", "publication_ref": [ "b21", "b44" ], "table_ref": [], "text": "For accuracy estimation, we use the commonly used Area Under ROC (AUC) and Logloss (cross entropy) to evaluate the point-wise estimation performance; and use normalized discounted cumulative gain (nDCG) [22] and mean average precision (MAP) to measure the list-wise estimation performance. nDCG@K or MAP@K refers to the performance of top-k recommended items in the return list. For list-wise diversity, we use intra-list average distance (ILAD) [45] to evaluate the richness of diversified items in GMV is a term used in online retailing to indicate the total sales monetary value for merchandise sold over a certain period of time. We use the time period of a complete day for all online metrics in this paper." }, { "figure_ref": [], "heading": "Parameter Settings.", "publication_ref": [], "table_ref": [], "text": "In all experiments, we use the validation set to tune the hyper-parameters to generate the best performance for different methods. The learning rate is searched from 10 -4 to 10 -2 . The L2 regularization term is searched from 10 -4 to 1. All models use Adam as the optimizer. We extract micro-level interests from the user's recent 100, 50, and 20 behavior items for Movie-Lens, WeChat, and Taobao, respectively. For macro-level interests, we group all items into 20, 241, and 6769 clusters for MovieLens, WeChat, and Taobao, respectively. We assign each user's recent behavior to these clusters, and we select the top-5 interest groups to compute their macro-level interests." }, { "figure_ref": [ "fig_7", "fig_7", "fig_7" ], "heading": "Offline Evaluation", "publication_ref": [], "table_ref": [ "tab_3", "tab_4" ], "text": "This section compares the experimental results of MPAD and other baselines on offline datasets to answer Q1 and Q2. First, we verify the effectiveness of interest modeling in MIE. For all datasets, we train MPAD and other competing methods using the same user behavior sequences. The interest modeling techniques differ from each other among the comparing methods. In particular, MPAD makes use of the cluster-based interest model proposed in Sec. 3.3. 
LR, DeepFM, and WDL treat user behavior sequences as raw features and directly feed them into linear/MLP layers for feature crossing. DIN and DIEN adopt TA/GRU units to model short-term user interests. SIM introduces an additional retrieval layer to select top-𝑘 items from user's full behavior sequences to model the lifelong user interest. As shown in Table 1, the results verify that MIE outperforms other user interest models remarkably, in terms of both AUC and Logloss. This indicates that MIE is more robust to the disturbing noise hidden in the raw item-level features within behavior sequences that may undermine learning performance. Next, we compare the performance of accuracy estimation among all point-wise and list-wise ranking methods. For MPAD, we only activate MIE and CAE components for this experiment. As shown in Table 2, the point-wise baselines achieve generally worse performance than the list-wise baselines on all datasets. This verifies that the mutual influence among the input ranking list incurs a great impact on list-wise recommendation. Therefore, it is of vital importance to consider the influence of browsing context in feed recommendations. Moreover, our proposed MPAD consistently yields the best performance on all datasets in terms of both NDCG and MAP. This verifies that MPAD has a superior capability to model the contextual influence among consecutive items, due to the modeling of browsing context and the user's multi-scale interests. Now we examine the capability to balance item accuracy and diversity in MPAD. We activate all components in MPAD for this experiment. As shown in Figure 5, when decreasing the parameter 𝛼 in (6d), ILAD decreases monotonously while nDCG increases at first and then decrease a bit. When 𝛼 = 0, MPAD directly returns items with the highest accuracy scores, regardless of the item diversity.\nThe results indicate that it is critical to introduce a proper amount of diversity into the item list to improve the joint utility of accuracy and diversity for feed recommendation. Then, we compare MPAD with MMR and fastDPP. The tunable parameters of all methods are chosen such that different algorithms have approximately the same range of nDCG. The result in Figure 5 shows that, among all comparing methods, our proposed MPAD exhibits the best item accuracy-diversity trade-off performance. This is probably due to the superior performance of accuracy and diversity estimation from MIE, CAE, and PDK. It is also noteworthy that each curve in Figure 5 has an inflection point, corresponding to the optimal balance of accuracy and diversity. In practical applications, the parameter 𝛼 should be tuned to reach such an optimal status to deliver the best experience for customers." }, { "figure_ref": [ "fig_1" ], "heading": "Online Evaluation", "publication_ref": [ "b8" ], "table_ref": [ "tab_5" ], "text": "MPAD has been fully deployed in the homepage feed of Taobao named Guess-you-like to serve the main traffic. In general, Guessyou-like one of the largest merchandise feed recommendation platform in China, which serves more than hundreds of millions of users towards billions of items every day. In Guess-you-like, users can slide to browse and interact with endless items in a sequential manner, as shown in Figure 1. We deploy MPAD at the re-ranking stage in Guess-you-like platform, which takes hundreds of candidate items from the ranking stage as input and outputs a fixed-size item list to form a new page. 
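As a reference point for the diversity numbers reported above, the snippet below shows one common way to compute the intra-list average distance (ILAD) of a returned list from its item embeddings. This is our own illustrative implementation; in particular, using 1 - cosine similarity as the pairwise distance is an assumption, not a detail taken from the paper.

```python
import numpy as np

def ilad(list_emb: np.ndarray) -> float:
    """Intra-list average distance (ILAD) for a returned list of K items.

    list_emb : (K, d) embeddings of the recommended items.
    The pairwise distance is taken as 1 - cosine similarity here,
    which is one common choice rather than the paper's exact definition.
    """
    K = list_emb.shape[0]
    if K < 2:
        return 0.0
    normed = list_emb / np.linalg.norm(list_emb, axis=1, keepdims=True)
    sim = normed @ normed.T            # pairwise cosine similarities
    dist = 1.0 - sim                   # diagonal is ~0 for normalized vectors
    # Average over the K*(K-1) ordered pairs with i != j.
    return float((dist.sum() - np.trace(dist)) / (K * (K - 1)))
```

Sweeping the trade-off weight alpha in Eq. (6d) and plotting nDCG against ILAD computed this way yields trade-off curves of the kind discussed around Figure 5.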
The online performance is compared against the fast DPP method and a heuristic method. Specifically, the fast DPP method uses point-wise ranking scores and item embedding vectors from ranking models as input, similar to [9]. The heuristic method adjusts the item order according to a series of heuristic rules predefined with expert knowledge, e.g., no more than two items within the same category on one screen. It is a commonly used diversification strategy in industrial applications.\nThe performance in Table 3 is averaged over two consecutive weeks. We have the following observations. Compared with the heuristic method, first, MPAD achieves a performance improvement of 2.38% for CLICK, 0.62% for CTR, and 0.48% for GMV, indicating that our framework is able to increase the user's willingness to interact with the items. The less improvement on GMV is due to that we mainly optimize MPAD towards the CLICK goal to be consistent with the business orientation. It is noteworthy that 1% improvement is a considerable enhancement in real-world RS, especially for applications with billion-scale users and items.\nIn Guess-you-like, 1% improvement on CLICK brings millions of clicks every day. Second, the Category Breadth per page increases by around 4% at the same time, which verifies that MPAD is able to promote diversity in the recommended items as well as accuracy. Third, the Stay Time increases by 1.95% and the PV increases by 1.29%, which indicates that MPAD can attract users to stay at the platform. MPAD also outperforms fastDPP in all the above metrics. All these improvements verify that MPAD is able to enhance both the item accuracy and diversity in the recommendation results and well balance their trade-off to attract users in feed recommendation. For example, the number of clothing items decreases from four in (b) to only two in (c), and their distance is greater. This example qualitatively illustrates the effectiveness of MPAD in delivering perception-aware diversification services based on user interests." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b10", "b47", "b32", "b1", "b48", "b4", "b31", "b19", "b42", "b13", "b7", "b11", "b41", "b23", "b8", "b14", "b29", "b20", "b47", "b46", "b25", "b39", "b32", "b33", "b35" ], "table_ref": [], "text": "Re-ranking Methods. Traditional point-wise ranking models focus on predicting the interaction label between any given user-item pairs, e.g., Wide&Deep [11], DIN [48] and SIM [33], regardless of the context information in a full recommendation list. However, in feed products, the mutual influence between items exhibits a great influence on user behaviors since users are reviewing items in a sequential manner. Recent works on re-ranking propose to consider the mutual influence between items in a list-wise manner, which includes three main research lines, i.e., RNN-based methods, attention-based methods, and evaluator-generator-based methods. Specifically, the RNN-based methods model the mutual influence based on RNN structures. DLCM [2] uses gated recurrent units (GRU) to sequentially encode the top-ranked items with their feature vectors. MiDNN [49] applies the long-short term memory (LSTM), with a global feature extension method to capture crossitem influences. Seq2Slate [5] extends MiDNN by adopting a more flexible pointer network to solve the re-ranking problem.\nAttention-based methods use self-attention to model item interactions without RNN's sequential structure. 
PRM [32] uses pretrained embedding to extract item interactions and generate listwise predictions with self-attention blocks and position encoding. PFRN [20] uses Listwise Feature Encoding for context-aware item interaction modeling with multi-head self-attention and relative position representation. Evaluator-generator methods use a generator to generate permutations and an evaluator to determine optimal permutation, e.g., SEG [43] and GRN [14]. These re-ranking models mainly focus on improving recommendation accuracy instead of a joint utility of both accuracy and diversity. Diversity Methods. It has been widely acknowledged in diversified recommendation methods that accuracy should not be the only goal of recommendation tasks since it may lead to a return of highly similar items to harm user's satisfaction with the recommendation results [1, 3, 6-9, 21, 27, 37, 41, 44, 46]. Research on diversification includes three main streams. The first stream of methods adopts heuristics rules to deal with item order in a post-processing manner. The representative work is maximal marginal relevance (MMR) [8], which represents relevance and diversity with independent metrics and maximizes the marginal relevance with a trade-off parameter. Other greedy heuristics methods vary in the definition of this marginal relevance [3, 6-8, 37, 41]. The second stream of methods treats diversified recommendation as an end-to-end learning task. DCF [12] proposes to solve the coupled parameterized matrix factorization and structural learning problems based on collaborative filtering. BGCF [42] applies bayesian graph convolutional neural networks to model the uncertainty between user-item and bring diversity into recommendation indirectly. DSSA [24] adopts the attention mechanism to determine the importance of the undercovered subtopics, where the relevance and the diversity are jointly estimated with subtopic attention. The third stream of methods is based on statistical models. The representative is the determinantal point process (DPP) which measures set diversity by describing the probability for all subsets of the item set. The maximum a posteriori (MAP) in DPP to generate diverse lists is NP-hard, such that many related works focus on the approximation of DPP for low-complex iterates. For example, Fast DPP [9] proposes a greedy approximation to accelerate the MAP inference for DPP. This fast DPP method also inspires many follow-ups to improve diversity in different recommendation tasks [15,30]. Meanwhile, SSD [21] proposed a time series analysis technique to include out-of-window items into the measurement of diversity to increase the diversity of a long recommendation sequence and alleviate the long tail effect as well.\nUser Interest Modeling. Researchers are capturing shifting user interests by modeling behavior sequences. For example, DIN [48] uses TA to capture user diversity, DIEN [47] uses GRU for drifting temporal interest, and MIND [26] uses multi-vectors for dynamic interests. These models focus on short sequences (<100). For long sequences, memory-based methods such as HPMN [40] and MIMN [33] use memory networks to model diverse user interests, while two-stage methods such as SIM [34] and UBR4CTR [36] train retrieval and CTR models separately. In the first stage, the retrieval model retrieves the top-𝑘 relevant items from long user behavior sequences and stores the subsequence in an offline database. 
Then, in the second stage, the CTR model retrieves the top-𝑘 relevant items directly from the offline database to reduce complexity during learning. These models mainly focus on CTR tasks with the goal of maximizing accuracy. Their successes in CTR prediction inspire us to extract user interests from both long and short behavior sequences." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a general re-ranking framework named MPAD for practical feed recommendation. A series of collaborative models are proposed to sequentially evaluate the accuracy and diversity of different items in a list and to generate an optimal item list by maximizing the joint utility of accuracy and diversity of the entire list. Both online and offline experiments verified the effectiveness of the proposed framework." }, { "figure_ref": [], "heading": "A DERIVATION OF ITEM SELECTION", "publication_ref": [ "b21", "b23", "b8", "b26" ], "table_ref": [], "text": "The composite kernel $\mathbf{D}^u_S$ is a PSD matrix since it is an addition of multiple PSD elementary kernels. The Cholesky decomposition of $\mathbf{D}^u_S$ can be written as $\mathbf{D}^u_S = \mathbf{V}\mathbf{V}^\top$, where $\mathbf{V} \in \mathbb{R}^{k \times k}$ is an invertible lower triangular matrix. For any $i \in I \setminus S$, the Cholesky decomposition of $\mathbf{D}^u_{S\cup\{i\}}$ can be represented as
$$\mathbf{D}^u_{S\cup\{i\}} = \begin{bmatrix} \mathbf{D}^u_S & \mathbf{D}^u_{S,i} \\ \mathbf{D}^u_{i,S} & \mathbf{D}^u_{ii} \end{bmatrix} = \begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{c}_i & d_i \end{bmatrix} \begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{c}_i & d_i \end{bmatrix}^\top, \tag{22}$$
where the row vector $\mathbf{c}_i$ and the scalar $d_i \geq 0$ satisfy
$$\mathbf{V}\mathbf{c}_i^\top = \mathbf{D}^u_{S,i}, \tag{23a}$$
$$d_i^2 = \mathbf{D}^u_{ii} - \|\mathbf{c}_i\|_2^2. \tag{23b}$$
According to (22), we have
$$\det \mathbf{D}^u_{S\cup\{i\}} = \det(\mathbf{V}\mathbf{V}^\top)\, d_i^2 = \det(\mathbf{D}^u_S)\, d_i^2. \tag{24}$$
Combining (6c) with (24), we obtain
$$j = \arg\max_{i \in I \setminus S}\; g(u, i \mid S) + \alpha \cdot \log(d_i^2). \tag{25}$$
We follow [9] to derive the update of $\log(d_i^2)$ as follows. The Cholesky decomposition of $\mathbf{D}^u_{S\cup\{j\}}$ can be written as
$$\mathbf{D}^u_{S\cup\{j\}} = \begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{c}_j & d_j \end{bmatrix} \begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{c}_j & d_j \end{bmatrix}^\top. \tag{26}$$
Define $\mathbf{c}'_i$ and $d'_i$ as the new vector and scalar of $i \in I \setminus (S \cup \{j\})$ after adding item $j$ into $S$. According to (23a) and (26), we have
$$\begin{bmatrix} \mathbf{V} & \mathbf{0} \\ \mathbf{c}_j & d_j \end{bmatrix} \mathbf{c}'^\top_i = \mathbf{D}^u_{S\cup\{j\},i} = \begin{bmatrix} \mathbf{D}^u_{S,i} \\ \mathbf{D}^u_{ji} \end{bmatrix}. \tag{27}$$
Combining (27) with Eq. (23a), we have
$$\mathbf{c}'_i = \begin{bmatrix} \mathbf{c}_i & e_i \end{bmatrix}, \qquad e_i = \frac{\mathbf{D}^u_{ji} - \langle \mathbf{c}_j, \mathbf{c}_i \rangle}{d_j}. \tag{28}$$
Then (23b) implies
$$d'^2_i = \mathbf{D}^u_{ii} - \|\mathbf{c}'_i\|_2^2 = d_i^2 - e_i^2. \tag{29}$$" } ]
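The derivation above fully specifies the greedy loop, so a compact sketch may help connect Eqs. (25), (28), and (29) to code. The NumPy implementation below is a simplified, single-user illustration written for this text rather than the deployed BS-DPP service; in particular, it treats the accuracy scores g(u, i|S) as a fixed vector, whereas the full method refreshes them with CAE as the browsing context grows.

```python
import numpy as np

def greedy_select(D: np.ndarray, acc: np.ndarray, K: int, alpha: float = 0.7):
    """Greedy list generation following Eq. (6d) / Eq. (25).

    D     : (N, N) perception-aware diversity kernel D^u.
    acc   : (N,)   accuracy scores g(u, i|S); kept fixed here for brevity,
                   although CAE would refresh them after every pick.
    K     : target list length.
    alpha : accuracy/diversity trade-off weight.
    """
    N = D.shape[0]
    eps = 1e-10
    cis = np.zeros((K, N))                      # rows of the growing vectors c_i
    d2 = np.array(np.diag(D), dtype=float)      # d_i^2, initialised to D_ii (S empty)
    selected = []
    for step in range(K):
        scores = acc + alpha * np.log(np.maximum(d2, eps))
        scores[selected] = -np.inf              # never pick the same item twice
        j = int(np.argmax(scores))
        selected.append(j)
        if step == K - 1:
            break
        # Rank-one Cholesky update, Eqs. (28)-(29):
        # e_i = (D_ji - <c_j, c_i>) / d_j,  c_i <- [c_i e_i],  d_i^2 <- d_i^2 - e_i^2
        dj = np.sqrt(max(d2[j], eps))
        e = (D[j, :] - cis[:step, j] @ cis[:step, :]) / dj
        cis[step, :] = e
        d2 = d2 - e ** 2
    return selected
```

Each iteration costs O(KN) for the rank-one update, which gives the O(K^2 N) overall complexity quoted in the online-implementation section.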
Feed recommendation systems, which recommend a sequence of items for users to browse and interact with, have gained significant popularity in practical applications. In feed products, users tend to browse a large number of items in succession, so the previously viewed items have a significant impact on users' behavior towards the following items. Therefore, traditional methods that mainly focus on improving the accuracy of recommended items are suboptimal for feed recommendations because they may recommend highly similar items. For feed recommendation, it is crucial to consider both the accuracy and diversity of the recommended item sequences in order to satisfy users' evolving interest when consecutively viewing items. To this end, this work proposes a general re-ranking framework named Multi-factor Sequential Re-ranking with Perception-Aware Diversification (MPAD) to jointly optimize accuracy and diversity for feed recommendation in a sequential manner. Specifically, MPAD first extracts users' different scales of interests from their behavior sequences through graph clusteringbased aggregations. Then, MPAD proposes two sub-models to respectively evaluate the accuracy and diversity of a given item by capturing users' evolving interest due to the ever-changing context and users' personal perception of diversity from an item sequence perspective. This is consistent with the browsing nature of the feed scenario. Finally, MPAD generates the return list by sequentially selecting optimal items from the candidate set to maximize the joint benefits of accuracy and diversity of the entire list. MPAD has been implemented in Taobao's homepage feed to serve the main traffic
Multi-factor Sequential Re-ranking with Perception-Aware Diversification
[ { "figure_caption": "arXiv:2305.12420v1 [cs.IR] 21 May 2023 (a) Channel blocks (b) Feed recommendation", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: An example of different forms of real-world RS. Left: Traditional RS recommends items in different channel blocks. Right: Feed RS recommends a sequence of items where users can slide down to view more items.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An overview of the MPAD framework.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Influence of adding multi-scale user interests to the diversity measurement. (a) item-level similarity; (b) item-level similarity with macro-level interests; (c) itemlevel similarity with macro-level and micro-level interests.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The System Architecture for Online Deployment", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(b) Comparison of trade-off performances.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Trade-off between accuracy and diversity.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: An example of personalized diversification.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "The same goes for the excited vector for the context of candidate items. Moreover, by replacing 𝒆 𝑖 𝑡 in(16) with the macro-level user interest h 𝑢 macro and the micro-level user interest h 𝑢 micro , we obtain another four excited vectors, i.e., h 𝑢 pr,lo , h 𝑢 pr,sh , h 𝑢 ca,lo , and h 𝑢 ca,sh , to model the drift of user interest based on the list-wise context. Finally, we concatenate all excited embeddings together and feed it into an MLP layer with softmax function to get the output scores: h 𝑢 all = Concat e 𝑖 𝑡 , 𝒉 𝑖 𝑡 prev , 𝒉 𝑖 𝑡", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of user interest modeling (bold: best; underline: runner-up). The marker * denotes that our model performs significantly better than the runner-up with 𝑝 < 0.01 over 25 runs. a page. Moreover, we use PV, Stay Time, Category Breadth, CLICK, CTR, and GMV to evaluate online performance. Here, PV refers to the total number of browsed items, Stay Time is the average browsing time of all users, and Category Breadth computes the average number of distinct categories of all exposed items on all pages, reflecting the diversity of recommendation results. 
CLICK refers to the total number of clicked items, CTR equals CLICK/PV which measures users' willingness to click.", "figure_data": "MethodMovieLensWeChatTaobaoAUC (↑) Logloss (↓) AUC (↑ ) Logloss (↓) AUC (↑ ) Logloss (↓)LR0.73370.62540.64330.66560.57250.1930PNN0.78360.55970.69760.62950.63090.1896WDL0.78830.55840.69680.62950.63160.1894DeepFM 0.78940.55710.69790.62900.63150.1898DIN0.80240.53940.69510.63190.63330.1896DIEN0.80280.53440.69940.62900.63240.1902SIM0.80230.53490.70060.63090.63120.1893Ours0.8056 *0.5305 *0.7014 *0.6279 *0.6361 *0.1884", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of item quality in the item list (bold: best; underline: runner-up). The marker * denotes that our model performs significantly better than the runner-up with 𝑝 < 0.01 over 25 runs.", "figure_data": "DatasetModelNDCG@3 NDCG@10 MAP@3 MAP@10DIN0.90170.92300.87600.8893DIEN0.90310.92390.87750.8900SIM0.90040.92190.87470.8879MovieLensSeq2Slate DLCM0.9098 0.90950.9295 0.92930.8863 0.88570.8978 0.8976PRM0.91020.92960.88650.8981ListCVAE0.83490.88000.81540.8632PivotCVAE0.86080.89280.84230.8775Ours0.9148 *0.9332 *0.8918 *0.9027 *DIN0.68110.72000.60840.5935DIEN0.69700.73100.62560.6075SIM0.69760.73290.62720.6100WeChatSeq2Slate DLCM0.7001 0.70290.7342 0.73730.6309 0.63180.6148 0.6164PRM0.70010.73630.62800.6152ListCVAE0.57100.65330.49750.5333PivotCVAE0.57380.65470.49930.5362Ours0.7095 *0.7419 *0.6398 *0.6216 *DIN0.20170.31720.16540.2227DIEN0.20030.31550.16310.2202SIM0.20060.32000.16450.2237TaobaoSeq2Slate DLCM0.2093 0.21150.3294 0.33080.1728 0.17490.2326 0.2337PRM0.21180.33030.17500.2335ListCVAE0.17670.30760.15230.2193PivotCVAE0.17850.31200.15120.2224Ours0.2166 *0.3339 *0.1799 *0.2372 *TaobaoTaobaoTaobao0.1700.07500.38MPADNDCG0.160 0.165MAP0.0700 0.0725ILAD0.36 0.370.155MPAD0.0675MPAD0.350.25 0.50 0.75 1.000.25 0.50 0.75 1.000.25 0.50 0.75 1.00ααα(a) Impact of parameter 𝛼 on Taobao dataset.0.380MovieLensWeChat0.38Taobao0.3750.46ILAD0.370 0.365FastDPP MMR MPADILAD0.44FastDPP MMR MPADILAD0.36 0.34FastDPP MMR MPAD0.7050.7100.680.690.700.160.17NDCGNDCGNDCG", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of online A/B tests in TaoBao App.", "figure_data": "PVBreadth Stay Time CLICKCTRGMVvs Heuristic +1.29% +4.02%+1.95%+2.38% +1.07% +1.29%vs fastDPP +0.10% +1.46%+1.41%+1.77% +1.67% +0.27%", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "4.3.1 Case Study. In Figure. 6, we present one case to illustrate how MPAD diversifies items to suit personal interests. We sample a female customer who recently clicked a series of clothing and dressing items, which indicate her browsing interests. Figure. 6(a) presents the diversified results based on heuristic rules which are universal for all users. It is clear that a few less relevant items appear in the recommendation result, such as sports shoes and down cloth. Figure. 6(b) shows the results obtained by MPAD, where the recommended items are all relevant to the clicked items and are well-spaced to avoid presenting similar items in a row. Figure. 6(c) shows the results of adjusting the parameter 𝛼 in MPAD to increase diversity. The items are now more proportioned than those in (b).", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
Yue Xu; Hao Chen; Zefan Wang; Jianwen Yin; Qijie Shen; Dimin Wang; Feiran Huang; Tao Zhuang; Xia Hu
[ { "authors": "Mustafa Abdool; Malay Haldar; Prashant Ramanathan; Tyler Sax; Lanbo Zhang; Aamir Manaswala; Lynn Yang; Bradley Turnbull; Qing Zhang; Thomas Legrand", "journal": "", "ref_id": "b0", "title": "Managing diversity in airbnb search", "year": "2020" }, { "authors": "Qingyao Ai; Keping Bi; Jiafeng Guo; Bruce Croft", "journal": "", "ref_id": "b1", "title": "Learning a deep listwise context model for ranking refinement", "year": "2018" }, { "authors": "Azin Ashkan; Branislav Kveton; Shlomo Berkovsky; Zheng Wen", "journal": "", "ref_id": "b2", "title": "Optimal greedy diversity for recommendation", "year": "2015" }, { "authors": "J Michael; Barber", "journal": "Physical Review E", "ref_id": "b3", "title": "Modularity and community detection in bipartite networks", "year": "2007" }, { "authors": "Irwan Bello; Sayali Kulkarni; Sagar Jain; Craig Boutilier; Ed Chi; Elad Eban; Xiyang Luo; Alan Mackey; Ofer Meshi", "journal": "", "ref_id": "b4", "title": "Seq2slate: Re-ranking and slate optimization with rnns", "year": "2018" }, { "authors": "Rubi Boim; Tova Milo; Slava Novgorodov", "journal": "", "ref_id": "b5", "title": "Diversification and refinement in collaborative filtering recommender", "year": "2011" }, { "authors": "Allan Borodin; Hyun Chul Lee; Yuli Ye", "journal": "", "ref_id": "b6", "title": "Max-sum diversification, monotone submodular functions and dynamic updates", "year": "2012" }, { "authors": "Jaime Carbonell; Jade Goldstein", "journal": "", "ref_id": "b7", "title": "The use of mmr, diversity-based reranking for reordering documents and producing summaries", "year": "1998" }, { "authors": "Laming Chen; Guoxin Zhang; Eric Zhou", "journal": "", "ref_id": "b8", "title": "Fast greedy map inference for determinantal point process to improve recommendation diversity", "year": "2018" }, { "authors": "Qiwei Chen; Yue Xu; Changhua Pei; Shanshan Lv; Tao Zhuang; Junfeng Ge", "journal": "", "ref_id": "b9", "title": "Efficient long sequential user data modeling for click-through rate prediction", "year": "2022" }, { "authors": "Heng-Tze Cheng; Levent Koc; Jeremiah Harmsen; Tal Shaked; Tushar Chandra; Hrishi Aradhye; Glen Anderson; Greg Corrado; Wei Chai; Mustafa Ispir", "journal": "", "ref_id": "b10", "title": "Wide & deep learning for recommender systems", "year": "2016" }, { "authors": "Peizhe Cheng; Shuaiqiang Wang; Jun Ma; Jiankai Sun; Hui Xiong", "journal": "", "ref_id": "b11", "title": "Learning to recommend accurate and diverse items", "year": "2017" }, { "authors": "Liang Feng; Qianchuan Zhao; Cangqi Zhou", "journal": "Expert Systems with Applications", "ref_id": "b12", "title": "Improving performances of topn recommendations with co-clustering method", "year": "2020" }, { "authors": "Yufei Feng; Binbin Hu; Yu Gong; Fei Sun; Qingwen Liu; Wenwu Ou", "journal": "", "ref_id": "b13", "title": "GRN: Generative rerank network for context-wise recommendation", "year": "2021" }, { "authors": "Lu Gan; Diana Nurbakova; Léa Laporte; Sylvie Calabretto", "journal": "", "ref_id": "b14", "title": "Enhancing recommendation diversity using determinantal point processes on knowledge graphs", "year": "2020" }, { "authors": "Mike Gartrell; Elvis Dohmatob; Jon Alberdi", "journal": "", "ref_id": "b15", "title": "Deep determinantal point processes", "year": "2018" }, { "authors": "Huifeng Guo; Ruiming Tang; Yunming Ye; Zhenguo Li; Xiuqiang He", "journal": "", "ref_id": "b16", "title": "Deepfm: A factorization-machine based neural network for ctr prediction", "year": "2017" }, { "authors": "Qi Hao; 
Tianze Luo; Guangda Huzhang", "journal": "", "ref_id": "b17", "title": "Re-ranking with constraints on diversified exposures for homepage recommender system", "year": "2021" }, { "authors": "Jie Hu; Li Shen; Gang Sun", "journal": "", "ref_id": "b18", "title": "Squeeze-and-excitation networks", "year": "2018" }, { "authors": "Jinhong Huang; Yang Li; Shan Sun; Bufeng Zhang; Jin Huang", "journal": "", "ref_id": "b19", "title": "Personalized flight itinerary ranking at fliggy", "year": "2020" }, { "authors": "Yanhua Huang; Weikun Wang; Lei Zhang; Ruiwen Xu", "journal": "", "ref_id": "b20", "title": "Sliding spectrum decomposition for diversified recommendation", "year": "2021" }, { "authors": "Kalervo Järvelin; Jaana Kekäläinen", "journal": "", "ref_id": "b21", "title": "IR evaluation methods for retrieving highly relevant documents", "year": "2017" }, { "authors": "Ray Jiang; Sven Gowal; Yuqiu Qian; Timothy Mann; Danilo J Rezende", "journal": "", "ref_id": "b22", "title": "Beyond greedy ranking: Slate optimization via list-cvae", "year": "" }, { "authors": "Zhengbao Jiang; Ji-Rong Wen; Zhicheng Dou; Wayne Xin Zhao; Jian-Yun Nie; Ming Yue", "journal": "", "ref_id": "b23", "title": "Learning to diversify search results via subtopic attention", "year": "2017" }, { "authors": "Alex Kulesza; Ben Taskar", "journal": "Foundations and Trends in Machine Learning", "ref_id": "b24", "title": "Determinantal point processes for machine learning", "year": "2012" }, { "authors": "Chao Li; Zhiyuan Liu; Mengmeng Wu; Yuchi Xu; Huan Zhao; Pipei Huang; Guoliang Kang; Qiwei Chen; Wei Li; Dik Lun; Lee ", "journal": "", "ref_id": "b25", "title": "Multi-interest network with dynamic routing for recommendation at tmall", "year": "2019" }, { "authors": "Zihan Lin; Hui Wang; Jingshu Mao; Wayne Xin Zhao; Cheng Wang; Peng Jiang; Ji-Rong Wen", "journal": "", "ref_id": "b26", "title": "Feature-aware diversified re-ranking with disentangled representations for relevant recommendation", "year": "2022" }, { "authors": "Shuchang Liu; Fei Sun; Yingqiang Ge; Changhua Pei; Yongfeng Zhang", "journal": "", "ref_id": "b27", "title": "Variation control and evaluation for generative slate recommendations", "year": "2021" }, { "authors": "Weiwen Liu; Yunjia Xi; Jiarui Qin; Fei Sun; Bo Chen; Weinan Zhang; Rui Zhang; Ruiming Tang", "journal": "", "ref_id": "b28", "title": "Neural re-ranking in multi-stage recommender systems: A review", "year": "2022" }, { "authors": "Yong Liu; Yingtai Xiao; Qiong Wu; Chunyan Miao; Juyong Zhang; Binqiang Zhao; Haihong Tang", "journal": "", "ref_id": "b29", "title": "Diversified interactive recommendation with implicit feedback", "year": "2020" }, { "authors": "Gary H Brendan Mcmahan; David Holt; Michael Sculley; Dietmar Young; Julian Ebner; Lan Grady; Todd Nie; Eugene Phillips; Davydov; Daniel Golovin", "journal": "", "ref_id": "b30", "title": "Ad click prediction: a view from the trenches", "year": "2013" }, { "authors": "Changhua Pei; Yi Zhang; Yongfeng Zhang; Fei Sun; Xiao Lin; Hanxiao Sun; Jian Wu; Peng Jiang; Junfeng Ge; Wenwu Ou", "journal": "", "ref_id": "b31", "title": "Personalized re-ranking for recommendation", "year": "2019" }, { "authors": "Qi Pi; Weijie Bian; Guorui Zhou; Xiaoqiang Zhu; Kun Gai", "journal": "", "ref_id": "b32", "title": "Practice on long sequential user behavior modeling for click-through rate prediction", "year": "2019" }, { "authors": "Qi Pi; Guorui Zhou; Yujing Zhang; Zhe Wang; Lejian Ren; Ying Fan; Xiaoqiang Zhu; Kun Gai", "journal": "", "ref_id": "b33", "title": 
"Search-based user interest modeling with lifelong sequential behavior data for click-through rate prediction", "year": "2020" }, { "authors": "Xufeng Qian; Yue Xu; Fuyu Lv; Shengyu Zhang; Ziwen Jiang; Qingwen Liu; Xiaoyi Zeng; Tat-Seng Chua; Fei Wu", "journal": "", "ref_id": "b34", "title": "Intelligent request strategy design in recommender system", "year": "2022" }, { "authors": "Jiarui Qin; Weinan Zhang; Xin Wu; Jiarui Jin; Yuchen Fang; Yong Yu", "journal": "", "ref_id": "b35", "title": "User behavior retrieval for click-through rate prediction", "year": "2020" }, { "authors": "Lijing Qin; Xiaoyan Zhu", "journal": "", "ref_id": "b36", "title": "Promoting diversity in recommendation by entropy regularizer", "year": "2013" }, { "authors": "Yanru Qu; Han Cai; Kan Ren; Weinan Zhang; Yong Yu; Ying Wen; Jun Wang", "journal": "", "ref_id": "b37", "title": "Product-based neural networks for user response prediction", "year": "2016" }, { "authors": "C E Rasmussen; C I K Williams", "journal": "MIT Press", "ref_id": "b38", "title": "Gaussian Processes for Machine Learning", "year": "2006" }, { "authors": "Jiarui Kan Ren; Yuchen Qin; Weinan Fang; Lei Zhang; Weijie Zheng; Guorui Bian; Jian Zhou; Yong Xu; Xiaoqiang Yu; Kun Zhu; Gai", "journal": "", "ref_id": "b39", "title": "Lifelong sequential modeling with personalized memorization for user response prediction", "year": "2019" }, { "authors": "Chaofeng Sha; Xiaowei Wu; Junyu Niu", "journal": "", "ref_id": "b40", "title": "A framework for recommending relevant and diverse items", "year": "2016" }, { "authors": "Jianing Sun; Wei Guo; Dengcheng Zhang; Yingxue Zhang; Florence Regol; Yaochen Hu; Huifeng Guo; Ruiming Tang; Han Yuan; Xiuqiang He", "journal": "", "ref_id": "b41", "title": "A framework for recommending accurate and diverse items using bayesian graph convolutional neural networks", "year": "2020" }, { "authors": "Fan Wang; Xiaomin Fang; Lihang Liu; Yaxue Chen; Jiucheng Tao; Zhiming Peng; Cihang Jin; Hao Tian", "journal": "", "ref_id": "b42", "title": "Sequential evaluation and generation framework for combinatorial recommender system", "year": "2019" }, { "authors": "Mark Wilhelm; Ajith Ramanathan; Alexander Bonomo; Sagar Jain; Ed H Chi; Jennifer Gillenwater", "journal": "", "ref_id": "b43", "title": "Practical diversified recommendations on youtube with determinantal point processes", "year": "2018" }, { "authors": "Mi Zhang; Neil Hurley", "journal": "", "ref_id": "b44", "title": "Avoiding monotony: improving the diversity of recommendation lists", "year": "2008" }, { "authors": "Yu Zheng; Chen Gao; Liang Chen; Depeng Jin; Yong Li", "journal": "", "ref_id": "b45", "title": "DGCN: Diversified recommendation with graph convolutional networks", "year": "2021" }, { "authors": "Guorui Zhou; Na Mou; Ying Fan; Qi Pi; Weijie Bian; Chang Zhou; Xiaoqiang Zhu; Kun Gai", "journal": "", "ref_id": "b46", "title": "In Deep Interest Evolution Network for Click-through Rate Prediction", "year": "2019" }, { "authors": "Guorui Zhou; Xiaoqiang Zhu; Chenru Song; Ying Fan; Han Zhu; Xiao Ma; Yanghui Yan; Junqi Jin; Han Li; Kun Gai", "journal": "", "ref_id": "b47", "title": "Deep interest network for click-through rate prediction", "year": "2018" }, { "authors": "Tao Zhuang; Wenwu Ou; Zhirong Wang", "journal": "", "ref_id": "b48", "title": "Globally optimized mutual influence aware ranking in e-commerce search", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 317.96, 598.59, 240.25, 20.38 ], "formula_id": "formula_0", "formula_text": "𝑆 = {𝑖 1 , 𝑖 2 , • • • , 𝑖 𝐾 } with size 𝐾 from a candidate set 𝐼 = {𝑖 1 , 𝑖 2 , • • • , 𝑖 𝑁 }" }, { "formula_coordinates": [ 3, 88.93, 109.11, 205.11, 14.1 ], "formula_id": "formula_1", "formula_text": "P 0 : arg max 𝑆 ⊆𝐼 𝑓 (𝑢, 𝑆) = 𝐹 (Acc(𝑢, 𝑆), Div(𝑢, 𝑆)),(1)" }, { "formula_coordinates": [ 3, 342.97, 212.15, 215.24, 14.24 ], "formula_id": "formula_2", "formula_text": "P 1 : arg max 𝑆 ∈𝐼 𝐹 (Acc(𝑢, 𝑆), Div(𝑢, 𝑆)) = log det(𝑲 𝑢 𝑆 ),(2)" }, { "formula_coordinates": [ 3, 376.85, 319.19, 181.35, 10.07 ], "formula_id": "formula_3", "formula_text": "𝐾 𝑢 𝑆 (𝑖, 𝑗) = 𝑔(𝑢, 𝑖) • 𝐷 (𝑖, 𝑗) • 𝑔(𝑢, 𝑗),(3)" }, { "formula_coordinates": [ 3, 363.76, 416.5, 194.44, 10.07 ], "formula_id": "formula_4", "formula_text": "𝐾 𝑢 𝑆 (𝑖, 𝑗) = 𝑔(𝑢, 𝑖 |𝑆) • 𝐷 (𝑖, 𝑗 |𝐸 𝑢 ) • 𝑔(𝑢, 𝑗 |𝑆),(4)" }, { "formula_coordinates": [ 3, 359.58, 489.09, 198.63, 15.71 ], "formula_id": "formula_5", "formula_text": "ℎ(𝑢, 𝑆) = ∑︁ 𝑖 ∈𝑆 𝑔(𝑢, 𝑖 |𝑆) + 𝛼 • log det(𝑫 𝑢 𝑆 ),(5)" }, { "formula_coordinates": [ 3, 323.41, 636.78, 234.79, 25.97 ], "formula_id": "formula_6", "formula_text": "𝑗 = arg max 𝑖 ∈𝐼 \\𝑆 log det 𝑲 𝑢 𝑆∪{𝑖 } -log det 𝑲 𝑢 𝑆 (6a) = arg max 𝑖 ∈𝐼 \\𝑆 ℎ (𝑢, 𝑆 ∪ {𝑖}) -ℎ(𝑢, 𝑆)(6b)" }, { "formula_coordinates": [ 3, 329.06, 670.61, 229.15, 35.52 ], "formula_id": "formula_7", "formula_text": "𝑖 ∈𝐼 \\𝑆 𝑔(𝑢, 𝑖 |𝑆) +𝛼 • log det 𝑫 𝑢 𝑆∪{𝑖 } -log det 𝑫 𝑢 𝑆 (6c) = arg max 𝑖 ∈𝐼 \\𝑆 𝑔(𝑢, 𝑖 |𝑆) + 𝛼 • log(𝑑 2 𝑖 ). (6d" }, { "formula_coordinates": [ 3, 554.69, 691.65, 3.51, 8.97 ], "formula_id": "formula_8", "formula_text": ") Concatenation & MLP Target Item Prev Context Cand Context Macro Interest Micro Interest User Profile CAE Model c1 c2 c3 i3 i5 i2 i3 i4 i5 i6 i1 i1 i2 i3 i4 s1 s2 s3 s4 ... c1 c2 c3 c4 Find Item's Cluster i1 i2 i6 i4" }, { "formula_coordinates": [ 4, 269.8, 136.77, 269.63, 121.19 ], "formula_id": "formula_9", "formula_text": "u I E u m acro H u m icro H T u m acro H ) ( T u m icro H ) ( T u I E ) ( Diversity kernel ) log( ) | , ( max arg 2 \\ i S I i d S i u g j      Bi-Sequential DPP" }, { "formula_coordinates": [ 4, 59.67, 386.29, 155.59, 30.95 ], "formula_id": "formula_10", "formula_text": "𝑒 𝑖 = (𝑫 𝑗𝑖 -⟨c 𝑗 , c 𝑖 ⟩)/𝑑 𝑗 . 7: Update 𝑑 2 𝑖 = 𝑑 2 𝑖 -𝑒 2 𝑖 , c 𝑖 = [c 𝑖 𝑒 𝑖 ]. 8:" }, { "formula_coordinates": [ 4, 382.22, 511.05, 175.98, 22.8 ], "formula_id": "formula_11", "formula_text": "𝑄 = 1 𝐸 ∑︁ 𝑖,𝑗 𝐴 𝑖 𝑗 -𝑃 𝑖 𝑗 𝛿 (𝑐 𝑖 , 𝑐 𝑗 ),(7)" }, { "formula_coordinates": [ 5, 53.26, 109.15, 73.36, 9.12 ], "formula_id": "formula_12", "formula_text": "𝐶 𝑢 = {𝑐 1 , 𝑐 2 , 𝑐 3 , 𝑐 4 }." 
}, { "formula_coordinates": [ 5, 112.65, 147.61, 181.4, 9.36 ], "formula_id": "formula_13", "formula_text": "h 𝑢 𝑚 = Aggregate e 𝑖 𝑥 , ∀𝑖 𝑥 ∈ 𝑐 𝑚 ,(8)" }, { "formula_coordinates": [ 5, 95.34, 244.06, 198.71, 9.16 ], "formula_id": "formula_14", "formula_text": "Att(𝑸 𝑚 , 𝑲 𝑚 , 𝑽 𝑚 ) = Softmax 𝛼𝑸 𝑚 𝑲 𝑇 𝑚 𝑽 𝑚 ,(9)" }, { "formula_coordinates": [ 5, 96.52, 324.95, 197.53, 8.97 ], "formula_id": "formula_15", "formula_text": "Head 𝑚 = Att(𝑸, 𝑲 𝑚 , 𝑽 𝑚 ),(10a)" }, { "formula_coordinates": [ 5, 98.18, 340.8, 195.86, 10.9 ], "formula_id": "formula_16", "formula_text": "h 𝑢 macro = Concat(Head 1 , • • • , Head 𝑚 )𝑾 𝑂 ,(10b)" }, { "formula_coordinates": [ 5, 109.16, 487.75, 184.89, 11.8 ], "formula_id": "formula_17", "formula_text": "ẽ𝑖 𝑥 = Concat{𝒆 𝑖 𝑥 , 𝒕 𝑖 𝑥 }, ∀𝑖 𝑥 ∈ 𝐼 𝑢 micro ,(11)" }, { "formula_coordinates": [ 5, 99.67, 554.76, 194.37, 8.97 ], "formula_id": "formula_18", "formula_text": "Head 𝑖 = Att(𝑸 𝑖 , 𝑲 𝑖 , 𝑽 𝑖 ),(12a)" }, { "formula_coordinates": [ 5, 99.05, 570.62, 194.99, 10.29 ], "formula_id": "formula_19", "formula_text": "h micro = Concat(Head 1 , • • • , Head ℎ )𝑾 𝑂 ,(12b)" }, { "formula_coordinates": [ 5, 372.74, 123.96, 185.46, 9.89 ], "formula_id": "formula_20", "formula_text": "h prev = Aggregate (e 𝑖 , 𝑖 ∈ [𝑘 -1]) ,(13)" }, { "formula_coordinates": [ 5, 342.56, 139.45, 95.55, 8.08 ], "formula_id": "formula_21", "formula_text": "[𝑘 -1] = {1, 2, • • • , 𝑘 -1}." }, { "formula_coordinates": [ 5, 377.94, 176.1, 180.26, 10.38 ], "formula_id": "formula_22", "formula_text": "h cand = Aggregate (e 𝑖 , 𝑖 ∈ [𝑁 ]) .(14)" }, { "formula_coordinates": [ 5, 378.63, 239.21, 179.58, 9.89 ], "formula_id": "formula_23", "formula_text": "W prev = 𝜎 (W 2 • 𝛿 (W 1 • h prev )),(15)" }, { "formula_coordinates": [ 5, 400.78, 324.4, 157.42, 11.38 ], "formula_id": "formula_24", "formula_text": "𝒉 𝑖 𝑡 prev = W prev ⊗ 𝒆 𝑖 𝑡 ,(16)" }, { "formula_coordinates": [ 5, 325.7, 454.98, 232.51, 13.31 ], "formula_id": "formula_25", "formula_text": "Ŷ𝐼𝐼 (𝑢, 𝑖) = Softmax MLP h 𝑢 all ,(17b)" }, { "formula_coordinates": [ 5, 348.96, 645.12, 209.24, 26.32 ], "formula_id": "formula_26", "formula_text": "𝐷 𝑢 macro (𝑖, 𝑗 |𝐸 𝑢 ) = 𝑎 2 𝑙 exp - 𝒉 𝑢 𝑖,macro ⊗ 𝒉 𝑢 𝑗,macro 𝑏 2 𝑙 ,(18)" }, { "formula_coordinates": [ 6, 86.83, 255.84, 207.22, 24.67 ], "formula_id": "formula_27", "formula_text": "𝐷 𝑢 micro (𝑖, 𝑗 |𝐸 𝑢 ) = 𝑎 2 𝑠 exp - 𝒉 𝑢 𝑖,micro ⊗ 𝒉 𝑢 𝑗,micro 𝑏 2 𝑠 ,(19)" }, { "formula_coordinates": [ 6, 107.2, 356.27, 186.84, 24.36 ], "formula_id": "formula_28", "formula_text": "𝐷 item (𝑖, 𝑗 |𝐸 𝑢 ) = 𝑎 2 𝑠 exp - 𝒆 𝑢 𝑖 ⊗ 𝒆 𝑢 𝑗 𝑏 2 𝑠 ,(20)" }, { "formula_coordinates": [ 6, 53.71, 468.12, 240.34, 20.07 ], "formula_id": "formula_29", "formula_text": "𝐷 𝑢 (𝑖,𝑗 |𝐸 𝑢 ) = 𝐷 𝑢 item (𝑖,𝑗 |𝐸 𝑢 ) + 𝛽 1 •𝐷 𝑢 macro (𝑖,𝑗 |𝐸 𝑢 ) + 𝛽 2 •𝐷 𝑢 micro (𝑖,𝑗 |𝐸 𝑢 ),(21)" } ]
2023-05-25
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b1", "b2", "b10", "b0", "b6", "b3", "b10", "b11", "b18", "b19" ], "table_ref": [], "text": "Recently introduced implicit neural fields-based approaches have demonstrated great potential beyond the area of photorealistic rendering [2], [3], [11]. By using the trained parameters of a neural implicit function, coordinates in 3D space can be mapped to different output quantities, including volumetric density [1], semantic labels [7], and material rigidity [4], etc.\nIn robotics, neural fields are naturally an attractive alternative to traditional spatial representations due to their intrinsic properties. First, encoding scene features in the weights of a fully connected multi-layer perceptron (MLP) can be significantly more memory-efficient, compared to using traditional representations like voxels, whose memory requirements grow cubically with the size of the scene. Second, neural fields as representations are disconnected from the scene's resolution, which gives autonomous robots the ability to query the implicit function on-the-fly, only in the areas of interest. Finally, MLPs model continuous functions, which allows for plausible predictions of unobserved regions and gaps, an important feature for incomplete scans from the exploration of unknown environments. Consequently, neural fields are becoming more and more popular for the purpose of creating rich and compact spatial scene representations with the aim of tackling traditional robotics tasks more effectively (e.g., SLAM).\nRecently, CLIP-Fields [11] demonstrated how an implicit function can be trained to map 3D points to high-dimensional embeddings in the CLIP feature space [12], where images and texts with similar meanings are represented by vectors that are close to each other. This type of language-grounded neural field can encode the \"semantic memory\" of a mobile Fig. 1: Our approach grounds open-vocabulary language-based queries in 3D space: • \"vacuum the rug\", • \"clean the table\", • \"pick up the plant\", • \"dust the blinds\". The colors indicate the areas in the encoded 3D space that correspond to each command.\nrobot, thus enabling open-vocabulary queries at run time. For training CLIP-Fields, the off-the-shelf Detic [19] model was leveraged that provides embeddings in the CLIP space for pixels that correspond to detected objects in an input image. Simultaneously, the text labels of the detected objects were tokenized with sentence-BERT [20]. Finally, the classified pixels were back-projected to the 3D world and used as input to the network, which was trained with two contrastive losses, one for the label token and one for the visual-language CLIP embedding.\nNevertheless, this approach has two major limitations. First, CLIP-Fields does not encode the geometry of the scene but relies on an external point cloud to perform queries. This choice confines the domain of the implicit function only to points that have been classified by Detic and makes CLIP-Fields restricted to a limited subset of the 3D points of the scene. As a result, we hypothesize that it is difficult for the trained neural field to make reliable predictions for the visuallanguage features of objects from novel-views, that better capture their geometry. Second, CLIP-Fields assumes that a set of possible scene object classes is available during training, so as to train the model with the contrastive learning paradigm. 
This assumption potentially limits the ability of the model to execute natural language commands for objects dissimilar to the ones found in the set.\nIn our work, we address the aforementioned limitations and propose a new approach for grounding knowledge from visual-language models (VLMs) into neural fields. Our method outperforms CLIP-Fields in the task of semantic segmentation, even though we do not require any prior knowledge of the object classes present." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Neural Fields", "publication_ref": [ "b24", "b26", "b4", "b27", "b7", "b28", "b17", "b6", "b22", "b31", "b23", "b1", "b2", "b35", "b3", "b16", "b15", "b0", "b30", "b29", "b32", "b33" ], "table_ref": [], "text": "Neural fields were primarily used for 3D reconstruction tasks (e.g., shape completion) [25]- [27]. Due to their ability to handle complex and irregular geometry, the interest in using neural implicit representations quickly spreads to other areas. NeRF [5] demonstrated how MLPs can be trained to encode the radiance field of a scene for the purpose of synthesizing photorealistic images from novel views. More recent methods have targeted large-scale scene reconstruction [28], faster training [8], dynamic scene encoding [29] and more [18].\nSemantic Segmentation: NeRF-based models have demonstrated great accuracy in the task of semantic segmentation. Semantic-NeRF [7] and DM-NeRF [23] performed scene decomposition, trained with supervision. More recently, NeRFbased panoptic lifting with pre-trained detection models was introduced [32], proposing a scheme for dealing with the inconsistent predictions from off-the-shelf models. NeSF [24] addressed the lack of generalization, by designing a separate model for performing semantic scene decomposition on numerous scenes that were encoded in different NeRFs. Other techniques used abstract features for the purpose of fusing them with the encoded geometry of the scene, either in the form of activations from off-the-shelf models (i.e., N3F [2] and FRR [3]), or from user interactions (i.e., iLabel [36] and [4]). Unlike these works, we perform semantic segmentation via neural fields, relying only on open-set vocabulary queries.\nRobotics applications: Several robotics tasks have been explored using the models that encode the geometry of a scene in the weights of a neural implicit function. NeRF-SLAM [17], NICE-SLAM [16] and iMap [1] demonstrated how SLAM methods can avail of neural fields and Loc-NeRF [31] and iNeRF [30] utilized NeRFs for performing pose estimation. Similarly, other works have focused on designing representations that encode the relative position between targets [33], [34]. Nevertheless, the use of neural fields in robotics is still in its infancy." }, { "figure_ref": [], "heading": "B. Grounding Language into Spatial Representations", "publication_ref": [ "b8", "b9", "b11", "b12", "b13", "b8", "b14", "b21", "b34", "b10" ], "table_ref": [], "text": "Web-trained visual-language models have recently managed to encode powerful mappings between images and text, leading to state-of-the-art zero-shot task performance [9], [10], [12]. This has inspired the grounding of language into spatial representations, for the purpose of enhancing the perception of robots with the ability to perform open-set classification and execution of open-vocabulary queries. 
In this direction, the augmentation of point clouds has been proposed, via the back-projection of CLIP embeddings from the pixel domain to the 3D world [13], [14]. This can be achieved with a pretrained model like LSeg [9], a language-driven segmentation model which produces per-pixel CLIP embeddings. Beyond visual observations, other modalities (e.g., audio) can be used for grounding language [15], [22].
However, this strategy can result in memory-expensive semantic maps and the need for visual-language feature fusion schemes for 3D points that are observed from multiple views. To avoid this, neural fields can be exploited, both for predicting the features, instead of explicitly storing them, and for imposing multi-view consistency, which naturally averages features from many observations. Our model mostly relates to Distilled Feature Fields (DFF) [35] and the more recent CLIP-Fields [11], which train a neural field to predict these embeddings. However, DFF is most applicable for vision and graphics applications, mainly targeting photorealism (i.e., it deploys a fine/coarse pair of MLPs and needs almost a day to converge). Furthermore, its semantic segmentation capabilities are measured only on the points of the ground-truth point cloud, and thus incorrect predictions in empty space are not evaluated. In contrast, our evaluation is conducted per camera ray in the pixel domain, which simultaneously evaluates the quality of the encoded geometry and the grounding of the language features. On the other hand, CLIP-Fields can only make predictions on the classified points of a pre-defined point cloud, thus losing the ability to fuse the detected features with the scene geometry." }, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "We assume access to a collection of posed RGB-D images $I \in \mathbb{R}^{H \times W \times (3+1)}$, depicting different views of an indoor environment. We feed the images to LSeg, which provides an $H \times W \times 512$ feature map in the CLIP embedding space." }, { "figure_ref": [], "heading": "A. Preliminaries", "publication_ref": [], "table_ref": [], "text": "Following the conventional NeRF rendering approach, we march $N_R$ rays $\mathbf{r}$ from the virtual camera's center of projection through random image pixels $[u, v]$. Along each ray, we select a set of $N$ 3D points via stratified sampling and use the NeRF encoder to predict the corresponding densities $\sigma_i$ and RGB color values $\mathbf{c}_i$. The rendering equation can be approximated via the quadrature rule to estimate the expected color value $\hat{C}(\mathbf{r})$ along each ray:
$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - \exp(-\sigma_i \delta_i)\right) \mathbf{c}_i, \tag{1}$$
where $T_i = \exp\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right)$ and $\delta_i$ is the distance between two adjacent samples. As a result, the color for each pixel can be approximated as a weighted sum with weights $w_i = T_i (1 - \exp(-\sigma_i \delta_i))$. We adopt the strategy from N3F, FRR, and DFF, treating VL-Fields as a regular differentiable renderer for all features. As a result, for each ray we can approximate the per-pixel estimated color, depth $D[u, v]$, and visual-language feature $F[u, v]$:
$$\hat{C}[u, v] = \sum_{i=1}^{N} w_i c_i, \quad D[u, v] = \sum_{i=1}^{N} w_i d_i, \quad F[u, v] = \sum_{i=1}^{N} w_i f_i. \tag{2}$$" }, { "figure_ref": [], "heading": "B. Visual-Language Fields", "publication_ref": [ "b7" ], "table_ref": [], "text": "Similar to CLIP-Fields, our model consists of three components:
(1) We utilize multi-resolution hash encoding (MRHE) [8] for mapping the input $(x, y, z) \in \mathbb{R}^3$ coordinates to an intermediate 144-dimensional space.
In contrast to the positional encoding scheme of the original NeRF work, MRHE allows a neural field to converge in a fraction of the time.
(2) The outputs are then propagated to a two-layer MLP, where each layer consists of 512 neurons, with ReLU nonlinearities.
(3) The MLP's output is passed to two specialized heads. The first predicts the density σ and the RGB color value of the corresponding point. The other predicts a 512-dimensional vector in the CLIP embedding space.
To train VL-Fields, we minimize the weighted sum of the L2 distances between the predicted and ground truth color (photometric loss L_P), depth (geometric loss L_G), and visual-language embedding (visual-language loss L_VL):
$$L_{\text{total}} = w_P L_P + w_G L_G + w_{VL} L_{VL}. \tag{3}$$
Unlike the original NeRF work, and most of its variants, we do not include the viewing direction d in the input, as the feature extraction process should be viewpoint-invariant." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "We evaluate VL-Fields on the task of semantic segmentation and compare its performance against both CLIP-Fields and LSeg. Our hypothesis is that the imposed multi-view consistency of the neural field will lead to better segmentation accuracy compared to LSeg. We also hypothesize that the encoding of the geometry of the scene will allow our model to fuse the language features with the shapes of the objects, leading to higher quality semantic maps compared to CLIP-Fields." }, { "figure_ref": [], "heading": "A. Experiment Setup", "publication_ref": [ "b20" ], "table_ref": [], "text": "For our experiments, we use 5 scenes from the Replica dataset [21]. We sample 180 posed RGB-D images from each scene and resize the input images to the maximum dimensions LSeg can process (i.e., H = 390, W = 520), which provides us with H × W × 512 feature maps in the CLIP space.
To train VL-Fields, we set w_P = 1, w_G = 0.8, w_VL = 0.8 and march 2048 random rays for 10^3 iterations, sampling 128 points per ray. For training CLIP-Fields, we first generate a dataset by back-projecting pixels to the 3D world with their corresponding CLIP embeddings. Furthermore, following the original setup, we pre-define a set of object labels and classify each point. For this, we use the labels of the objects in each Replica scene. Afterwards, we tokenize the label of each point with sentence-BERT. We train CLIP-Fields for 100 epochs, using 5% of the point cloud, following the contrastive learning paradigm. Both models require about 50-60 minutes of training on a mobile RTX 3080Ti GPU.
After training the two models, we sample 45 unseen views for each Replica scene and perform semantic segmentation. To do so, for each sampled pose we first predict the per-pixel embeddings with each model. For VL-Fields, we cast a ray for each pixel and compute the weighted sum as presented in Eq. 2, resulting in an H × W × 512 visual-language feature map. For CLIP-Fields, we use the depth map to back-project the pixels to the 3D world and query the model at the (x, y, z) coordinates, predicting a visual-language and a label feature map of size H × W × 512 and H × W × 716, respectively. Afterwards, we use the CLIP text-encoder to generate a 512-dimensional embedding for each Replica label. For CLIP-Fields, we also tokenize the labels with sentence-BERT, which results in a 716-dimensional embedding per label.
Then, we measure the similarity by taking the dot product between the corresponding vectors and select the class with the highest output. In the case of CLIP-Fields, we follow their setup and compute the weighted sum of the BERT and CLIP dot products, setting the former to be 10 times more important." }, { "figure_ref": [], "heading": "B. Quantitative Evaluation", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In Table I, the mIoU results are presented for the VL-Fields, CLIP-Fields and LSeg. Our model consistently outperforms CLIP-Fields in all scenes, scoring on average 9.7% higher mIoU. This demonstrates how neural fields benefit from the fusing of high-dimensional features with the encoded scene geometry, even though CLIP-Fields had prior knowledge of the environment object classes. Compared to LSeg, our model scores on average 4.4% higher mIoU. This mainly stems from the imposed multi-view consistency, that implicitly averages predictions from multiple views and adjusts them to the scene geometry." }, { "figure_ref": [ "fig_0" ], "heading": "C. Qualitative Evaluation", "publication_ref": [], "table_ref": [], "text": "In Figure 2 we provide four qualitative comparisons between the ground truth, LSeg, CLIP-Fields, and our VL-Fields semantic segmentation predictions. Our model is able to filter out many mistakes made by LSeg (e.g., mistaking the floor as a rug and completely missing the painting in the first row). Nevertheless, if LSeg consistently makes a specific mistake from multiple views, then both CLIP-Fields and VL-Fields will fail to correct it (e.g., the mis-classification of the bench in the fourth example). In general, since VL-Fields fuses features with the learned geometry, we empirically observe overall sharper segmentation (e.g., the windows in the second row). Nevertheless, it seems to have a hard time detecting smaller objects, and usually fuses them with the nearest larger object (e.g., the vase and the plant in the third example are both classified as shelfs). On the other hand, CLIP-Fields seems to lack the ability to interpolate effectively and results in overall noisier predictions. We hypothesize that this is stems from the absence of a geometric reasoning.\nV. CONCLUSION We presented VL-Fields, a novel approach for grounding language into neural fields. Our model consistently outperformed CLIP-Fields and the one-shot LSeg model in semantic segmentation on scenes from the Replica dataset without prior knowledge of the object classes. Nevertheless, to a degree, our model still inherits the noisy and inconsistent predictions of LSeg, and seems to perform poorly in recognizing smaller objects. We plan to further investigate instances where our model under-performs, in order to identify the root cause of these issues. We believe VL-Fields is a promising spatial representation for mobile robots, that can act as a compact semantic map that will enable open-vocabulary queries. In future, we aim to evaluate our model for robotics tasks, such as multi-object navigation. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgments: We thank Nur Muhammad (Mahi) Shafiullah for the feedback regarding the training and evaluation of CLIP-Fields. This work was supported by the United Kingdom Research and Innovation (grant EP/S023208/1), EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems (RAS)." 
}, { "figure_ref": [], "heading": "APPENDIX", "publication_ref": [], "table_ref": [], "text": "In Figure 3, the architecture of VL-Fields is presented, along with the training pipeline. Rays are marched from different camera poses and the sampled points across each ray are fed to the MLP, which predicts an RGB value, a density σ and a 512-dim embedding in the CLIP feature space. The predictions along each ray are accumulated and the euclidean distance between the ground truth values and the accumulated predictions is computed, and finally backpropagated for updating the weights. " } ]
We present Visual-Language Fields (VL-Fields), a neural implicit spatial representation that enables open-vocabulary semantic queries. Our model encodes and fuses the geometry of a scene with vision-language trained latent features by distilling information from a language-driven segmentation model. VL-Fields is trained without requiring any prior knowledge of the scene object classes, which makes it a promising representation for the field of robotics. Our model outperformed the similar CLIP-Fields model in the task of semantic segmentation by almost 10%.
VL-Fields: Towards Language-Grounded Neural Implicit Spatial Representations
[ { "figure_caption": "Fig. 2 :2Fig. 2: Qualitative comparison between the ground-truth, LSeg, CLIP-Fields, and our VL-Fields semantic maps.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Semantic segmentation evaluation comparing our VL-Fields (VLF), CLIP-Fields (CF), and LSeg (LS), over all available Replica classes.", "figure_data": "room_0 room_1 room_2 office_2 office_3LS0.5590.5830.7360.7400.752CF0.5150.5930.7200.6990.681VLF0.5960.6040.8100.7690.758A. micro mIoU.room_0 room_1 room_2 office_2 office_3LS0.2780.2730.3140.3190.277CF0.2640.2920.3340.2560.275VLF0.2810.2690.3330.3590.298B. macro mIoU.room_0 room_1 room_2 office_2 office_3LS0.6030.6430.7710.7550.759CF0.5440.6400.7480.7180.678VLF0.6290.6570.8210.7680.761", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" } ]
Nikolaos Tsagkas; Oisin Mac Aodha; Chris Xiaoxuan Lu
[ { "authors": "E Sucar; S Liu; J Ortiz; A J Davison", "journal": "", "ref_id": "b0", "title": "iMAP: Implicit mapping and positioning in real-time", "year": "2021" }, { "authors": "V Tschernezki; I Laina; D Larlus; A Vedaldi", "journal": "", "ref_id": "b1", "title": "Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations", "year": "2022" }, { "authors": "K Mazur; E Sucar; A Davison", "journal": "", "ref_id": "b2", "title": "Feature-Realistic Neural Fusion for Real-Time, Open Set Scene Understanding", "year": "2023" }, { "authors": "I Haughton; E Sucar; A Mouton; E Johns; A Davison", "journal": "", "ref_id": "b3", "title": "Realtime Mapping of Physical Scene Properties with an Autonomous Robot Experimenter", "year": "2022" }, { "authors": "B Mildenhall; P P Srinivasan; M Tancik; J T Barron; R Ramamoorthi; R Ng", "journal": "", "ref_id": "b4", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "A Simeonov; Y Du; A Tagliasacchi; J B Tenenbaum; A Rodriguez; P Agrawal; V Sitzmann", "journal": "", "ref_id": "b5", "title": "Neural descriptor fields: Se(3)-equivariant object representations for manipulation", "year": "2022" }, { "authors": "S Zhi; T Laidlow; S Leutenegger; A J Davison", "journal": "", "ref_id": "b6", "title": "In-place scene labelling and understanding with implicit scene representation", "year": "2021" }, { "authors": "T Muller; A Evans; C Schied; A Keller", "journal": "", "ref_id": "b7", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "B Li; K Q Weinberger; S Belongie; V Koltun; R Ranftl", "journal": "", "ref_id": "b8", "title": "Language-driven semantic segmentation", "year": "2021" }, { "authors": "G Ghiasi; X Gu; Y Cui; T Lin", "journal": "", "ref_id": "b9", "title": "Scaling Open-Vocabulary Image Segmentation With Image-Level Labels", "year": "2022" }, { "authors": "N M M Shafiullah; C Paxton; L Pinto; S Chintala; A Szlam", "journal": "", "ref_id": "b10", "title": "CLIP-fields: Weakly supervised semantic fields for robotic memory", "year": "2022" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "", "ref_id": "b11", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "C Huang; O Mees; A Zeng; W Burgard", "journal": "", "ref_id": "b12", "title": "Visual language maps for robot navigation", "year": "2022" }, { "authors": "S Peng; K Genova; C M Jiang; A Tagliasacchi; M Pollefeys; T Funkhouser", "journal": "", "ref_id": "b13", "title": "OpenScene: 3D Scene Understanding with Open Vocabularies", "year": "2023" }, { "authors": "C Huang; O Mees; A Zeng; W Burgard", "journal": "", "ref_id": "b14", "title": "Audio Visual Language Maps for Robot Navigation", "year": "2023" }, { "authors": "Z Zhu; S Peng; V Larsson; W Xu; H Bao; Z Cui; M Oswald; M Pollefeys", "journal": "", "ref_id": "b15", "title": "NICE-SLAM: Neural Implicit Scalable Encoding for SLAM", "year": "2022" }, { "authors": "A Rosinol; J Leonard; L Carlone", "journal": "", "ref_id": "b16", "title": "NeRF-SLAM: Real-Time Dense Monocular SLAM with Neural Radiance Fields", "year": "2022" }, { "authors": "Y Xie; T Takikawa; S Saito; O Litany; S Yan; N Khan", "journal": "Computer Graphics Forum", "ref_id": "b17", "title": "Neural Fields in Visual Computing and Beyond", "year": "2022" }, { "authors": "X Zhou; R Girdhar; A Joulin; 
P Krähenbühl; I Misra", "journal": "", "ref_id": "b18", "title": "Detecting twenty-thousand classes using image-level supervision", "year": "2022" }, { "authors": "N Reimers; I Gurevych", "journal": "", "ref_id": "b19", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "J Straub; T Whelan; L Ma; Y Chen; E Wijmans; S Green; J Engel; R Mur-Artal; C Ren; S Verma", "journal": "", "ref_id": "b20", "title": "The replica dataset: A digital replica of indoor spaces", "year": "2019" }, { "authors": "K M Jatavallabhula; A Kuwajerwala; Q Gu; M Omama; T Chen; S Li", "journal": "", "ref_id": "b21", "title": "Conceptfusion: Open-set multimodal 3d mapping", "year": "2023" }, { "authors": "B Wang; L Chen; B Yang", "journal": "", "ref_id": "b22", "title": "Dm-nerf: 3d scene geometry decomposition and manipulation from 2d images", "year": "2023" }, { "authors": "S Vora; N Radwan; K Greff; H Meyer; K Genova; M S M Sajjadi", "journal": "", "ref_id": "b23", "title": "NeSF: Neural semantic fields for generalizable semantic segmentation of 3d scenes", "year": "2022" }, { "authors": "L Mescheder; M Oechsle; M Niemeyer; S Nowozin; A Geiger", "journal": "", "ref_id": "b24", "title": "Occupancy Networks: Learning 3D Reconstruction in Function Space", "year": "2019" }, { "authors": "J J Park; P Florence; J Straub; R Newcombe; S Lovegrove", "journal": "", "ref_id": "b25", "title": "DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation", "year": "2019" }, { "authors": "Z Chen; H Zhang", "journal": "", "ref_id": "b26", "title": "Learning Implicit Fields for Generative Shape Modeling", "year": "2019" }, { "authors": "K Rematas; A Liu; P P Srinivasan; J T Barron; A Tagliasacchi; T Funkhouser; V Ferrari", "journal": "", "ref_id": "b27", "title": "Urban Radiance Fields", "year": "2022" }, { "authors": "Z Li; S Niklaus; N Snavely; O Wang", "journal": "", "ref_id": "b28", "title": "Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes", "year": "2021" }, { "authors": "L Yen-Chen; P Florence; J T Barron; A Rodriguez; P Isola; T Lin", "journal": "", "ref_id": "b29", "title": "iNeRF: Inverting Neural Radiance Fields for Pose Estimation", "year": "2021" }, { "authors": "D Maggio; M Abate; J Shi; C Mario; L Carlone", "journal": "", "ref_id": "b30", "title": "Loc-NeRF: Monte Carlo Localization using Neural Radiance Fields", "year": "2022" }, { "authors": "Y Siddiqui; L Porzi; S R Buló; N Müller; M Nießner; A Dai; Peter Kontschieder", "journal": "", "ref_id": "b31", "title": "Panoptic Lifting for 3D Scene Understanding with Neural Fields", "year": "2022" }, { "authors": "X Li; S De Mello; X Wang; M Yang; J Kautz; S Liu", "journal": "", "ref_id": "b32", "title": "Learning Continuous Environment Fields via Implicit Functions", "year": "2022" }, { "authors": "A Simeonov; Y Du; A Tagliasacchi; J B Tenenbaum; A Rodriguez; P Agrawal; V Sitzmann", "journal": "", "ref_id": "b33", "title": "Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation", "year": "2022" }, { "authors": "S Kobayashi; E Matsumoto; V Sitzmann", "journal": "", "ref_id": "b34", "title": "Decomposing nerf for editing via feature field distillation", "year": "2022" }, { "authors": "S Zhi; E Sucar; A Mouton; I Haughton; T Laidlow; A J Davison", "journal": "", "ref_id": "b35", "title": "iLabel: Interactive Neural Scene Labelling", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 369.15, 645.55, 193.88, 30.32 ], "formula_id": "formula_0", "formula_text": "Ĉ(r) = N i=1 T i (1 -exp (-σ i δ i ))c i(1)" }, { "formula_coordinates": [ 3, 52.45, 126.69, 247.57, 38.91 ], "formula_id": "formula_1", "formula_text": "Ĉ[u, v] = N i=1 w i c i , D[u, v] = N i=1 w i d i , F [u, v] = N i=1 w i f i (2)" }, { "formula_coordinates": [ 3, 95.46, 406.55, 204.57, 9.65 ], "formula_id": "formula_2", "formula_text": "L total = w P L P + w G L G + w V L L V L .(3)" } ]
2023-11-15
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b5", "b6", "b8" ], "table_ref": [], "text": "Action recognition, the task of understanding human activities from video sequences, is a fundamental problem in computer/robotics vision. This problem arises in many applications: video surveillance, human-computer interaction, sports analysis, human-robot interaction, etc. There has been considerable work on these problems in recent years, driven by the availability of large-scale video datasets and advancements in deep learning techniques such as two-stream Convolutional Neural Network (CNN) [1], Recurrent Neural Network (RNN) [2], and Transformer-based methods [3], [4]. These methods have achieved considerable success in extracting discriminative features from video sequences, leading to significant improvements in action recognition accuracy for ground videos and aerial videos.\nDespite the recent progress in action recognition algorithms, the success of most existing approaches relies on extensive labeled training data followed by a purely supervised learning paradigm that mainly focuses on a backbone architecture design. In this paper, our goal is to design a new learning method for video action recognition using prompt learning. Prompt-based techniques [5] have been proposed for natural language processing tasks to circumvent the issue of lack of labeled data. These learning methods use language models that estimate the probability of the text and use this probability to predict the label thereby reducing or obviating the need for large labeled datasets. In the context of action recognition, prompt learning offers the potential to design better optimization strategies by providing high-level texture descriptions or instructions associated with actions. These prompts can guide the learning process and enable the model to capture discriminative spatio-temporal patterns effectively, resulting in better performance. Furthermore, prompt information can be easily obtained from or embedded into the built-in robotic system.\nMany prompt learning-based techniques have been proposed for few-shot action recognition [6], zero-shot action recognition [7], [8], and ordinal action understanding. [6] proposes knowledge prompting, which leverages commonsense knowledge of actions from external resources to prompt a powerful pre-trained vision-language model for few-shot classification. [7] presents a unified, user promptguided zero-shot learning framework using a target domainindependent skeleton feature extractor, which is pre-trained on a large-scale action recognition dataset. Bridge-Prompt [9] proposes a prompt-based framework to model the semantics across adjacent actions from a series of ordinal actions in instructional videos. Our goal is to apply these techniques for video action recognition. Different from those existing methods, our method lies in an end-to-end fully learnable manner under the supervised learning paradigm.\nWe present a general prompt-learning approach that alleviates the burden of objective optimization by integrating prompt-based learning into the action recognition pipeline. Our approach is designed to enhance the model's ability to process customized inputs by utilizing prompt tokens. These prompt tokens can be either learnable tokens or predefined templates that include information specific to video action recognition. 
Our formulation leverages prompts, which makes it easier to focus on the targets of interest and enables the learning of complex visual concepts.\nIn our prompt learning paradigm, we explore and discuss different types of prompts, including learnable prompts, auxiliary visual information (optical flow, detection, etc.), and large vision models (segmentation). Learnable prompts dynamically generate prompts from a pool of prompt experts conditioned on different inputs. Our goal is to optimize prompts that guide the model's predictions while explicitly learning input-invariant (prompt experts) and input-specific (data-dependent) prompt knowledge. Our learnable prompt can be easily embedded in any model without much extra computational cost, which makes it especially suitable for edge and mobile devices. For auxiliary visual information, we can easily obtain it from the robot's built-in system." }, { "figure_ref": [], "heading": "…", "publication_ref": [], "table_ref": [], "text": "Fig. 1: System: We use prompt learning for action recognition. Our method leverages the strengths of prompt learning to guide the learning process by helping models better focus on the descriptions or instructions associated with actions in the input videos. We explore various prompts, including optical flow, large vision models, and proposed learnable prompts to improve recognition performance. The recognition models can be CNNs or Transformers.\nFor large vision models, given advanced low-latency communication technologies, we can use cloud servers to obtain prompt information. We validate the generalization of our approach by performing extensive evaluations on datasets comprising aerial videos and ground camera videos, covering scenarios with single-agent and multi-agent actions. We demonstrate that our technique can improve performance and enhance the generalization capabilities of video action recognition models in different scenarios. The novel components of our work include: 1) We present a general learning approach that uses prompt learning and auto-regressive techniques for action recognition. 2) We propose a new learnable prompt method that can guide the model's predictions while explicitly learning input-invariant (prompt experts) and input-specific (data-dependent) prompt knowledge. 3) To the best of our knowledge, ours is the first approach to explore the possibility of using large vision models as prompts to instruct the action recognition task. 4) Through empirical evaluations, we demonstrate the potential and effectiveness of prompt learning techniques for action recognition tasks. Specifically, we observe a 3.17-10.2% accuracy improvement on the aerial multi-agent dataset Okutama. Moreover, we observe a 1.0-3.6% accuracy improvement on the ground camera single-agent video dataset Something Something V2." }, { "figure_ref": [], "heading": "II. RELATED WORKS A. Action Recognition", "publication_ref": [ "b0", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b22", "b24", "b25", "b23" ], "table_ref": [], "text": "Human action recognition, i.e., recognizing and understanding human actions, is crucial for a number of real-world applications. Recently, many deep learning architectures have been proposed to improve the performance.
At a broad level, they can be classified into three categories: Two-stream 2D Convolutional Neural Network [1], [10], [11], [12], [13], [14], 3D CNN-based methods [15], [16], [17], [18], [19], Transformer-based approaches [20], [21], [22].\nAlthough these methods have had good success on the ground data and YouTube videos, they cannot achieve a similar level of accuracy on videos captured using Unmanned Aerial Vehicles (UAVs) [23], [24]. Compared to ground or YouTube videos, UAV videos have unique characteristics like small resolution, scale and size variations, and moving cameras. [23] proposed auto zoom algorithms with an attention mechanism for inference on both edge devices and desktop GPUs. [25] proposed a mutual information-based feature alignment and sampling method to extract spatialtemporal features corresponding to human actors for better recognition accuracy. [26] introduced Fourier transformation into attention modules to aggregate the motion salience. [24] proposed a novel frame sampler for aerial action recognition by measuring the similarity between frame patches." }, { "figure_ref": [], "heading": "B. Prompt Learning", "publication_ref": [ "b26", "b27", "b28", "b29", "b4", "b30", "b26", "b31", "b32", "b28", "b33", "b34", "b35", "b36", "b37", "b38", "b39", "b40", "b41" ], "table_ref": [], "text": "The concept of prompt learning, initially introduced by [27], has garnered significant attention in the field of Natural Language Processing (NLP) [28], [29], [30], [5], [31]. Prompt learning revolves around the fundamental idea of treating pre-trained language models like BERT or GPT as knowledge repositories, enabling their utilization in downstream tasks. Early studies, exemplified by [27], [32], concentrated on crafting prompts manually to enhance language model performance. Subsequently, researchers like [33], [29] aimed to automate this process using cost-effective, data-driven approaches. More recently, some works [34], [35], [36] have ventured into learning continuous prompts as an alternative to seeking discrete prompts.\nIn [37], the versatility of expressing a wide range of robot manipulation tasks through multimodal prompts is demonstrated using VIMA, a transformer-based generalist robot agent that processes prompts and generates motor actions autoregressively. [38] introduces a programmatic LLM prompt structure to facilitate plan generation adaptable to various settings, robot functionalities, and tasks. Additionally, [39] proposes a strategy combining prompt engineering principles and a high-level function library to enhance ChatGPT's adaptability to diverse robotics tasks, simulation environments, and hardware setups. niques in vision tasks [40], [41], [42].\nWhile previous research has predominantly concentrated on prompt learning for ground robot tasks, the application of prompt learning to UAV tasks has received limited attention. This paper introduces a comprehensive learning framework aimed at assessing the efficacy of prompt learning in the context of UAV video comprehension, particularly in the realm of action recognition in both ground/YouTube and aerial videos. The objective is to bridge this gap and broaden the applicability of prompt learning to video understanding tasks within this domain." }, { "figure_ref": [ "fig_0" ], "heading": "III. OUR APPROACH", "publication_ref": [], "table_ref": [], "text": "The problem of video action recognition can be broadly classified into single-agent and multi-agent action recognition. 
Based on different data types, there is aerial video action recognition and ground camera action recognition. Typically, they all involve several steps. Taking the transformer-based methods for example, first, the input video or image sequence is processed to extract relevant features such as movement patterns, appearance, or spatial-temporal information. These features are then fed into a reasoning model inference action label. Prompt learning can help the first step better handle feature extraction.\nWe denote the input as\nX i = {x 1 , x 2 , ..., x m }, i ∈ [1, N]\n, where x j is the j th frame in the i th video, m is the total frame number, and N is the total number of videos. The overall approach predicts the action categories by using model f (X i ), which can be CNNs or Transformers. As shown in Figure 2, taking transformer-based methods as an example, we follow the same scheme to extract the features, followed by using the reasoning process to predict the action labels. We also present a prompt-learning-based encoder to help better extract the feature and then propose an auto-regressive temporal reasoning algorithm for recognition models for enhanced inference ability.\nSpecifically, in an action model:\nf = f a • f e ([X, P]),(1)\nwhere f e is the prompt-learning-based input encoder, P is the prompt, and f a is the auto-regressive-based temporal reasoning model, which is used for the temporal dimension." }, { "figure_ref": [], "heading": "A. Prompt Learning-based Input Encoder", "publication_ref": [ "b42", "b42" ], "table_ref": [], "text": "For the first part of the input encoder, inspired by these prompt-based techniques in NLP, we present a new general prompt learning-based input encoder for action recognition. Our formulation leverages the strengths of prompt learning to guide the optimization by providing high-level descriptions or instructions associated with actions in the inputs. We use this to alleviate the burden of models' optimization by helping models better focus on the active region.\nPrompts can enhance the model's ability to process customized inputs by utilizing prompt tokens. By leveraging prompts, models can more easily focus on the interest targets, and prompt learning enables the model to learn complex visual concepts and capture discriminative spatio-temporal patterns effectively. Specifically, our prompts can be either predefined templates (non-learnable prompt: optical flow, large vision models) or learnable tokens (learnable prompt) that include task-specific information. They can be used either alone or in combination.\n1) Learnable Prompt: To better adapt to the input data, we also propose a learnable prompt, which learns to dynamically generate prompts from a pool of prompt experts under different inputs. Prompt experts are learnable parameters that can be updated from the training process. As shown in Figure 3, in our design, we use input-invariant (prompt experts) and input-specific (data dependent) prompts. The input-invariant prompts contain task information, and we use a dynamic mechanism to generate input-specific prompts for different inputs.\nThere are different actions and domains (different video sources) for different videos, so it's challenging to learn a single general prompt for all videos. Therefore, we design an input-invariant prompt experts pool, which contains l 3: Learnable prompt: Learning input-invariant (prompt experts) and input-specific (data dependent) prompt knowledge. 
The inputinvariant prompts will be updated from all the inputs, which contain task information, and we use a dynamic mechanism to generate input-specific prompts for different inputs. Add/Mul means element-wise operations. B × S ×C is the input features' shape, and l is the expert's number in the prompt pool. learnable prompts.\nP = {P 1 , ..., P l },(2)\nwhich is learnable and will be updated from all the inputs. For a specific input X * ,\nP * = Matmul(σ (FC(X * )), P),(3)\nWe first use an FC layer and sigmoid function to get dynamic weights. Then we apply these dynamic weights to the inputinvariant prompt pool to get a customized prompt P * for X * .\nx p i = f e ([x i , p i ]), x i ∈ X * , p i ∈ P * ,(4)\nwhere x p i is the prompt-based feature. 2) Non-Learnable Prompt: Non-Learnable prompts make use of statistical methods (e.g., optical flow) or existing powerful large vision models, which can offer reliable prompts without training. a) Optical Flow Prompt: Optical flow is a fundamental concept in computer vision that involves estimating the motion of objects within a video sequence. It represents the apparent motion of pixels between consecutive frames, providing valuable information about the movement of objects and their relative velocities.\nWe divide a video into m clips. For raw frame x i and frame x j from the video, the optical flow is:\no i = O(x i , x j ), x i ∈ clip i , x j ∈ clip j ,(5)\nwhere clip i and clip j are two adjacent clips from a video, and each clip contains several frames. When computing the optical flow, we only use one frame from each clip in a video and then apply the optical flow to this whole clip. This formulation is more efficient because it avoids many calculations for every frame. Therefore, the input with optical flow prompt becomes:\n[X, P] = {x k * o i | x k ∈ clip i , i ∈ [1, m]}(6)\nwhere clip i has k frames. We use [X, P] to replace the original X in video action recognition. b) Large Vision Model Prompt: Recently, large models have been attracting more attention for NLP and other applications. These large models are considered powerful since they are trained on huge amounts of data and don't need to be finetuned on new tasks as an auxiliary input (i.e. prompt). Our goal is to use these large models to generate prompts (e.g. mask, bbox) for video action recognition.\nOne popular work is the Segment Anything Model (SAM [43]), which can segment any object in an image given only some prompts like a single click or box. SAM is trained on a dataset of 11 million images and 1.1 billion masks. SAM can segment objects with high accuracy, even when they are new or have been modified from the training data. SAM generalizes to new objects and images without the need for additional training, so we don't need to finetune the model on our dataset. For some frames in a video clip, we generate a segmentation mask using a large vision model, SAM [43]. Next, these masks are used as prompts and fused with input frames to optimize the recognition model. Specifically, for frame x i , the output from SAM is:\np i = SAM(x i , boxes/points), x i ∈ clip i (7\n)\nclip i is a video clip containing a few frames,\n[X, P] = {x i * p i | i ∈ [1, m]}(8)\nWe use [X, P] to replace the original X." }, { "figure_ref": [], "heading": "B. Auto-regressive Temporal Reasoning", "publication_ref": [], "table_ref": [], "text": "Temporal reasoning is important for sequence data. Therefore, we propose an Auto-regressive Temporal Reasoning algorithm to better model the time-varying data. 
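For concreteness, the learnable prompt of Eqs. (2)-(4) can be summarized with the small PyTorch sketch below; the pooling over the temporal/spatial dimension, the additive fusion, and all module names are our own assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    """Prompt-expert pool with input-conditioned gating (cf. Eqs. (2)-(4))."""

    def __init__(self, feat_dim: int, num_experts: int = 8):
        super().__init__()
        # Input-invariant prompt experts P = {P_1, ..., P_l}, updated by all inputs.
        self.experts = nn.Parameter(torch.randn(num_experts, feat_dim) * 0.02)
        self.gate = nn.Linear(feat_dim, num_experts)  # FC layer producing dynamic weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, S, C) token/patch features of a video clip.
        weights = torch.sigmoid(self.gate(x.mean(dim=1)))  # (B, l), i.e. sigma(FC(X*))
        prompt = weights @ self.experts                    # (B, C), input-specific P*
        # Fuse the customized prompt with the input features (element-wise add here).
        return x + prompt.unsqueeze(1)

Because the expert pool is shared across all inputs while the gate is computed per input, such a module adds only an l × C parameter matrix and one linear layer, which is consistent with the low overhead claimed for edge and mobile devices.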
Auto-regressive models are statistical models that make predictions based on previous observations. They assume that the future values of a variable can be estimated by considering its past values. For temporal reasoning, this concept is extended to capture dependencies between different frames in a video.\nAfter obtaining the prompt-based features X p = {x p 1 , x p 2 , ..., x p m }, where x p i represents the observation at time step i, the goal is to predict the future values,\nxp i+1 = f a ( ∏ j<(i+1) f a (x p j ) + x p i+1 ) (9)\nwhere f a denotes the auto-regressive model that maintains an internal state and updates it according to the sequential input. ∏ denotes a series of function applications here. The auto-regressive temporal reasoning model considers the past observations of the sequence and the corresponding future observations to learn the underlying temporal dependencies." }, { "figure_ref": [], "heading": "C. Single-agent and Multi-agent Objective", "publication_ref": [], "table_ref": [], "text": "The supervision formats used for single-agent and multi-agent action recognition are different. As a result, we choose different loss functions. Specifically, we choose the classical cross-entropy loss for single-agent action recognition,\nL n = - C ∑ c=1 log ( exp xp n,c / ∑ C i=1 exp xp n,i ) y n,c , (10)\nwhere C is the number of classes, n is the video index, and xp n,c is PLAR's output feature. y is the label. For multi-agent recognition on Okutama, we use BCEWithLogitsLoss,\nL n,c = -[ y n,c • log σ ( xp n,c ) + (1 -y n,c ) • log ( 1 -σ ( xp n,c ) ) ] (11)\nwhere xp n,c is PLAR's output feature and σ is a sigmoid function. This loss combines a sigmoid layer and the BCELoss in a single operation, which is more numerically stable than applying a plain sigmoid followed by a BCELoss, because combining the operations into one layer takes advantage of the log-sum-exp trick. For both single-agent and multi-agent videos, by sharing the same objective, our learning approach can optimize prompts that guide the model's predictions while explicitly learning input-invariant (prompt experts pool) and input-specific (data-dependent) prompt knowledge." }, { "figure_ref": [], "heading": "IV. DATASETS AND RESULTS", "publication_ref": [ "b43", "b44", "b43", "b50", "b49" ], "table_ref": [], "text": "To verify the effectiveness of PLAR, empirical evaluations were conducted on the Okutama [44] and Something-something V2 [45] datasets, comprising both aerial videos and ground camera videos and covering scenarios with single-agent and multi-agent actions.\nA. Datasets and Experiment Settings a) Okutama [44]: The dataset consists of 43 minute-long sequences with 12 action classes, providing a challenge with dynamic action transitions, changing scales and aspect ratios, camera movement, and multi-labeled actors. All the frames extracted from the video datasets were scaled to 224 × 224. The backbone is Swin-T [51]. Following [50], the obtained feature maps were processed by the ROIAlign function (crop size of 5 × 5) to get the desired ROIs. Other training settings follow [51]." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b45", "b46", "b47", "b46", "b18", "b46", "b48", "b46", "b48", "b46", "b46", "b50", "b51", "b44", "b3", "b3", "b3" ], "table_ref": [], "text": "Frame size Accuracy
AARN [46], [47] crops 33.75%
Lite ECO [48], [47] crops 36.25%
I3D(RGB) [19], [47] crops 38.12%
3DCapsNet-DR [49], [47] crops 39.37%
3DCapsNet-EM [49], [47] crops 41.87%
DroneCaps [47] crops 47.50%
DroneAttention without bbox [
b) NEC Drone [52]: is an indoor video dataset that features 5,250 videos depicting 16 distinct actions performed by 19 actors. The initial learning rate is set 0.05. Stochastic Gradient Descent (SGD) is used as the optimizer with 0.0005 weight decay and 0.9 momentum. We use cosine/poly annealing for learning rate decay. All the frames extracted from the video datasets were scaled to 224 × 224. c) Something-something v2 (SSV2 [45]): The SSV2 dataset is regarded as a substantial and comprehensive benchmark for action recognition, encompassing a vast collection of 220k action clips. Following [4], we train for 100 epochs using 8 GPUs with a batch size of 64 and a base learning rate of 5e-5 with a cosine learning rate schedule. We use Adamw and use a weight decay of 1e-4 and a drop path rate of 0.4. For other training and testing settings, we follow [4]. And the backbone is MViTv2-S [4]." }, { "figure_ref": [], "heading": "B. Results on Okutama", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Okutama is an aerial multi-agent action recognition dataset in which multiple actors sequentially perform a diverse set of actions, which makes it very challenging. In the real world, it's difficult to ensure that only a single agent is in the scene for action recognition. Therefore, multi-agent action recognition is a very practical and important research direction. We compare our PLAR with state-of-the-art (SOTA) works.\nAs shown in Table I, if there is no bbox information, we achieved 10.20% improvement over the SOTA method. If there is bbox information, we outperform the SOTA by 3.17%. This demonstrates the effectiveness of our method." }, { "figure_ref": [], "heading": "C. Results on NECDrone", "publication_ref": [ "b52" ], "table_ref": [ "tab_4" ], "text": "We compare our method with other existing methods on NEC-Drone. The frames are extracted from raw videos and augmented as in X3D [53]. The baseline methods use uniform and random sampling. As shown in Table II, on NEC Drone, our PLAR outperforms the baseline methods by 4.0 -7.4% and improves 0.9% over the SOTA." }, { "figure_ref": [], "heading": "D. Results on Something-something V2", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Something-something V2 is a challenging ground camera dataset for visual common sense because it requires models to understand the relationships between objects and actions. For example, to predict the category of a video, a model must understand that \"something bounces a ball\" is different from \"something rolls a ball\". In addition, the model must simultaneously pay attention to temporal modeling. We evaluate our PLAR's reasoning and temporal modeling ability on Something-somethingV2.\nAs shown in Table III, our PLAR improves 3.6% over MViTv1 and 1.0% over MViTv2, which illustrates the effectiveness of our proposed prompt learning and Auto-regressive temporal modeling. " }, { "figure_ref": [], "heading": "E. Ablation Study", "publication_ref": [ "b59", "b42" ], "table_ref": [ "tab_8", "tab_7" ], "text": "First, we conducted ablation studies on various prompts, including optical flow, large vision models, and learnable prompts, to verify their effectiveness. Then we further evaluate the effect of each component of our method and the experts' number of learnable prompts. 
You can refer to [60] for visualizations.\nDifferent Prompts To evaluate the effectiveness of different prompts, various prompts, including optical flow, a large vision model (SAM [43]), and learnable prompts, are examined in this work. As shown in Table V, the large vision model and the learnable prompt achieved better accuracy.\nEffect of Each Component of Our Method We also evaluated the effect of the components of our method, including ROI alignment (ROI), the Large Vision Model, and the Learnable Prompt. As shown in Table IV, ROI achieves a 2.07% improvement, ROI combined with the Large Vision Model achieves a 3.14% improvement, and ROI combined with our Learnable Prompt achieves a 4.80% improvement. The experiments show the effectiveness of our proposed methods.\nExperts Number for Learnable Prompt Our proposed learnable prompt learns to dynamically generate prompts from a pool of prompt experts for different inputs. Prompt experts are learnable parameters that are updated during training; they are input-invariant prompts that contain task information. In this section, we explore the effect of the number of experts. As shown in Table VI, the setting with 8 experts achieved the best accuracy." }, { "figure_ref": [], "heading": "F. Apply to Robot", "publication_ref": [], "table_ref": [], "text": "In robotic vision, most existing systems have detection, tracking, segmentation, and similar subsystems running while the robot is working. Our method can make full use of such auxiliary information as prompts to improve action recognition results. For robots without such auxiliary subsystems, we also provide the learnable prompt to improve action recognition performance. Our learnable prompt is well suited to edge devices since it can benefit the model without introducing much extra computational cost." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We present a general prompt learning approach that alleviates the optimization burden by providing high-level textual descriptions or instructions associated with actions. Our proposed learnable prompt learns to dynamically generate prompts from a pool of prompt experts for different inputs. Our objective is to optimize prompts that guide the model's predictions while explicitly learning input-invariant (prompt experts) and input-specific (data-dependent) prompt knowledge. We observe good accuracy improvements on the challenging datasets." } ]
We present a new general learning approach, Prompt Learning for Action Recognition (PLAR), which leverages the strengths of prompt learning to guide the learning process. Our approach is designed to predict the action label by helping the models focus on the descriptions or instructions associated with actions in the input videos. Our formulation uses various prompts, including learnable prompts, auxiliary visual information, and large vision models, to improve recognition performance. In particular, we design a learnable prompt method that learns to dynamically generate prompts from a pool of prompt experts for different inputs. By sharing the same objective with the task, our proposed PLAR can optimize prompts that guide the model's predictions while explicitly learning input-invariant (prompt experts pool) and input-specific (data-dependent) prompt knowledge. We evaluate our approach on datasets consisting of both ground camera videos and aerial videos, and on scenes with single-agent and multi-agent actions. In practice, we observe a 3.17-10.2% accuracy improvement on the aerial multi-agent dataset Okutama and a 1.0-3.6% improvement on the ground camera single-agent dataset Something Something V2. We plan to release our code publicly.
PLAR: Prompt Learning for Action Recognition
[ { "figure_caption": "Fig. 2 :2Fig.2: Overview of the action recognition framework: We use transformer-based action recognition methods as an example. We designed a prompt-learning-based encoder to help better extract the feature and use our auto-regressive temporal reasoning algorithm for recognition models for enhanced inference ability.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig.3: Learnable prompt: Learning input-invariant (prompt experts) and input-specific (data dependent) prompt knowledge. The inputinvariant prompts will be updated from all the inputs, which contain task information, and we use a dynamic mechanism to generate input-specific prompts for different inputs. Add/Mul means element-wise operations. B × S ×C is the input features' shape, and l is the expert's number in the prompt pool.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison with the state-of-the-art results on the Okutama dataset. With bbox information, we achieved 10.20% improvement over the SOTA method. Without bbox information, we outperformed the SOTA by 3.17%. crops: from detection.", "figure_data": "", "figure_id": "tab_2", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Comparison with existing methods on NEC Drone. Our PLAR improves 4.0-7.4% over X3D and 0.9% over MGS.", "figure_data": "MethodpretrainTop-1 Acc. Top-5 Acc.TEA [56]ImageNet 1k65.1%89.9%MoViNet-A3 [57]N/A64.1%88.8%ViT-B-TimeSformer [21]ImageNet 21k62.5%/SlowFast R101, 8×8 [58]Kinetics40063.1%87.6%MViTv1-B, 16×4 [59]Kinetics40064.7%89.2%MViTv2-S, 16×4 [4]Kinetics40067.3%91.0%PLAR (Ours)Kinetics40068.3%91.4%", "figure_id": "tab_4", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Comparison with the state-of-the-art results on the Something Something V2. Our PLAR improves 3.6% over MViTv1 and 1.0% over strong SOTA MViTv2.", "figure_data": "", "figure_id": "tab_5", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Ablation study in terms of the effect of different components in our method on the Okutama dataset. We evaluated ROI, Large Vision Model (SAM), and Learnable Prompt. The experiments showed the effectiveness of our proposed methods.", "figure_data": "MethodFrame size AccuracyBaseline224×22471.54%Baseline + Optical Flow224×22472.13%Baseline + Large Vision Model (SAM)224×22474.68%Baseline + Learnable Prompt224×22475.93%", "figure_id": "tab_7", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Ablation study in terms of different prompts on theOkutama dataset. We evaluated various prompts, including opticalflow, a large vision model(SAM [43]), and learnable prompts.From our experiment, the large vision model and learnable promptachieved better accuracy.", "figure_id": "tab_8", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "Table VI, we evaluated various experts number including 4, 8, 16, and 32. 
From our experiment, PLAR with the expert number of 8 achieved the best accuracy.", "figure_data": "MethodFrame size AccuracyBaseline224x22471.54%Baseline + Learnable Prompt with 4 Experts224x22476.16%Baseline + Learnable Prompt with 8 Experts224x22476.34%Baseline + Learnable Prompt with 16 Experts224x22475.93%Baseline + Learnable Prompt with 32 Experts224x22473.70%", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation study in terms of different experts number on the Okutama dataset. We evaluated various experts number including 4, 8, 16, and 32. From our experiment, with the expert number of 8 achieved better accuracy.", "figure_data": "", "figure_id": "tab_10", "figure_label": "VI", "figure_type": "table" } ]
Xijun Wang; Ruiqi Xian; Tianrui Guan; Dinesh Manocha
[ { "authors": "K Simonyan; A Zisserman", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Two-stream convolutional networks for action recognition in videos", "year": "2014" }, { "authors": "L Sun; K Jia; K Chen; D.-Y Yeung; B E Shi; S Savarese", "journal": "", "ref_id": "b1", "title": "Lattice long short-term memory for human action recognition", "year": "2017" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Attention is all you need", "year": "2017" }, { "authors": "Y Li; C.-Y Wu; H Fan; K Mangalam; B Xiong; J Malik; C Feichtenhofer", "journal": "", "ref_id": "b3", "title": "Mvitv2: Improved multiscale vision transformers for classification and detection", "year": "2022" }, { "authors": "P Liu; W Yuan; J Fu; Z Jiang; H Hayashi; G Neubig", "journal": "ACM Computing Surveys", "ref_id": "b4", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2023" }, { "authors": "Y Shi; X Wu; H Lin", "journal": "", "ref_id": "b5", "title": "Knowledge prompting for few-shot action recognition", "year": "2022" }, { "authors": "F Sato; R Hachiuma; T Sekii", "journal": "", "ref_id": "b6", "title": "Prompt-guided zero-shot anomaly action recognition using pretrained deep skeleton features", "year": "2023" }, { "authors": "M Wang; J Xing; Y Liu", "journal": "", "ref_id": "b7", "title": "Actionclip: A new paradigm for video action recognition", "year": "2021" }, { "authors": "M Li; L Chen; Y Duan; Z Hu; J Feng; J Zhou; J Lu", "journal": "", "ref_id": "b8", "title": "Bridgeprompt: Towards ordinal action understanding in instructional videos", "year": "2022" }, { "authors": "A Karpathy; G Toderici; S Shetty; T Leung; R Sukthankar; L Fei-Fei", "journal": "", "ref_id": "b9", "title": "Large-scale video classification with convolutional neural networks", "year": "2014" }, { "authors": "L Wang; Y Qiao; X Tang", "journal": "", "ref_id": "b10", "title": "Action recognition with trajectorypooled deep-convolutional descriptors", "year": "2015" }, { "authors": "J Sánchez; F Perronnin; T Mensink; J Verbeek", "journal": "International journal of computer vision", "ref_id": "b11", "title": "Image classification with the fisher vector: Theory and practice", "year": "2013" }, { "authors": "G Chéron; I Laptev; C Schmid", "journal": "", "ref_id": "b12", "title": "P-cnn: Pose-based cnn features for action recognition", "year": "2015" }, { "authors": "R Girdhar; D Ramanan; A Gupta; J Sivic; B Russell", "journal": "", "ref_id": "b13", "title": "Actionvlad: Learning spatio-temporal aggregation for action classification", "year": "2017" }, { "authors": "D Tran; L Bourdev; R Fergus; L Torresani; M Paluri", "journal": "", "ref_id": "b14", "title": "Learning spatiotemporal features with 3d convolutional networks", "year": "2015" }, { "authors": "S Ji; W Xu; M Yang; K Yu", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b15", "title": "3d convolutional neural networks for human action recognition", "year": "2012" }, { "authors": "H Zhang; L Zhang; X Qi; H Li; P H Torr; P Koniusz", "journal": "Springer", "ref_id": "b16", "title": "Few-shot action recognition with permutation-invariant attention", "year": "2020" }, { "authors": "X Li; B Shuai; J Tighe", "journal": "Springer", "ref_id": "b17", "title": "Directional temporal modeling for action 
recognition", "year": "2020" }, { "authors": "J Carreira; A Zisserman", "journal": "", "ref_id": "b18", "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "year": "2017" }, { "authors": "A Arnab; M Dehghani; G Heigold; C Sun; M Lučić; C Schmid", "journal": "", "ref_id": "b19", "title": "Vivit: A video vision transformer", "year": "2021" }, { "authors": "G Bertasius; H Wang; L Torresani", "journal": "ICML", "ref_id": "b20", "title": "Is space-time attention all you need for video understanding?", "year": "2021" }, { "authors": "X Wang; S Zhang; Z Qing; Y Shao; Z Zuo; C Gao; N Sang", "journal": "", "ref_id": "b21", "title": "Oadtr: Online action detection with transformers", "year": "2021" }, { "authors": "X Wang; R Xian; T Guan; C M De Melo; S M Nogar; A Bera; D Manocha", "journal": "", "ref_id": "b22", "title": "Aztr: Aerial video action recognition with auto zoom and temporal reasoning", "year": "2023" }, { "authors": "R Xian; X Wang; D Kothandaraman; D Manocha", "journal": "", "ref_id": "b23", "title": "Pmi sampler: Patch similarity guided frame selection for aerial action recognition", "year": "2023" }, { "authors": "R Xian; X Wang; D Manocha", "journal": "", "ref_id": "b24", "title": "Mitfas: Mutual information based temporal feature alignment and sampling for aerial video action recognition", "year": "2023" }, { "authors": "D Kothandaraman; T Guan; X Wang; S Hu; M.-S Lin; D Manocha", "journal": "", "ref_id": "b25", "title": "Far: Fourier aerial video recognition", "year": "2022" }, { "authors": "F Petroni; T Rocktäschel; P Lewis; A Bakhtin; Y Wu; A H Miller; S Riedel", "journal": "", "ref_id": "b26", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Z Jiang; F F Xu; J Araki; G Neubig", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b28", "title": "How can we know what language models know?", "year": "2020" }, { "authors": "X L Li; P Liang", "journal": "", "ref_id": "b29", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Y Tian; Y Wang; D Krishnan; J B Tenenbaum; P Isola", "journal": "Springer", "ref_id": "b30", "title": "Rethinking few-shot image classification: a good embedding is all you need?", "year": "2020" }, { "authors": "N Poerner; U Waltinger; H Schütze", "journal": "", "ref_id": "b31", "title": "E-bert: Efficient-yeteffective entity embeddings for bert", "year": "2019" }, { "authors": "T Shin; Y Razeghi; R L L Iv; E Wallace; S Singh", "journal": "", "ref_id": "b32", "title": "Eliciting knowledge from language models using automatically generated prompts", "year": "2020" }, { "authors": "X Han; W Zhao; N Ding; Z Liu; M Sun", "journal": "AI Open", "ref_id": "b33", "title": "Ptr: Prompt tuning with rules for text classification", "year": "2022" }, { "authors": "B Lester; R Al-Rfou; N Constant", "journal": "", "ref_id": "b34", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Z Zhong; D Friedman; D Chen", "journal": "", "ref_id": "b35", "title": "Factual probing is [mask]: Learning vs. 
learning to recall", "year": "2021" }, { "authors": "Y Jiang; A Gupta; Z Zhang; G Wang; Y Dou; Y Chen; L Fei-Fei; A Anandkumar; Y Zhu; L Fan", "journal": "", "ref_id": "b36", "title": "Vima: General robot manipulation with multimodal prompts", "year": "2022" }, { "authors": "I Singh; V Blukis; A Mousavian; A Goyal; D Xu; J Tremblay; D Fox; J Thomason; A Garg", "journal": "IEEE", "ref_id": "b37", "title": "Progprompt: Generating situated robot task plans using large language models", "year": "2023" }, { "authors": "S Vemprala; R Bonatti; A Bucker; A Kapoor", "journal": "Microsoft Auton. Syst. Robot. Res", "ref_id": "b38", "title": "Chatgpt for robotics: Design principles and model abilities", "year": "2023" }, { "authors": "Y Rao; W Zhao; G Chen; Y Tang; Z Zhu; G Huang; J Zhou; J Lu", "journal": "", "ref_id": "b39", "title": "Denseclip: Language-guided dense prediction with contextaware prompting", "year": "2022" }, { "authors": "C Ju; T Han; K Zheng; Y Zhang; W Xie", "journal": "Springer", "ref_id": "b40", "title": "Prompting visuallanguage models for efficient video understanding", "year": "2022" }, { "authors": "K Zhou; J Yang; C C Loy; Z Liu", "journal": "International Journal of Computer Vision", "ref_id": "b41", "title": "Learning to prompt for vision-language models", "year": "2021" }, { "authors": "A Kirillov; E Mintun; N Ravi; H Mao; C Rolland; L Gustafson; T Xiao; S Whitehead; A C Berg; W.-Y Lo", "journal": "", "ref_id": "b42", "title": "Segment anything", "year": "2023" }, { "authors": "M Barekatain; M Martí; H.-F Shih; S Murray; K Nakayama; Y Matsuo; H Prendinger", "journal": "", "ref_id": "b43", "title": "Okutama-action: An aerial view video dataset for concurrent human action detection", "year": "2017" }, { "authors": "R Goyal; S Ebrahimi Kahou; V Michalski; J Materzynska; S Westphal; H Kim; V Haenel; I Fruend; P Yianilos; M Mueller-Freitag", "journal": "", "ref_id": "b44", "title": "The\" something something\" video database for learning and evaluating visual common sense", "year": "2017" }, { "authors": "F Yang; S Sakti; Y Wu; S Nakamura", "journal": "IEEE Access", "ref_id": "b45", "title": "A framework for knowing who is doing what in aerial surveillance videos", "year": "2019" }, { "authors": "A M Algamdi; V Sanchez; C.-T Li", "journal": "IEEE", "ref_id": "b46", "title": "Dronecaps: Recognition of human actions in drone videos using capsule networks with binary volume comparisons", "year": "2020" }, { "authors": "M Zolfaghari; K Singh; T Brox", "journal": "", "ref_id": "b47", "title": "Eco: Efficient convolutional network for online video understanding", "year": "2018" }, { "authors": "P Zhang; P Wei; S Han", "journal": "Journal of Physics: Conference Series", "ref_id": "b48", "title": "Capsnets algorithm", "year": "2020" }, { "authors": "S K Yadav; A Luthra; E Pahwa; K Tiwari; H Rathore; H M Pandey; P Corcoran", "journal": "Neural Networks", "ref_id": "b49", "title": "Droneattention: Sparse weighted temporal attention for drone-camera based activity recognition", "year": "2023" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b50", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "J Choi; G Sharma; M Chandraker; J.-B Huang", "journal": "", "ref_id": "b51", "title": "Unsupervised and semi-supervised domain adaptation for action recognition from drones", "year": "2020" }, { "authors": "C Feichtenhofer", "journal": "", "ref_id": "b52", "title": "X3d: 
Expanding architectures for efficient video recognition", "year": "2020" }, { "authors": "Y Zhi; Z Tong; L Wang; G Wu", "journal": "", "ref_id": "b53", "title": "Mgsampler: An explainable sampling strategy for video action recognition", "year": "2021" }, { "authors": "S H Park; J Tack; B Heo; J.-W Ha; J Shin", "journal": "Springer", "ref_id": "b54", "title": "K-centered patch sampling for efficient video recognition", "year": "2022" }, { "authors": "Y Li; B Ji; X Shi; J Zhang; B Kang; L Wang", "journal": "", "ref_id": "b55", "title": "Tea: Temporal excitation and aggregation for action recognition", "year": "2020" }, { "authors": "D Kondratyuk; L Yuan; Y Li; L Zhang; M Tan; M Brown; B Gong", "journal": "", "ref_id": "b56", "title": "Movinets: Mobile video networks for efficient video recognition", "year": "2021" }, { "authors": "C Feichtenhofer; H Fan; J Malik; K He", "journal": "", "ref_id": "b57", "title": "Slowfast networks for video recognition", "year": "2019" }, { "authors": "H Fan; B Xiong; K Mangalam; Y Li; Z Yan; J Malik; C Feichtenhofer", "journal": "", "ref_id": "b58", "title": "Multiscale vision transformers", "year": "2021" }, { "authors": "X Wang; R Xian; T Guan; D Manocha", "journal": "", "ref_id": "b59", "title": "Prompt learning for action recognition", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 173.85, 564.72, 120.78, 9.9 ], "formula_id": "formula_0", "formula_text": "X i = {x 1 , x 2 , ..., x m }, i ∈ [1, N]" }, { "formula_coordinates": [ 3, 140.09, 724.72, 158.71, 9.72 ], "formula_id": "formula_1", "formula_text": "f = f a • f e ([X, P]),(1)" }, { "formula_coordinates": [ 4, 144.51, 327.95, 154.29, 9.9 ], "formula_id": "formula_2", "formula_text": "P = {P 1 , ..., P l },(2)" }, { "formula_coordinates": [ 4, 115.27, 374.1, 183.53, 11.3 ], "formula_id": "formula_3", "formula_text": "P * = Matmul(σ (FC(X * )), P),(3)" }, { "formula_coordinates": [ 4, 110.73, 433.84, 188.08, 14.24 ], "formula_id": "formula_4", "formula_text": "x p i = f e ([x i , p i ]), x i ∈ X * , p i ∈ P * ,(4)" }, { "formula_coordinates": [ 4, 106.22, 616.45, 192.58, 9.72 ], "formula_id": "formula_5", "formula_text": "o i = O(x i , x j ), x i ∈ clip i , x j ∈ clip j ,(5)" }, { "formula_coordinates": [ 4, 101.52, 724.72, 197.28, 9.88 ], "formula_id": "formula_6", "formula_text": "[X, P] = {x k * o i | x k ∈ clip i , i ∈ [1, m]}(6)" }, { "formula_coordinates": [ 4, 359.98, 581.06, 194.15, 9.72 ], "formula_id": "formula_7", "formula_text": "p i = SAM(x i , boxes/points), x i ∈ clip i (7" }, { "formula_coordinates": [ 4, 554.13, 581.38, 3.87, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 4, 382.02, 620.54, 175.98, 9.72 ], "formula_id": "formula_9", "formula_text": "[X, P] = {x i * p i | i ∈ [1, m]}(8)" }, { "formula_coordinates": [ 5, 115.29, 131.91, 183.51, 27.97 ], "formula_id": "formula_10", "formula_text": "xp i+1 = f a ( j<(i+1) ∏ j f a (x p j ) + x p i+1 )(9)" }, { "formula_coordinates": [ 5, 107.62, 312.71, 187.03, 30.31 ], "formula_id": "formula_11", "formula_text": "L n = - C ∑ c=1 log exp xp n,c ∑ C i=1 exp xp n,i y n,c , (10" }, { "formula_coordinates": [ 5, 294.65, 321.8, 4.15, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 5, 289.3, 356.1, 9, 6.35 ], "formula_id": "formula_13", "formula_text": "n,c" }, { "formula_coordinates": [ 5, 60.53, 395.14, 238.27, 20.91 ], "formula_id": "formula_14", "formula_text": "L n,c = -y n,c • log σ xp n,c + (1 -y n,c ) • log 1 -σ xp n,c(11)" } ]
10.18653/v1/D15-1109
2023-06-01
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b26", "b27", "b36", "b15", "b1", "b18", "b19", "b14", "b19", "b22", "b20", "b33", "b3" ], "table_ref": [], "text": "As a fundamental topic in the natural language processing (NLP) community, dependency parsing has drawn a great deal of research interest for decades (Marcus et al., 1994;McDonald et al., 2013;Zhang et al., 2021). The goal of dependency parsing is to find the head for each word and the corresponding relation (Kübler et al., 2009). Most of previous works have focused on the sentence level, while the dialogue-level dependency parsing still stands with the paucity of investigation.\nPrior studies build dialogue-level discourse parsing datasets (Asher et al., 2016;Li et al., 2020) with reference to the text-level discourse dependency (Li et al., 2014). The discourse structures in these data are constructed by elementary discourse units (EDUs) and the relationships between them, without regard to the inner structure of EDUs. It is of interest to incorporate both inner-EDU and inter-EDU dependencies throughout a dialogue to construct a word-wise dependency tree, which is in line with sentence-level dependencies and further able to express hierarchical structures. Hence, we form the dialogue-level dependency, by adapting commonly-used syntactic dependency (Jiang et al., 2018) and rhetorical structure theory (RST) dependency (Carlson et al., 2001;Li et al., 2014) into inner-EDU and inter-EDU dependencies.\nThe scarcity of research on dialogue-level dependency parsing might be caused by the prohibitive cost of annotation. Thus, we focus on low-resource settings, aiming to construct a sufficient test set to support reliable evaluation and label a small training set. We craft an annotation guideline and develop a platform for human labeling. After that, we perform manual annotation and obtain 50 training instances and 800 test samples. Figure 1 illustrates a fragment in a labeled dialogue, which contains three utterances with dependency links.\nLearning a parser with scant to no supervised signals can be challenging. Fortunately, it is straight-forward to use an existing syntactic treebank to attain inner-EDU dependencies. Meanwhile, we find some overlap between syntactic dependency and inter-EDU dependency. As shown in Figure 1, \"offer\" is dependent on \"see\" with the \"dfsubj\" (different subject) relationship in syntactic dependency, which matches the \"attr\" (attribution) of inter-EDU dependency. Furthermore, inter-EDU arcs between utterances are often emitted from the root node above to the root below. Hence, we can naturally obtain inter-EDU arcs from partial syntactic dependencies.\nWe find that certain words can reflect the discourse role of EDUs, helping to assign labels to inter-EDU arcs. For instance, \"see\" and \"if\" reflect the \"attribution\" and \"condition\" signals respectively, illustrated in Figure 1. Thus, we propose a method to discover and leverage these signals for inter-EDU label assignment. Inspired by promptbased learning (Liu et al., 2021), we adopt masked language modeling (MLM) to recover signal words, the set of which is carefully defined in advance. Predicted words are then mapped to corresponding signals. 
Based on these signals, a handful of simple well-defined rules can infer credible dependencies.\nThe gap between the syntactic treebank and dialogue-level dependencies goes beyond labels; differences in text style expose a trained parser to the out-of-distribution (OOD) effect (Li et al., 2019). To alleviate that, we utilize unlabeled dialogue data to bridge the cross-domain gap. We borrow ideas from self-training (Scudder, 1965) and co-training (Blum and Mitchell, 1998), applying single-view and multi-view data filtering augmentation. Given pseudo-labeled samples predicted by parsers, we calculate their confidence scores and set a threshold. Instances above the threshold are provided as additional training samples.\nWe carry out experiments in zero-shot and fewshot scenarios. The empirical results suggest that our signal-based baseline can achieve reasonable performance in the zero-shot condition and provide significant improvement in the few-shot setting. Incorporating filtered pseudo-labeled data further advances performance. In addition, we present several discussion topics, which can affirm directive intuitions and crucial practices in this work.\nOur contributions are mainly in three respects:\n• We build the first Chinese dialogue-level dependency treebank. • We propose signal-based dependency trans-formation and pseudo-labeled data filtering approaches as parsing baselines.\n• The proposed methods achieve reasonable parsing performance in low-resource settings. All our datasets and codes will be publicly available at github.com/Zzoay/DialogDep for research purpose." }, { "figure_ref": [], "heading": "Dataset Construction", "publication_ref": [ "b14", "b19" ], "table_ref": [], "text": "To facilitate our research on dialogue-level dependency parsing, here we construct a high-quality corpus manually, named the Chinese dialogue-level dependency treebank (CDDT). The treebank borrows the sentence-level dependency (Jiang et al., 2018) and the discourse-level dependency (Li et al., 2014) 1 , extending both to the dialogue texts. Below, we present the annotation details." }, { "figure_ref": [ "fig_0" ], "heading": "Annotation Preparation", "publication_ref": [ "b9", "b14" ], "table_ref": [], "text": "Data Collection. We use a publicly available dialogue dataset (Chen et al., 2020) as our raw corpus, which is collected from real-world customer service, containing multiple turns of utterance. We access this data from an available source. 2 We use 800 data from the full test set as the raw corpus of our test set and randomly sample 50 data from the 9101 training data as our raw corpus for few-shot learning. The data providers have segmented words and anonymized private information. We further manually clean the data, removing noise text and performing fine-grained word segmentation. Guideline. A well-defined dialogue-level dependency tree can be decomposed into two parts, i.e., inner-EDU dependencies and inter-EDU dependencies, respectively. Following Carlson et al. (2001), we treat EDUs as non-overlapping spans of text and use lexical and syntactic clues to help determine boundaries. To adapt to dialogue parsing and linguistic features of Chinese, we simplify the boundary determination, regarding the boundaries as the leftmost and rightmost words of spans, whose words are covered by a complete syntactic subtree. 
The root nodes of each subtree are often predicates that reflect single semantic events, illustrated in Figure 1.\nFor the inner-EDU dependency annotation, we follow the guideline proposed by Jiang et al. (2018) which includes 21 classes of syntactic dependency.\nFor the inter-EDU one, we examine and adapt the EDU-wise relations (Carlson et al., 2001) to wordwise dependencies, including 15 coarse-grained relations and 4 fine-grained relations divided from \"topic-comment\" to accommodate connections between utterances. In annotation, inner-EDU dependencies should be annotated first as the underlying basis, while inter-EDU dependencies annotation should take full account of the semantic relations between EDUs. Appendix B shows all the dependencies and reveals more details about our dialogue-level dependencies." }, { "figure_ref": [], "heading": "Human Annotation", "publication_ref": [], "table_ref": [], "text": "Platform. The dialogue-level dependency annotation consists of inner-utterance and inter-utterance parts, where the former is in line with the common sentence-level annotation, and the latter needs to connect from above to below. To accommodate this form of human labeling, we develop an online annotation platform. Annotators can easily label the inner-utterance and inter-utterance dependencies on this platform by linking tags between words. Annotation Process. We employ two postgraduate students as annotators, both of whom are native Chinese speakers. Each annotator is intensively trained to familiarise the guideline. To guarantee the labeling quality, we apply a double annotation and expert review process. Each annotator is assigned to annotate the same raw dialogue. The annotation is accepted if the two submissions are the same. Otherwise, an experienced expert with a linguistic background will compare the two submissions and decide on one answer. Moreover, the expert checks all labeling for quality assurance.\nStatistics. In the end, we arrive at 800 number of annotated dialogues for testing and 50 for few-shot training. Table 1 shows some statistical information. It can be seen that the number of dialogue rounds is large, leading to an expensive labeling process. Also, inter-EDU relations are much more scarce compared to inner-EDU ones, as inter-EDU dependencies are higher-level relations that reflect the connections between EDUs. Appendix B shows the quantities of each category." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology", "publication_ref": [ "b20" ], "table_ref": [], "text": "Dialogue-level dependency parsing involves two folds, namely inner-EDU dependency parsing and inter-EDU dependency parsing. We leverage an existing syntactic treebank S (Li et al., 2019) to learn a parser, which can analyze inner-EDU dependencies well. Meanwhile, it can be observed that syntactic dependencies have the potential to be converted into inter-EDU ones (Figure 1). Thus, we propose a signal-based method to perform dependency transformation for parser training and inference. Moreover, we apply pseudo-labeled data filtering to leverage the large-scale unlabeled dialogue data D." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Signal-based Dependency Transformation", "publication_ref": [ "b13", "b22" ], "table_ref": [], "text": "Our dependency parsing model contains an encoder layer that uses a pretrained language model (PLM) and a biaffine attention decoder as detailed in Dozat and Manning (2017). Given a sentence (in S) or utterance (in dialogues) x = [w 1 , w 2 , . . 
. , w n ], the parser can output the arc and label predictions:\nȳ(arc) , ȳ(label) = Parser (x)(1)\nIn summary, the dependency transformation can be applied in two ways. The first is replacing partial labels of predicted syntactic dependencies with inter-EDU ones, similar to post-processing. We denote it to PostTran. The second is to transform the syntactic dependency dataset S into the one with extended inter-EDU dependencies for parser training, denoted to PreTran. For clarity, we denote the transformed treebank to S τ . We refer the parser trained on the syntactic treebank S to Parser-S, and the parser based on S τ to Parser-T. EDU Segmentation. When PreTran, a sentence in S is firstly separated into several EDUs. We exploit a simple but effective way, treating punctuation such as commas, full stops, and question marks as boundaries between EDUs. 3 Meanwhile, some specific syntactic labels (e.g. sasubj, dfsubj) that span a range can also be considered as implicit EDU segmenting signals (Figure 1). When PostTran, an utterance in dialogue is segmented in the same way. Additionally, there are clear boundaries between utterances, which can be used to separate EDUs.\nGiven the segmented EDUs, the next issue is where the inter-EDU arcs fall and in what direction. We find that inside utterances, inter-EDU dependencies are often overlapped with certain syntactic dependencies (Figure 1). We name the labels of these syntactic labels as \"transforming labels\" and predefine a set of them L = {root, sasubj, df subj}. If y (label) i ∈ L, we can directly retain (sometimes reverse) the arcs and convert the labels to inter-EDU ones. Besides, there is no predicted relationship between utterances. We find that most interutterance arcs are emitted from the root node above to the root node below. Thus, we link the predicted roots from above to below as the inter-EDU arcs between utterances. MLM-based Signal Detection. Given those inter-EDU arcs, we propose a signal-based transform approach to assign inter-EDU labels to them. First, we introduce the signal detection method, which is based on a masked language modeling (MLM) paradigm. We find that certain words reflect the semantic role of EDUs. For instance, an EDU that contains \"if\" is commonly the \"condition\" role (Figure 1). We can emit an arc \"cond\" from its head EDU to it. Thus, we pre-define a word-signal dictionary, mapping words to the corresponding inter-EDU relation signals. A word that reflects the signal is called \"signal word\".\nSubsequently, we apply the MLM on the largescale unlabeled dialogue data D to learn inter-EDU signals. During the training stage, the signal word v is randomly dropped in a segmented EDU e, and a masked language model is to recover it. Like prompt-based learning (Liu et al., 2021), we modify e by a slotted template into a prompt e ′ . The slot is a placeholder with \"[mask]\" tokens. Next, a model with PLM and MLP decoder outputs the word distribution in masked positions,4 as distribution of signal words:\nP (v|e) = softmax MLP PLM e ′ (2)\nwhere the MLP is to project the hidden vector to vocabulary space.\nAt the inference stage, the model outputs the distribution of signal words. The probabilities of signal words are grouped and averaged by their corresponding signals. The grouped probabilities form the distribution of inter-EDU signals.\nP (s|e) = GroupMean (P (v|e))(3)\nThe signal s can be obtained by argmax P (s|e).\nWe expand the predicted signal to the whole e. 
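As a minimal sketch of the signal-grouping step in Eqs. (2)-(3), the snippet below averages the MLM word distribution over the [mask] slots and then groups word probabilities by their signal; the names mask_logits and signal_vocab are illustrative assumptions, not identifiers from the released code.

```python
import torch

def detect_signal(mask_logits: torch.Tensor, signal_vocab: dict) -> str:
    """Average the MLM distribution over the [mask] slots (Eq. 2), group the
    word probabilities by their signal, and return the arg-max signal (Eq. 3).

    mask_logits: (num_mask, vocab_size) logits taken at the [mask] positions.
    signal_vocab: e.g. {"cond": [id_of_if, ...], "attr": [id_of_see, ...]}.
    """
    probs = torch.softmax(mask_logits.mean(dim=0), dim=-1)   # (vocab_size,)
    scores = {
        signal: probs[torch.tensor(word_ids)].mean().item()  # GroupMean over signal words
        for signal, word_ids in signal_vocab.items()
    }
    return max(scores, key=scores.get)
```

The returned arg-max signal is then expanded to the whole EDU, as described above.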
By executing the above procedure in batch with e in x, we end up with the signal sequence, which is denoted in bold s.\nSignal-based Transformation. Given the detected signals s, Algorithm 1 shows how partial syntactic labels are transformed into inter-EDU labels. We pre-set three conditions and the corresponding strategies to cover the majority of cases. 1) If the head node of the current word crosses the EDU's boundary, or the label y_i^(label) ∈ L and the connection spans k, we assign the label by the corresponding signal. 2) If s_i is \"cond\" or \"attr\", we reverse the connections between head and tail nodes. 3) If greetings are the root EDU, then we invert the links, since we are more interested in the core semantics. The FindTail function in Algorithm 1 performs an iterative procedure. It looks for the first node in other EDUs whose head node has an index equal to the current index i. Meanwhile, the labels of arcs between utterances are directly assigned by the signals of their head words." }, { "figure_ref": [], "heading": "Unlabeled Data Utilization", "publication_ref": [ "b3" ], "table_ref": [], "text": "We leverage large-scale dialogue data D to bridge the distribution gap between the syntactic treebank and our annotated corpus. It is straightforward to use a well-trained parser to assign pseudo labels to D for subsequent training. Nevertheless, this pseudo-labeled data includes a large number of mislabeled instances, which in turn jeopardize performance. Thus, we apply single-view and multi-view methods to select high-quality additional samples. Single-view. Intuitively, predicted samples with higher confidence are of higher quality. Given an unlabeled utterance x, a well-trained parser (Parser-S or Parser-T) can output the probabilities of each arc and label, without the final decoding: P(y^(arc) | x), P(y^(label) | x) = Parser*(x) (4). Then, we average the highest probabilities in each position as the confidence of one utterance. Equation 5 shows the calculation of the arc confidence c^(arc); the label confidence c^(label) is computed in the same way.\nc^(arc) = (1/n) Σ_{i=1}^{n} max P(y_i^(arc) | x_i) (5)\nA pseudo-labeled utterance is reserved when its c^(arc) and c^(label) are both greater than a confidence threshold ϵ. The filtered samples are incorporated with S or S τ for training a new parser.\nMulti-view. Prior research (Blum and Mitchell, 1998) and our pilot experiment suggest that multi-view production of pseudo data outperforms single-view production. Thus, we exploit Parser-S and Parser-T to conduct multi-view data augmentation.\nThe confidence computation and filtering methods are the same as the single-view ones above. The filtered samples labeled by Parser-T and Parser-S are merged together and de-duplicated by their confidence magnitude (i.e., if two samples are the same but have different labels, the one with higher confidence is retained). Then, the merged instances are added to S or S τ for subsequent training. In this way, parsers trained on the set with such data augmentation can access complementary information." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Settings", "publication_ref": [ "b10", "b11", "b23" ], "table_ref": [], "text": "Evaluation. We use the labeled attachment score (LAS) and the unlabeled attachment score (UAS) for evaluation. For a fine-grained analysis, we report the scores of inner-EDU and inter-EDU dependencies. 
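As a concrete reference for the single-view filter around Eq. (5), a minimal sketch is given below; the tensor names and shapes are assumptions for illustration and do not come from the authors' implementation.

```python
import torch

EPS = 0.98  # confidence threshold used in the experiments

def utterance_confidence(arc_probs: torch.Tensor, label_probs: torch.Tensor):
    """arc_probs: (n, n) head probabilities for an n-word utterance;
    label_probs: (n, num_labels) relation probabilities.
    Eq. (5): average of the per-position maximum probabilities."""
    c_arc = arc_probs.max(dim=-1).values.mean().item()
    c_label = label_probs.max(dim=-1).values.mean().item()
    return c_arc, c_label

def keep_pseudo_sample(arc_probs, label_probs, threshold=EPS):
    c_arc, c_label = utterance_confidence(arc_probs, label_probs)
    return c_arc > threshold and c_label > threshold
```

In the multi-view variant, the same test would be applied to the outputs of Parser-S and Parser-T, and a duplicated utterance would keep the analysis with the higher confidence.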
In the absence of the development set in our low-resource scenarios, we retain the last training checkpoint for evaluation. To maintain a balance between energy savings and results reliability, in zero-shot settings, we set a random seed of 42 for all experiments and report the testing results. In few-shot settings, we repeat data sampling on 5 random seeds in 4 few-shots (5, 10, 20, 50) settings. Then we choose the seeds that obtain median scores for training and final evaluation. All experiments are carried out on a single GPU of RTX 2080 Ti. Hyper-parameters. Our PLM is a Chinese version of ELECTRA (Clark et al., 2020), implemented by Cui et al. (2020). We exploit the base scale discriminator5 for fine-tuning. The hidden size of the subsequent parts of our Parser and MLM is set to 400, and the dropout is 0.2. We train our models by using the AdamW optimizer (Loshchilov and Hutter, 2017), setting the initial learning rate of the PLM to 2e-5 and of the subsequent modules to 1e-4, with a linear warmup for the first 10% training steps. The weight decay is 0.01. We apply the gradient clipping mechanism by a maximum value of 2.0 to alleviate gradient explosion. The training batch size is 32 and the epoch number is 15. We set the minimum span k of connections to 2. Moreover, we set the iteration number of pseudo-labeled data selection to 1. The confidence threshold ϵ is 0.98." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The approaches in this work can be thought of as a permutation of data, parsers, and filtering methods. There are two sets of data, the syntactic dependency treebank S (S τ ), and the large-scale unlabeled dialogue data D. Also, there are two types of transformations PreTran and PostTran. 6 For brevity, we simplify the Parser-S to f s and the Parser-T to f t . The pseudo-labeled data selection is denoted by a function η. We record and analyze the experimental results of zero-shot and few-shot settings. Zero-shot. Since the trends in the UAS and LAS are roughly consistent, here we only report the LAS. Details are recorded in Appendix D. As shown in Table 2, the parser trained on syntactic treebank S and transformed treebank S τ achieve similar performances in inner-EDU parsing. This shows how little our dependency transforming method disrupts the original syntactic structure. It is intuitive that the parser independently trained on pseudo-labeled dialogue data D performs not well. The parsers trained on the mergers of S and η (D) obtain the highest syntax parsing scores, demonstrating the usefulness of our data selection.\nFor the inter-EDU dependency parsing, it can be observed that the transformed treebank S τ makes the parser perform better. Similar to inner-EDU parsing, the direct use of D performs poorly. Although the ensemble of Parser-S and Parser-T can bring some performance gains, they are limited. The combination of S (S τ ) and η (D) is useful. The parser trained on S τ + η (f s (D) + f t (D)) achieves best inner-utterance performance. It demonstrates the effectiveness of our signal-based dependency transformation and multi-view data selection. Few-shot. We conduct few-shot (5,10,20,50) experiments, incorporating the few labeled samples with those typologies of training data in the above zero-shot setting. For clarity, we also present only the LAS. 
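Since all results here are reported as attachment scores, a small generic sketch of how UAS and LAS can be computed from predicted and gold (head, label) pairs is included for reference; it is not the evaluation script used by the authors.

```python
def attachment_scores(pred_heads, pred_labels, gold_heads, gold_labels):
    """UAS: fraction of words with the correct head;
    LAS: fraction with the correct head and the correct relation label."""
    assert len(pred_heads) == len(gold_heads)
    n = len(gold_heads)
    uas_hits = sum(p == g for p, g in zip(pred_heads, gold_heads))
    las_hits = sum(
        ph == gh and pl == gl
        for ph, pl, gh, gl in zip(pred_heads, pred_labels, gold_heads, gold_labels)
    )
    return uas_hits / n, las_hits / n
```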
Appendix E gives more details.\nAs can be observed in Table 3, our approaches significantly outperform parsers trained on only a small number of labeled training samples. The performance improvement of inter-EDU parsing is more pronounced than that of inner-EDU parsing, probably thanks to the fact that the former is a more difficult task with larger room for enhancement. It can also be observed that as the annotated training data increases, the performance of inner-EDU parsing gradually reaches a higher level, and the improvement achieved by introducing augmented data becomes more limited. Nevertheless, parsers trained on the augmented data S τ + η (f s (D) + f t (D)) achieve the highest scores in the majority of few-shot settings, showing the effectiveness of the proposed method." }, { "figure_ref": [ "fig_0", "fig_0", "fig_2" ], "heading": "Discussion", "publication_ref": [ "b5", "b30" ], "table_ref": [], "text": "What are the advantages of introducing discourse dependency compared to only syntax? Figure 1 has illustrated the superiority of introducing discourse dependency in two aspects. Relationships between utterances can be expressed by discourse dependency. Also, it represents a hierarchical structure and enables a more nuanced connection between EDUs.\nFigure 2 gives another example, where syntactic dependency alone suffers from the problem of isomorphism. The two sentences contain different semantic information, while their two syntactic subtrees are simply connected by \"sasubj\", resulting in highly analogous structures. According to RST dependency, the two subtrees of the first sentence are linked by \"cond\" (condition) from right to left, and the two below are connected by \"elbr\" (elaboration) from left to right. The inclusion of discourse dependency can alleviate the dilemma of isomorphism by expressing high-level hierarchies. How much overlap is there between partial syntactic dependencies and inter-EDU ones? Figure 1 provides an example that demonstrates the potential of transforming syntactic dependencies into inter-EDU ones. Here we present a quantitative analysis.\nWe present a matching score to measure the extent of overlap. We first obtain syntactic predictions by Parser-S. Then we replace those dependencies that bear a specific syntactic label with an inter-EDU class. Next, we compute the LAS for that class, as the matching score between the syntactic and inter-EDU labels. Figure 3 illustrates the top-5 matching scores of the syntactic labels \"root\", \"sasubj\", and \"dfsubj\". It can be seen that these syntactic labels have a respectable consistency with certain inter-EDU classes, revealing the potential for dependency conversion. Interestingly, different syntactic labels are matched with different inter-EDU classes, indicating the overlap of specific syntactic structures with certain discourse structures. How broadly can inter-EDU signals be reflected by pre-defined signal words? As mentioned above, certain words can reflect the semantic role of an EDU. We calculate the consistency between signal words and the relationship labels on EDUs to quantify the extent. Given an EDU, we directly assign its label by the signal corresponding to the pre-defined signal word it contains. Then, we compute the accuracy of each inter-EDU label as the matching score of the relevant signals. It can be observed in Figure 4 that some labels such as \"elbr\" (27), \"qst-ans\" (37), and \"stm-rsp\" (38) can be strongly reflected by the pre-defined signal words. Some labels that do not contain significant signals in EDUs are maintained at low scores, especially \"bckg\" (22) and \"topic-chg\" (35), indicating that there is much room for improvement. 
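The matching-score analysis above can be expressed in a few lines; the sketch below assumes simple Python lists of predicted and gold (head, label) pairs and only mirrors the described procedure, not the authors' analysis code.

```python
def matching_score(pred_arcs, pred_labels, gold_arcs, gold_labels,
                   syn_label, inter_label):
    """Replace every predicted dependency bearing `syn_label` (e.g. "sasubj")
    with `inter_label` (e.g. "cond") and report the LAS of that class."""
    hits, total = 0, 0
    for i, gold_lab in enumerate(gold_labels):
        if gold_lab != inter_label:
            continue  # score only the inter-EDU class of interest
        total += 1
        relabelled = inter_label if pred_labels[i] == syn_label else pred_labels[i]
        if pred_arcs[i] == gold_arcs[i] and relabelled == gold_lab:
            hits += 1
    return hits / total if total else 0.0
```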
Is the MLM-based signal detection method able to detect implicit signal? Our inter-EDU signal detection method applies an MLM paradigm, which tries to recover the dropped signal word during the training stage. Intuitively, this method should have the capacity to predict those implicit signals. Figure 5 gives two examples to prove that. It can be seen that even when \"if\" is removed, the approach of MLM-based signal detection can still predict the \"cond\" signal, which is denoted as a 25 number. Based on this, the parser can correctly predict the \"cond\" relations between two EDUs.\nHow does the reserved size of pseudo-labeled data change by confidence thresholds? We vary the threshold ϵ from 0.5 to 0.98 to investigate the changes in the amount of data selected. Figure 6 illustrates the trend. Interestingly, the size of selected data is still large even though the threshold is greater than 0.9, illustrating the high confidence in dependency parsing. It can also be observed that the increase in merged data becomes more pronounced as the threshold increases. Intuitively, as the amount of data selected decreases, fewer data will overlap.\nWhy not leverage Chinese RST treebanks to train a parser? To date, there exist two Chinese RST treebanks, the RST Spanish-Chinese Treebank (Cao et al., 2018) and GCDT (Peng et al., 2022). It is natural to use them to provide supervised signals for inter-EDU parsing. In our prior experiments, we find that directly using these treebanks for training a parser leads to unsatisfactory performance. One possible reason is that their annotation style is not aligned with ours. For instance, GCDT splits sentences very finely and uses lots of \"same-unit\" relations to align to the English style, fitting their multilingual tasks. This occurs not in our data as it tends to cause an incomplete structure within the EDU. Furthermore, GCDT lacks some of the label categories in our data, especially some common relations such as \"statement-response\". In addition, inconsistencies in data distribution can also contribute to underperformance. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b35", "b8", "b27", "b32", "b19", "b2", "b31", "b0", "b1", "b2", "b18", "b28", "b12", "b10", "b4", "b22", "b29", "b16", "b33", "b3" ], "table_ref": [], "text": "Dependency parsing. To date, there are several Chinese dependency paradigms and the corresponding treebanks (Xue et al., 2005;Che et al., 2012;McDonald et al., 2013;Qiu et al., 2014). These works mostly focus on sentence-level dependency parsing, while the document-level one is conspicuous by its paucity. Li et al. (2014) adopt a dependency parsing paradigm for discourse parsing, while its EDU-wise pattern neglects to parse inside EDUs. We propose a unified schema, which includes both word-wise dependencies within and between EDUs that organize whole dialogues into tree structures. Dialogue parsing. Discourse structures can be expressed by several theories, e.g., RST (Mann andThompson, 1987, 1988), SDRT (Asher and Lascarides, 2003), and PTDB (Prasad al., 2008). The investigation of dialogue-level discourse parsing is still in its early stage. Afantenos et al. (2015); Asher et al. (2016) build an annotated corpus STAC based on SDRT (Asher and Lascarides, 2003), using a dependency paradigm for annotation of relationships between EDUs. Li et al. (2020) present the Molweni dataset, contributing discourse dependency annotations. 
These two data are annotated at the granularity of EDUs and, to accommodate multi-party dialogue scenarios, their annotations are based on SDRT that fits the graph structure. Differently, in line with common syntactic dependencies, the inter-EDU part of our dialogue-level dependency is word-wise and references RST (Carlson et al., 2001), which organizes a text into the tree structure. Furthermore, the prevalent corpus for dialogue dependency parsing is in English, and our dataset fills a lacuna in the Chinese corpus.\nWeakly supervised learning. It is challenging to predict new classes that are unobserved beforehand (Norouzi et al., 2014). PLMs (Devlin et al., 2019;Clark et al., 2020;Brown et al., 2020) can address this challenge through language model-ing combined with label mapping, and promptbased learning can stimulate this ability (Liu et al., 2021;Ouyang et al., 2022;Lang et al., 2022). Inspired by that, we adopt an MLM-based method to sense inter-EDU signals and map them to unseen dependencies. Furthermore, our approach employs single-view and multi-view data selection, borrowing from self-training (Scudder, 1965) and co-training (Blum and Mitchell, 1998), which are used for growing a confidently-labeled training set." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b14" ], "table_ref": [], "text": "We presented the first study of Chinese dialoguelevel dependency parsing. First, we built a highquality treebank named CDDT, adapting syntactic (Jiang et al., 2018) and RST (Carlson et al., 2001) dependencies into inner-EDU and inter-EDU ones. Then, we conducted manual annotation and reached 850 labeled dialogues, 50 for training and 800 for testing. To study low-resource regimes, we leverage a syntactic treebank to get inner-EDU dependencies and induce inter-EDU ones. We employed an MLM method to detect the inter-EDU signals of each EDU and then assign the detected signals to the arcs between EDUs. Furthermore, we exploited single-view and multi-view approaches for pseudo-labeled sample filtering. Empirical results suggest that our signal-based method can achieve respectable performance in zero-shot and few-shot settings, and pseudo-labeled data utilization can provide further improvement." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This study suffers from four limitations. The first limitation is that even though we annotated 850 dialogues manually, which includes almost 200,000 dependencies, there is still room for improvement in the total number of labeled dialogues. The second one is that our parsing method of inter-EDU in the inter-utterance situation is simplistic and straightforward, and it can not cover certain difficult labels. It is desirable to propose a more elegant and comprehensive approach. The third is somewhat analogous to the second. In the future, we should propose an end-to-end method that replaces the current approach, which consists of several processing steps. The last one is about our pseudo-labeled data selection method. It could be interesting to investigate the iterative process." }, { "figure_ref": [], "heading": "A Brief Introduction of RST", "publication_ref": [ "b34" ], "table_ref": [], "text": "As a prevalent theory in discourse parsing, RST has been studied since early (Mann andThompson, 1987, 1988) and has been broadly developed (Carlson et al., 2001;Carlson and Marcu, 2001;Soricut and Marcu, 2003). 
Within the RST framework, a clause is primarily regarded as an EDU, and its boundaries are determined using lexical and syntactic indicators. Each EDU involved in a relationship is attributed with a rhetorical status or nuclearity assignment, which characterizes its semantic role in the overall discourse structure.\nThe RST framework distinguishes between two types of relations: mononuclear and multinuclear. Mononuclear relations consist of two units, namely the nucleus and the satellite, whereas multinuclear relations involve two or more units of equal importance. A total of 53 mononuclear and 25 multinuclear relations can be used for tagging the RST corpus. These 78 relations can be categorized into 16 distinct classes that share common rhetorical meanings. Furthermore, to establish a clear structure within the tree, three additional relations are utilized, namely textual-organization, span, and same-unit. Our inter-EDU dependency follows most of the coarse-grained relations, and the \"topiccomment\" relation is refined in four classes to fit the dialogue scenario." }, { "figure_ref": [], "heading": "B Dependencies for Dialogue", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "Different from Carlson et al. (2001), we weaken the concept of nucleus and satellite in our dialoguelevel dependencies. For mononuclear relations, an arc is emitted from the nucleus to the satellite. For the multinuclear ones, the arc is always emitted from above (left) to below (right). The landing point of the arc is the core semantic word (always a predicate) of a discourse unit. To satisfy the single root and single head constraints, we keep only the dummy root of the first utterance, and the ones of the other utterances are replaced by inter-utterance links. To adapt the customer service scenario in our dialogue data, we add the \"req-proc\" label corresponding to a situation, where one party proposes a requirement, and the other proposes to handle it. A model is encouraged to identify the more difficult relationships. Thus, when annotators find that two relationships appear to be appropriate, the more difficult one should be chosen.\nTable 4 shows the meaning and quantity of each label. It can be seen that the labels of \"root\", \"obj\", \"att\", \"adv\", \"adjct\", and \"punc\" are with leading numbers in syntax dependencies. In discourse dependencies, the \"elbr\" label is the most numerous of the inner-utterance dependencies, and the \"stmrsp\" is the most in quantity in inter-utterance ones." }, { "figure_ref": [], "heading": "C Implemented Details of MLM", "publication_ref": [ "b10" ], "table_ref": [], "text": "We use the large-scale unlabeled dialogue data D to train a signal detection model. The model contains an encoder layer of ELECTRA (Clark et al., 2020) and a linear layer as the decoder to project the hidden vector to a vocabulary space. Inspired by prompt-based learning, we set a template as \"The word that expresses the signal of discourse dependency is: [mask] [mask] [mask]\" in Chinese.\nThe template is as a prefix of input x to obtain the prompt x ′ . As there are many signal words and they are in Chinese, it is difficult to show them and thus we present them in publicly available code. We average the output probabilities in [mask] positions to obtain the signal word distribution. The probability of dropping a word is 0.2, and the one of dropping the signal word is 0.7. We set the epoch number to 2. The other hyper-parameters are the same as section 4.1." 
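A rough sketch of the masking scheme described above is shown below. The English template only paraphrases the Chinese prompt, the signal_words set is a placeholder for the word-signal dictionary released with the code, and whether dropped tokens are removed or replaced by [MASK] is an assumption on our part.

```python
import random

# Paraphrase of the Chinese template; the [MASK] slots act as the answer placeholder.
TEMPLATE = "The word that expresses the signal of discourse dependency is: [MASK] [MASK] [MASK] . "

def make_mlm_example(edu_tokens, signal_words, p_word=0.2, p_signal=0.7):
    """Build one training prompt: drop words from a segmented EDU (signal
    words with probability 0.7, other words with 0.2) and let the MLM
    recover the dropped signal word at the [MASK] slots."""
    kept, targets = [], []
    for tok in edu_tokens:
        p = p_signal if tok in signal_words else p_word
        if random.random() < p:
            if tok in signal_words:
                targets.append(tok)   # signal word the model should recover
            continue                   # dropped tokens are simply omitted here
        kept.append(tok)
    return TEMPLATE + " ".join(kept), targets
```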
}, { "figure_ref": [], "heading": "D Impact of Post-Transformation", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "Table 5 shows the impact of PostTran. It can be observed that PostTran can benefit the most situations of discourse dependency parsing, both without and with PreTran. We think this is because PostTran determines some ambiguous predictions by detected signals. Meanwhile, we find that PostTran sometimes impairs the performance of inner-EDU parsing. This may be owing to incorrect signals or insufficiently comprehensive rules. " }, { "figure_ref": [], "heading": "E Detailed Results of Few-shot Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "We would like to thank the anonymous reviewers for their constructive comments, which help to improve this work. This research is supported by the National Natural Science Foundation of China (No. 62176180)." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "We build a dialogue-level dependency parsing corpus by crowd annotations. The raw dialogue data is obtained from an open source. Besides, we remove information relating to user privacy. The annotation platform is developed independently by us. All annotators were properly paid by their efforts. This dataset can be employed for dialogue-level dependency parsing in both zero-shot and few-shot setting as well as in any other data settings." } ]
Dialogue-level dependency parsing has received insufficient attention, especially for Chinese. To this end, we draw on ideas from syntactic dependency and rhetorical structure theory (RST), developing a high-quality human-annotated corpus, which contains 850 dialogues and 199,803 dependencies. Considering that such tasks suffer from high annotation costs, we investigate zero-shot and few-shot scenarios. Based on an existing syntactic treebank, we adopt a signal-based method to transform seen syntactic dependencies into unseen ones between elementary discourse units (EDUs), where the signals are detected by masked language modeling. Besides, we apply single-view and multi-view data selection to obtain reliable pseudo-labeled instances. Experimental results show the effectiveness of these baselines. Moreover, we discuss several crucial points about our dataset and approach.
A Pilot Study on Dialogue-Level Dependency Parsing for Chinese
[ { "figure_caption": "Figure 1 :1Figure 1: A fragment example of dialogue-level dependencies. Vertical dashed lines are separation boundaries of EDUs. Above words are the inner-EDU dependency arcs, while the arcs below words or cross utterances represent the inter-EDU dependencies.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: Matching score of signals. The numbers below sentences indicate the signals detected, in the order given in Appendix B.", "figure_data": "", "figure_id": "fig_1", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Quantity variation of filtered instances.", "figure_data": "", "figure_id": "fig_2", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": ",", "figure_data": "StatisticTrainTest# dialogue50800avg.# turns2325avg.# words194212# inner9129 159803# inter167129200Table 1: Some statistical information about CDDT. \"#\"and \"avg.#\" represent \"the number of\" and \"the averagenumber of\" respectively.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Algorithm 1 Signal-based TransformationRequire: a sentence (utterance) x; arcs y (arc) ; labels y (label) ; inter-EDU signals s; set of transforming labels L; sequence length n. 1: for i = 1 to n do", "figure_data": "2: 3:if condition 1 then y (label) i ← s i▷ certain labels4:if condition 2 then▷ special cases5: 6: 7:t ← F indT ail(y arc i ) y (arc) t (arc) ← y i y (arc) i ← t8:else if condition 3 then▷ greetings9: 10: 11:t ← F indT ail(y i (arc) y (arc) i , y (label) ← t, \"elbr\" ) i y (arc) t , y (label) t ← 0, \"root\"12:end if13:", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "(f s (D) + f t (D)) 84.24 50.62 S τ + η (f s (D) + f t (D)) 84.34 50.78", "figure_data": "Training DataInner InterS83.14 49.94S τ83.16 49.71f s (D)82.13 48.76f t (D)82.14 48.38(f s + f t ) (D)82.46 49.03S + f s (D)82.48 48.84S τ + f t (D)82.68 49.02S + η (f s (D))84.14 50.47S τ + η (f t (D))84.13 50.27S + η", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The test results under the zero-shot setting. We divide the results by a dashed line according to the source of training data. \"Inner\" represents the inner-EDU dependency, and \"Inter\" is the inter-EDU one.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": ", our approachessignificantly outperform parsers trained on onlya small number of labeled training samples. Theperformance improvement of inter-EDU parsing", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "pro-+ η (f s (D) + f t (D)) 85.55 51.12 85.88 51.88 86.92 52.86 88.20 55.73", "figure_data": "Augmented data5 Inner Inter Inner Inter Inner Inter Inner Inter 10 20 50Null40.77 25.61 55.83 30.98 70.44 42.30 82.64 51.00S85.01 50.02 85.83 50.98 86.80 52.04 88.20 54.42S τ85.05 50.09 85.69 51.13 86.77 52.01 88.28 54.53S + η (f s (D))85.43 50.58 85.70 51.56 87.03 52.51 88.18 55.50S τ + η (f t (D))85.22 50.78 85.81 51.61 86.75 52.69 88.12 55.61S τ", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The test results under the few-shot settings. 
Here we report the LAS.", "figure_data": "", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Statistics of all the dependency relations.", "figure_data": "", "figure_id": "tab_9", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Table 6 shows the details of results in few-shot settings. The definitions of the symbols are consistent with those above. It can be seen that the tends of UAS and LAS are extremely in line. (f s (D) + f t (D)) -88.02 84.78 65.84 50.18 +PostTran 88.12 84.24 66.34 50.62 S τ + η (f s (D) + f t (D)) -88.09 84.98 66.45 50.60 +PostTran 88.22 84.34 66.48 50.78", "figure_data": "Training DataMethodInnerInterUAS LAS UAS LASS-+PostTran 87.37 83.14 65.73 49.94 87.19 83.77 / /S τ-+PostTran 87.32 83.16 65.48 49.71 87.38 83.95 65.57 49.78f s (D)-+PostTran 86.18 82.13 64.31 48.76 86.13 82.90 64.21 48.50f t (D)-+PostTran 86.11 82.14 63.93 48.38 86.11 82.83 63.84 48.23(f s + f t ) (D)-+PostTran 86.48 82.46 64.75 49.03 86.48 82.46 64.58 48.92S + f s (D)-+PostTran 86.48 82.48 64.76 48.84 86.20 82.99 / /S τ + f t (D)-+PostTran 86.72 82.68 64.93 49.02 86.73 82.85 64.88 48.62S + η (f s (D))-+PostTran 88.05 84.14 66.29 50.47 87.97 84.81 / /S τ + η (f t (D))-+PostTran 88.08 84.13 65.88 50.27 88.13 84.92 65.83 50.20S + η", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The impact of PostTran.", "figure_data": "Training DataInnerInterUASLASUASLAS5-shot44.51±4.48 40.77±4.84 35.41±3.76 25.61±2.92+S88.7285.0165.8050.02+S τ88.7685.0565.8850.09+S + η (f s (D))89.2885.4366.4550.58+S τ + η (f t (D))88.9885.2266.5650.78+S τ + η (f s (D) + f t (D))89.1585.5567.5751.1210-shot59.01±4.86 55.83±4.94 45.30±5.72 30.98±3.86+S89.3485.8366.9450.98+S τ89.2785.6967.0951.13+S + η (f s (D))89.3585.7067.5251.56+S τ + η (f t (D))89.4585.8167.4751.61+S τ + η (f s (D) + f t (D))89.5085.8868.0851.8820-shot73.51±1.31 70.44±1.28 55.51±2.70 42.30±2.13+S90.2286.8068.3052.04+S τ90.1986.7768.2152.01+S + η (f s (D))90.7487.0369.1052.51+S τ + η (f t (D))90.1686.7568.9752.69+S τ + η (f s (D) + f t (D))90.5686.9269.2052.8650-shot85.36±0.22 82.64±0.25 66.39±0.98 51.00±0.66+S91.6288.2070.9354.42+S τ91.7588.2871.0054.53+S + η (f s (D))91.7088.1872.0555.50+S τ + η (f t (D))91.6888.1272.3555.61+S τ + η (f s (D) + f t (D))91.7488.2072.5355.73", "figure_id": "tab_11", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The details of few-shot results.", "figure_data": "", "figure_id": "tab_12", "figure_label": "6", "figure_type": "table" } ]
Gongyao Jiang; Shuang Liu; Meishan Zhang; Min Zhang
[ { "authors": "Stergos Afantenos; Eric Kow; Nicholas Asher; Jérémy Perret", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Discourse parsing for multiparty chat dialogues", "year": "2015" }, { "authors": "Nicholas Asher; Julie Hunter; Mathieu Morey; Benamara Farah; Stergos Afantenos", "journal": "European Language Resources Association (ELRA", "ref_id": "b1", "title": "Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus", "year": "2016" }, { "authors": "Nicholas Asher; Alex Lascarides", "journal": "Cambridge University Press", "ref_id": "b2", "title": "Logics of conversation", "year": "2003" }, { "authors": "Avrim Blum; Tom Mitchell", "journal": "", "ref_id": "b3", "title": "Combining labeled and unlabeled data with co-training", "year": "1998" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Shuyuan Cao; Iria Da Cunha; Mikel Iruskieta", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "The RST Spanish-Chinese treebank", "year": "2018" }, { "authors": "Lynn Carlson; Daniel Marcu", "journal": "", "ref_id": "b6", "title": "Discourse tagging reference manual", "year": "2001" }, { "authors": "Lynn Carlson; Daniel Marcu; Mary Ellen Okurovsky", "journal": "", "ref_id": "b7", "title": "Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory", "year": "2001" }, { "authors": "Wanxiang Che; Zhenghua Li; Ting Liu", "journal": "", "ref_id": "b8", "title": "Chinese dependency treebank 1.0 ldc2012t05. Philadelphia: Linguistic Data Consortium", "year": "2012" }, { "authors": "Meng Chen; Ruixue Liu; Lei Shen; Shaozu Yuan; Jingyan Zhou; Youzheng Wu; Xiaodong He; Bowen Zhou", "journal": "European Language Resources Association", "ref_id": "b9", "title": "The JDDC corpus: A large-scale multi-turn Chinese dialogue dataset for E-commerce customer service", "year": "2020" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b10", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Yiming Cui; Wanxiang Che; Ting Liu; Bing Qin; Shijin Wang; Guoping Hu", "journal": "", "ref_id": "b11", "title": "Revisiting pre-trained models for Chinese natural language processing", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Timothy Dozat; Christopher D Manning", "journal": "", "ref_id": "b13", "title": "Deep biaffine attention for neural dependency parsing", "year": "2017-04-24" }, { "authors": "Xinzhou Jiang; Zhenghua Li; Bo Zhang; Min Zhang; Sheng Li; Luo Si", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Supervised treebank conversion: Data and approaches", "year": "2018" }, { "authors": "Sandra Kübler; Ryan Mcdonald; Joakim Nivre", "journal": "", "ref_id": "b15", "title": "Dependency parsing. 
Synthesis lectures on human language technologies", "year": "2009" }, { "authors": "Hunter Lang; Monica N Agrawal; Yoon Kim; David Sontag", "journal": "", "ref_id": "b16", "title": "Co-training improves promptbased learning for large language models", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b17", "title": "", "year": "" }, { "authors": "Jiaqi Li; Ming Liu; Min-Yen Kan; Zihao Zheng; Zekun Wang; Wenqiang Lei; Ting Liu; Bing Qin", "journal": "International Committee on Computational Linguistics", "ref_id": "b18", "title": "Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure", "year": "2020" }, { "authors": "Sujian Li; Liang Wang; Ziqiang Cao; Wenjie Li", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Text-level discourse dependency parsing", "year": "2014" }, { "authors": "Zhenghua Li; Xue Peng; Min Zhang; Rui Wang; Luo Si", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Semi-supervised domain adaptation for dependency parsing", "year": "2019" }, { "authors": "Haitao Lin; Liqun Ma; Junnan Zhu; Lu Xiang; Yu Zhou; Jiajun Zhang; Chengqing Zong", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "CSDS: A fine-grained Chinese dataset for customer service dialogue summarization", "year": "2021" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "", "ref_id": "b22", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2021" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b23", "title": "Fixing weight decay regularization in adam", "year": "2017" }, { "authors": "C William; Sandra A Mann; Thompson", "journal": "", "ref_id": "b24", "title": "Rhetorical structure theory: A theory of text organization", "year": "1987" }, { "authors": "C William; Sandra A Mann; Thompson", "journal": "Text-interdisciplinary Journal for the Study of Discourse", "ref_id": "b25", "title": "Rhetorical structure theory: Toward a functional theory of text organization", "year": "1988" }, { "authors": "Mitchell Marcus; Grace Kim; Mary ; Ann Marcinkiewicz; Robert Macintyre; Ann Bies; Mark Ferguson; Karen Katz; Britta Schasberger", "journal": "", "ref_id": "b26", "title": "The Penn Treebank: Annotating predicate argument structure", "year": "1994-03-08" }, { "authors": "Ryan Mcdonald; Joakim Nivre; Yvonne Quirmbach-Brundage; Yoav Goldberg; Dipanjan Das; Kuzman Ganchev; Keith Hall; Slav Petrov; Hao Zhang; Oscar Täckström; Claudia Bedini; Núria Bertomeu Castelló; Jungmee Lee", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Universal Dependency annotation for multilingual parsing", "year": "2013" }, { "authors": "Mohammad Norouzi; Tomas Mikolov; Samy Bengio; Yoram Singer; Jonathon Shlens; Andrea Frome; Greg S Corrado; Jeffrey Dean", "journal": "", "ref_id": "b28", "title": "Zeroshot learning by convex combination of semantic embeddings", "year": "2014" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "", "ref_id": "b29", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Siyao Peng; Yang ; Janet Liu; Amir Zeldes", "journal": "Association for Computational Linguistics", 
"ref_id": "b30", "title": "GCDT: A Chinese RST treebank for multigenre and multilingual discourse parsing", "year": "2022" }, { "authors": "Rashmi Prasad; Nikhil Dinesh; Alan Lee; Eleni Miltsakaki; Livio Robaldo; Aravind Joshi; Bonnie Webber", "journal": "European Language Resources Association (ELRA)", "ref_id": "b31", "title": "The Penn Discourse TreeBank 2.0", "year": "2008" }, { "authors": "Likun Qiu; Yue Zhang; Jin Peng; Houfeng Wang", "journal": "", "ref_id": "b32", "title": "Multi-view chinese treebanking", "year": "2014" }, { "authors": "H Scudder", "journal": "IEEE Transactions on Information Theory", "ref_id": "b33", "title": "Probability of error of some adaptive pattern-recognition machines", "year": "1965" }, { "authors": "Radu Soricut; Daniel Marcu", "journal": "", "ref_id": "b34", "title": "Sentence level discourse parsing using syntactic and lexical information", "year": "2003" }, { "authors": "Naiwen Xue; Fei Xia; Fu-Dong Chiou; Marta Palmer", "journal": "Natural Language Engineering", "ref_id": "b35", "title": "The penn chinese treebank: Phrase structure annotation of a large corpus", "year": "2005" }, { "authors": "Meishan Zhang; Zhenghua Li; Guohong Fu; Min Zhang", "journal": "Artificial Intelligence", "ref_id": "b36", "title": "Dependency-based syntax-aware word representations", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 355.99, 465.31, 169.15, 9.96 ], "formula_id": "formula_0", "formula_text": "ȳ(arc) , ȳ(label) = Parser (x)(1)" }, { "formula_coordinates": [ 4, 87.68, 677.9, 202.19, 12.33 ], "formula_id": "formula_1", "formula_text": "P (v|e) = softmax MLP PLM e ′ (2)" }, { "formula_coordinates": [ 4, 343.21, 153.5, 181.93, 9.81 ], "formula_id": "formula_2", "formula_text": "P (s|e) = GroupMean (P (v|e))(3)" }, { "formula_coordinates": [ 5, 103.35, 439.15, 186.51, 33.71 ], "formula_id": "formula_3", "formula_text": "c (arc) = 1 n n i max P y (arc) i x i(5)" } ]
2023-05-21
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b16", "b37", "b13", "b20", "b41" ], "table_ref": [], "text": "Segmenting target objects described by users in a collection of images is a fundamental but overlooked capability that facilitates various real-world applications (as illustrated in Fig. 1), such as filtering and labeling cluttered internet images, multi-monitors event discovery, and mobile album retrieval. In recent years, Referring Expression Segmentation (RES) has become a research hotspot with great potentials to solve this demand. Various promising approaches [17,38,44,6,14] and datasets [21,46,36,42] have contributed to significant advancements in this field. However, the setting of RES is overly idealistic. It aims to segment what has been known to exist in a single image de-" }, { "figure_ref": [ "fig_3", "fig_2", "fig_2" ], "heading": "Image Search", "publication_ref": [ "b20" ], "table_ref": [], "text": "a person wearing a hat playing golf \"a vehicle damaged in a traffic accident\" scribed by a expression. This has restricted the practicality of RES in real-world situations, given that it is not always possible to determine if the described object exists in a specific image. Typically, we have a collection of images, some of which may contain the described objects.\nTo address this limitation, in this paper, we introduce a new realistic setting, namely Group-wise Referring Expression Segmentation (GRES), and define it as segmenting objects described in language expression from a group of related images. We establish the foundation of GRES in two aspects: firstly, a baseline method named Grouped Referring Segmenter (GRSer) that explicitly leverages language and intra-group vision connections to obtain promising results, and secondly, a meticulously annotated dataset, Group Referring Dataset (GRD), that ensures complete annotations of described objects across all images in a group.\nOur proposed GRSer, illustrated in Fig. 3, facilitates a simultaneous processing of multiple input images with an expression, and generates segmentation masks for all described objects. We devise a Triphasic Query Module (TQM), where the target objects not only queried by linguistic features, but also by intra-group visual features. In contrast to segmenting based solely on linguistic expres- sion, querying target objects with intra-group homo-modal visual features bridges the modal gap and assembles a more precise target concept. In the proposed Heatmap Hierarchizer, these heatmaps generated by intra-group visual querying are ranked based on their confidences, and then jointly used to predict segmentation masks in condition of the ranking priorities. Furthermore, we propose a mirror training strategy and triplet loss to learn anti-expression features, which are crucial for the TQM and Heatmap Hierarchizer, and enable GRSer to comprehend the image background and negative samples. The promising performance of GRSer makes it a strong research baseline for GRES.\nTo facilitate the research in novel GRES setting, the GRD dataset is introduced, which effectively overcomes the incomplete annotation problem in current RES datasets [21,46,36]. For example, in Fig. 2, RefCOCOg's expression of the 1st image also corresponds to objects in images 2, 3, and 4, but they are not annotated, causing erroneous false positive samples during evaluation if correctly segmented. 
In contrast, expressions in GRD refers objects completely for all images across the dataset, including images without targets or with multiple targets. Our GRD includes 16,480 positive object-expression pairs, and 41,231 reliable negative image-expression pairs. Additionally, GRD collects images from Internet search engines by group keywords, where negative samples inherently exist in each group, making them hard negatives and effectively increasing the dataset's difficulty. Finally, as shown in Fig. 2(b), compared with current RES datasets, GRD carefully labels details in segmentation masks, such as blocking and hollowing out, which contributes to a more accurate and re-liable evaluation efficacy than existing datasets.\nOur contributions can be summarized as:\n• We formalize a Group-wise Referring Expression Segmentation (GRES) setting over the RES task, which advances user-specified object segmentation towards more practical applications.\n• To support GRES research, we present a meticulously compiled dataset named GRD, possessing complete group-wise annotations of target objects. The dataset will also benefit various other vision-language tasks.\n• Extensive experiments show the effectiveness and generality of the proposed baseline method, GRSer, which achieves SOTA results on the GRES and related tasks, such as Co-Salient Object Detection and RES." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Referring Expression Segmentation (RES)", "publication_ref": [ "b16", "b27", "b42", "b37", "b13", "b21", "b44", "b33", "b29", "b42", "b4", "b56", "b17", "b20", "b41", "b20" ], "table_ref": [], "text": "RES aims to ground the target object in the given image referred by the language and generate a corresponding segmentation mask. Methods. A common approach to solve RES is to first extract both vision and language features, and then fuse the multi-modal features to predict the mask. Early methods [17,25,28] simply concatenate visual features and language features extracted by convolutional neural networks (CNNs) and recurrent neural networks (RNNs), respectively. Due to the breakthrough of Transformer [41, 9, 31], a rich line of works begin to explore its remarkable fusion power for multi-modality. Some [43,27,38,14,6,22,44] conduct cross-model alignment based on Transformer, others [45,34,30,35,43] adopt various attention mechanisms to achieve better feature weighting and fusing. There are some works to explore how to solve RES working with related tasks, such as visual grounding [35, 24, 55, 29], zero/one-shot segmentation [33], interactive segmentation [5], unified segmentation [57], and referring expression generation [18]. Datasets. Several datasets have been introduced to evaluate the performance of RES methods, including Ref-Clef [21], RefCOCO [46], RefCOCO+ [46], RefCOCOg (G-Ref) [36], and PhraseCut [42]. RefClef, RefCOCO, and RefCOCO+ are collected interactively in a two-player game, named ReferitGame [21], thus the given expressions are more concise and less flowery. Among them, Ref-COCO+ bans location words in expressions, making it more challenging. RefCOCOg is collected non-interactively, resulting in more complex expressions, often full sentences instead of phrases. PhraseCut's phases, consist of attribute, category, and relationship, are automatically generated by predefined templates and existing annotations from Visual Genome [23]. 
The above datasets fail to serve as reliable evaluation datasets for GRES setting due to their image-text pairs are one-to-one matched, which leads to incomplete annotation for target objects in unmatched images. More datasets comparison can be found in Tab. 1." }, { "figure_ref": [], "heading": "Co-Salient Object Detection (Co-SOD)", "publication_ref": [ "b49", "b52", "b19", "b53", "b50", "b12", "b51", "b11", "b46", "b38", "b53", "b52", "b50", "b19", "b51", "b12", "b15", "b36", "b53", "b51", "b12", "b46", "b14", "b0", "b11", "b53" ], "table_ref": [], "text": "Co-SOD is a recent research focus [50,53,20,54,49,51,13,52,12,47,56], aiming to discover the common semantic objects in a group of related images. In this task, the target object does not need to be specified by language expression, while required to appear commonly in all images. Co-SOD methods need to perceive what the common objects are from the pure visual modality, and then segment them. Historically, researchers refer to Co-SOD as \"detection\", but its outputs are actually segmentation maps. Methods. Recently, many impressive Co-SOD methods have arisen, focusing primarily on obtaining co-representations of common objects to guide target object segmentation. Corepresentations can be obtained through methods like feature concatenation [39], linear addition [54], channel shuffling [53], graph neural networks [19,51], and iterative purification [56]. There is also a body of research work focused on intra-group information exchange, such as using pair-wise similarity map [20], dynamic convolution [52], group affinity [13], and transformers [16,37]. Moreover, besides these central lines of exploring, efforts have been made to enhance the Co-SOD model through data enhancement [54,52], training strategies [13,47], adversarial attack preventing [15]. Datasets. Co-SOD datasets include iCoseg [1], MSRC [40], CoSal2015 [48], CoSOD3k [12], and CoCA [54]. Early datasets such as iCoseg and MSRC contain co-salient objects with similar appearance in similar scenes. CoSal2015 and CoSOD3k are large-scale datasets, featuring target objects with varying appearance in the same category. CoCA, the latest dataset, presents a more challenging setting with at least one extraneous salient object in each image, requiring the model to identify the target object in cluttered scenes. Although the data sets have favorable grouping scenarios, they lack expressions and negative samples, making them unsuitable for direct use as evaluation dataset for GRES." }, { "figure_ref": [], "heading": "Proposed Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "The pipeline of our Grouped Referring Segmenter (GRSer) is demonstrated in Fig. 3. Given an expression that specifies an object, a group of related images are processed simultaneously, and then all corresponding pixelwise masks of the target object are output. In particular, for the negative sample (i.e., image without target object), its output mask is 0 mask. There are four modules in our GRSer, including a multi-modal encoder, a triphasic query module, a heatmap hierarchizer, and a mask predictor.\nText & Image Encoder. BERT [4] is employed to embed the expression into linguistic features L ∈ R C l , where C l is the number of channels for the language feature. 
Meanwhile, we construct an anti-expression by adding a prefix <no> to the given expression, which is embedded as linguistic anti-features L anti ∈ R C l . We follow LAVT [44] to perform visual encoding to obtain visual features V n ∈ R Cv×H×W for each image x n in the group (n = 1, . . . , N ), where N is the number of images in one group, and C v , H, and W denote the channel number, height, and width, respectively. For more details about the encoder and decoder, please refer to supplementary materials.\nTQM & Heatmap Hierarchizer. The language-vision and intra-group vision-vision semantic relations are explicitly captured in proposed TQM (Sec. 3.2) to produce heatmaps, which reflect the spatial relation between linguistic and intra-group visual features. And these heatmaps are ranked and rearranged in heatmap hierarchizer (Sec. 3.3) according to their importance with the expression to better activate their locating capability for mask prediction.\nMask Predictor. The well-ranked heatmaps are concatenated with visual features V n to obtain the triphasic features z n , which integrates the discriminative cues of target object in TQM and heatmap hierarchizer. z n is used to distinguish positive or negative samples, and predict the segmentation masks. In inference, the positive distance\nd pos = d(z n , L) and negative distance d neg = d(z n , L anti ) are computed, where the Euclidean Distance d(•) is applied. If d pos + m < d neg (\nm is the margin value), then image x n is recognized as a positive sample, and its z n is then transmitted to the decoder to output segmentation mask. If not, 0 mask is reassigned as negative output." }, { "figure_ref": [ "fig_4", "fig_3", "fig_4" ], "heading": "Triphasic Query Module (TQM)", "publication_ref": [], "table_ref": [], "text": "Due to the inherent modality gap, directly querying objects through linguistic features often results in rougher language-activated heatmaps (e.g., the 2nd image in the bottom row of Fig. 4). We resort to intra-group homomodal visual features to act as \"experts\", offering suggested heatmaps from their perspectives. To this end, we devise the TQM, where \"triphasic\" means that the target object not only queried by linguistic features, but also by intra-group homo-modal visual features.\nIn the right top of Fig. 3, we take one image x n as an example to illustrate the detailed process. First, in order to detect the most discriminating region in the visual feature map responded to the referring expression, a languageactivated heatmap M l n ∈ R H×W is generated. Specifically, the cosine similarity is computed between the flattened visual features V n ∈ R Cv×HW and linguistic features L = ω l (L) ∈ R Cv , where a 1 × 1 convolution layer ω l with C v number of output channels are deployed to align the cross-modal features. This is denoted as\nM l n = V T n • L V n L .(1)\nSecond, M l n is element-wise multiplied with visual features V n , and the output features are averaged along spatial dimension (i.e., H × W ) with mask average pooling to generate a prototype p n ∈ R Cv corresponding to image x n , as\np n = avg(M l n V n ),(2)\nwhere M l n is broadcast to the same size as V n , and denotes the element-wise multiplication. In this manner, a group of prototypes {p i } N i=1 is generated, with each prototype corresponding to one image from a group. 
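To make Eqs. (1)-(2) concrete, the following is a minimal PyTorch sketch of the language-activated heatmap and the prototype computation. It is illustrative only: the function names and toy feature sizes are assumptions, a Linear layer stands in for the paper's 1x1 convolution omega_l (equivalent when applied to a single feature vector), and a plain spatial mean stands in for the mask average pooling described above.

```python
import torch
import torch.nn.functional as F

def language_activated_heatmap(V_n, L, omega_l):
    """Eq. (1): cosine similarity between each spatial location of V_n and the
    projected linguistic feature, giving the language-activated heatmap M^l_n."""
    C_v, H, W = V_n.shape
    L_hat = omega_l(L)                                   # project C_l -> C_v
    V_flat = F.normalize(V_n.reshape(C_v, H * W), dim=0) # unit-norm along channels
    sim = V_flat.T @ F.normalize(L_hat, dim=0)           # (H*W,)
    return sim.reshape(H, W)

def prototype(V_n, M_l):
    """Eq. (2): weight V_n by the heatmap and average over the spatial dimensions
    (a plain mean here; the paper's mask average pooling may normalise differently)."""
    return (V_n * M_l.unsqueeze(0)).mean(dim=(1, 2))     # (C_v,)

# Toy usage with hypothetical sizes (C_l = 768 from BERT, C_v = 512).
omega_l = torch.nn.Linear(768, 512, bias=False)
V_n, L = torch.randn(512, 13, 13), torch.randn(768)
p_n = prototype(V_n, language_activated_heatmap(V_n, L, omega_l))
```

Running this once per image in the group yields the set of prototypes {p_i} discussed next.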
Intuitively, the prototype integrates visual features of the target object.\nNext, the intra-group queries are conducted between current image x n and a group of prototypes, and these prototypes serve as \"experts\" to provide localization heatmap suggestions from their perspectives.\nIn details, the cosine similarity is computed between the flattened visual features V n and each prototype p i from {p i } N i=1 one-by-one, and then produce N vision-activated heatmaps\nM v n = {M v ni } N i=1 , as M v ni = V T n • p i V n p i ,(3)\nwhere n denotes the index of image in a group, and i denotes the index of prototype in a group. As shown in Fig. 4, these four M v ni (the 3rd -6th in the bottom row) show stronger locating capability than the M l n (the 2nd in the bottom row), which thus provide more accurate guidance for mask prediction." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Heatmap Hierarchizer", "publication_ref": [ "b6" ], "table_ref": [], "text": "Considering that the vision-activated heatmaps suggested by \"experts\" from TQM can be uneven, especially when there are negative samples. For example, in Fig. 4, prototypes come from negative samples tend to generate counterfactual localization heatmaps (the 7th -10th in the bottom row). We need experts to give confidence of their suggestions to determine the heatmap priority in following prediction. To this end, we propose a heatmap hierarchizer to rank and rearrange these vision-activated heatmaps based on a confidence evaluation strategy.\nTo get the rank of different heatmaps, we define a scoring criterion based on the multi-modal representation dis- ) activated by linguistic anti-feature L anti in our proposed mirror training process (Sec. 3.4). In the second and third row, we demonstrate the language-activated heatmap M l (the one inside the solid green border) and vision-activated heatmaps M v (the others inside the dotted border) . Note that the raw order of heatmaps are shown in the second row, and the well-ranked heatmaps are demonstrated in the third row. Best viewed in color.\ntance. Specifically, we compute the Euclidean Distance [7] between each prototype from the group {p i } N i=1 and linguistic features L to get the positive score\n{s pos i } N i=1 . Negative score {s neg i } N\ni=1 is also obtained by computing Euclidean Distance between {p i } N i=1 and linguistic antifeatures L anti . In this way, a smaller s pos i indicates that the prototype p i gets closer to the target object, which means its corresponding generated heatmap M v ni is more reliable. Inversely, a smaller s neg i indicates the prototype p i fits the background (i.e., outside of the target object in a image) better. Then, we obtain the positive rank R pos and negative rank R neg for N vision-activated heatmaps\nM v n = {M v ni } N i=1\n, according to corresponding positive score s pos i (from smallest to largest) and negative score s neg i (from largest to smallest), respectively. The positive rank R pos and negative rank R neg are summed as the final rank to rearrange M v n by\nM v n = rearrange (M v n |R pos + R neg ),(4)\nwhere rearrange(•) means changing the channel-wise order of these stacked heatmaps. These heatmaps are then concatenated with visual features V n to get triphasic features z n for mask prediction. In Fig. 4, it can be seen that heatmaps with lower confidence (generated by negative samples) are relegated to the back after rearrangement." 
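As a companion to the TQM and heatmap hierarchizer just described, here is a hedged sketch of the intra-group querying of Eq. (3) and the rank-and-rearrange step of Eq. (4). It assumes the linguistic features and anti-features have already been projected into the visual channel space and that the scores are plain Euclidean distances on prototypes; the function and variable names are ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def vision_activated_heatmaps(V_n, prototypes):
    """Eq. (3): one heatmap per prototype via channel-wise cosine similarity."""
    C_v, H, W = V_n.shape
    V_flat = F.normalize(V_n.reshape(C_v, H * W), dim=0)   # (C_v, H*W)
    P = F.normalize(prototypes, dim=1)                      # (N, C_v)
    return (P @ V_flat).reshape(-1, H, W)                   # (N, H, W)

def hierarchize(heatmaps, prototypes, L_hat, L_anti_hat):
    """Eq. (4): rearrange heatmaps by the sum of the positive rank (closer to the
    expression is better) and the negative rank (farther from the anti-expression
    is better)."""
    s_pos = (prototypes - L_hat).norm(dim=1)       # Euclidean distance to L
    s_neg = (prototypes - L_anti_hat).norm(dim=1)  # Euclidean distance to L_anti
    r_pos = s_pos.argsort().argsort()              # ascending: small distance -> low rank
    r_neg = (-s_neg).argsort().argsort()           # descending: large distance -> low rank
    order = (r_pos + r_neg).argsort()              # most trustworthy heatmaps first
    return heatmaps[order]
```

The reordered stack is then concatenated with V_n to form the triphasic features z_n used by the mask predictor.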
}, { "figure_ref": [ "fig_4" ], "heading": "Training Objectives", "publication_ref": [], "table_ref": [], "text": "Training with Negative Samples. For training, we set the ratio between positive samples x pos (i.e., image containing target object referred by the expression) and negative samples x neg (i.e., noisy image where no target object exists) in each image group as 1 : \nMirror Training Strategy. To further force our model to comprehend the semantics contained in linguistic antifeatures L anti , we design a mirror training strategy. Intuitively, linguistic anti-features represent the opposite semantics of the given expression, and thus we explicitly relate the linguistic anti-features to the image background (i.e., outside of the target object in an image). Specifically, during training, on the basis of original pipeline, we add an additional mirror one that swaps the roles of L and L anti , and corresponding ground-truth mask is replaced with the background (i.e., 1 -Y, where Y denotes the ground-truth mask for the target object). As shown in the first row of Fig. 4, the feature maps (i.e., Y anti i ) in decoder activated by L anti exactly focus on the background outside of the target object. The cross-entropy loss is applied for mirror training, denoted as L mirr ce . Objective Function. Note that only positive samples x pos are included for computing cross-entropy loss, while all samples (i.e., x pos and x neg ) are used for computing triplet margin loss. We adopt the increasing weighting strategy for triplet margin loss to optimize the training process, by\nL = L ce ( Ŷ, Y) + λL mirr ce ( Ŷanti , 1 -Y) + t T L tri ,(6)\nwhere t and T denote the current training epoch and total number of training epochs, respectively; λ is a hyperparameter to weigh the importance of mirror training strategy; Ŷ denotes predicted mask referred by the expression, and Ŷanti denotes predicted mask referred by the antiexpression obtained in mirror training strategy." }, { "figure_ref": [], "heading": "Proposed Dataset", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Dataset Highlights", "publication_ref": [], "table_ref": [], "text": "Within a collection of images, for a given expression, we label all described objects in all images without any omission. This constitutes the fundamental attribute that distinguishes GRD from its counterparts. For instance, in Ref-COCOg's two samples shown in Fig. 2, the first image's expression is \"man in blue clothes\", while the same object in the second image lacks annotation. This flaw renders the expression valid only in one image, making other images in the dataset unsuitable as negative samples. In addition to complete annotation, there are some features that make GRD exceptional. One is that the images in each group of GRD are related, so even if the described target does not appear on some images in the group, the scenes in these images are often close to the description, which makes this dataset more challenging. Additionally, our delicate annotation in Fig. 2 enables objective evaluation of model performance compared to current RES datasets. More features can be found in Tab. 1. Thanks to these features, GRD can help many other vision-language tasks, such as visual grounding, RES, and grounding caption. GRD is freely available for non-commercial research purposes." 
}, { "figure_ref": [], "heading": "Construction Procedures", "publication_ref": [], "table_ref": [], "text": "We collect images searching from Flickr1 . If crawling directly according to the expression, we usually get the iconic images, which appear in profile, unobstructed near the center of a neatly composed photo. In order to meet the real situation and increase the challenge, we employ the combination of target keywords and scene keywords to crawl a group of related images from search engines. Consequently, the images involves intricate scenes, i.e., non-iconic images [26]. Then, for each group of images, we carefully propose several related expressions to be annotated. The announcers will segment the objects in the group according to these expressions, without excluding any referred objects. This completeness allows our dataset to accurately assess the model's performance on negative samples. Each object Table 1: Valuable features bring by GRD dataset. \"scene grouping\" means samples are grouped by similar scenes. \"complete annotation\" means any object satisfying the given description is annotated across dataset. In this case, samples without the label for a specific expression could be reliably considered as \"certified negative samples\" for this expression. If there are \"multiple referred objects\" described in an image, all of them are annotated without omission. \"meticulous masks\" are provided to fits the object perfectly, especially for the hollowed-out and blocking areas. \"object-centric\" means the dataset concentrates on objects rather than broad concepts like grass and sky. \"avg. expression length\" represents the average expression length. RC, RC+, RCg, RCF, and PC denote RefCOCO, RefCOCO+, RefCOCOg, RefClef, and PhaseCut, respectively. RC RC+ RCg RCF PC GRD scene grouping complete annotation certified neg. samples multi. referred objects meticulous masks object-centric avg. expression length 3.6 3.5 8.4 3.5 2.0 5.9 annotation takes an average of 3 minutes to precisely define edges and remove hollow areas, guaranteeing accurate evaluation of model segmentation performances." }, { "figure_ref": [], "heading": "Datset Statistics.", "publication_ref": [], "table_ref": [], "text": "The GRD dataset contains 10,578 images. It includes 106 scenes (groups), such as indoor, outdoor and sports ground. Each group has around 100 images and 3 welldesigned expressions referring to various number of positive and negative samples. In total, the dataset is annotated with 316 expressions, resulting in 31,524 positive or negative image-text pairs. The expressions have an average length of 5.9 words. More statistics and examples can be viewed in supplementary materials." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Metrics", "publication_ref": [], "table_ref": [], "text": "To comprehensively evaluate GRSer's performance, apart from the proposed GRD, we also introduce RES and Co-SOD datasets as supplements. For RES dataset (e.g., RefCOCO [46], RefCOCO+ [46], and RefCOCOg [36]), given that there exist some repeated sentences in different images, we reconstruct these datasets to the form of \"one sentence vs. a group of referred images\", named as G-RefC, G-RefC+, and G-RefCg, with randomly sampled negative samples from other groups. 
These re-built datasets have 8717, 8020, and 2451 image groups, respectively, with Table 2: Quantitative comparisons with RES methods in terms of mean Intersection-over-Union (mIoU) for the RES setting and our proposed mIoU for the GRES setting on the G-RefC, G-RefC+, G-RefCg, and our proposed GRD datasets. The best results are marked in bold." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b13", "b41", "b9", "b10" ], "table_ref": [], "text": "Pub GRES (with negative samples) RES (no negative samples) G-RefC G-RefC+ G-RefCg GRD G-RefC G-RefC+ G-RefCg GRD EFN CVPR21 [14] 25. 42 We adopt the metric of mean intersection-over-union (mIoU) for evaluating model's performance in RES setting with no negative samples included. When negative samples are introduced, their corresponding ground-truth masks are 0 mask, where the originally defined mIoU is not valid (i.e., IoU ≡ 0, for the negative sample). Therefore, we define an adapted metric mIoU to measure model performance on both segmentation accuracy and recognition ability for negative samples. Specifically, the idea of confusion matrix is adopted: for a true positive sample (TP), its IoU is calculated in the same way as the vanilla IoU; for a true negative sample (TN), its IoU is set to 1; for a false positive sample (FP) or false negative sample (FN), its IoU is set to 0. Then, the IoU value of all m test samples are averaged to get the mIoU, i.e., mIoU = 1 m m i=1 IoU i . Besides, for Co-SOD task, common metrics of mean absolute error (MAE)[3], maximum F-measure [2] (F max ), S-measure [10] (S α ), and mean E-measure [11] (E ξ ) are adopted." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "The Transformer layers for visual encoding are initialized with classification weights pre-trained on ImageNet-22K from the Swin Transformer [31]. The language encoder is the base BERT [4] with 12 layers and hidden size of 768 (i.e., C l ), which is implemented from Hugging-Face's Transformer library [41]. C v is set to 512. Following [31,44], the AdamW optimizer [32] is adopted with weight decay of 0.01. The initial learning rate is set to 0.00005 with polynomial learning rate decay. The model is trained for 80 epochs with batch size of 4. Images are resized to 416 × 416 and no data augmentations are employed. The size of input image group N is set to 8. The margin value m is set to 1 in triplet margin loss." }, { "figure_ref": [], "heading": "Comparison with SOTA Methods", "publication_ref": [], "table_ref": [], "text": "Results on the GRES Setting. In Tab. 2, we compare our GRSer with other RES methods on the re-built G-RefC, G-RefC+, G-RefCg, and our proposed GRD datasets. Specifically, negative samples are introduced to each image group for both training and inference (see Sec. 5.1 for details), where the adapted metric mIoU is used. Compared methods are implemented following their original paradigms to input data in the form of \"one image vs. one expression\". The ground-truth for the negative sample is set as 0 mask. Note that the proposed GRD dataset is only used for inference, and its corresponding train set is the combination of train sets from G-RefC, G-RefC+ and G-RefCg. It can be seen that our GRSer significantly outperforms other methods, and excels in recognition of negative samples, due to our designed triplet loss and mirror training strategy, which effectively optimize the multi-modal representation space. Results on the RES Setting. In Tab. 
2, we present the results in the conventional RES setting, where mIoU metric is adopted. Here, no negative sample is included and all images in a group do contain target objects. Similarly, our GRSer outperforms all compared methods, particularly on the more difficult G-RefCg and GRD dataset (the given ex- It can be seen that our method also achieves remarkable performances on this challenging real-world dataset." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Triphasic Query Module (TQM). We remove the proposed TQM, and only a single language-activated heatmap is concatenated with visual features and then fed to the mask predictor. In Tab. 5, the removal of TQM leads to a mIoU drop of 9.04% and 9.50% in G-RefCg and GRD, respectively, validating the effects of TQM. Besides, we try differnt image numbers in one group. In Tab. 6, when increasing the group size N , model performances get better. Heatmap Hierarchizer (HMapHier). To explore the effects of the heatmap order in HMapHier, we experiment with different ranking criteria. As shown in Tab. 4, removing HMapHier (i.e., heatmap orders in both training and testing are random) results in the mIoU drops of 6.40% and 6.39% in G-RefCg and GRD, respectively. Besides, inconsistent ranking criteria in training and testing resulted in inferior performance. Also, using the combination of positive rank R pos and negative rank R neg achieves the best results compared to using a single-source criterion.\nMirror Training (MirrorT). In Tab. 5, removing MirrorT leads to a mIoU drop of 5.94% and 5.65% in G-RefCg and GRD, respectively. This is because MirrorT plays a vital role in forcing model to comprehend the semantics contained in anti-expressions, helping our GRSer to be better aware of the image background and negative samples. Triplet Margin Loss (TriLoss). Tab. 5 shows that TriLoss is critical for GRSer when negative samples are included. Without TriLoss, the recall of negative samples R neg falls to 0 in both datasets, which means the model fails to recognize negative samples and output non-zero predicted masks for all images. TriLoss optimizes the multi-modal representation distances during training and constructs a welldistributed representation space that helps our model to distinguish between positive and negative samples." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we present a realistic multi-modal setting named Group-wise Referring Expression Segmentation (GRES), which relaxes the limitation of idealized setting in RES and extends it to a collection of related images. To facilitate this new setting, we introduce a challenging dataset named GRD, which effectively simulates the real-world scenarios by collecting images in a grouped manner and annotating both positive and negative samples thoroughly. Besides, a novel baseline method GRSer is proposed to explicitly capture the language-vision and vision-vision feature interactions for better comprehension of the target object. Extensive experiments show that our method achieves SOTA performances on GRES, RES, and Co-SOD." } ]
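As a closing note on the evaluation protocol, the adapted mIoU metric defined in Sec. 5.1 (ordinary IoU for true positives, 1 for true negatives, 0 for false positives and false negatives, averaged over all test samples) can be sketched as follows. The function name and the convention that a recognised negative sample is an all-zero predicted mask are our reading of the text, not the authors' released code.

```python
import numpy as np

def adapted_miou(pred_masks, gt_masks):
    """Adapted mIoU: masks are boolean (H, W) arrays; an empty ground truth marks
    a negative sample, and an empty prediction means the model rejected the image."""
    scores = []
    for pred, gt in zip(pred_masks, gt_masks):
        has_pred, has_gt = bool(pred.any()), bool(gt.any())
        if not has_gt and not has_pred:      # true negative: full credit
            scores.append(1.0)
        elif has_gt != has_pred:             # false positive or false negative
            scores.append(0.0)
        else:                                # true positive: vanilla IoU
            inter = np.logical_and(pred, gt).sum()
            union = np.logical_or(pred, gt).sum()
            scores.append(float(inter) / float(union))
    return float(np.mean(scores))
```

Setting the true-negative score to 1 is what allows the metric to reward models that correctly abstain on images without the described object, which a vanilla mIoU cannot express.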
Referring Expression Segmentation (RES) is a widely explored multi-modal task, which endeavors to segment an object that is known to exist within a single image, given a linguistic expression describing it. However, in broader real-world scenarios, it is not always possible to determine if the described object exists in a specific image. Typically, we have a collection of images, some of which may contain the described objects. The current RES setting curbs its practicality in such situations. To overcome this limitation, we propose a more realistic and general setting, named Group-wise Referring Expression Segmentation (GRES), which expands RES to a collection of related images, allowing the described objects to be present in a subset of input images. To support this new setting, we introduce an elaborately compiled dataset named Grouped Referring Dataset (GRD), containing complete group-wise annotations of target objects described by given expressions. We also present a baseline method named Grouped Referring Segmenter (GRSer), which explicitly captures the language-vision and intra-group vision-vision interactions to achieve state-of-the-art results on the proposed GRES and related tasks, such as Co-Salient Object Detection and RES.
Advancing Referring Expression Segmentation Beyond Single Image
[ { "figure_caption": "Figure 1 :1Figure 1: Real-world applications of Group-wise Referring Expression Segmentation (GRES), which facilitates annotation auto-gathering from cluttered Internet images (upper), multi-monitors joint inference (lower), etc.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Expression: \"a person who is jumping into a swimming pool\" Expression: \"a black laptop\"", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Proposed GRD vs. RefCOCOg on the annotation completeness and fineness.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: The pipeline of proposed GRSer. First, grouped input images, given an expression together with its antiexpression (<no> prefix added), are encoded by image and text encoder, respectively, and fed into a triphasic query module (TQM), to generate a set of heatmaps that indicate the most discriminating region in the visual feature map responding to the target object. Next, these heatmaps are rearranged according to their correlation with the description, and then concatenated with visual features for mask prediction. In training, triplet loss and segmentation loss are both applied, and a mirror training strategy (dotted line) is introduced to better comprehend the anti-expression and image background. In inference, the mirror training will be discarded, and images close to the anti-expression are reassigned 0 masks.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Visualizations of the prediction mask, feature maps, and heatmaps on an example from the G-RefCg test set. The leftmost column demonstrates the input image, predicted mask (in yellow) and ground-truth mask (in green). In the first row of other columns, we visualize the feature maps in decoder (i.e., Y i ) activated by linguistic feature L, and feature maps in decoder (i.e., Y anti i", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "1 .1The training objectives are twofold:(1) Triplet margin loss to empower model with recognition ability for negative samples; (2) Cross-entropy loss to optimize the model's segmentation performance. Triplet Margin Loss. The goal of triplet margin loss [8] is to bring closer together the anchor and the positive example, while pull the anchor from the negative example away, as is illustrated in Eq. 5. The Euclidean Distance d(•) is applied, and m is the margin value. For a positive sample x pos , its triphasic features z n is regarded as the anchor, and linguistic features L and anti-features L anti are regarded as the positive and negative examples, respectively. And for a negative sample x neg , L anti and L are regarded as its positive and negative examples instead. The triplet margin loss is computed as L tri = max d(z n , L)-d(z n , L anti )+m, 0 for x pos max d(z n , L anti )-d(z n , L)+m, 0 for x neg", "figure_data": "", "figure_id": "fig_5", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Quantitative comparisons with Co-SOD methods in terms of mean absolute error (MAE)[3], maximum Fmeasure [2] (F max ), S-measure[10] (S α ), and mean E-measure[11] (E ξ ) on the CoCA[54] dataset. 
\"↑\" means that the higher the numerical value, the better the model performance, and vice versa for \"↓\". The best results are marked in bold. In our experiment, GRD and re-built RES datasets are regarded as RES setting if the negative samples of the dataset are removed, otherwise it is the GRES setting. Besides, we use the CoCA[54] dataset to evaluate our model's performance in Co-SOD task, where we take category names as expression inputs.", "figure_data": "22.3220.7715.29 63.5255.3752.8831.57VLTPAMI22 [6]26.8724.3822.8316.58 66.9859.1451.7333.78CRISCVPR22 [38]29.3127.2724.7419.33 70.6268.1258.9341.23LAVTCVPR22 [44]30.2227.1424.3818.48 75.2767.9359.9439.14GRSerOurs84.7778.4475.3257.12 79.3370.3865.4747.25CSMG GCAGC GICD ICNet CoEG DeepACG GCoNet CADC CoRP GRSerMetricCVPR19 [50]CVPR20 [51]ECCV20 NeurIPS20 PAMI21 [54] [20] [12]CVPR21 [49]CVPR21 [13]ICCV21 PAMI2023 [52] [56]OursMAE ↓ 0.1140.1110.126 0.148 0.1060.1020.1050.1320.1210.099CoCAF max ↑ 0.499 S α ↑ 0.6270.517 0.6660.513 0.514 0.493 0.658 0.657 0.6120.552 0.6880.544 0.6730.548 0.6810.551 0.6860.562 0.712E ξ ↑0.6060.6680.701 0.686 0.679-0.739-0.7150.728positive to negative sample ratio of 1 : 1 for both trainingand inference.", "figure_id": "tab_0", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation studies of ranking criteria in heatmap hierarchizer on the G-RefCg and the proposed GRD datasets in the GRES setting. (*) indicates default choices of our model. The best results are marked in bold.", "figure_data": "TrainTestmIoUG-RefCg E ξR neg mIoUGRD E ξR negRandomRandom R pos + R neg68.92 0.554 89.12 50.73 0.475 75.38 68.42 0.550 88.74 50.24 0.470 73.37R pos + R neg (*)Random R pos + R neg (*) 75.32 0.572 95.25 57.12 0.515 81.09 67.79 0.548 88.23 49.83 0.468 73.28R posR pos74.57 0.570 94.32 56.28 0.502 80.08R negR neg73.79 0.563 94.53 56.37 0.507 80.23", "figure_id": "tab_1", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation studies of main designs in our method on G-RefCg and GRD datasets in the GRES setting.", "figure_data": "G-RefCgGRDmIoU E ξ R neg mIoU E ξ R negw/o. TQM66.28 0.525 85.33 47.62 0.463 70.29w/o. HMapHier 68.92 0.554 89.12 50.73 0.475 75.38w/o. MirrorT69.38 0.543 90.12 51.47 0.479 75.54w/o. TriLoss30.37 0.493 0 23.14 0.435 0Full model75.32 0.572 95.25 57.12 0.515 81.09pressions are complex and hard to understand by models).", "figure_id": "tab_2", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation studies of group size (N ) in TQM on G-RefCg and GRD datasets in the GRES setting. (*) indicates default choices of our model. 72.98 0.559 92.38 54.89 0.484 78.23 N = 5 74.26 0.567 94.45 56.01 0.502 80.92 N = 8(*) 75.32 0.572 95.25 57.12 0.515 81.09", "figure_data": "G-RefCgGRDmIoU E ξR neg mIoU E ξR negN = 166.28 0.525 85.33 47.62 0.463 70.29N = 3", "figure_id": "tab_3", "figure_label": "6", "figure_type": "table" } ]
Yixuan Wu; Zhao Zhang; Chi Xie; Feng Zhu; Rui Zhao
[ { "authors": "Dhruv Batra; Adarsh Kowdle; Devi Parikh; Jiebo Luo; Tsuhan Chen", "journal": "IEEE", "ref_id": "b0", "title": "iCoseg: Interactive co-segmentation with intelligent scribble guidance", "year": "2010" }, { "authors": "Ali Borji; Ming-Ming Cheng; Huaizu Jiang; Jia Li", "journal": "IEEE TIP", "ref_id": "b1", "title": "Salient object detection: A benchmark", "year": "2015" }, { "authors": "Ming-Ming Cheng; Jonathan Warrell; Wen-Yan Lin; Shuai Zheng; Vibhav Vineet; Nigel Crook", "journal": "", "ref_id": "b2", "title": "Efficient salient region detection with soft image abstraction", "year": "2013" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Henghui Ding; Scott Cohen; Brian Price; Xudong Jiang", "journal": "Springer", "ref_id": "b4", "title": "Phraseclick: Toward achieving flexible interactive segmentation by phrase and click", "year": "2020" }, { "authors": "Henghui Ding; Chang Liu; Suchen Wang; Xudong Jiang", "journal": "IEEE TPAMI", "ref_id": "b5", "title": "Vlt: Vision-language transformer and query generation for referring segmentation", "year": "2022" }, { "authors": "Ivan Dokmanic; Reza Parhizkar; Juri Ranieri; Martin Vetterli", "journal": "IEEE Signal Processing Magazine", "ref_id": "b6", "title": "Euclidean distance matrices: essential theory, algorithms, and applications", "year": "2015" }, { "authors": "Xingping Dong; Jianbing Shen", "journal": "", "ref_id": "b7", "title": "Triplet loss in siamese network for object tracking", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b8", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Deng-Ping Fan; Ming-Ming Cheng; Yun Liu; Tao Li; Ali Borji", "journal": "", "ref_id": "b9", "title": "Structure-measure: A new way to evaluate foreground maps", "year": "2017" }, { "authors": "Deng-Ping Fan; Cheng Gong; Yang Cao; Bo Ren; Ming-Ming Cheng; Ali Borji", "journal": "", "ref_id": "b10", "title": "Enhanced-alignment measure for binary foreground map evaluation", "year": "2018" }, { "authors": "Deng-Ping Fan; Tengpeng Li; Zheng Lin; Ge-Peng Ji; Dingwen Zhang; Ming-Ming Cheng; Huazhu Fu; Jianbing Shen", "journal": "IEEE TPAMI", "ref_id": "b11", "title": "Re-thinking co-salient object detection", "year": "2021" }, { "authors": "Qi Fan; Deng-Ping Fan; Huazhu Fu; Chi Keung Tang; Ling Shao; Yu-Wing Tai", "journal": "CVPR", "ref_id": "b12", "title": "Group collaborative learning for co-salient object detection", "year": "2021" }, { "authors": "Guang Feng; Zhiwei Hu; Lihe Zhang; Huchuan Lu", "journal": "", "ref_id": "b13", "title": "Encoder fusion network with co-attention embedding for referring image segmentation", "year": "2021" }, { "authors": "Ruijun Gao; Qing Guo; Felix Juefei-Xu; Hongkai Yu; Huazhu Fu; Wei Feng; Yang Liu; Song Wang", "journal": "", "ref_id": "b14", "title": "Can you spot the chameleon? 
Adversarially camouflaging images from co-salient object detection", "year": "2022" }, { "authors": "Yanliang Ge; Qiao Zhang; Tian-Zhu Xiang; Cong Zhang; Hongbo Bi", "journal": "IEEE TCSVT", "ref_id": "b15", "title": "TCNet: Co-salient object detection via parallel interaction of Transformers and CNNs", "year": "" }, { "authors": "Ronghang Hu; Marcus Rohrbach; Trevor Darrell", "journal": "Springer", "ref_id": "b16", "title": "Segmentation from natural language expressions", "year": "2016" }, { "authors": "Shijia Huang; Feng Li; Hao Zhang; Shilong Liu; Lei Zhang; Liwei Wang", "journal": "", "ref_id": "b17", "title": "A unified mutual supervision framework for referring expression segmentation and generation", "year": "2022" }, { "authors": "Bo Jiang; Xingyue Jiang; Ajian Zhou; Jin Tang; Bin Luo", "journal": "", "ref_id": "b18", "title": "A unified multiple graph learning and convolutional network model for co-saliency estimation", "year": "2019" }, { "authors": "Wen-Da Jin; Jun Xu; Ming-Ming Cheng; Yi Zhang; Wei Guo", "journal": "NeurIPS", "ref_id": "b19", "title": "ICNet: Intra-saliency correlation network for cosaliency detection", "year": "2020" }, { "authors": "Sahar Kazemzadeh; Vicente Ordonez; Mark Matten; Tamara Berg", "journal": "", "ref_id": "b20", "title": "Referitgame: Referring to objects in photographs of natural scenes", "year": "2014" }, { "authors": "Namyup Kim; Dongwon Kim; Cuiling Lan; Wenjun Zeng; Suha Kwak", "journal": "", "ref_id": "b21", "title": "ReSTR: Convolution-free referring image segmentation using transformers", "year": "2022" }, { "authors": "Ranjay Krishna; Yuke Zhu; Oliver Groth; Justin Johnson; Kenji Hata; Joshua Kravitz; Stephanie Chen; Yannis Kalantidis; Li-Jia Li; David A Shamma", "journal": "IJCV", "ref_id": "b22", "title": "Visual Genome: Connecting language and vision using crowdsourced dense image annotations", "year": "2017" }, { "authors": "Muchen Li; Leonid Sigal", "journal": "NeurIPS", "ref_id": "b23", "title": "Referring transformer: A onestep approach to multi-task visual grounding", "year": "2021" }, { "authors": "Ruiyu Li; Kaican Li; Yi-Chun Kuo; Michelle Shu; Xiaojuan Qi; Xiaoyong Shen; Jiaya Jia", "journal": "", "ref_id": "b24", "title": "Referring image segmentation via recurrent refinement networks", "year": "2018" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b25", "title": "Microsoft COCO: Common objects in context", "year": "2014" }, { "authors": "Chang Liu; Xudong Jiang; Henghui Ding", "journal": "IEEE TMM", "ref_id": "b26", "title": "Instancespecific feature propagation for referring segmentation", "year": "2022" }, { "authors": "Chenxi Liu; Zhe Lin; Xiaohui Shen; Jimei Yang; Xin Lu; Alan Yuille", "journal": "", "ref_id": "b27", "title": "Recurrent multimodal interaction for referring image segmentation", "year": "2017" }, { "authors": "Jiang Liu; Hui Ding; Zhaowei Cai; Yuting Zhang; Ravi Kumar Satzoda; Vijay Mahadevan; Manmatha", "journal": "", "ref_id": "b28", "title": "Poly-Former: Referring image segmentation as sequential polygon generation", "year": "2023" }, { "authors": "Si Liu; Tianrui Hui; Shaofei Huang; Yunchao Wei; Bo Li; Guanbin Li", "journal": "IEEE TPAMI", "ref_id": "b29", "title": "Cross-modal progressive comprehension for referring segmentation", "year": "2021" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": 
"b30", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b31", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Timo Lüddecke; Alexander Ecker", "journal": "", "ref_id": "b32", "title": "Image segmentation using text and image prompts", "year": "2022" }, { "authors": "Gen Luo; Yiyi Zhou; Rongrong Ji; Xiaoshuai Sun; Jinsong Su; Chia-Wen Lin; Qi Tian", "journal": "", "ref_id": "b33", "title": "Cascade grouped attention network for referring expression segmentation", "year": "2020" }, { "authors": "Gen Luo; Yiyi Zhou; Xiaoshuai Sun; Liujuan Cao; Chenglin Wu; Cheng Deng; Rongrong Ji", "journal": "", "ref_id": "b34", "title": "Multi-task collaborative network for joint referring expression comprehension and segmentation", "year": "2020" }, { "authors": "Junhua Mao; Jonathan Huang; Alexander Toshev; Oana Camburu; Alan L Yuille; Kevin Murphy", "journal": "", "ref_id": "b35", "title": "Generation and comprehension of unambiguous object descriptions", "year": "2016" }, { "authors": "Yukun Su; Jingliang Deng; Ruizhou Sun; Guosheng Lin; Qingyao Wu", "journal": "", "ref_id": "b36", "title": "A unified transformer framework for group-based segmentation: Co-segmentation, co-saliency detection and video salient object detection", "year": "2022" }, { "authors": "Zhaoqing Wang; Yu Lu; Qiang Li; Xunqiang Tao; Yandong Guo; Mingming Gong; Tongliang Liu", "journal": "", "ref_id": "b37", "title": "CRIS: CLIP-driven referring image segmentation", "year": "2022" }, { "authors": "Lina Wei; Shanshan Zhao; Omar El ; Farouk Bourahla; Xi Li; Fei Wu; Yueting Zhuang", "journal": "IEEE TIP", "ref_id": "b38", "title": "Deep group-wise fully convolutional network for co-saliency detection with graph propagation", "year": "2019" }, { "authors": "J Winn; A Criminisi; T Minka", "journal": "", "ref_id": "b39", "title": "Object categorization by learned universal visual dictionary", "year": "2005" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b40", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Chenyun Wu; Zhe Lin; Scott Cohen; Trung Bui; Subhransu Maji", "journal": "", "ref_id": "b41", "title": "Phrasecut: Language-based image segmentation in the wild", "year": "2020" }, { "authors": "Sibei Yang; Meng Xia; Guanbin Li; Hong-Yu Zhou; Yizhou Yu", "journal": "", "ref_id": "b42", "title": "Bottom-up shift and reasoning for referring image segmentation", "year": "2021" }, { "authors": "Zhao Yang; Jiaqi Wang; Yansong Tang; Kai Chen; Hengshuang Zhao; Philip Hs Torr", "journal": "", "ref_id": "b43", "title": "LAVT: Language-aware vision transformer for referring image segmentation", "year": "2022" }, { "authors": "Licheng Yu; Zhe Lin; Xiaohui Shen; Jimei Yang; Xin Lu; Mohit Bansal; Tamara L Berg", "journal": "", "ref_id": "b44", "title": "MattNet: Modular attention network for referring expression comprehension", "year": "2018" }, { "authors": "Licheng Yu; Patrick Poirson; Shan Yang; Alexander C Berg; Tamara L Berg", "journal": "Springer", "ref_id": "b45", "title": "Modeling context in referring expressions", "year": "2016" }, { "authors": "Siyue Yu; Jimin Xiao; Bingfeng Zhang; Eng Gee Lim", "journal": "", "ref_id": "b46", "title": "Democracy does matter: Comprehensive feature mining 
for co-salient object detection", "year": "2022" }, { "authors": "Dingwen Zhang; Junwei Han; Chao Li; Jingdong Wang; Xuelong Li", "journal": "IJCV", "ref_id": "b47", "title": "Detection of co-salient objects by looking deep and wide", "year": "2016" }, { "authors": "Kaihua Zhang; Mingliang Dong; Bo Liu; Xiao-Tong Yuan; Qingshan Liu", "journal": "", "ref_id": "b48", "title": "DeepACG: Co-saliency detection via semantic-aware contrast gromov-wasserstein distance", "year": "2021" }, { "authors": "Kaihua Zhang; Tengpeng Li; Bo Liu; Qingshan Liu", "journal": "", "ref_id": "b49", "title": "Cosaliency detection via mask-guided fully convolutional networks with multi-scale label smoothing", "year": "2019" }, { "authors": "Kaihua Zhang; Tengpeng Li; Shiwen Shen; Bo Liu; Jin Chen; Qingshan Liu", "journal": "", "ref_id": "b50", "title": "Adaptive graph convolutional network with attention graph clustering for co-saliency detection", "year": "2020" }, { "authors": "Ni Zhang; Junwei Han; Nian Liu; Ling Shao", "journal": "", "ref_id": "b51", "title": "Summarize and search: Learning consensus-aware dynamic convolution for co-saliency detection", "year": "2021" }, { "authors": "Qijian Zhang; Runmin Cong; Junhui Hou; Chongyi Li; Yao Zhao", "journal": "NeurIPS", "ref_id": "b52", "title": "CoADNet: Collaborative aggregation-anddistribution networks for co-salient object detection", "year": "2020" }, { "authors": "Zhao Zhang; Wenda Jin; Jun Xu; Ming-Ming Cheng", "journal": "Springer", "ref_id": "b53", "title": "Gradient-induced co-saliency detection", "year": "2020" }, { "authors": "Chaoyang Zhu; Yiyi Zhou; Yunhang Shen; Gen Luo; Xingjia Pan; Mingbao Lin; Chao Chen; Liujuan Cao; Xiaoshuai Sun; Rongrong Ji", "journal": "Springer", "ref_id": "b54", "title": "SeqTR: A simple yet universal network for visual grounding", "year": "2022" }, { "authors": "Ziyue Zhu; Zhao Zhang; Zheng Lin; Xing Sun; Ming-Ming Cheng", "journal": "IEEE TPAMI", "ref_id": "b55", "title": "Co-salient object detection with corepresentation purification", "year": "2023" }, { "authors": "Xueyan Zou; Zi-Yi Dou; Jianwei Yang; Zhe Gan; Linjie Li; Chunyuan Li; Xiyang Dai; Harkirat Behl; Jianfeng Wang; Lu Yuan", "journal": "", "ref_id": "b56", "title": "Generalized decoding for pixel, image, and language", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 308.86, 381.97, 236.25, 34.44 ], "formula_id": "formula_0", "formula_text": "d pos = d(z n , L) and negative distance d neg = d(z n , L anti ) are computed, where the Euclidean Distance d(•) is applied. If d pos + m < d neg (" }, { "formula_coordinates": [ 4, 128.19, 451.25, 158.17, 24.8 ], "formula_id": "formula_1", "formula_text": "M l n = V T n • L V n L .(1)" }, { "formula_coordinates": [ 4, 121.51, 539.17, 164.86, 12.69 ], "formula_id": "formula_2", "formula_text": "p n = avg(M l n V n ),(2)" }, { "formula_coordinates": [ 4, 90.23, 443.61, 454.88, 271.34 ], "formula_id": "formula_3", "formula_text": "M v n = {M v ni } N i=1 , as M v ni = V T n • p i V n p i ,(3)" }, { "formula_coordinates": [ 5, 50.11, 313.48, 236.25, 25.64 ], "formula_id": "formula_4", "formula_text": "{s pos i } N i=1 . Negative score {s neg i } N" }, { "formula_coordinates": [ 5, 50.11, 434.21, 79.27, 12.32 ], "formula_id": "formula_5", "formula_text": "M v n = {M v ni } N i=1" }, { "formula_coordinates": [ 5, 87.74, 499.29, 198.62, 14.94 ], "formula_id": "formula_6", "formula_text": "M v n = rearrange (M v n |R pos + R neg ),(4)" }, { "formula_coordinates": [ 6, 56.2, 96.46, 230.17, 22.31 ], "formula_id": "formula_8", "formula_text": "L = L ce ( Ŷ, Y) + λL mirr ce ( Ŷanti , 1 -Y) + t T L tri ,(6)" } ]
2024-01-09
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b1", "b4", "b7", "b34", "b28", "b5", "b21", "b41", "b20", "b22", "b23", "b26", "b27", "b15", "b7", "b16", "b16", "b15", "b18", "b39", "b32", "b24", "b6" ], "table_ref": [], "text": "Computational models of argumentation [4] play a central role in non-monotonic reasoning and have a wide range of applications in various fields such as law and healthcare [2]. One key aspect of these models is the use of structured argumentation formalisms [5], which outline formal argumentative workflows from building blocks. Prominent approaches include assumption-based argumentation (ABA) [8], ASPIC + [35], DeLP [29], and deductive argumentation [6]. The reasoning process within these formalisms typically involves creating argument structures and identifying conflicts among them in a systematic manner from rule-based knowledge bases. The resulting arguments and conflicts are known as argumentation frameworks (AFs) [22]. These frameworks are then evaluated using semantics to resolve conflicts, determine the acceptability of arguments and draw conclusions based on the original knowledge bases.\nIn this paper, we focus on ABA, as a versatile structured argumentation formalism that, despite its simplicity, is able to handle reasoning with certain types of preferences, strict and defeasible rules, and different types of attacks without the need for additional tools [42]. Additionally, ABA can be naturally expanded to include more advanced reasoning with preferences [21] and probabilities [23], can support applications (e.g. in healthcare [14; 18], law [24] and robotics [27]), and can be suitably deployed in multi-agent settings to support dialogues [28].\nAn ABA framework (ABAF) amounts to a set of rules from some deductive system, candidate assumptions amongst the sentences in its language, and a contrary for each assumption: at an abstract level, arguments are deductions supported by rules and assumptions, and attacks are directed at assumptions in the support of arguments, by means of arguments for their contraries. Thus, assumptions constitute the defeasible part of an ABAF. There are no restrictions in ABA, in general, as to where the assumptions may appear in the rules [16]. A common restriction adopted in the study and deployment of ABA, however, is that ABAFs are flat, i.e., each set of assumptions is closed [8]. Intuitively speaking, flatness means that assumptions cannot be inferred, only assumed to be true or not. Flat ABA is well-studied and, because of its relative simplicity [17], it is equipped with a variety of computational mechanisms [41; 3; 34]. However, general, non-flat ABAFs have not yet been studied as comprehensively (except for aspects of computational complexity, e.g. as in [17]), despite their potential for a broader range of applications than restricted flat ABA. The following example gives a simple illustration.\nExample 1.1. Consider the following discussion about climate change. It is an abstraction of an idealised but realistic debate: is climate change actually happening? 
We cannot prove it for sure, so it makes sense to see it as an assumption (cc), but we can try and establish it by looking at its consequences: if it is actually happening we may expect an increased amount of rain (assumption mr), but then may need to deal with arguments against the validity of this assumption: one may argue that there has always been more rain at times, and so it is standard (assumption sr for \"standard rain\") and thus object against mr (using rule not mr ← sr), which in turn can be defeated by looking at statistics (s). This yields an ABAF D consisting of atoms L = {cc, mr, sr, s, not cc, not mr, not sr}, assumptions A = {cc, mr, sr}, and rules (R): mr ← cc, not mr ← sr, not sr ← s, s ←, Moreover, for each assumption X, the contrary is not X. This is a non-flat ABAF, as the assumption mr is derivable from the assumption cc. Allowing for assumptions to be derived from rules can thus accommodate a form of hypothetico-deductive reasoning in debates of this form.\nA main booster for the development of flat ABAFs was the close correspondence to abstract AFs [16]. This contributed to the theoretical understanding of flat ABA, but plays also an important role in further aspects like explainability [19], dynamic environments [40], and solving reasoning tasks [33].\nIn this paper, we extend this line of research and establish a connection between non-flat ABA and an abstract argumentation formalism. To this end, we require two ingredients. The first crucial observation is that ABAFs can be translated into bipolar AFs (BAFs) [32; 11; 1] under a novel semantics. As opposed to Dung-style AFs, BAFs do not only consider an attack relation between arguments, representing conflicts, but also a support relation, that can represent justifications. Various semantics for BAFs have been proposed in the literature (see [10; 13] for overviews). Our BAF semantics, which capture non-flat ABAFs, borrow ideas from previous approaches, but are novel in their technical details. The second observation is that the aforementioned approach does not work for all common ABA semantics. We tackle this issue by slightly extending our BAFs, similarly in spirit to so-called claim-augmented AFs (CAFs) [25] which assign to each argument a corresponding claim. In our work, we will extend BAFs with premises storing under which conditions an argument can be inferred.\nThe main contributions of this paper are as follows.\n• We define BAF semantics: novel, albeit similar in spirit to an existing semantics interpreting support as deductive [7]. We also study basic properties.\n• We show that for complete-based semantics, non-flat ABAFs admit a translation to BAFs w.r.t. our semantics.\n• We propose so-called premise-augmented BAFs and show that they capture all common ABA semantics.\n• We analyse the computational complexity of our BAFs." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b21", "b0" ], "table_ref": [], "text": "Abstract Argumentation. An abstract argumentation framework (AF) [22] is a directed graph F = (A, Att) where A represents a set of arguments and Att ⊆ A × A models attacks between them. For two arguments x, y ∈ A, if (x, y) ∈ Att we say that x attacks y, x is an attacker of y, as well as x attacks (any set) E given that y ∈ E ⊆ A. We let\nE + F = {x ∈ A | E attacks x}. A set E ⊆ A is conflict-free in F iff for no x, y ∈ E, (x, y) ∈ Att. E defends an argument x if E attacks each attacker of x. 
A conflict-free set E is admissible in F (E ∈ ad (F )) iff it defends all its elements. A semantics is a function F → σ(F ) ⊆ 2 A .\nThis means, given an AF F = (A, R), a semantics returns a set of subsets of A. These subsets are called σ-extensions. In this paper we consider so-called admissible, complete, grounded, preferred, and stable semantics (abbr. ad , co, gr , pr , stb). For an AF and E ∈ ad (F ), we let i)\nE ∈ co(F ) iff E contains all arguments it defends; ii) E ∈ gr (F ) iff E is ⊆- minimal in co(F ); iii) E ∈ pr (F ) iff E is ⊆-maximal in co(F ); iv) E ∈ stb(F ) iff E + F = A \\ E.\nA bipolar argumentation framework (BAF) is a tuple F = (A, Att, Sup), where A is a finite set of arguments, Att ⊆ A × A is the attack relation as before and Sup ⊆ A × A is the support relation [1]. Given a BAF F = (A, Att, Sup), we call F = (A, Att) the underlying AF of F . Graphically, we depict the attack relation by solid edges and the support relation by dashed edges.\nAssumption-based Argumentation. We assume a deductive system (L, R), where L is a formal language, i.e., a set of sentences, and R is a set of inference rules over L. A rule r ∈ R has the form a 0 ← a 1 , . . . , a n with a i ∈ L. We denote the head of r by head(r) = a 0 and the (possibly empty) body of r with body(r) = {a 1 , . . . , a n }. Definition 2.1. An ABA framework is a tuple (L, R, A, ), where (L, R) is a deductive system, A ⊆ L a non-empty set of assumptions, and : A → L a (total) contrary function.\nWe say that a sentence p ∈ L is derivable from assumptions S ⊆ A and rules R ⊆ R, denoted by S ⊢ R p, if there is a finite rooted labeled tree T such that the root is labeled with p, the set of labels for the leaves of T is equal to S or S ∪ {⊤}, and for every inner node v of T there is a rule r ∈ R such that v is labelled with head(r), the number of successors of v is |body(r)| and every successor of v is labelled with a distinct a ∈ body(r) or ⊤ if body(r) = ∅ By Th D (S) = {p ∈ L | ∃S ′ ⊆ S : S ′ ⊢ R p} we denote the set of all conclusions derivable from an assumption-set S in an ABA framework (ABAF) D. Observe that S ⊆ Th D (S) since, by definition, each a ∈ A is derivable from {a} ⊢ ∅ a. For S ⊆ A, we let S = {a | a ∈ S}; moreover, for a derivation S ⊢ p we write asms(S ⊢ p) = S and for a set E of derivations we let asms(E) = x∈E asms(x). Also, we often write S ⊢ R p simply as S ⊢ p. A set S ⊆ A attacks a set T ⊆ A if for some a ∈ T we have that a ∈ Th D (S). A set S is conflict-free, denoted E ∈ cf (D), if it does not attack itself. With a little notational abuse we say S attacks a if S attacks the singleton {a}.\nGiven S ⊆ A, the closure cl (S) of S is cl(S) = Th D (S) ∩ A. With a little notational abuse we write cl (a) instead of cl ({a}) whenever S is a singleton. A set S ⊆ A is closed if S = cl (S). Observe that, in order for S to be non-closed, it is necessary that R contains a rule a 0 ← a 1 , . . . , a n s.t. a 0 ∈ A, i.e., the head of the rule is an assumption. Now we consider defense [8; 16]. Observe that defense in general ABAFs is only required against closed sets of attackers. Formally: Definition 2.2. Let D = (L, R, A, ) be an ABAF, S ⊆ A and a ∈ A. We say that S defends a iff for each closed set T of assumptions s.t. T attacks a, we have that S attacks T ; S defends itself iff S defends each b ∈ S.\nWe next recall admissible, grounded, complete, preferred, and stable ABA semantics.\nDefinition 2.3. Let D = (L, R, A, ) be an ABAF and S ⊆ A be a set of assumptions s.t. S ∈ cf (S). 
We say • S ∈ ad (D) iff S is closed and defends itself;\n• S ∈ pr (D) iff S is ⊆-maximal in ad (D); • S ∈ co(D) iff S ∈ ad (D)\nand is a superset of every assumption set it defends;\n• S ∈ gr (D) iff S = T ∈co(D) T ;\n• S ∈ stb(D) iff S is closed and attacks each x ∈ A \\ S.\nIn this paper we stipulate that the empty intersection is interpreted as ∅, i.e., if co(D) = ∅, then gr (D) = ∅. " }, { "figure_ref": [], "heading": "Closed Extensions for Bipolar AFs", "publication_ref": [ "b6" ], "table_ref": [], "text": "Our goal is to translate non-flat ABAFs into BAFs. In this section, we develop BAF semantics which are suitable for this endeavor (under complete-based semantics for ABAFs). To this end we interpret the support relation in the spirit of the notion of deductive support [7], i.e., the intuitive reading is that whenever x is accepted and x supports y, then y is also accepted. While this approach borrows from the BAF literature, to the best of our knowledge the exact definitions do not coincide with any previously proposed BAF semantics. We define extension-based semantics directly on the given BAF, without re-writing it to an AF. We start with the notion of the closure for BAFs.\nDefinition 3.1. Let F = (A, Att, Sup) be a BAF. Consider the operator µ defined by\nµ(E) = E ∪ {a ∈ A | ∃e ∈ E : (e, a) ∈ Sup}. We call cl (E) = n≥1 µ n (E) the closure of E. A set E ⊆ A is called closed if E = cl(E).\nNow we introduce the basic concepts of conflict-freeness and defense underlying our semantics. As usual, a set of arguments is said to be conflict-free whenever it does not attack itself. Our notion of defense is inspired by the way it is defined for ABA (cf. Definition 2.2).\nDefinition 3.2. Let F = (A, Att, Sup) be a BAF. A set E ⊆ A is conflict-free if E ∩ E + F = ∅; E defends a ∈ A if E attacks each closed set S ⊆ A which attacks a; the characteristic function of F is Γ(E) = {a ∈ A | E defends a}.\nObserve that this is a weaker condition than the defense notion of AFs since we can disregard non-closed attackers.\nExample 3.3. Let F be the following BAF (recall that the attack relation is depicted by solid edges and the support relation by dashed edges):\nz y x F : u v\nWe have that cl (y) = {y, z} and y defends z which can be seen as follows: even though u attacks z, u is not a closed set of arguments. The closure of {u} is cl ({u}) = {u, v}. Since y attacks v, we find z ∈ Γ({y}).\nAs we saw in this example, our defense notion can intuitively be interpreted as follows: if we seek to defend some argument a, then it suffices to counterattack the closure of each attacker b of a (rather than b itself). Lemma 3.4. Let F = (A, Att, Sup) be a BAF and let E ⊆ A and a ∈ A. Then E defends a iff for each attacker b of a it holds that E attacks cl ({b}).\nLet us now define admissibility. We require a set of arguments to be conflictfree, closed, and self-defending.\nDefinition 3.5. Let F = (A, Att, Sup) be a BAF. A set E ⊆ A is admissible, E ∈ ad (F ), if i) E is conflict-free, ii) E is closed, and iii) E ⊆ Γ(E). Example 3.6. Recall Example 3.3. Let us verify that E = {y, z} ∈ ad (F ).\nClearly, E is closed with E ∈ cf (F ). The two attackers of E are x and u with cl (x) = {x} and cl (u) = {u, v}, both of which are counter-attacked. Another admissible set is E ′ = {u, v} since the attack by y is countered due to cl (y) = {y, z} and u attacks z.\nAs usual, the empty set is always admissible and hence, we can guarantee ad (F ) = ∅ for any given BAF F . Proposition 3.7. Let F be a BAF. Then ∅ ∈ ad (F ). 
In particular, ad (F ) = ∅.\nGiven this notion of admissibility, the definition of the remaining semantics is natural: for complete extensions, we require E to include all defended arguments; preferred extensions are defined as maximal admissible sets; the grounded extension is the intersection of all complete ones.\nDefinition 3.8. Let F = (A, Att, Sup) be a BAF. A set E ⊆ A of arguments s.t. E ∈ cf (F ) is • preferred, E ∈ pr (F ), iff it is maximal admissible; • complete, E ∈ co(F ), iff E ∈ ad (F ) and E = Γ(E); • grounded, E ∈ gr (F ), iff E = S∈co(F ) S; • stable, E ∈ stb(F ), iff it is closed and E + = A \\ E.\nExample 3.9. In our Example 3.3, the admissible extension E = {y, z} is maximal and thus preferred. Moreover, {x, u, v} ∈ pr (F ). We observe however that {y, z} is not complete since it does not contain the unattacked argument u. Hence co(F ) = {{u, v}}.\nAs a final remark regarding our BAF semantics, let us mention that they do not admit a translation into Dung-style AFs. As the previous example already shows, preferred extensions are in general not complete (which is the case for AFs). Another interesting observation is that we do not necessarily have a complete extension. These properties show that a translation to AFs is impossible.\nExample 3.10. Let F be the following BAF:\nz y x F :\nSuppose E ∈ co(F ). Since z is unattacked, we must have z ∈ E. As complete sets must be closed, y ∈ E follows. However, by the same reasoning, x ∈ E and thus, E / ∈ cf (F ); a contradiction. Indeed, the only admissible sets are ∅ and {x}, both of which are not complete.\nNote that, for the same reason, ABAFs cannot be translated into AFs: general ABAFs violate many properties that hold for AFs. Thus we require BAFs for the translation." }, { "figure_ref": [], "heading": "Instantiated BAFs", "publication_ref": [], "table_ref": [], "text": "Suppose we are given an ABAF D = (L, R, A, ). Our goal is to translate D into a BAF F D = (A, Att, Sup). The underlying idea is to define A and Att as it is done for flat ABAFs, i.e., each ABA argument S ⊢ p corresponds to an argument in F D which attacks arguments in F D corresponding to T ⊢ q whenever p ∈ T . What we have left to discuss is the support relation. To this end we need to take care of the closure of sets of assumptions. More specifically, if some assumption a is in the closure of a set S, i.e., S ⊢ a is an argument in D, then we encode this in F D using Sup.\nTo illustrate this, suppose we are given assumptions a, b, c and rules \nr 1 : p ← a,\na ∈ cl (S) ⇒ (S ⊢ p, {a} ⊢ a) ∈ Sup. (1\n)\nWe therefore define the support relation of our corresponding BAF according to (1) as follows.\nDefinition 4.1. For an ABAF D = (L, R, A, ), we define the instantiated BAF F D = (A, Att, Sup) via As we saw, {b} is admissible in D. Now consider the set E of all arguments with {b} as assumption set, i.e., E = {A 2 , A 3 , b}. Again, E is conflict-free and closed (in F D ). The attack from A 1 is countered; moreover, A 4 attacks A 3 ; however, as A 4 supports d, the closure of A 4 (in F D ) is {A 4 , d} with A 3 attacking d; so this attack is also countered. We infer that E is admissible in F D .\nA = {(S ⊢ p) | (S ⊢ p) is an argument in D} Att = {(S ⊢ p, T ⊢ q) ∈ A 2 | p ∈ T } Sup = {(S ⊢ p, {a} ⊢ a) ∈ A 2 | a ∈ cl (S)}\nThough Definition 4.1 induces infinitely many arguments, they are determined by their underlying assumptions and conclusion. 
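To make this concrete, the following sketch (not taken from the paper; the Python encoding, the string names for contraries, and our transcription of the rules of Example 2.4 are assumptions made here) enumerates the finitely many (assumption set, conclusion) pairs of that ABAF and derives Att and Sup along the lines of Definition 4.1:

```python
from itertools import product

# Hypothetical encoding: an argument is identified with its
# (assumption set, conclusion) pair; contraries are suffixed with "_bar".
assumptions = {"a", "b", "c", "d"}
contrary = {x: x + "_bar" for x in assumptions}
# Rules of Example 2.4 as we read them: b_bar <- a, a_bar <- b,
# d_bar <- b, b_bar <- c, d <- c.
rules = [("b_bar", ["a"]), ("a_bar", ["b"]), ("d_bar", ["b"]),
         ("b_bar", ["c"]), ("d", ["c"])]

# Fixpoint over (assumption set, conclusion) pairs.
args = {(frozenset({a}), a) for a in assumptions}
changed = True
while changed:
    changed = False
    for head, body in rules:
        # pick one already-derived argument per body atom
        options = [[(S, c) for (S, c) in args if c == atom] for atom in body]
        if any(not opt for opt in options):
            continue
        for combo in product(*options):
            new = (frozenset().union(*(S for S, _ in combo)), head)
            if new not in args:
                args.add(new)
                changed = True

def cl(S):
    # closure of S: assumptions derivable from a subset of S
    return {c for (T, c) in args if c in assumptions and T <= S}

att = {(x, y) for x, y in product(args, repeat=2)
       if any(x[1] == contrary[a] for a in y[0])}
sup = {(x, (frozenset({a}), a)) for x in args for a in cl(x[0])
       if x != (frozenset({a}), a)}  # trivial self-supports dropped

print(len(args), "arguments")  # 4 assumption arguments plus A1,...,A5
```

For Example 2.4 this yields the four assumption arguments together with the five derived arguments A 1 , . . . , A 5 from Example 4.2, and no others.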
Hence it suffices to construct a finite BAF.\nIn the remainder of this section, we establish the following main result showing that the BAF F D as defined above is suitable to capture non-flat ABAFs for complete-based semantics: if E is some extension (in F D ), then a set of acceptable assumptions can be obtained by gathering all assumptions underlying the arguments in E; and if S is acceptable (in D), then all arguments constructible from the assumptions in S from a corresponding extension in the BAF F D .\nTheorem 4.3. Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. Let σ ∈ {co, gr , stb}. As stated in the theorem, we only get one direction for the ad semantics; for the pr semantics, both directions fail. We will discuss and subsequently fix the underlying issue later.\n• If E ∈ σ(F D ), then asms(E) ∈ σ(D). • If S ∈ σ(D), then {x ∈ A | asms(x) ⊆ S} ∈ σ(F D ). If S ∈ ad (D), then {x ∈ A | asms(x) ⊆ S} ∈ ad (F D )." }, { "figure_ref": [], "heading": "From ABA to BAF", "publication_ref": [], "table_ref": [], "text": "Translating an extension of the given ABAF into one in the BAF is the easier direction. Our first step is the following proposition which shows the desired connection between conflict-free and closed sets. \n• If S ∈ cf (D), then E ∈ cf (F D ). • If S is closed in D, then E is closed in F D . • If S ∈ ad (D), then E ∈ ad (F D ).\nIn order to extend this result to complete extensions, we have only left to show that all defended arguments are included in the corresponding BAF F D . Proposition 4.6. Let D = (L, R, A, ) be an ABAF and\nF D = (A, Att, Sup) the instantiated BAF. If S ∈ co(D), then for E = {x ∈ A | asms(x) ⊆ S} we get E ∈ co(F D ).\nMoreover, the set of attacked assumptions is preserved. Thus, we also find stable extensions of F D . Consequently, the first item in Theorem 4.3 is shown." }, { "figure_ref": [], "heading": "From BAF to ABA", "publication_ref": [], "table_ref": [], "text": "Turning extensions of the instantiated BAF into extensions of the underlying ABAF is more involved and does not work for admissible sets without any further restriction. It is important to understand why it does not work for admissible sets since this will also demonstrate why complete-based semantics do not face this issue. The problem is related to the way we have to construct our support relation. We illustrate this in the following example.\nExample 4.8. Let D = (L, R, A, ) be the ABAF where L = {a, b, c, a, b, c, p}, A = {a, b, c}, is as indicated, and R = {p ← a., q ← b., c ← p, q., c ← c.}. Observe that S = {a, b} is not admissible in D since S is not closed: indeed, we can derive p from a and q from b and thus c from S, i.e., c ∈ cl (S). Now consider F D = (A, Att, Sup):\np a A 1 q b A 2 c p q a b A 3 c c A 4 c c p q a b A 5 a b c\nWe want to emphasize that there is neither a support arrow from A 1 to c nor from A 2 to c; the fact that c ∈ cl({a, b}) holds is reflected in (A 3 , c) ∈ Sup and (A 5 , c) ∈ Sup.\nConsider now E = {a, b, A 1 , A 2 }. As all arguments in E are unattacked and have no out-going support arrows, it is clear that E ∈ ad (F D ). Yet, the required assumptions to build these arguments are {a, b}, despite {a, b} / ∈ ad (D).\nThe mismatch in the previous example occurred because we did not take all arguments we can build from a and b. Indeed, we did not include A 3 and A 5 in our extension E of F D . These arguments encode the support from a and b to c and, thus, we would have detected the missing c we cannot defend. 
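The check that would have caught this is mechanical. Below is a small sketch (our own encoding, not part of the paper; arguments are again represented by their (assumption set, conclusion) pairs) which, applied to the set E of Example 4.8, reports exactly the missing arguments A 3 and A 5:

```python
from itertools import chain

# Arguments of Example 4.8, written as (assumption set, conclusion) pairs.
A1 = (frozenset({"a"}), "p")
A2 = (frozenset({"b"}), "q")
A3 = (frozenset({"a", "b"}), "c")        # derives the assumption c
A4 = (frozenset({"c"}), "c_bar")
A5 = (frozenset({"a", "b"}), "c_bar")
asm_args = [(frozenset({x}), x) for x in ("a", "b", "c")]
all_args = [A1, A2, A3, A4, A5] + asm_args

def asms(E):
    return set(chain.from_iterable(S for S, _ in E))

def missing_for_exhaustiveness(E):
    """Arguments buildable from asms(E) that E fails to contain."""
    base = asms(E)
    return [x for x in all_args if x[0] <= base and x not in E]

E = [A1, A2, (frozenset({"a"}), "a"), (frozenset({"b"}), "b")]
print(missing_for_exhaustiveness(E))  # reports A3 and A5
```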
This observation leads to the following notion. The next proposition states that the problematic behavior we observed in Example 4.8 regarding admissible extensions does not occur for assumption exhaustive sets. Proposition 4.12. Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. Let E ⊆ A be assumption exhaustive and let S = asms(E).\n• If E ∈ cf (F D ), then S ∈ cf (D). • If E is closed in F D , then S is closed in D. • If E ∈ ad (F D ), then S ∈ ad (D).\nGiven the results we have established so far, Proposition 4.12 can only serve as an intermediate step because it relies on assumption exhaustive sets of arguments. However, within the context of an abstract BAF this notion does not make any sense; it is tailored to arguments stemming from instantiating D. Because of this, the following lemma is crucial. It states that for completebased semantics each extension is assumption exhaustive. Thus, in this case, the mismatch we observed in Example 4.8 does not occur. Example 4.14. In Example 4.8, we saw that the set E = {a, b, A 1 , A 2 } of arguments is not assumption exhaustive. Indeed, since E is admissible, it is clear by definition that each argument x with asms(x) ⊆ {a, b} is defended, including A 3 and A 5 . Hence E is not complete. On the other hand, Γ(E) = {a, b, A 1 , A 2 , A 3 , A 5 } = E ′ is assumption exhaustive as desired. Lemma 4.13 allows us to apply Proposition 4.12 to all complete extensions. Hence we can infer the following result without restricting to assumption exhaustive sets. • If E ∈ gr (F D ), then S = asms(E) ∈ gr (D).\n• If S ∈ gr (D), then for E = {x ∈ A | asms(x) ⊆ S} we have that E ∈ gr (F D ).\nFinally, stable extensions (in the BAF F D ) are also assumption exhaustive due to Lemma 4.13 (each stable extension is complete). This yields the desired connection for stb. The BAF reflects the fact that cc (in favor of climate change) can only be accepted when mr (more rain) is also included in the extension. As desired, cc is acceptable; for instance, {cc, mr} ∈ co(D). Correspondingly, {cc, mr, A 1 , A 3 } ∈ co(F D ) so the acceptability of cc is also found in F D ." }, { "figure_ref": [], "heading": "BAFs and Admissible Semantics", "publication_ref": [], "table_ref": [], "text": "Our analysis in Section 4 reveals that we cannot capture admissible (and consequently preferred) ABA semantics by means of our instantiated BAFs since there is no way to guarantee that the accepted sets of arguments are assumption exhaustive. In this section, we will propose a slightly augmented version of BAFs which additionally stores this information. This proposal is in line with recent developments in AF generalizations which capture certain features of instantiated graphs in addition to a purely abstract view [26; 39; 40]. In Section 6 we will see that the computational price we have to pay for this is moderate.\nDefinition 5.1. Let P be a set (of premises). A premise-augmented BAF (pBAF) F is a tuple F = (A, Att, Sup, π) where F = (A, Att, Sup) is a BAF and π : A → 2 P is the premise function; F is called the underlying BAF of F.\nWe let π(E) = a∈E π(a). We sometimes abuse notation and write F = (F , π) for the pBAF F = (A, Att, Sup, π) with underlying BAF F = (A, Att, Sup). The following properties are defined due to the underlying BAF F : Definition 5.2. Let F = (F , π) be a pBAF. A set E ⊆ A is conflict-free resp. 
closed whenever this is the case for E in F ; E defends a ∈ A in F iff this is the case in F .\nThe only novel concept we require is the notion of an exhaustive set of arguments.\nDefinition 5.3. Let F = (F , π) be a pBAF. A set E ⊆ A is exhaustive iff π(a) ⊆ π(E) implies a ∈ E.\nSemantics for pBAFs are defined similarly as for BAFs, but with the important difference that we require all admissible-based extensions to be exhaustive. We have that, for instance, E = {a, A 1 } ∈ ad (F). Both arguments are unattacked with no out-going support arrow. Thus the only condition to verify is exhaustiveness. This property is satisfied since no further argument x satisfies π(x) ⊆ π(E) = {a}.\nDefinition 5.4. For a pBAF F = (F , π), a set E ∈ cf (F ) is • admissible, E ∈ ad (F), iff E is exhaustive and E ∈ ad (F ); • preferred, E ∈ pr (F), iff it is ⊆-maximal admissible; • complete, E ∈ co(F), iff E ∈ ad (F) and E = Γ(E); • grounded, E ∈ gr (F), iff E = S∈co(F) S; • stable, E ∈ stb(F), iff\nOn the other hand, E ′ = {a, b, A 1 , A 2 } is not admissible. Since π(E ′ ) = {a, b}, exhaustiveness would also require presence of A 3 and A 5 (which, in turn, would result in acceptance of c which cannot be defended).\nFollowing the observations made in this example, we define F D as follows:\nThe underlying BAF F is given as before and π stores the assumptions required to entail an argument. Definition 5.6. For an ABAF D = (L, R, A, ), the instantiated pBAF\nF D = (A, Att, Sup, π) = (F , π) is F = F D ∀x ∈ A : π(x) = asms(x).\nWe can now capture any non-flat ABAF as follows.\nTheorem 5.7. Let D = (L, R, A, ) be an ABAF and F D = (F , π) the instantiated pBAF. Then\n• if E ∈ σ(F D ), then asms(E) ∈ σ(D); • if S ∈ σ(D), then {x ∈ A | asms(x) ⊆ S} ∈ σ(F D )\nfor any σ ∈ {ad , co, pr , gr , stb}." }, { "figure_ref": [], "heading": "Computational Complexity", "publication_ref": [ "b16" ], "table_ref": [], "text": "We consider the usual decision problems (under semantics σ) in formal argumentation. Let K be a knowledge base (i.e., an ABAF, BAF, or pBAF), let a be an assumption resp. argument, and let E be a set of assumptions resp. arguments.\n• Credulous acceptance Cred σ : Given K and some a, is it true that a ∈ E for some E ∈ σ(K)?\n• Skeptical acceptance Skept σ : Given K and some a, is it true that a ∈ E for each E ∈ σ(K)?\n• Verification Ver σ : Given K and a set E, is it true that E ∈ σ(K)?\nWe start with the computational complexity of BAFs with our novel semantics. The high level observation is that many tasks are close to reasoning in usual AFs. However, computing the grounded extension is much harder, inducing certain consequences (e.g. there is no shortcut for skeptical reasoning under complete semantics). Theorem 6.1. For BAFs, the problem • Ver σ is tractable for σ ∈ {ad , co, stb}, coNP-complete for σ = pr , and DP-complete for σ = gr .\n• Cred σ is NP-complete for σ ∈ {ad , co, pr , stb} and DP-complete for σ = gr .\n• Skept σ is trivial for σ = ad , DP-complete for σ ∈ {co, gr , stb}, and Π P 2complete for σ = pr .\nSurprisingly, the price we have to pay for also capturing admissible-based semantics is rather small. The computational complexity of the pBAFs we construct is almost the same; the only difference is that skeptical acceptance w.r.t. admissible semantics is not trivial anymore, but now becomes coNP-complete. Proposition 6.2. For pBAFs, Skept ad is coNP-complete.\nFrom this observation the following main theorem follows as a corollary of the complexity results for BAFs. Theorem 6.3. 
For pBAFs, the problem • Ver σ is tractable for σ ∈ {ad , co, stb}, coNP-complete for σ = pr , and DP-complete for σ = gr .\n• Cred σ is NP-complete for σ ∈ {ad , co, pr , stb} and DP-complete for σ = gr .\n• Skept σ is coNP-complete for σ = ad , DP-complete for σ ∈ {co, gr , stb}, and Π P 2 -complete for σ = pr . \nCredσ ABA Σ P 2 -c DP2-c Σ P 2 -c Σ P 2 -c NP-c BAF NP-c DP-c NP-c NP-c NP-c pBAF NP-c DP-c NP-c NP-c NP-c Skept σ ABA Π P 2 -c DP2-c DP2-c Π P 3 -c DP-c BAF triv. DP-c DP-c Π P 2 -c DP-c pBAF coNP-c DP-c DP-c Π P 2 -c DP-c\nTable 1: Complexity: Non-flat ABA vs. (p)BAFs Table 1 summarizes our complexity results for BAFs and pBAFs and compares them to the known computational complexity of non-flat ABA [17]. 1 We observe that for each reasoning problem we consider, pBAFs are one level below non-flat ABA in the polynomial hierarchy. Moreover, BAFs and pBAFs are comparable for most reasoning problems, with skeptical reasoning for ad semantics being the only exception. This is in line with our results that pBAFs are capable of capturing admissible reasoning in ABA (cf. Theorem 5.7), whereas BAFs are not. Note that Theorem 5.7 does not contradict the differing results pertaining to computational in complexity in ABA vs. pBAFs since the instantiation procedure yields exponentially many arguments in general." }, { "figure_ref": [], "heading": "Discussion and Related Work", "publication_ref": [ "b8", "b6", "b35", "b36", "b11", "b37", "b30", "b29", "b6", "b0", "b21", "b0", "b0", "b10" ], "table_ref": [], "text": "There is a rich selection of bipolar argumentation approaches in the literature [9]. The most prominent ones are deductive [7], necessary [36], evidential [37] and backing [12] support. More recent work on classical BAFs looked at symmetry between attack and support [38], argument attributes [31] and monotonicity [30].\nOur notion of defense can be characterized using notions of extended attacks that occur in BAFs [1; 7]. There is a mediated attack from a to b if a attacks an argument c that is transitively supported by b [7]. Using our notion of closure from Definition 3.2, this can be rewritten as a attacks cl({b}). Hence, Lemma 3.4 states that E defends a iff for each attacker b of a, there is a direct or mediated attack from E to b.\nAnother approach is due to [1]. Here a set of arguments S defends [22] an argument a if S attacks every attacker of a. Further, S attacks an argument a iff there is a direct attack from S to a or S transitively supports an argument b that attacks a. This amounts to requiring that cl(S) attacks a. A set of arguments S is then called conflict free if S does not attack any argument in S in the previous sense, that is, if cl(S) does not directly attack any argument in S. They call S admissible iff S is conflict-free, closed under support and defends all its elements. One important difference to our notion of admissible sets is the definition of defense. While [1] allow defense via direct and supported attacks, we allow defense via direct or mediated attacks. For instance in F = ({a, b, c}, {(a, c), (b, a)}, {(b, c)}), {a} is admissible w.r.t. our definition because it defends itself against b via a mediated attack, is closed under support and conflict-free. However, it is not admissible w.r.t. 
the definition in [1] because here, it does not defend against b.\nThe c-admissibility in [11] is close to ours, but their work uses supported and indirect defeat which is incompatible with our concepts.\n[15] also explored the relation ABAFs-BAFs, but understands the BAFs under existing semantics in terms of a restricted form of non-flat-ABAFs, rather than ABAF as BAF, under new semantics, as we do." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b15", "b19", "b32", "b18", "b19" ], "table_ref": [], "text": "We translated non-flat ABAFs into BAFs. To this end we proposed novel BAF semantics to capture complete-based semantics. By means of a novel formalisms, called premise-augmented BAFs, we also established a correspondence between admissible-based semantics of non-flat ABA and our BAFs. We discussed basic properties of these semantics and proved the correspondence to ABA. We then investigated the computational complexity of BAFs and showed that, compared to non-flat ABA, the typical reasoning problems are one level lower in the polynomial hierarchy.\nThis work opens several avenues for future work. It would be interesting to extend our results to further ABA semantics like semi-stable and ideal semantics [16], as well as a set-stable semantics [20]. Further, we only discussed basic properties of our (p)BAF semantics: it would also be interesting to study a version of the fundamental lemma, existence of extensions, and the impact of restricting the graph class.\nThe lower computational complexity of (p)BAFs compared to non-flat ABAFs raises the question as to which extent our research can contribute to the development of efficient instantiation-based ABA solvers. As for the flat ABA instantiation by means of AFs, our constructed graphs are infinite in general. However, in the case of AFs, only finitely many arguments suffice to determine the semantics, and techniques to reduce the size of the constructed AF even further are available [33].\nAs a final remark, most approaches for explaining reasoning in ABA construct the underlying AF and extract an argumentative explanation from it [19]. Our research serves as the first step to enabling this machinery for non-flat ABAFs as well: the next goal would be the computation of intuitive explanations in (p)BAFs. This line of research would contribute to applications where non-flat ABAFs give natural representations, as in Example 1.1 as well as set-tings where agents share information (e.g. as in [20]) and may thus disagree on which information is factual and defeasible.\n• If S ∈ cf (D), then E ∈ cf (F D ). • If S is closed in D, then E is closed in F D . • If S ∈ ad (D), then E ∈ ad (F D ).\nProof. (conflict-free) Suppose there is some argument x ∈ E attacking E. Let conc(x) = ā; then E must contain some argument y with a ∈ asms(y). Then a ∈ S. Moreover, x ∈ E implies S ⊢ conc(x) = ā. Thus, S is not conflict-free, a contradiction.\n(closed) Let x = {a} ⊢ a be an argument in F D s.t. (y, x) ∈ Sup for some y ∈ E. For y of the form T ⊢ p we have by definition a ∈ cl (T ). By choice of E, T ⊆ S and hence cl (T ) ⊆ cl (S). Since S is closed we deduce a ∈ S. Again by choice of E, x ∈ E.\n(defense) Suppose E ′ is a closed set of arguments attacking E. Let us first assume E ′ is of the form E ′ = cl (x) for some x ∈ A. Let T = asms(x). Then cl (T ) = asms(E ′ by Lemma A.2. By admissibility of S, S ⊢ ā for some a ∈ cl (T ), i.e., some a ∈ asms(E ′ ). By definition, there is some tree-based argument y = S ′ ⊢ ā with S ′ ⊆ S. 
By choice of E we have y ∈ E due to asms(y) = S ′ ⊆ S. Hence E → E ′ in F D , i.e., E defends itself against E ′ as desired. Now for the general case suppose E ′ is an arbitrary closed set of arguments attacking E. Observe that for each x ∈ E ′ , cl (x) ⊆ cl (E ′ ). Hence take any x ∈ E ′ attacking E. By the above reasoning, E → cl (x) and thus E → E ′ . Proof. Since admissibility is already established, we have left to show:\n(fixed-point) Suppose E defends x ∈ A. We have to show that x ∈ E. Let a ∈ asms(x). We show that a is defended by S in D. We make use of Lemma A.1 and consider some tree-based argument T ⊢ ā, i.e., it attacks a. Now T ⊢ ā attacks x in F D . Since E defends x, there is some S ′ ⊆ S = asms(E) with S ′ ⊢ b for some b ∈ cl (T ); in particular, this argument S ′ ⊢ b is contained in E. Thus, b ∈ Th D (S) and therefore S counter-attacks cl (T ) in D.\nAs T ⊢ ā was an arbitrary attacker of a, Lemma A.1 ensures that S defends a. Completeness of S thus implies a ∈ S. Since a was an arbitrary assumption in asms(x), we deduce asms(x) ⊆ S. By construction of E, x ∈ E as desired.\nProposition 4.7. Let D = (L, R, A, ) be an ABAF and\nF D = (A, Att, Sup) the instantiated BAF. If S ∈ stb(D), then for E = {x ∈ A | asms(x) ⊆ S} we get E ∈ stb(F D ).\nProof. We know already that E is conflict-free and closed.\n(stb) Suppose x ∈ A \\ E. Let x be of the form T ⊢ p. By construction of E, T \\ S = ∅; consider some a ∈ T \\ S. Since S is stable, S attacks a, i.e., there is some tree-based argument S ′ ⊢ ā with S ′ ⊆ S. We have S ′ ⊢ ā ∈ E and hence E attacks x.\nFor the other direction, we again start with an auxiliary observation.\nLemma A.3. Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the corresponding BAF. If E is an assumption exhaustive and closed set of arguments in F D with a ∈ cl (asms(E)), then a ∈ asms(E). In particular, any argument of the form T ⊢ ā attacks E.\nProof. Since a ∈ cl (asms(E)), there is some argument T ⊢ a with T ⊆ asms(E). Since E is assumption exhaustive, T ⊢ a ∈ E. Moreover, a ∈ cl (T ) and hence (T ⊢ a, {a} ⊢ a) ∈ Sup. Since E is closed, {a} ⊢ a ∈ E. Hence a ∈ asms(E). Proposition 4.12. Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. Let E ⊆ A be assumption exhaustive and let S = asms(E).\n• If E ∈ cf (F D ), then S ∈ cf (D). • If E is closed in F D , then S is closed in D. • If E ∈ ad (F D ), then S ∈ ad (D).\nProof. (conflict-free) Suppose S ⊢ ā with a ∈ S. Then, there is a tree-based argument x = S ′ ⊢ ā with S ′ ⊆ S in F D . By Lemma A.3, x attacks E in F D . Since S ′ ⊆ S = asms(E), we infer x ∈ E since E is assumption exhaustive. Hence E is not conflict-free, a contradiction.\n(closed) Suppose a ∈ cl (S). Then there is some S ′ ⊆ S with S ′ ⊢ a. Since E is assumption exhaustive, E contains each argument x ∈ A with asms(x) ⊆ S; in particular we obtain S ′ ⊢ a ∈ E. Since (S ′ ⊢ a, {a} ⊢ a) ∈ Sup and E is closed, {a} ⊢ a ∈ E follows. Hence a ∈ asms(E) = S.\n(defense) Let S ′ be a closed set of assumptions attacking S. Applying Lemma A.1 we suppose S ′ = cl (T ) for some tree-based argument x = T ⊢ ā with a ∈ S. Since a ∈ asms(E), and E is assumption exhaustive, x attacks E in F D . By admissibility of E, there is some S ′ ⊆ S and a tree-based argument S ′ ⊢ b ∈ E s.t. b ∈ cl (T ). Since S ′ ⊆ S, we have b ∈ Th D (S) as well and hence, S counter-attacks S ′ in D. Proof. Since E is assumption exhaustive, we have access to Proposition 4.12.\nIn addition, we have left to show:\n(fixed-point) Suppose S defends a ∈ A in D. 
We have to show a ∈ S. For this, it suffices to show that E defends {a} ⊢ a, because this implies {a} ⊢ a ∈ E (completeness) and hence a ∈ asms(E) = S. Thus, let E ′ be a closed set of arguments in F D attacking {a} ⊢ a.\nWe first assume E ′ = cl (x) for some x = T ⊢ ā. By assumption, S defends a and thus, b ∈ Th(S) for some b ∈ cl (T ). Hence there is some tree-based argument S ′ ⊢ b with S ′ ⊆ S. Since S ′ ⊆ S = asms(E), we have that S ′ ⊢ b ∈ E by Lemma 4.13. It follows that E counter-attacks cl (x) (recall Lemma A.2).\nIn case E ′ is an arbitrary closed set of attackers, take one argument x ∈ E ′ attacking E and reason analogously.\nSince E ′ was an arbitrary closed attacker of {a} ⊢ a and since E is complete, {a} ⊢ a ∈ E as desired. We deduce a ∈ S and again since a was an arbitrary assumption defended by S, we have that S contains all defended assumptions in D. Proof. We know already that S is conflict-free and closed due to Proposition 4.12. Now let a ∈ A with a / ∈ S. Then a / ∈ asms(E) and thus E attacks {a} ⊢ a since E is stable in F D . We deduce ā ∈ Th D (asms(E)) and hence S attacks a in D." }, { "figure_ref": [ "fig_17", "fig_18" ], "heading": "B Proof Details of Section 5", "publication_ref": [], "table_ref": [], "text": "Theorem 5.7. Let D = (L, R, A, ) be an ABAF and F D = (F , π) the instantiated pBAF. Then\n• if E ∈ σ(F D ), then asms(E) ∈ σ(D); • if S ∈ σ(D), then {x ∈ A | asms(x) ⊆ S} ∈ σ(F D )\nfor any σ ∈ {ad , co, pr , gr , stb}.\nProof. We have left to show the transfer of exhaustiveness, since then Proposition 4.12 is applicable establishing the connection for ad and thus also pr . For complete-based semantics, the result follows from the previos main theorem for BAFs. The fact that exhaustiveness is preserved is almost immediate:\n(exhaustive) If E ∈ σ(F D ), then E is exhaustive, i.e., π(a) ⊆ π(E) implies a ∈ E. By construction, E is thus assumption exhaustive.\nIf S ∈ σ(D), then {x ∈ A | asms(x) ⊆ S} ∈ σ(F D ) is assumption exhaustive and thus π(a) ⊆ π(E) implies a ∈ E by definition of π.\nConsequently, Proposition 4.12 is applicable from which the claim follows. • Cred σ is NP-complete for σ ∈ {ad , co, pr , stb} and DP-complete for σ = gr .\nϕ ⊤ c 1 c 2 c 3 x 1 x1 x 2 x2 x 3 x3\n• Skept σ is trivial for σ = ad , DP-complete for σ ∈ {co, , stb}, and Π P 2complete for σ = pr .\nBefore heading into our membership and hardness results, let us give the constructions we require. The following adaptation of the standard construction will construct a BAF which has at least one complete extension iff Φ is satisfiable. It makes use of the same technique we already applied for our introductory BAF examples: An unattacked argument ⊤ defends ϕ, and hence each complete extension must defend the latter. Construction 1. Let Φ a propositional formula over atoms X = {x 1 , . . . , x n } in 3-CNF which we identify with a set C of clauses. Let F Φ = (A, Att, Sup) where\nA ={x i , xi | 1 ≤ i ≤ n} ∪ {c | c ∈ C} ∪ {⊤, ϕ} Att ={(x i , xi ), (x i , x i ) | 1 ≤ i ≤ n}∪ {(x i , c) | x i ∈ c ∈ C} ∪ {(x i , c) | ¬x i ∈ c ∈ C}∪ {(c, ϕ) | c ∈ C} Sup ={(⊤, ϕ)}\nAn example of this construction can be found in Figure 1.\nLemma C.1. Let Φ be a propositional formula in 3-CNF. The BAF F Φ as given in Construction 1 satisfies the following properties:\n• For each E ∈ co(F Φ ), ϕ ∈ E and ⊤ ∈ E.\n• It holds that co(F Φ ) = ∅ iff Φ is satisfiable.\nProof. Applying the usual reasoning for the standard construction, the second statement is a consequence of the first one. 
To see the first statement, note that E ∈ co(F Φ ) implies ⊤ ∈ E. Since E must be closed, ϕ ∈ E.\nIn order for ϕ to be defended, E must contain x i resp. xi arguments corresponding to a satisfying assignment of Φ. Hence ϕ ∈ E ∈ co(F Φ ) is possible iff Φ is satisfiable.\nHence in the BAF constructed in Construction 1, ⊤ and ϕ are in the grounded extension and none of the C i are. However, it might happen that some x i is in the intersection of all complete extensions because it might be true in every satisfying assignment. In this case, gr (F Ψ ) does not only contain ⊤ and ϕ, but also all affected x i resp. xi arguments. In order to prevent this, we introduces copies x ′ i and x′ i of all variables. They interact with the remaining AF analogously to x i resp. xi and all four arguments mutually attack each other for each i. This yields: Construction 2. Let Φ a propositional formula over atoms X = {x 1 , . . . , x n } in 3-CNF which we identify with a set C of clauses. Let G Φ = (A, Att, Sup) where\nA ={x i , x ′ i , xi , x′ i | 1 ≤ i ≤ n} ∪ {c | c ∈ C} ∪ {⊤, ϕ} Att ={(x i , xi ), (x i , x ′ i ), (x i , x′ i ) | 1 ≤ i ≤ n}∪ {(x i , x i ), (x i , x ′ i ), (x i , x′ i ) | 1 ≤ i ≤ n}∪ {(x ′ i , x i ), (x ′ i , xi ), (x ′ i , x′ i ) | 1 ≤ i ≤ n}∪ {(x ′ i , x i ), (x ′ i , x ′ i ), (x ′ i , xi ) | 1 ≤ i ≤ n}∪ {(x i , c), (x ′ i , c) | x i ∈ c ∈ C}∪ {(x i , c), (x ′ i , c) | ¬x i ∈ c ∈ C}∪ {(c, ϕ) | c ∈ C} Sup ={(⊤, ϕ)}\nConsidering how Construction 2 is obtained from Construction 1 the following can be inferred.\nLemma C.2. Let Φ be a propositional formula in 3-CNF. The BAF G Φ as given in Construction 2 satisfies the following properties:\n• If Φ is satisfiable, then gr (G Φ ) = {⊤, ϕ}. • If Φ is not satisfiable, then gr (G Φ ) = ∅.\nNext we consider some construction which will help us to show that skeptical reasoning with co is coNP-hard for BAFs. For this, we make use of a gadget which ensures that each complete extension corresponds to some satisfying assignment (and not to just a partial assignment) to the X variables.\nψ ψ c 1 c 2 c 3 x 1 x1 x 2 x2 x 3 x3 ⊤ 1 d 1 ⊥ 1 ⊤ 2 d 2 ⊥ 2 ⊤ 3 d 3 ⊥ 3 Figure 2: Construction 3 applied to the formula Ψ = {x 1 , x 2 }, {¬x 1 , x 3 }, {¬x 1 , ¬x 3 } Construction 3.\nLet Ψ a propositional formula over atoms X = {x 1 , . . . , x n } in 3-CNF which we identify with a set C of clauses. Let H Ψ = (A, Att, Sup) where\nA ={x i , xi , ⊤ i , ⊥ i , d i | 1 ≤ i ≤ n} ∪ {c | c ∈ C} ∪ {⊤, ϕ} Att ={(x i , xi ), (x i , x i ) | 1 ≤ i ≤ n}∪ {(x i , c) | x i ∈ c ∈ C} ∪ {(x i , c) | ¬x i ∈ c ∈ C}∪ {(x i , ⊥ i ), (x i , ⊥ i ), (⊥ i , ⊥ i ), (⊥ i , d i ) | 1 ≤ i ≤ n}∪ {(c, ϕ) | c ∈ C} Sup ={(⊤ i , d i ) | 1 ≤ i ≤ n}\nAn example of this construction can be found in Figure 2.\nLemma C.3. Let Ψ be a propositional formula in 3-CNF. The BAF H Ψ as given in Construction 3 satisfies the following properties:\n• For each 1 ≤ i ≤ n and each E ∈ co(H Ψ ) we have -⊤ i , d i ∈ E, -either x i ∈ E or xi ∈ E.\n• The formula Ψ is unsatisfiable iff ψ ∈ E for each E ∈ co(H Ψ ).\n• Let G ∈ gr (H Ψ ) be the grounded extension. The formula Ψ is unsatisfiable iff\nG = {⊤ i , d i | 1 ≤ i ≤ n} ∪ { ψ}\nProof. The first statement follows from ⊤ i ∈ E. Hence d i needs to be defended and thus, x i ∈ E or xi ∈ E (second statement). 
Thus, each E ∈ co(H Ψ ) represents an assignment to the X-variables, which implies the third statement (usual logic of the standard construction).

• Show that a ∈ E for each E ∈ co(F ) by iterating over each subset S of A and checking that S is not complete or contains a.

(hardness) Given an instance (Φ, Ψ) of the DP-complete problem SAT-UNSAT we apply Constructions 2 and 3 to obtain the combined BAF F = G Φ ∪ H Ψ . By Lemmata C.2 and C.3, ψ is credulously accepted (i.e., in the intersection of all complete extensions) iff Φ is satisfiable and Ψ is unsatisfiable.

Proposition C.6 (Skeptical Reasoning). The problem Skept σ is trivial for σ = ad , DP-complete for σ ∈ {co, gr , stb}, and Π P 2 -complete for σ = pr .

Proof. Let F be the given BAF.
For ad the answer is negative as usual.
For σ ∈ {co, stb, gr }, membership is by guessing a witness for σ(F ) ≠ ∅ (existential quantifier) and then showing that each E ∈ σ(F ) contains the query argument (universal quantifier). Hardness follows from again applying Constructions 2 and 3 to obtain the combined BAF F = G Φ ∪ H Ψ . For gr we argued already in the proof of Proposition C.5. For co note that skeptical reasoning coincides with reasoning in gr . Finally note that, by construction, stb and co coincide in the two constructions.
Regarding pr semantics, membership follows by guessing and verifying some counter-example, i.e., E ∈ pr (F ) not containing the query argument (this can be done in Σ P 2 ), and hardness is due to the AF case.

Now we turn our attention to pBAFs.

Theorem 6.3. For pBAFs, the problem
• Ver σ is tractable for σ ∈ {ad , co, stb}, coNP-complete for σ = pr , and DP-complete for σ = gr .
• Cred σ is NP-complete for σ ∈ {ad , co, pr , stb} and DP-complete for σ = gr .
• Skept σ is coNP-complete for σ = ad , DP-complete for σ ∈ {co, gr , stb}, and Π P 2 -complete for σ = pr .

Almost all results follow since verifying exhaustiveness can be done in polynomial time. The only difference is that now the empty set is not necessarily admissible anymore, which renders skeptical reasoning more involved. We give the proof for this case.

Proposition 6.2. For pBAFs, Skept ad is coNP-complete.

Proof. (membership) To show that x ∈ A is not skeptically accepted, we simply need to guess a set E ∈ ad (F) s.t. x ∉ E. Since we can verify E ∈ ad (F) in P, this procedure (proving the contrary) is in NP.

(hardness) For hardness, we adjust Construction 3 as follows: we use a gadget at the bottom to ensure that each admissible extension corresponds to some assignment to the X-variables. To this end, the arguments d i are only defended if x i or xi is accepted. Following the same reasoning, we use auxiliary arguments t and ⊥ t to ensure that each admissible extension contains either ψ (satisfying assignment) or ψ (no satisfying assignment). We can thus check whether the formula has no satisfying assignment as follows.

[Figure 3 near here: Construction 4 applied to the formula Ψ = {x 1 , x 2 }, {¬x 1 , x 3 }, {¬x 1 , ¬x 3 }; each argument is shown with its premise set.]

Construction 4. Let Ψ be a propositional formula over atoms X = {x 1 , . . . , x n } in 3-CNF which we identify with a set C of clauses.
Let F Ψ = (A, Att, Sup, π) where An example of this construction can be found in Figure 3.\nA ={x i , xi , ⊥ i , d i | 1 ≤ i ≤ n} ∪ {c | c ∈ C} ∪ {ψ, ψ, t, ⊥ t } Att ={(x i , xi ), (x i , x i ) | 1 ≤ i ≤ n}∪ {(x i , c) | x i ∈ c ∈ C} ∪ {(x i , c) | ¬x i ∈ c ∈ C}∪ {(x i , ⊥ i ), (x i , ⊥ i ), (⊥ i , d i ) | 1 ≤ i ≤ n}∪ {(c, ψ) | c ∈ C}∪ {(ψ,\nBy construction as well as the usual reasoning, ψ is not skeptically accepted iff Ψ is satisfiable. Thus the complement is NP-complete which proves the claim." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was partially funded by the European Re-search Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101020934, ADIX), by J.P. Morgan and by the Royal Academy of Engineering under the Research Chairs and Senior Research Fellowships scheme, and by the Federal Ministry of Education and Research of Germany and by Sächsische Staatsministerium für Wissenschaft, Kultur und Tourismus in the programme Center of Excellence for AI-research \"Center for Scalable Data Analytics and Artificial Intelligence Dresden/Leipzig\", project identification number: ScaDS.AI. Any views or opinions expressed herein are solely those of the authors." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Non-flat ABA is an Instance of Bipolar Argumentation -Supplementary Material A Proof Details of Section 4 Theorem 4.3. Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. Let σ ∈ {co, gr , stb}.\n• If E ∈ σ(F D ), then asms(E) ∈ σ(D).\nIf S ∈ ad (D), then {x ∈ A | asms(x) ⊆ S} ∈ ad (F D ).\nProof. This main result is a corollary of the subsecquent propositions.\nWe start with two auxiliary lemmata.\nLemma A.1. Let D = (L, R, A, ) be an ABAF. A set S ⊆ A of assumptions defends some a ∈ A iff for each tree-based argument T ⊢ p attacking a, S attacks cl (T ).\nProof. (⇒) is clear since cl (T ) is a closed set of assumptions attacking a.\n(⇐) If T is closed set of assumptions attacking a, then ā ∈ Th D (T ). Take an argument T ′ ⊢ ā with T ′ ⊆ T . By assumption S attacks cl (T ′ ). Since cl (T ′ ) ⊆ cl (T ), S attacks cl (T ) as well.\nWe can therefore reduce our attention to the closure of single arguments, instead of considering arbitrary closed sets of assumptions. Thus, if a / ∈ S but a ∈ cl (x), then it must be the case that (x, {a} ⊢ a) ∈ Sup. By definition of Sup this means a ∈ cl (S).\n(⊇) Suppose a ∈ cl (S). If a ∈ S, then a ∈ asms(x) and we are done. Otherwise (x, {a} ⊢ a) ∈ Sup by construction. Thus {a} ⊢ a ∈ cl (x) and hence a ∈ asms(cl (x)). Equipped with these constructions, we are ready to prove Theorem 6.1.\nProposition C.4 (Verification). The problem Ver σ is tractable for σ ∈ {ad , co, stb}, coNP-complete for σ = pr , and DP-complete for σ = gr .\nProof. Let F be a BAF.\n(ad ) Given E ⊆ A it is clear that checking E ∩ E + = ∅ and E = cl (E) can be done in polynomial time. For defense we use Lemma 3.4: Let a be an attacker of E. We can compute cl ({a}) in polynomial time and thus checking whether E → cl ({a}) holds is also tractable.\n(co) For completeness, we iterate additionally over each a ∈ A \\ E and check whether a is defended by E. 
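For instance, the checks used in the (ad) and (co) cases can be sketched as follows (a minimal sketch in our own explicit-set encoding of a BAF, not an implementation from the paper):

```python
# A BAF is given here as: args (set of arguments), att and sup
# (sets of ordered pairs over args).

def closure(E, sup):
    E = set(E)
    while True:
        step = {b for (a, b) in sup if a in E}
        if step <= E:
            return E
        E |= step

def attacks(E, target, att):
    return any((a, b) in att for a in E for b in target)

def defends(E, x, att, sup):
    # Lemma 3.4: counter-attack cl({b}) for every attacker b of x
    return all(attacks(E, closure({b}, sup), att)
               for (b, y) in att if y == x)

def verify_admissible(E, args, att, sup):
    E = set(E)
    return (not attacks(E, E, att)              # conflict-free
            and closure(E, sup) == E            # closed
            and all(defends(E, x, att, sup) for x in E))

def verify_complete(E, args, att, sup):
    E = set(E)
    return (verify_admissible(E, args, att, sup)
            and all(x in E for x in args if defends(E, x, att, sup)))
```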
This check is tractable as well.
(stb) We check E ∈ cf (F ) and E = cl (E); then we compute the range and verify that each a ∈ A \ E is attacked.
(pr ) Membership follows from tractability for the case ad : simply verify E ∈ ad (F ) and then iterate over each superset of E in order to verify maximality. Hardness follows from hardness in the AF case.
(gr ) (membership) To verify that E is the intersection of all complete extensions we proceed as follows:
• Show that E does not contain too many arguments: for each S ⊆ A, verify that S ∈ co(F ) implies E ⊆ S (universal quantifier in DP);
• Show that E does not contain too few arguments: for the linearly many a ∈ A \ E, guess a set E a with a ∉ E a and verify that E a ∈ co(F ) (existential quantifier in DP).
(hardness) Given an instance (Φ, Ψ) of the DP-complete problem SAT-UNSAT we apply Constructions 2 and 3 to obtain the combined BAF F = G Φ ∪ H Ψ . By Lemmata C.2 and C.3, the set {⊤, ϕ} ∪ {⊤ i , d i | 1 ≤ i ≤ n} ∪ { ψ} is the grounded extension of F iff Φ is satisfiable and Ψ is unsatisfiable.
Proposition C.5 (Credulous Reasoning). The problem Cred σ is NP-complete for σ ∈ {ad , co, pr , stb} and DP-complete for σ = gr .
Proof. Let F be the given BAF. For σ ∈ {ad , co, pr , stb} a simple guess and check procedure suffices. Note that credulous reasoning for preferred semantics coincides with credulous reasoning for admissible semantics; therefore we can apply Proposition C.4. Hardness follows from the AF case.
(gr ) (membership) To verify that a ∈ A is in the intersection of all complete extensions we proceed as follows:
• Show that gr (F ) ≠ ∅ by guessing and verifying any E ∈ co(F ) (existential quantifier in DP).
Assumption-based Argumentation (ABA) is a well-known structured argumentation formalism, whereby arguments and attacks between them are drawn from rules, defeasible assumptions and their contraries. A common restriction imposed on ABA frameworks (ABAFs) is that they are flat, i.e., each of the defeasible assumptions can only be assumed, but not derived. While it is known that flat ABAFs can be translated into abstract argumentation frameworks (AFs) as proposed by Dung, no translation exists from general, possibly non-flat ABAFs into any kind of abstract argumentation formalism. In this paper, we close this gap and show that bipolar AFs (BAFs) can instantiate general ABAFs. To this end we develop suitable, novel BAF semantics which borrow from the notion of deductive support. We investigate basic properties of our BAFs, including computational complexity, and prove the desired relation to ABAFs under several semantics.
Non-flat ABA is an Instance of Bipolar Argumentation
[ { "figure_caption": "Example 2 . 4 .24Let D = (L, R, A, ) be the ABAF where L = {a, b, c, d, a, b, c, d}, A = {a, b, c, d}, the contrary function is given as indicated, and R consists of rules: b ← a. a ← b. d ← b. b ← c. d ← c. Let us discuss why S = {b} is admissible in D. First of all, b is conflict-free as it does not derive b. Also, b is closed, i.e., cl (b) = {b}. Regarding defense, we have that {a} ⊢ b, but also {b} ⊢ a, so the attack is defended against. Finally, c attacks b ({c} ⊢ b), but {c} is not closed. Indeed, cl (c) = {c, d}. Since {b} ⊢ d, this attack is also defended against.", "figure_data": "", "figure_id": "fig_0", "figure_label": "24", "figure_type": "figure" }, { "figure_caption": "r 2 : b ← a, and r 3 : b ← c. From r 2 it follows that b ∈ cl (a), which can be encoded in our instantiated BAF as follows: since b is an assumption, there is some generic argument {b} ⊢ b for it, then the argument stemming from rule r 2 , i.e., {a} ⊢ b, supports the argument {b} ⊢ b; hence any closed set accepting {a} ⊢ b must also accept {b} ⊢ b. Including the usual attacks, this would give the following BAF (for a ∈ A, we depict {a} ⊢ a by just a): It is now indeed impossible to accept {a} ⊢ b without counter-attacking {c} ⊢ b since the former supports {b} ⊢ b. With this support relation we miss however that we cannot accept {a} ⊢ p, either: since constructing this argument requires a, we would then also have to include b due to b ∈ cl (a). Hence {a} ⊢ p should also support {b} ⊢ b. More generally, an argument S ⊢ p shall support each a ∈ cl (S):", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Example 4 . 2 .42Recall our ABAF D from Example 2.4. The instantiated BAF F D is given as follows (again, we depict the generic argument {a} ⊢ a for each a ∈ A by a).", "figure_data": "", "figure_id": "fig_2", "figure_label": "42", "figure_type": "figure" }, { "figure_caption": "Example 4 . 4 .44Recall our ABA D and F D from above. As we already saw, S = {b} ∈ ad (D) and indeed, E = {x ∈ A | asms(x) ⊆ S} = {A 2 , A 3 , b} ∈ ad (F D ) as we verified.", "figure_data": "", "figure_id": "fig_3", "figure_label": "44", "figure_type": "figure" }, { "figure_caption": "Proposition 4 . 5 .45Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. Let S ⊆ A and let E = {x ∈ A | asms(x) ⊆ S}.", "figure_data": "", "figure_id": "fig_4", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Proposition 4 . 7 .47Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. If S ∈ stb(D), then for E = {x ∈ A | asms(x) ⊆ S} we get E ∈ stb(F D ).", "figure_data": "", "figure_id": "fig_5", "figure_label": "47", "figure_type": "figure" }, { "figure_caption": "Definition 4 . 9 .49Let D = (L, R, A, ) be an ABAF;F D = (A, Att, Sup) the instantiated BAF. A set E ⊆ A is assumption exhaustive if asms(x) ⊆ asms(E) implies x ∈ E.Example 4.10. In the previous Example 4.8, for the set E = {a, b, A 1 , A 2 } of arguments we have asms(E) = {a, b} and hence E is not assumption exhaustive because A 3 and A 5 also satisfy asms(A i ) ⊆ asms(E).", "figure_data": "", "figure_id": "fig_6", "figure_label": "49", "figure_type": "figure" }, { "figure_caption": "Remark 4 . 11 .411In the previous subsection we started with assumptions S and constructed E = {x ∈ A | asms(x) ⊆ S}. 
Such E is assumption exhaustive by design.", "figure_data": "", "figure_id": "fig_7", "figure_label": "411", "figure_type": "figure" }, { "figure_caption": "Lemma 4 . 13 .413Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. If E ∈ co(F D ), then E is assumption exhaustive.", "figure_data": "", "figure_id": "fig_8", "figure_label": "413", "figure_type": "figure" }, { "figure_caption": "Proposition 4 . 15 .415Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. If E ∈ co(F D ), then S = asms(E) ∈ co(D).", "figure_data": "", "figure_id": "fig_9", "figure_label": "415", "figure_type": "figure" }, { "figure_caption": "Corollary 4 . 16 .416Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF.", "figure_data": "", "figure_id": "fig_10", "figure_label": "416", "figure_type": "figure" }, { "figure_caption": "Proposition 4 . 17 .417Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. If E ∈ stb(F D ), then S = asms(E) ∈ stb(D).", "figure_data": "", "figure_id": "fig_11", "figure_label": "417", "figure_type": "figure" }, { "figure_caption": "Example 4 . 18 .418Let us head back to our motivating Example 1.1 on climate change. We instantiate the following BAF.", "figure_data": "", "figure_id": "fig_12", "figure_label": "418", "figure_type": "figure" }, { "figure_caption": "Proposition 4 . 6 .46Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. If S ∈ co(D), then for E = {x ∈ A | asms(x) ⊆ S} we get E ∈ co(F D ).", "figure_data": "", "figure_id": "fig_14", "figure_label": "46", "figure_type": "figure" }, { "figure_caption": "Lemma 4 . 13 .413Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. If E ∈ co(F D ), then E is assumption exhaustive. Proof. Let Y ⊆ A be a closed set of arguments attacking x = S ⊢ p with S ⊆ asms(E). Then some y ∈ Y is of the form T ⊢ ā with a ∈ S. Since a ∈ asms(E), y attacks E. Hence Y attacks E and thus E attacks Y since E defends itself. Since Y was an arbitrary closed attacker of x, E defends x. Since E is complete, x ∈ E. Proposition 4.15. Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. If E ∈ co(F D ), then S = asms(E) ∈ co(D).", "figure_data": "", "figure_id": "fig_15", "figure_label": "413", "figure_type": "figure" }, { "figure_caption": "Proposition 4 . 17 .417Let D = (L, R, A, ) be an ABAF and F D = (A, Att, Sup) the instantiated BAF. If E ∈ stb(F D ), then S = asms(E) ∈ stb(D).", "figure_data": "", "figure_id": "fig_16", "figure_label": "417", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Construction 1 applied to the formula Φ = {x 1 , x 2 }, {¬x 1 , x 3 }, {¬x 1 , ¬x 3 }", "figure_data": "", "figure_id": "fig_17", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Construction 4 applied to the formula Ψ = {x 1 , x 2 }, {¬x 1 , x 3 }, {¬x 1 , ¬x 3 }", "figure_data": "", "figure_id": "fig_18", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "it is closed and E + F = A \\ E. Example 5.5. Let us illustrate how pBAFs can help us fixing our issue illustrated in Example 4.8. Let us construct the same BAF, but assign to each argument the assumptions required to entail it as premises.", "figure_data": "{a}A1{b}A2{ab}A3{c}A4{ab}A5{a}a{b}b{c}c", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "ψ), (⊥ t , t), (ψ, ⊥ t ), ( ψ, ⊥ t )} Sup =∅ π ={(a, {a}) | a ∈ A \\ {d 1 , . . . 
, d n , t}}∪ {(d i , ∅) | 1 ≤ i ≤ n} ∪ {(t, ∅)}", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
Markus Ulbricht; Nico Potyka; Anna Rapberger; Francesca Toni
[ { "authors": "L Amgoud; C Cayrol; M Lagasquie-Schiex; P Livet", "journal": "Int. J. Intell. Syst", "ref_id": "b0", "title": "On bipolarity in argumentation frameworks", "year": "2008" }, { "authors": "K Atkinson; P Baroni; M Giacomin; A Hunter; H Prakken; C Reed; G R Simari; M Thimm; S Villata", "journal": "AI Magazine", "ref_id": "b1", "title": "Towards Artificial Argumentation", "year": "2017" }, { "authors": "Z Bao; K Cyras; F Toni", "journal": "Springer", "ref_id": "b2", "title": "ABAplus: Attack Reversal in Abstract and Structured Argumentation with Preferences", "year": "2017" }, { "authors": "P Baroni; D Gabbay; M Giacomin; L Van Der Torre", "journal": "College Publications", "ref_id": "b3", "title": "Handbook of Formal Argumentation", "year": "2018" }, { "authors": "P Besnard; A Garcia; A Hunter; S Modgil; H Prakken; G Simari; F Toni", "journal": "Argument Comput", "ref_id": "b4", "title": "Introduction to structured argumentation", "year": "2014" }, { "authors": "P Besnard; A Hunter", "journal": "MIT Press", "ref_id": "b5", "title": "Elements of Argumentation", "year": "2008" }, { "authors": "G Boella; D M Gabbay; L Van Der Torre; S Villata", "journal": "IOS Press", "ref_id": "b6", "title": "Support in abstract argumentation", "year": "2010" }, { "authors": "A Bondarenko; P M Dung; R A Kowalski; F Toni", "journal": "Artif. Intell", "ref_id": "b7", "title": "An Abstract, Argumentation-Theoretic Approach to Default Reasoning", "year": "1997" }, { "authors": "C Cayrol; A Cohen; M.-C Lagasquie-Schiex", "journal": "", "ref_id": "b8", "title": "Higher-Order Interactions (Bipolar or not) in Abstract Argumentation: A State of the Art", "year": "2021" }, { "authors": "C Cayrol; M Lagasquie-Schiex", "journal": "Int. J. Approx. Reason", "ref_id": "b9", "title": "Bipolarity in argumentation graphs: Towards a better understanding", "year": "2013" }, { "authors": "C Cayrol; M.-C Lagasquie-Schiex", "journal": "Springer", "ref_id": "b10", "title": "On the acceptability of arguments in bipolar argumentation frameworks", "year": "2005" }, { "authors": "A Cohen; A J García; G R Simari", "journal": "Springer", "ref_id": "b11", "title": "Backing and Undercutting in Abstract Argumentation Frameworks", "year": "2012" }, { "authors": "A Cohen; S Gottifredi; A J García; G R Simari", "journal": "Knowl. Eng. Rev", "ref_id": "b12", "title": "A survey of different approaches to support in argumentation systems", "year": "2014" }, { "authors": "R Craven; F Toni; C Cadar; A Hadad; M Williams", "journal": "AAAI Press", "ref_id": "b13", "title": "Efficient Argumentation for Medical Decision-Making", "year": "2012" }, { "authors": "K Cyras; X Fan; C Schulz; F Toni", "journal": "FLAP", "ref_id": "b14", "title": "Assumption-based Argumentation: Disputes, Explanations, Preferences", "year": "2017" }, { "authors": "K Čyras; X Fan; C Schulz; F Toni", "journal": "College Publications", "ref_id": "b15", "title": "Assumption-Based Argumentation: Disputes, Explanations, Preferences", "year": "2018" }, { "authors": "K Cyras; Q Heinrich; F Toni", "journal": "Artif. 
Intell", "ref_id": "b16", "title": "Computational complexity of flat and generic Assumption-Based Argumentation, with and without probabilities", "year": "2021" }, { "authors": "K Cyras; T Oliveira; A Karamlou; F Toni", "journal": "Argument Comput", "ref_id": "b17", "title": "Assumptionbased argumentation with preferences and goals for patient-centric reasoning with interacting clinical guidelines", "year": "2021" }, { "authors": "K Cyras; A Rago; E Albini; P Baroni; F Toni", "journal": "", "ref_id": "b18", "title": "Argumentative XAI: A Survey", "year": "2021" }, { "authors": "K Cyras; C Schulz; F Toni", "journal": "Springer", "ref_id": "b19", "title": "Capturing Bipolar Argumentation in Non-flat Assumption-Based Argumentation", "year": "2017" }, { "authors": "K Cyras; F Toni", "journal": "AAAI Press", "ref_id": "b20", "title": "ABA+: Assumption-Based Argumentation with Preferences", "year": "2016" }, { "authors": "P M Dung", "journal": "Artif. Intell", "ref_id": "b21", "title": "On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games", "year": "1995" }, { "authors": "P M Dung; P M Thang", "journal": "IOS Press", "ref_id": "b22", "title": "Towards (Probabilistic) Argumentation for Jury-based Dispute Resolution", "year": "2010" }, { "authors": "P M Dung; P M Thang; N D Hung", "journal": "Argument Comput", "ref_id": "b23", "title": "Modular argumentation for modelling legal doctrines of performance relief", "year": "2010" }, { "authors": "W Dvorák; A Rapberger; S Woltran", "journal": "", "ref_id": "b24", "title": "Argumentation Semantics under a Claim-centric View: Properties, Expressiveness and Relation to SETAFs", "year": "2020" }, { "authors": "W Dvorák; S Woltran", "journal": "Artif. Intell", "ref_id": "b25", "title": "Complexity of abstract argumentation under a claim-centric view", "year": "2020" }, { "authors": "X Fan; S Liu; H Zhang; C Leung; C Miao", "journal": "IOS Press", "ref_id": "b26", "title": "Explained Activity Recognition with Computational Assumption-Based Argumentation", "year": "2016" }, { "authors": "X Fan; F Toni", "journal": "Artif. Intell", "ref_id": "b27", "title": "A general framework for sound assumptionbased argumentation dialogues", "year": "2014" }, { "authors": "A J García; G R Simari", "journal": "Theory Pract. Log. Program", "ref_id": "b28", "title": "Defeasible Logic Programming: An Argumentative Approach", "year": "2004" }, { "authors": "A Gargouri; S Konieczny; P Marquis; S Vesic", "journal": "", "ref_id": "b29", "title": "On a notion of monotonic support for bipolar argumentation frameworks", "year": "2021" }, { "authors": "M G E Gonzalez; M C Budán; G I Simari; G R Simari", "journal": "J. Art. Intell. Res", "ref_id": "b30", "title": "Labeled bipolar argumentation frameworks", "year": "2021" }, { "authors": "N Karacapilidis; D Papadias", "journal": "Information systems", "ref_id": "b31", "title": "Computer supported argumentation and collaborative decision making: the Hermes system", "year": "2001" }, { "authors": "T Lehtonen; A Rapberger; M Ulbricht; J P Wallner", "journal": "", "ref_id": "b32", "title": "Argumentation Frameworks Induced by Assumption-based Argumentation: Relating Size and Complexity", "year": "2023" }, { "authors": "T Lehtonen; J P Wallner; M Järvisalo", "journal": "IOS Press", "ref_id": "b33", "title": "Algorithms for Reasoning in a Default Logic Instantiation of Assumption-Based Argumentation", "year": "2022" }, { "authors": "S Modgil; H Prakken", "journal": "Artif. 
Intell", "ref_id": "b34", "title": "A general account of argumentation with preferences", "year": "2013" }, { "authors": "F Nouioua; V Risch", "journal": "IEEE", "ref_id": "b35", "title": "Bipolar argumentation frameworks with specialized supports", "year": "2010" }, { "authors": "N Oren; T J Norman", "journal": "IOS Press", "ref_id": "b36", "title": "Semantics for Evidence-Based Argumentation", "year": "2008" }, { "authors": "N Potyka", "journal": "", "ref_id": "b37", "title": "Bipolar abstract argumentation with dual attacks and supports", "year": "2020" }, { "authors": "A Rapberger", "journal": "", "ref_id": "b38", "title": "Defining Argumentation Semantics under a Claimcentric View", "year": "2020" }, { "authors": "A Rapberger; M Ulbricht", "journal": "J. Artif. Intell. Res", "ref_id": "b39", "title": "On Dynamics in Structured Argumentation Formalisms", "year": "2023" }, { "authors": "F Toni", "journal": "Artif. Intell", "ref_id": "b40", "title": "A generalised framework for dispute derivations in assumption-based argumentation", "year": "2013" }, { "authors": "F Toni", "journal": "Argument Comput", "ref_id": "b41", "title": "A tutorial on assumption-based argumentation", "year": "2014" } ]
[ { "formula_coordinates": [ 3, 133.8, 602.45, 343.64, 59.8 ], "formula_id": "formula_0", "formula_text": "E + F = {x ∈ A | E attacks x}. A set E ⊆ A is conflict-free in F iff for no x, y ∈ E, (x, y) ∈ Att. E defends an argument x if E attacks each attacker of x. A conflict-free set E is admissible in F (E ∈ ad (F )) iff it defends all its elements. A semantics is a function F → σ(F ) ⊆ 2 A ." }, { "formula_coordinates": [ 4, 133.8, 150.95, 343.76, 34.71 ], "formula_id": "formula_1", "formula_text": "E ∈ co(F ) iff E contains all arguments it defends; ii) E ∈ gr (F ) iff E is ⊆- minimal in co(F ); iii) E ∈ pr (F ) iff E is ⊆-maximal in co(F ); iv) E ∈ stb(F ) iff E + F = A \\ E." }, { "formula_coordinates": [ 5, 148.68, 205.43, 187.01, 29.14 ], "formula_id": "formula_2", "formula_text": "• S ∈ pr (D) iff S is ⊆-maximal in ad (D); • S ∈ co(D) iff S ∈ ad (D)" }, { "formula_coordinates": [ 5, 133.8, 636.71, 343.61, 38.5 ], "formula_id": "formula_3", "formula_text": "µ(E) = E ∪ {a ∈ A | ∃e ∈ E : (e, a) ∈ Sup}. We call cl (E) = n≥1 µ n (E) the closure of E. A set E ⊆ A is called closed if E = cl(E)." }, { "formula_coordinates": [ 6, 133.8, 182.87, 343.78, 34.06 ], "formula_id": "formula_4", "formula_text": "Definition 3.2. Let F = (A, Att, Sup) be a BAF. A set E ⊆ A is conflict-free if E ∩ E + F = ∅; E defends a ∈ A if E attacks each closed set S ⊆ A which attacks a; the characteristic function of F is Γ(E) = {a ∈ A | E defends a}." }, { "formula_coordinates": [ 6, 202.23, 298.12, 204.8, 10.67 ], "formula_id": "formula_5", "formula_text": "z y x F : u v" }, { "formula_coordinates": [ 6, 133.8, 476.99, 343.73, 41.98 ], "formula_id": "formula_6", "formula_text": "Definition 3.5. Let F = (A, Att, Sup) be a BAF. A set E ⊆ A is admissible, E ∈ ad (F ), if i) E is conflict-free, ii) E is closed, and iii) E ⊆ Γ(E). Example 3.6. Recall Example 3.3. Let us verify that E = {y, z} ∈ ad (F )." }, { "formula_coordinates": [ 7, 133.8, 127.07, 343.78, 103.54 ], "formula_id": "formula_7", "formula_text": "Definition 3.8. Let F = (A, Att, Sup) be a BAF. A set E ⊆ A of arguments s.t. E ∈ cf (F ) is • preferred, E ∈ pr (F ), iff it is maximal admissible; • complete, E ∈ co(F ), iff E ∈ ad (F ) and E = Γ(E); • grounded, E ∈ gr (F ), iff E = S∈co(F ) S; • stable, E ∈ stb(F ), iff it is closed and E + = A \\ E." }, { "formula_coordinates": [ 7, 244.71, 396.4, 119.64, 10.67 ], "formula_id": "formula_8", "formula_text": "z y x F :" }, { "formula_coordinates": [ 7, 133.8, 651.35, 343.68, 22.18 ], "formula_id": "formula_9", "formula_text": "r 1 : p ← a," }, { "formula_coordinates": [ 8, 228.24, 327.59, 245.02, 10.18 ], "formula_id": "formula_10", "formula_text": "a ∈ cl (S) ⇒ (S ⊢ p, {a} ⊢ a) ∈ Sup. (1" }, { "formula_coordinates": [ 8, 473.26, 327.81, 4.25, 9.96 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 8, 203.88, 415.31, 203.46, 42.7 ], "formula_id": "formula_12", "formula_text": "A = {(S ⊢ p) | (S ⊢ p) is an argument in D} Att = {(S ⊢ p, T ⊢ q) ∈ A 2 | p ∈ T } Sup = {(S ⊢ p, {a} ⊢ a) ∈ A 2 | a ∈ cl (S)}" }, { "formula_coordinates": [ 9, 133.8, 273.47, 249.05, 49.18 ], "formula_id": "formula_13", "formula_text": "• If E ∈ σ(F D ), then asms(E) ∈ σ(D). • If S ∈ σ(D), then {x ∈ A | asms(x) ⊆ S} ∈ σ(F D ). If S ∈ ad (D), then {x ∈ A | asms(x) ⊆ S} ∈ ad (F D )." }, { "formula_coordinates": [ 9, 148.68, 519.83, 200.81, 49.42 ], "formula_id": "formula_14", "formula_text": "• If S ∈ cf (D), then E ∈ cf (F D ). • If S is closed in D, then E is closed in F D . 
• If S ∈ ad (D), then E ∈ ad (F D )." }, { "formula_coordinates": [ 9, 133.8, 609.83, 343.71, 34.06 ], "formula_id": "formula_15", "formula_text": "F D = (A, Att, Sup) the instantiated BAF. If S ∈ co(D), then for E = {x ∈ A | asms(x) ⊆ S} we get E ∈ co(F D )." }, { "formula_coordinates": [ 10, 205.63, 358.21, 199.51, 85.13 ], "formula_id": "formula_16", "formula_text": "p a A 1 q b A 2 c p q a b A 3 c c A 4 c c p q a b A 5 a b c" }, { "formula_coordinates": [ 11, 148.68, 242.75, 200.81, 48.82 ], "formula_id": "formula_17", "formula_text": "• If E ∈ cf (F D ), then S ∈ cf (D). • If E is closed in F D , then S is closed in D. • If E ∈ ad (F D ), then S ∈ ad (D)." }, { "formula_coordinates": [ 11, 148.68, 610.55, 328.84, 22.18 ], "formula_id": "formula_18", "formula_text": "• If S ∈ gr (D), then for E = {x ∈ A | asms(x) ⊆ S} we have that E ∈ gr (F D )." }, { "formula_coordinates": [ 12, 133.8, 591.35, 343.67, 22.18 ], "formula_id": "formula_19", "formula_text": "Definition 5.3. Let F = (F , π) be a pBAF. A set E ⊆ A is exhaustive iff π(a) ⊆ π(E) implies a ∈ E." }, { "formula_coordinates": [ 12, 133.8, 665.03, 266.02, 10.18 ], "formula_id": "formula_20", "formula_text": "Definition 5.4. For a pBAF F = (F , π), a set E ∈ cf (F ) is • admissible, E ∈ ad (F), iff E is exhaustive and E ∈ ad (F ); • preferred, E ∈ pr (F), iff it is ⊆-maximal admissible; • complete, E ∈ co(F), iff E ∈ ad (F) and E = Γ(E); • grounded, E ∈ gr (F), iff E = S∈co(F) S; • stable, E ∈ stb(F), iff" }, { "formula_coordinates": [ 13, 133.8, 463.65, 343.62, 43.8 ], "formula_id": "formula_21", "formula_text": "F D = (A, Att, Sup, π) = (F , π) is F = F D ∀x ∈ A : π(x) = asms(x)." }, { "formula_coordinates": [ 13, 148.68, 570.95, 229.23, 30.1 ], "formula_id": "formula_22", "formula_text": "• if E ∈ σ(F D ), then asms(E) ∈ σ(D); • if S ∈ σ(D), then {x ∈ A | asms(x) ⊆ S} ∈ σ(F D )" }, { "formula_coordinates": [ 15, 192, 179.81, 227.29, 76.34 ], "formula_id": "formula_23", "formula_text": "Credσ ABA Σ P 2 -c DP2-c Σ P 2 -c Σ P 2 -c NP-c BAF NP-c DP-c NP-c NP-c NP-c pBAF NP-c DP-c NP-c NP-c NP-c Skept σ ABA Π P 2 -c DP2-c DP2-c Π P 3 -c DP-c BAF triv. DP-c DP-c Π P 2 -c DP-c pBAF coNP-c DP-c DP-c Π P 2 -c DP-c" }, { "formula_coordinates": [ 22, 148.68, 127.07, 200.81, 48.22 ], "formula_id": "formula_24", "formula_text": "• If S ∈ cf (D), then E ∈ cf (F D ). • If S is closed in D, then E is closed in F D . • If S ∈ ad (D), then E ∈ ad (F D )." }, { "formula_coordinates": [ 22, 133.8, 575.15, 343.71, 34.06 ], "formula_id": "formula_25", "formula_text": "F D = (A, Att, Sup) the instantiated BAF. If S ∈ stb(D), then for E = {x ∈ A | asms(x) ⊆ S} we get E ∈ stb(F D )." }, { "formula_coordinates": [ 23, 148.68, 290.51, 200.81, 50.02 ], "formula_id": "formula_26", "formula_text": "• If E ∈ cf (F D ), then S ∈ cf (D). • If E is closed in F D , then S is closed in D. • If E ∈ ad (F D ), then S ∈ ad (D)." 
}, { "formula_coordinates": [ 24, 148.68, 480.71, 229.23, 30.1 ], "formula_id": "formula_27", "formula_text": "• if E ∈ σ(F D ), then asms(E) ∈ σ(D); • if S ∈ σ(D), then {x ∈ A | asms(x) ⊆ S} ∈ σ(F D )" }, { "formula_coordinates": [ 25, 190.07, 127.53, 230.69, 80.52 ], "formula_id": "formula_28", "formula_text": "ϕ ⊤ c 1 c 2 c 3 x 1 x1 x 2 x2 x 3 x3" }, { "formula_coordinates": [ 25, 190.56, 548.27, 230.08, 69.94 ], "formula_id": "formula_29", "formula_text": "A ={x i , xi | 1 ≤ i ≤ n} ∪ {c | c ∈ C} ∪ {⊤, ϕ} Att ={(x i , xi ), (x i , x i ) | 1 ≤ i ≤ n}∪ {(x i , c) | x i ∈ c ∈ C} ∪ {(x i , c) | ¬x i ∈ c ∈ C}∪ {(c, ϕ) | c ∈ C} Sup ={(⊤, ϕ)}" }, { "formula_coordinates": [ 26, 189, 388.83, 233.34, 131.46 ], "formula_id": "formula_30", "formula_text": "A ={x i , x ′ i , xi , x′ i | 1 ≤ i ≤ n} ∪ {c | c ∈ C} ∪ {⊤, ϕ} Att ={(x i , xi ), (x i , x ′ i ), (x i , x′ i ) | 1 ≤ i ≤ n}∪ {(x i , x i ), (x i , x ′ i ), (x i , x′ i ) | 1 ≤ i ≤ n}∪ {(x ′ i , x i ), (x ′ i , xi ), (x ′ i , x′ i ) | 1 ≤ i ≤ n}∪ {(x ′ i , x i ), (x ′ i , x ′ i ), (x ′ i , xi ) | 1 ≤ i ≤ n}∪ {(x i , c), (x ′ i , c) | x i ∈ c ∈ C}∪ {(x i , c), (x ′ i , c) | ¬x i ∈ c ∈ C}∪ {(c, ϕ) | c ∈ C} Sup ={(⊤, ϕ)}" }, { "formula_coordinates": [ 26, 148.68, 591.35, 187.49, 30.19 ], "formula_id": "formula_31", "formula_text": "• If Φ is satisfiable, then gr (G Φ ) = {⊤, ϕ}. • If Φ is not satisfiable, then gr (G Φ ) = ∅." }, { "formula_coordinates": [ 27, 133.8, 127.08, 343.62, 214.76 ], "formula_id": "formula_32", "formula_text": "ψ ψ c 1 c 2 c 3 x 1 x1 x 2 x2 x 3 x3 ⊤ 1 d 1 ⊥ 1 ⊤ 2 d 2 ⊥ 2 ⊤ 3 d 3 ⊥ 3 Figure 2: Construction 3 applied to the formula Ψ = {x 1 , x 2 }, {¬x 1 , x 3 }, {¬x 1 , ¬x 3 } Construction 3." }, { "formula_coordinates": [ 27, 180.48, 377.15, 250.38, 84.94 ], "formula_id": "formula_33", "formula_text": "A ={x i , xi , ⊤ i , ⊥ i , d i | 1 ≤ i ≤ n} ∪ {c | c ∈ C} ∪ {⊤, ϕ} Att ={(x i , xi ), (x i , x i ) | 1 ≤ i ≤ n}∪ {(x i , c) | x i ∈ c ∈ C} ∪ {(x i , c) | ¬x i ∈ c ∈ C}∪ {(x i , ⊥ i ), (x i , ⊥ i ), (⊥ i , ⊥ i ), (⊥ i , d i ) | 1 ≤ i ≤ n}∪ {(c, ϕ) | c ∈ C} Sup ={(⊤ i , d i ) | 1 ≤ i ≤ n}" }, { "formula_coordinates": [ 27, 148.68, 523.55, 227.54, 45.34 ], "formula_id": "formula_34", "formula_text": "• For each 1 ≤ i ≤ n and each E ∈ co(H Ψ ) we have -⊤ i , d i ∈ E, -either x i ∈ E or xi ∈ E." }, { "formula_coordinates": [ 27, 171.36, 607.53, 132.3, 12.6 ], "formula_id": "formula_35", "formula_text": "G = {⊤ i , d i | 1 ≤ i ≤ n} ∪ { ψ}" }, { "formula_coordinates": [ 30, 192.32, 141.44, 226.68, 142.86 ], "formula_id": "formula_36", "formula_text": "ψ {ψ} ψ { ψ} ⊥t {⊥t} t {} c 1 {c 1 } c 2 {c 2 } c 3 {c 3 } x 1 {x 1 } x1 {x 1 } x 2 {x 2 } x2 {x 2 } x 3 {x 3 } x3 {x 3 } d 1 {} ⊥ 1 {⊥ 1 } d 2 {} ⊥ 2 {⊥ 2 } d 3 {} ⊥ 3 {⊥ 3 }" }, { "formula_coordinates": [ 30, 178.32, 472.77, 256.38, 87.24 ], "formula_id": "formula_37", "formula_text": "A ={x i , xi , ⊥ i , d i | 1 ≤ i ≤ n} ∪ {c | c ∈ C} ∪ {ψ, ψ, t, ⊥ t } Att ={(x i , xi ), (x i , x i ) | 1 ≤ i ≤ n}∪ {(x i , c) | x i ∈ c ∈ C} ∪ {(x i , c) | ¬x i ∈ c ∈ C}∪ {(x i , ⊥ i ), (x i , ⊥ i ), (⊥ i , d i ) | 1 ≤ i ≤ n}∪ {(c, ψ) | c ∈ C}∪ {(ψ," } ]
10.18653/v1/N19-1423
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b34", "b8", "b23", "b13", "b29", "b17", "b31", "b2", "b21", "b30", "b39", "b40", "b36", "b27", "b9", "b30", "b10", "b32", "b1", "b19", "b12", "b41" ], "table_ref": [], "text": "Large language models based on Transformer (Vaswani et al., 2017) architectures, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2020), and GPT models (Radford et al.; Brown et al., 2020), have gained prominence in recent years for their remarkable state-of-the-art performance in various tasks related to Natural Language Processing (NLP). These works rely on deep networks with millions or even billions of parameters, and the availability of high computational power and large storage capacity plays a key role in their success. In this regard, there has been a proliferation of studies aimed at improving the efficiency of large language models, including knowledge distillation (Hinton et al., 2015, Sanh et al., 2019, Jiao et al., 2020), quantization (Shen et al., 2020), low-rank factorization (Ben Noach and Goldberg, 2020), weight sharing (Lan et al., 2020), weight pruning (Sanh et al., 2020, Xia et al., 2022), and dynamic acceleration (Xin et al., 2020, Goyal et al., 2020).\nPruning has emerged as a promising approach to compress and accelerate DNN models, significantly reducing storage and computational costs. Structured pruning methods deliver a static compact model by removing structured blocks of weights, e.g., heads (Voita et al., 2019, Michel et al., 2019) and encoder layers (Fan et al., 2020). However, removing a large proportion of parameters may result in noticeable accuracy loss. To address this, the distillation paradigm is commonly adopted for recovery training, where the pruned model learns the knowledge delivered from the unpruned model (Sanh et al., 2020). While these pruning methods achieve compelling results, they are static and have a fixed computation route for all inputs, regardless of the differing information redundancy of various sequences.\nAnother pruning approach that we consider in this paper is token pruning, a dynamic pruning method that reduces computation by progressively dropping unimportant tokens in the sequence, allocating an adaptive computational budget to different samples. It is appealing for its similarity to humans, who pay more attention to the more informative tokens.\nWe draw inspiration from the work of (Goyal et al., 2020), who demonstrated that attention-based models accumulate information redundancy as tokens pass through encoder layers. Based on this observation, we propose a dynamic pruning method that downsamples tokens before each encoder layer, in accordance with an information compression demand. To deploy and optimize this compression process, we utilize the information bottleneck (IB) principle (Tishby et al., 2000). IB views a deep neural network as a process of information compression and extraction, optimized by maximizing the mutual information between inputs and labels while controlling the mutual information between the inputs and the hidden representations (Bang et al., 2021). We explore the potential of applying the IB principle to token pruning. However, thus far, token pruning methods rarely achieve large speedups (1.5-3x at most), as they either leave the model parameters intact (Kim et al., 2022) or introduce additional parameters (Guan et al., 2022, Ye et al., 2021). 
In this work, we propose Infor-Coef, combining information bottleneck-based token downsampling with static pruning to create a highly compact and efficient model.\nOur empirical results on the GLUE benchmark demonstrate that Infor-Coef outperforms different static pruning, dynamic pruning, and distillation baselines at various levels of speedup, with a slight accuracy degradation of less than 8%. Specifically, Infor-Coef achieves an 18x FLOPs speedup with padding and a 16x reduction without extra padding tokens. We also show that our IB-based optimization yields better results than the typical l0-norm-based token pruning loss function.\n2 Related Works" }, { "figure_ref": [], "heading": "Structured Pruning with Distillation", "publication_ref": [ "b4", "b30", "b30", "b36", "b27", "b26", "b14", "b9", "b13", "b20", "b30", "b39" ], "table_ref": [], "text": "Pruning searches for a compact subnetwork within an overparameterized model by eliminating redundant parameters and modules. Different pruning granularities, from fine-grained to coarse-grained, have been investigated to reduce the model size, including unstructured pruning that removes individual weights (Chen et al., 2020, Sanh et al., 2020), head pruning in the multi-head attention mechanism (Voita et al., 2019, Michel et al., 2019), intermediate dimension pruning in the feed-forward layer (McCarley et al., 2021, Hou et al., 2020), and entire encoder unit dropping (Fan et al., 2020). Among them, unstructured pruning yields irregular weight elimination and does not necessarily boost efficiency. Structured pruning, targeted at reducing and simplifying certain modules by pruning structured blocks of weights, delivers compact models and achieves speedup.\nDistillation is applied to transfer knowledge from a larger model to a smaller model (Hinton et al., 2015). A distillation objective is commonly adopted and leads to significant performance improvements for training during or after pruning (Lagunas et al., 2021, Sanh et al., 2020). The unified structured pruning framework CoFi (Xia et al., 2022) jointly prunes units of different granularities while distilling from predictions and layer outputs to maintain performance. It prunes 60% of the model size without any accuracy drop." }, { "figure_ref": [], "heading": "Dynamic Token Pruning", "publication_ref": [ "b40", "b24", "b10", "b18", "b19", "b41", "b41", "b12" ], "table_ref": [], "text": "Unlike static pruning strategies with a fixed computation cost, dynamic compression strategies are devised to selectively and adaptively allocate computation conditioned on different inputs. The dynamic approaches include dynamic depth (Xin et al., 2020), dynamic width (Liu et al., 2021), and dynamic token length. Dynamic token length methods accelerate the Transformer model by progressively dropping the tokens of less importance during inference. PoWER-BERT (Goyal et al., 2020), one of the earliest works, recognizes redundant tokens for pruning. This is extended by LAT (Kim and Cho, 2021), which uses Length-Drop, a skimming technique that drops tokens and recovers them in the final layer, followed by an evolutionary search. Learned Token Pruning (Kim et al., 2022) improves PoWER-BERT by introducing soft thresholds optimized in training. However, as discussed in (Ye et al., 2021), their attention weights-based token pruning strategies can lead to a suboptimal selection. TR-BERT (Ye et al., 2021) adopts reinforcement learning on token skimming but is hard to converge. 
Transkimmer (Guan et al., 2022) exploits a parameterized module that functions as a token selector before each encoder layer and can be optimized with the reparameterization trick." }, { "figure_ref": [], "heading": "Information Bottleneck Principle", "publication_ref": [ "b32", "b33", "b0", "b6" ], "table_ref": [], "text": "Information bottleneck (IB) was first proposed in (Tishby et al., 2000). The IB principle can be utilized to interpret and analyze deep neural networks (Tishby and Zaslavsky, 2015). VIB (Alemi et al., 2016) extends it by presenting a variational approximation to get a tractable bound and leverage backpropagation in training. Originally, information bottleneck theory takes the internal representation of an intermediate layer as the hidden variable Z of the input variable X. It aims to extract a representation Z of X that retains the mutual information I(X; Y) between the original input and the target output, while compressing the mutual information I(X; Z). In (Dai et al., 2018), the variational information bottleneck is applied to compress convolutional networks such as LeNet (LeCun et al., 1998) and VGG models (Zhang et al., 2016). To the best of our knowledge, the method proposed in this work is the first to explore the IB principle for dynamic token pruning." }, { "figure_ref": [], "heading": "Dynamic pruning", "publication_ref": [ "b42" ], "table_ref": [], "text": "Figure 1: Overview of Infor-Coef. The dotted bordered rectangle denotes that the units / hidden dimensions are pruned using different kinds of masks. The structured pruning masks and dynamic pruning masks are learned using distillation objectives and the information bottleneck, respectively." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "We propose a collaborative pruning strategy, Infor-Coef, that implements static model pruning (section 3.1) and performs dynamic token downsampling (section 3.2) with a variational information bottleneck objective (section 3.3). We depict the overview of our model structure in Figure 1." }, { "figure_ref": [], "heading": "Static Pruning", "publication_ref": [ "b34", "b39", "b25", "b38", "b39" ], "table_ref": [], "text": "The weights and computations of the Transformer (Vaswani et al., 2017) model mainly come from H (e.g., 12) layers of multi-head attention (MHA) and feed-forward network (FFN) modules. The embedded sequence matrix is x ∈ R L × R d , where L corresponds to the token length and d to the feature dimension (which is usually equal to 768 in BERT models).\nInside BERT, an MHA layer with N h (e.g., 12) heads processes the input sequence in parallel. After the MHA layer, the FFN layer follows, which first projects the processed sequence into a hidden size of F and then down-projects it to the original size to facilitate addition with the residual connection. In the static slenderization, we systematically reduce both the depth (H) and the width (N h, F, d) of the model.\nWe leverage the pruning and distillation strategy from CoFi (Xia et al., 2022). Specifically, we exert masks of different positions and granularities on (1) the feature dimension d; (2) the heads in the MHA layer; (3) the intermediate dimension F in the FFN layer; (4) the entire MHA layer; (5) the entire FFN layer.\nFollowing (Louizos et al., 2018) and (Wang et al., 2020), we generate hard concrete distributions to leverage the l0 regularization. In the forward pass, masks are sampled to prune the corresponding neurons and obtain the overall sparsity s. 
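For intuition, the following is a minimal sketch of how such a hard-concrete gate can be sampled and how an expected sparsity s can be read off its parameters; the per-head gate granularity and the constants β, γ, ζ are common defaults from Louizos et al. (2018), not necessarily the exact training configuration used here.

```python
import torch

BETA, GAMMA, ZETA = 2 / 3, -0.1, 1.1       # common hard-concrete defaults (Louizos et al., 2018)

def hard_concrete_gate(log_alpha):
    """Sample a [0, 1]-clipped hard-concrete gate per prunable unit."""
    u = torch.rand_like(log_alpha).clamp_(1e-6, 1 - 1e-6)
    s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / BETA)  # concrete relaxation
    s_bar = s * (ZETA - GAMMA) + GAMMA                               # stretch to (GAMMA, ZETA)
    return s_bar.clamp(0.0, 1.0)                                     # hard clip: exact 0s and 1s possible

# one learnable gate per attention head of a 12-head MHA layer (illustrative granularity)
log_alpha = torch.zeros(12, requires_grad=True)
z = hard_concrete_gate(log_alpha)                                    # multiplied onto the head outputs

# expected probability that a gate is non-zero, giving a differentiable sparsity estimate
p_keep = torch.sigmoid(log_alpha - BETA * torch.log(torch.tensor(-GAMMA / ZETA)))
sparsity = 1.0 - p_keep.mean()                                       # the s fed to the penalty described next
```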
Given a predefined sparsity ratio ŝ, the l0 penalty is\nL 0 = µ 1 (ŝ -s) + µ 2 (ŝ -t) 2 (1)\nwhere µ 1 and µ 2 are lagrangian multipliers that are updated during training to push the model towards a fixed size.\nSince the removal of weights may lead to large performance degradation, distillation objectives are also added. We implement both layerwise distillation and output prediction distillation in (Xia et al., 2022) from the original model and the pruned model." }, { "figure_ref": [], "heading": "Dynamic Token Downsampling", "publication_ref": [], "table_ref": [], "text": "The hidden representation of a sentence in a MHA layer undergoes inner product operations along the dimension of the sentence's length in a selfattention mechanism, thus leading to a computational complexity that is almost proportional to the square of the sentence's length. With the inputs varying in complexity, we use dynamic token downsampling for sample-wise length reduction before each MHA layer.\nWe adopt the MLP decision layer and reparameterization trick in Guan et al., 2022." }, { "figure_ref": [], "heading": "Token Sampler", "publication_ref": [ "b12", "b41", "b16" ], "table_ref": [], "text": "To achieve the hierarchical token elimination, we sample binary masks corresponding to each token in each encoder layer.\nLet\nh i ∈ R L i × R d denote the i, i ∈ 1, .\n. . , Hth hidden state. Before entering the ith encoder layer, it is passed through a sampling module Sampler i , which generates the likelihood of \"pruning\" each token with probabilities π i ∈ [0, 1] L i and samples z i ∈ {0, 1} L i accordingly. Following (Guan et al., 2022) and TR-BERT (Ye et al., 2021), the Sampler is set to be a two-layer MLP function. It always makes the \"no pruning\" decision at the initial state. We forward the outputs of it to the softmax function to get a Bernoulli parameter:\n(π 0 , π 1 ) = sof tmax(M LP (x))\n(2)\nThe probability of pruning the token π 0 is also used in the loss function, which we would explore in section 3.3. The discrete binary masks are not differentiable. For optimization, we take the reparameterization method, approximating the Bernoulli distribution with the Gumbel-Softmax trick. (Jang et al., 2017) Now the sampler is:\nz = Gumbelsof tmax((π 0 , π 1 ) = one_hot(arg max i∈{0,1} [g i + log π i ])(3)\nwhere g i is drawn from Gumbel(0, 1). For differentiating, Gumbel-Softmax trick replaces the argmax operation with a softmax function." }, { "figure_ref": [], "heading": "Token Pruner", "publication_ref": [ "b18" ], "table_ref": [], "text": "Now we get the pruned hidden states with s i = P runer(h i , z i ), at length L i+1 . During inference we actually prune certain tokens for z i l = 0 in the P runer(h i , z i ), so L i ≥ L i+1 .But during training, we only set the pruned tokens zeroed out to simulate the pruning process, so theoratically we have\ns i = diag{z i }h i , L i ≡ L.\nIn the operation, we do not directly mask the tokens, considering that the zeroed token would affect other tokens in the self-attention mechanism. Instead, we convert the token masks to attention masks by:\nR = exp( QK T √ d h ) M ij = I z i =1 I z j =1 Attn = M ij R ij L i=1 M ij R ij (4)\nwhere Q, K, V denote the query, key, value matrix respectively, and d h stands for the head size. I p equals 1 given p is true, otherwise I p = 0.\nIn this way, we eliminate the effects and cut off the information flow with regard to the masked tokens. 
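As a concrete illustration of Eqs. (2)-(4), here is a minimal PyTorch sketch of the sampling module and the attention-mask construction; the module name, hidden width, and tensor layouts are assumptions made for illustration rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenSampler(nn.Module):
    """Two-layer MLP that emits a per-token keep/prune decision (sketch of Eqs. 2-3)."""
    def __init__(self, d_model: int, d_hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                                 nn.Linear(d_hidden, 2))

    def forward(self, h, tau: float = 1.0):
        logits = self.mlp(h)                                        # (B, L, 2): [prune, keep] scores
        pi = F.softmax(logits, dim=-1)                              # Bernoulli parameters (pi_0, pi_1)
        # straight-through Gumbel-Softmax: hard 0/1 samples, differentiable w.r.t. the logits
        z = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]    # (B, L); 1 means the token is kept
        return z, pi

def masked_attention(q, k, v, z):
    """Turn token masks into attention masks as in Eq. 4 (training-time simulation of pruning)."""
    d_h = q.size(-1)
    r = torch.exp(q @ k.transpose(-2, -1) / d_h ** 0.5)             # un-normalised scores exp(QK^T / sqrt(d_h))
    m = z[:, None, :, None] * z[:, None, None, :]                   # M_ij = 1[z_i = 1] * 1[z_j = 1]
    attn = (m * r) / (m * r).sum(dim=-1, keepdim=True).clamp_min(1e-9)
    return attn @ v                                                 # pruned tokens neither attend nor are attended to

def keep_entropy(pi_keep):
    """Entropy of the per-token keep decisions, reused later by the IB objective of Sec. 3.3."""
    p = pi_keep.clamp(1e-6, 1 - 1e-6)
    return -(p * p.log() + (1 - p) * (1 - p).log()).sum(dim=-1)
```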
Additionally, the pruned tokens in the downsampling are forwarded to the last hidden layer, which is the same as LAT (Kim and Cho, 2021) and Transkimmer(Guan et al., 2022)." }, { "figure_ref": [], "heading": "Variational Information Bottleneck", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce a variational information bottleneck loss to guide the information flow in token downsampling. Basically, we minimize the mutual information before and after the downsampling, while maintaining the mutual information between the preserved tokens and the true labels." }, { "figure_ref": [], "heading": "variational approximation", "publication_ref": [ "b0" ], "table_ref": [], "text": "We use the same notations in section 3.1 and section 3.2. Hence, p(s i |h i ) is defined via the relation\ns i = P runer(h i , z i ) = diag{z i }h i z i ∼ Bernoulli(π i ) π i = Sampler(h i ) (5)\nAnother assumption is, following (IB) :\nx → h 1 → s 1 → • • • → h H → s H → ŷ (6) is a markov chain.\nDuring the training, our goal is to maximize the mutual information of the pruned hidden states and the true label, i.e. I(s i ; y), as well as control the mutual information before and after the pruning, i.e. I(h i ; s i ). Added β ≥ 0 for the tradeoff, we have the variational bottleneck loss function\nJ IB = H i=1 [-I(s i ; y) + βI(s i ; h i )](7)\nHowever, the architecture of BERT does not facilitate tractable computation of (7). We adopt the variational approximation technique in (Alemi et al., 2016) to get its upper bound.\nLet q(y|s i ) be a variational approximation to p(y|s i ) and r(s i ) ∼ N (0, 1) to p(s i ), now the upper bound of -I(s i ; y) + βI(s i ;\nh i ) is -E s i ∼p(s i |x),(x,y)∼D [p(s i |x) log q(y|s i )] + H(y) + βE s i ∼p(s i |h i ) [log p(s i |h i n ) r(s i ) ]\n(8) Please refer to appendix A for the detailed derivation." }, { "figure_ref": [], "heading": "information bottleneck loss", "publication_ref": [], "table_ref": [], "text": "Since here p(s i |x) represents the hidden states of x in the forward pass, and q(y|s i ) equals the final classification output based on s i , the first item in (8) is equivalent to the cross entropy loss.\nSplitting the second item in (8) into two parts:\nE s i ∼p(s i |h i ) [log p(s i |h i n )] -E s i ∼p(s i |h i ) [log r(s i )](9)\nGiven the training set {(x n , y n ), n = 1, . . . , N }, we estimate p(s andh i n is the ith layer's hidden state of x n before entering the Sampler i in the forward pass.Conditioned on h i n , s i and z i is one-to-one. The former part of (9) therefore equals\ni |x n ) = δ s i =s i n where s i n = P runer(z i n , h i n ), z i n = Sampler(h i n ),\nds i p(s i |h i n ) log p(s i |h i n ) = -H(s i |h i n ) (10) = -H(z i |h i n )\nwhere the masks z i = (z i 1 , . . . , z i L ) ∈ {0, 1} L that are conditioned on h i n , are independent variables following\nz i l ∼ Bernoulli(π i l ), l = 1, 2, . . . , L(11)\nTherefore\n-H(z i |h i n ) = L l=1 -H(z i l |h i n ) (12) = L l=1 π i l log π i l + (1 -π i l ) log(1 -π i l )(13\n) But to get the second part in (9), which equals\nds i p(s i |h i n ) log r(s i ) =E z∼Bernoulli L (π l ) [r(s i )](14)\nis computationally challenging, since the discrete probability space has 2 L outcomes. We simply estimate p(s\ni |h i n ) with p(s i |h i n ) ≈ δ s i =s i n s i n = diag{π i n }h i n (15\n)\nin a forward propagation. 
Hence now we get\nds i p(s i |h i n ) log r(s i ) = log N (s i n ; 0, I) (16) = - L 2 log(2π) - 1 2 s i n 2 F\nFinally, we can put everything together( and delete some constants) to get the following objective function, which we try to minimize:\nJ IB = L ce + β( H i=1 -H(z i ) + 1 2 s i n 2 F ) (17)\nwhere L ce is the cross entropy loss, and H(z i ) is the entropy of the ith layer's token masks, computed by (13) .\nIn practice, we split the objective function into three losses, and scale them in terms of layers and size. The main training objective is\nL = L ce + γ 1 L entropy + γ 2 L norm (18)\n4 Experiments" }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b37", "b39", "b39", "b17", "b29", "b10", "b23" ], "table_ref": [], "text": "Datasets and metrics To validate our approach, we apply it on four tasks of GLUE benchmark (Wang et al., 2018) Training Steps We used the BERT base model as our base model and implemented a two-stage training process to create a compact and efficient model.\nIn the first stage, we learn static pruning masks using a sparsity objective and a distillation loss. For more information about this stage, please refer to (Xia et al., 2022). We kept training until arriving at a targeted pruning ratio ∈ {60%, 80%, 90%, 95%}.\nThen we perform the token downsampling instead of the vanilla finetuning process. In specific, we first finetune the model with L ce + γ 1 L entropy until convergence as a warmup. Then we add the L norm to start the token sampling. The ratio of eliminated tokens is adjusted by γ 1 and γ 2 . We set the seed to 42. (See Appendix C for the hyperparameters setting)\nFLOPs and Parameters Calulation We measure the inference FLOPs as a general measurement of the model's computational complexity, which allows us to assess models' speedups independent of their operating environment. We pad a batch of input sequences to the maximum length of the corresponding batch, with a batch size of 32. We calculate the FLOPs based on the model architecture as well as the inputs and the sampled masks. Then the FLOPs is averaged by tasks. When computing model parameters, following (Xia et al., 2022) and (Movement pruning: Adaptive sparsity by finetuning.), we exclude the parameters of the embedding layer.\nBaselines We compare against several baselines, all of which are constructed based on BERT model: (1) TinyBERT (Jiao et al., 2020) and Dis-tillBERT (Sanh et al., 2019): They are representative distillation models, both adopting general distillation and task-specific distillation. We also include TinyBERT without general distillation.\n(2) CoFi : The strong structured pruning model.\n(3) PoWER-BERT (Goyal et al., 2020) and Transkimmer(Guan et al., 2022): Both of them are token pruning models. We did not include LTP(LTP) because it is constructed on RoBerta (Liu et al., 2020)." }, { "figure_ref": [], "heading": "Overall Results", "publication_ref": [ "b39", "b19", "b10", "b39", "b12", "b12" ], "table_ref": [ "tab_2", "tab_2", "tab_1", "tab_3" ], "text": "We begin by showing the overall results of our model in table 2. For a fair comparison, we train two models with a parameter size that equals CoFi-s60 or CoFi-90 so we could use the reported results in (Xia et al., 2022). Notably, \"padding\" in table 2 stands for the padding strategy when implementing the token pruning models. According to (Kim et al., 2022) when input sequences are padded to a fixed length, the results can be exaggerated because the pruning module tends to drop redundant padding tokens. 
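To make the effect of padding on the measured cost concrete, here is a rough back-of-the-envelope sketch, not the FLOPs counter used for the reported numbers, of how per-layer cost grows with the kept token length L at BERT-base sizes; constant-cost terms such as softmax, layer norm, and biases are ignored.

```python
def encoder_layer_flops(L: int, d: int = 768, F: int = 3072) -> int:
    """Rough FLOPs of one BERT-base encoder layer at sequence length L (2 FLOPs per multiply-add)."""
    proj = 4 * L * d * d * 2      # Q, K, V and output projections
    attn = 2 * L * L * d * 2      # QK^T scores plus the attention-weighted sum over values
    ffn = 2 * L * d * F * 2       # up- and down-projection of the feed-forward block
    return proj + attn + ffn

# padding every sequence to 128 tokens vs. a true length of 40 kept tokens
print(encoder_layer_flops(128) / encoder_layer_flops(40))   # roughly 3.3x inflated per-layer cost
```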
However, we measured FLOPs using two types of padding strategies: \"sequence,\" where sequences are padded to a fixed length according to PoWER-BERT (Goyal et al., 2020) (details are provided in Appendix B), and \"batch,\" where sequences are padded according to the batch size.\nOur experiments demonstrate that our model achieves significant speedup with only a minor drop in accuracy (or F1 score on MRPC). We divide the model into three groups, with the backbone of BERT base and CoFi (Xia et al., 2022). As demonstrated in the first group, on average our model achieves 5x speedup with less than 1% accuracy degradation, and achieves 18x speedup with 5% degradation. Compared to CoFi with the same weight compression rate, our models also experience less than 1% accuracy drop but provide an acceleration in inference by 100%. The substantial speedup does not depend on the model size, which is due to the orthogonality of the token downsampling strategy and the static pruning approach. This allows us to achieve both a high level of compression and an acceleration in inference without sacrificing large model performance.\nIn the second group, we present the performance of our model with 40% sparsity, namely Infor-Coef-4x. The comparative methods include the dynamic token pruning model baselines, which typically achieve 1.5-3 FLOPs speedup compared to the vanilla BERT model. Overall, we outperform the token pruning methods both in speedup ratio and accuracy. We also reimplement CoFi-s80, which denotes the CoFi model with 20% weights and report the results, since it has a similar speedup ratio with Infor-Coef-4x. In the third group, we compare Infor-Coef-16x, which has a 16x-18x speedup, Model params padding speedup MRPC(F1) QNLI(acc) SST2( acc Figure 2: Accuracy-Speedup trade-off curve in a 2-4x speedup. We compress our model to the 60% sparsity and apply token downsampling to different ratio. We then compare Infor-Coef(ours) against state-of-the-art pruning and distillation baselines.FLOPs speedup is analyzed using the padding strategy of \"batch\".\nagainst TinyBERT 4 with or without general distillation. Infor-Coef-16x prunes 95% of the model weights but has a competitive performance. Empirically, our models outperform all the comparative models on three tasks in terms of speedup and accuracy.\nTo showcase the flexibility and effectiveness of our models, we also compare their accuracy on GLUE development dataset to other methods while also measuring their inference speedup. These results are presented as tradeoff curves in Figure 2. In particular, we outperformed CoFi on all tasks except SST2, which is consistent with the results presented in Table 2. Overall, our models achieve competitive performance when compared with other methods.\nWe note that our model does not achieve the best performance on SST2 and QNLI in Table 2 and Figure 2. This is probably because the model is heavily influenced by similar training strategies and modules used in CoFi and Transkimmer. For instance, CoFi-s60 has a lower accuracy (86.1) than TinyBERT 4 (86.7) on QNLI. Although our model has higher compression rates compared to Tiny-BERT, it fails to surpass its performance when taking CoFi as our upper bound. Additionally, general distillation requires significant effort to pre-train a single model of a fixed size and computation, meaning that our strategy without pretraining could save substantial computation costs. 
Furthermore, SST2 has a shorter average length of 32 compared to other datasets in the GLUE benchmark (as shown in Table 1). According to Guan et al., 2022, Transkimmer only achieves a 1.58x speedup on this dataset. This suggests that a small input size could handle the acceleration of token pruning methods. and present the results in Table 3. Although the improvement brought by entropy loss is not significant, we observed consistent improvements in the performance of our models across different GLUE datasets. The removal of the norm loss leads to the convergence toward a vanilla BERT model without dynamic accelerating. Theoretically, the entropy loss encourages the samplers to make a more \"certain\" decision, therefore it contributes to the stability of the performance. Including entropy loss only, however, may force the model to preserve all the tokens, leading to the vanilla model. As demonstrated in Figure 3, we also compare our loss with the skim loss item in (Guan et al., 2022), which is essentially the proportion of preserved tokens in each layer. We adopt the same hyperparameter setting in its original paper (Guan et al., 2022). The FLOPs is calculated with the batch padding strategy, and all the models included are pruned with a 40% parameter reduction. The trade-off curve suggests that our information bottleneck loss offers superior tradeoffs between accuracy and inference speedup when compared to skim loss." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Effects of Different Losses", "publication_ref": [], "table_ref": [], "text": "Acceleration Effects of static and dynamic pruning In this work, we propose a novel collaborative approach for model pruning that combines structural pruning and dynamic token pruning. We investigate the effects of this approach by systematically ablating different stages of the training process. Figure 4 provides a visual representation of our proposed approach.\nThe figure demonstrates that the joint pruning outperforms the dynamic token downsampling significantly, having both superior FLOPs compression and accuracy retaining. The dynamic downsampling only gets 1.5-2.5x FLOPs reduction without a large accuracy sacrifice, while our proposed method could reduce the FLOPs by 80%. Furthermore, the performance of joint pruning not only exceeds structured pruning but also provides a larger range of speedup. Compared with structured pruning which prunes 95% parameters to get an approx-imately 10x speedup, Infor-Coef reaches a speedup ratio of larger than 17x, showing significant flexibility.\nFigure 4: Trade-off results between accuracy and remaining FLOPs. We calculate the FLOPs ratio after pruning using batch padding. The \"dynamic only\", \"structrured only\" and \"joint\" mean conducting dynamic token downsampling, static pruning and both two strategy respectively." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a model acceleration approach for large language models that incorporates dynamic pruning and static pruning, optimized by the information bottleneck loss. Our models selectively and adaptively allocate computation on different inputs and hidden states, resulting in a slenderized and efficient subnetwork. We also introduced a novel information bottleneck-based training strategy that outperforms the vanilla l0 norm-like loss for dynamic token reduction. 
Our empirical results demonstrate that our approach can achieve over 16x speedup while maintaining 95% performance. We conclude that different pruning methods are well-adaptable to each other through task-specific fine-tuning, and we hope that our work will inspire future research in the context of pruning large language models. Figure 3: Accuracy-Speedup trade-off curve in a 2-4x speedup. We compress our model to the 60% sparsity and apply token downsampling with the information bottleneck loss and skim loss. FLOPs speedup is analyzed using the padding strategy of \"batch\"." }, { "figure_ref": [], "heading": "A Derivation of Informtion Bottleneck Upper Bound", "publication_ref": [], "table_ref": [], "text": "Using that Kullback Leibler divergence is always positive, we have I(s i ; y) = ds i dyp(s i , y) log p(y|s i ) p(y)\n≥ ds i dyp(s i , y) log q(y|s i ) p(y)\nGiven the training set {(x n , y n ), n = 1, . . . , N }, we estimate p(s i |x n ) = δ s i =s i n where s i n = P runer(z i n , h i n ), z i n = Sampler(h i n ), h i n is the ith layer's hidden state of x n before entering the Sampler i in the forward pass. Leveraging our Markov assumption, p(y, s i ) = dxp(x, y, s i ) = dxp(x)p(s i |x)p(z|x)\nWe can rewrite the mutual information lower bound I(s i ; y) ≥ dxds i dyp(s i |x)p(y|x) log q(y|s i ) -H(y) (21)\n≈ 1 N N n=1\nds i p(s i |x n ) log q(y n |s i )\n-constant(22)\nSince here q(y|s i ) equals the final classification output based on s i , it is equivalant to minimize the cross entropy loss.\nFor the second mutual information item, we let r(s i ) ∼ N (0, 1) be a variational approximation to p(s i ). Using Kullback Leibler divergence again, we have I(s i ; h i ) = ds i dh i p(s i , h " }, { "figure_ref": [], "heading": "C Training Parameters", "publication_ref": [], "table_ref": [], "text": "We provide the hyperparameters used in our experiments as a reference for reimplementing our method. However, we acknowledge that the results may differ slightly depending on various factors such as the hardware devices and package versions.\nMRPC QNLI batch size 32 32 learning rate 1e-5,2e-5,5e-5 1e-5,2e-5 norm coef 5e-4,6e-4 5e-4,7e-4 entropy coef 0,5e-4 3e-4,4e-4 epoch 10 5 MNLI SST2 batch size 32 32 learning rate 1e-5,2e-5 1e-5,2e-5 norm coef 5e-4,4e-4 5e-4,4e-4 entropy coef 4e-4 5e-4,6e-4 epoch 3 10 " } ]
The prevalence of Transformer-based pretrained language models (PLMs) has led to their wide adoption for various natural language processing tasks. However, their excessive overhead leads to large latency and computational costs. Static compression methods allocate fixed computation to different samples, resulting in redundant computation. Dynamic token pruning methods selectively shorten the sequences, but they cannot change the model size and hardly achieve the speedups of static pruning. In this paper, we propose a model acceleration approach for large language models that incorporates dynamic token downsampling and static pruning, optimized by the information bottleneck loss. Our model, Infor-Coef, achieves an 18x FLOPs speedup with an accuracy degradation of less than 8% compared to BERT. This work provides a promising approach to compress and accelerate Transformer-based models for NLP tasks.
Infor-Coef: Information Bottleneck-based Dynamic Token Downsampling for Compact and Efficient language model
[ { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Summary of evaluation datasets.", "figure_data": "DatasetAverage LengthTaskmetricMRPC53Paraphrase F1QNLI51QAacc.MNLI39NLIacc.SST232Sentiment acc.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on GLUE development set. GD denotes general distillation, which distills the student model on a large unlabeled data.", "figure_data": ") MNLI(acc)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation results on GLUE development set with 4.3x compression. We provide the results after removing the entropy loss and the norm loss.", "figure_data": "To investigate the", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "i ) log p(s i |h i ) p(s i ) = ds i dh i p(h i )p(s i |h i ) log p(s i |h i ) p(s i ) (23) ≤ ds i dh i p(h i )p(s i |h i ) log p(s i |h i ) r(s i )Given the training dataset {(x i , y i )} N i=1 , the upper bound can be approximated asds i dh i p(h i )p(s i |h i ) log p(s i |h i ) r(s i )Following(Goyal et al., 2020), we pad the inputs into a fixed length depending on different datasets. Padding length on the evaluation dataset.", "figure_data": "≈1 NN n=1ds i p(s i |h i n ) logp(s i |h i n ) r(s i )(24)=1 NN n=1[ds i p(s i |h i n ) log p(s i |h i n )-ds i p(s i |h i n ) log r(s i )]B Fixed Padded lengthDataset LengthMRPC128MNLI128QNLI128SST264", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Hyper parameters setting.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Wenxi Tan
[ { "authors": "Ian Alexander A Alemi; Joshua V Fischer; Kevin Dillon; Murphy", "journal": "", "ref_id": "b0", "title": "Deep variational information bottleneck", "year": "2016" }, { "authors": "Seojin Bang; Pengtao Xie; Heewook Lee; Wei Wu; Eric Xing", "journal": "Association for the Advancement of Artificial Intelligence", "ref_id": "b1", "title": "Explaining a black-box by using a deep variational information bottleneck approach", "year": "2021" }, { "authors": "Matan Ben; Noach ; Yoav Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Compressing pre-trained language models by matrix decomposition", "year": "2020" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Tianlong Chen; Jonathan Frankle; Shiyu Chang; Sijia Liu; Yang Zhang; Zhangyang Wang; Michael Carbin", "journal": "", "ref_id": "b4", "title": "The lottery ticket hypothesis for pretrained bert networks", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b5", "title": "", "year": "" }, { "authors": "Bin Dai; Chen Zhu; Baining Guo; David Wipf", "journal": "", "ref_id": "b6", "title": "Compressing neural networks using the variational information bottleneck", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b7", "title": "", "year": "" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Angela Fan; Edouard Grave; Armand Joulin", "journal": "", "ref_id": "b9", "title": "Reducing transformer depth on demand with structured dropout", "year": "2020" }, { "authors": "Saurabh Goyal; Anamitra Roy Choudhury; Saurabh Raje; Venkatesan Chakaravarthy; Yogish Sabharwal; Ashish Verma", "journal": "", "ref_id": "b10", "title": "PoWER-BERT: Accelerating BERT inference via progressive word-vector elimination", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b11", "title": "", "year": "" }, { "authors": "Zhengyi Yue Guan; Jingwen Li; Zhouhan Leng; Minyi Lin; Guo", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Transkimmer: Transformer learns to layer-wise skim", "year": "2022" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b13", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Lu Hou; Zhiqi Huang; Lifeng Shang; Xin Jiang; Xiao Chen; Qun Liu", "journal": "", "ref_id": "b14", "title": "Dynabert: Dynamic bert with adaptive width and depth", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b15", "title": "", "year": "" }, { "authors": "Eric Jang; Shixiang Gu; Ben Poole", "journal": "", "ref_id": "b16", "title": "Categorical reparameterization with gumbel-softmax", "year": "2017" }, { "authors": "Xiaoqi Jiao; Yichun Yin; Lifeng Shang; Xin Jiang; Xiao Chen; Linlin Li; Fang Wang; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "TinyBERT: Distilling BERT for natural language understanding", "year": "2020" }, { "authors": "Gyuwan Kim; Kyunghyun Cho", 
"journal": "", "ref_id": "b18", "title": "Lengthadaptive transformer: Train once with length drop, use anytime with search", "year": "2021" }, { "authors": "Sehoon Kim; Sheng Shen; David Thorsley; Amir Gholami; Woosuk Kwon; Joseph Hassoun; Kurt Keutzer", "journal": "Association for Computing Machinery", "ref_id": "b19", "title": "Learned token pruning for transformers", "year": "2022" }, { "authors": "François Lagunas; Ella Charlaix; Victor Sanh; Alexander Rush", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Block pruning for faster transformers", "year": "2021" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b21", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2020" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "Proceedings of the IEEE", "ref_id": "b22", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b23", "title": "Ro{bert}a: A robustly optimized {bert} pretraining approach", "year": "2020" }, { "authors": "Zejian Liu; Fanrong Li; Gang Li; Jian Cheng", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "EBERT: Efficient BERT inference with dynamic structured pruning", "year": "2021" }, { "authors": "Christos Louizos; Max Welling; Diederik P Kingma", "journal": "", "ref_id": "b25", "title": "Learning sparse neural networks through l 0 regularization", "year": "2018" }, { "authors": "J S Mccarley; Rishav Chakravarti; Avirup Sil", "journal": "", "ref_id": "b26", "title": "Structured pruning of a bert-based question answering model", "year": "2021" }, { "authors": "Paul Michel; Omer Levy; Graham Neubig", "journal": "Curran Associates Inc", "ref_id": "b27", "title": "Are Sixteen Heads Really Better than One", "year": "2019" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b28", "title": "Language models are unsupervised multitask learners", "year": "" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b29", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Victor Sanh; Thomas Wolf; Alexander M Rush", "journal": "", "ref_id": "b30", "title": "Movement pruning: Adaptive sparsity by finetuning", "year": "2020" }, { "authors": "Sheng Shen; Zhen Dong; Jiayu Ye; Linjian Ma; Zhewei Yao; Amir Gholami; Michael W Mahoney; Kurt Keutzer", "journal": "", "ref_id": "b31", "title": "Q-bert: Hessian based ultra low precision quantization of bert", "year": "2020" }, { "authors": "Naftali Tishby; Fernando C Pereira; William Bialek", "journal": "", "ref_id": "b32", "title": "The information bottleneck method", "year": "2000" }, { "authors": "Naftali Tishby; Noga Zaslavsky", "journal": "", "ref_id": "b33", "title": "Deep learning and the information bottleneck principle", "year": "2015" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran 
Associates; Inc ", "journal": "", "ref_id": "b35", "title": "", "year": "" }, { "authors": "Elena Voita; David Talbot; Fedor Moiseev; Rico Sennrich; Ivan Titov", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", "year": "2019" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Ziheng Wang; Jeremy Wohlwend; Tao Lei", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Structured pruning of large language models", "year": "2020" }, { "authors": "Mengzhou Xia; Zexuan Zhong; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Structured pruning learns compact and accurate models", "year": "2022" }, { "authors": "Ji Xin; Raphael Tang; Jaejun Lee; Yaoliang Yu; Jimmy Lin", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "DeeBERT: Dynamic early exiting for accelerating BERT inference", "year": "2020" }, { "authors": "Deming Ye; Yankai Lin; Yufei Huang; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "TR-BERT: Dynamic token reduction for accelerating BERT inference", "year": "2021" }, { "authors": "Xiangyu Zhang; Jianhua Zou; Kaiming He; Jian Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b42", "title": "Accelerating very deep convolutional networks for classification and detection", "year": "2016" } ]
[ { "formula_coordinates": [ 3, 350.08, 631.35, 174.33, 13.13 ], "formula_id": "formula_0", "formula_text": "L 0 = µ 1 (ŝ -s) + µ 2 (ŝ -t) 2 (1)" }, { "formula_coordinates": [ 4, 98.89, 301.9, 155.54, 13.58 ], "formula_id": "formula_1", "formula_text": "h i ∈ R L i × R d denote the i, i ∈ 1, ." }, { "formula_coordinates": [ 4, 108.43, 465.41, 143.14, 10.63 ], "formula_id": "formula_2", "formula_text": "(π 0 , π 1 ) = sof tmax(M LP (x))" }, { "formula_coordinates": [ 4, 101.3, 608.88, 187.84, 35.45 ], "formula_id": "formula_3", "formula_text": "z = Gumbelsof tmax((π 0 , π 1 ) = one_hot(arg max i∈{0,1} [g i + log π i ])(3)" }, { "formula_coordinates": [ 4, 306.14, 99.52, 113.6, 11.76 ], "formula_id": "formula_4", "formula_text": "s i = diag{z i }h i , L i ≡ L." }, { "formula_coordinates": [ 4, 363.64, 193.13, 160.77, 77.39 ], "formula_id": "formula_5", "formula_text": "R = exp( QK T √ d h ) M ij = I z i =1 I z j =1 Attn = M ij R ij L i=1 M ij R ij (4)" }, { "formula_coordinates": [ 4, 368.69, 578.78, 155.72, 66.07 ], "formula_id": "formula_6", "formula_text": "s i = P runer(h i , z i ) = diag{z i }h i z i ∼ Bernoulli(π i ) π i = Sampler(h i ) (5)" }, { "formula_coordinates": [ 4, 306.14, 680.23, 218.27, 38.42 ], "formula_id": "formula_7", "formula_text": "x → h 1 → s 1 → • • • → h H → s H → ŷ (6) is a markov chain." }, { "formula_coordinates": [ 5, 103.54, 110.55, 185.59, 33.71 ], "formula_id": "formula_8", "formula_text": "J IB = H i=1 [-I(s i ; y) + βI(s i ; h i )](7)" }, { "formula_coordinates": [ 5, 73.29, 238.41, 216.12, 82.03 ], "formula_id": "formula_9", "formula_text": "h i ) is -E s i ∼p(s i |x),(x,y)∼D [p(s i |x) log q(y|s i )] + H(y) + βE s i ∼p(s i |h i ) [log p(s i |h i n ) r(s i ) ]" }, { "formula_coordinates": [ 5, 71.02, 469.24, 218.12, 27.25 ], "formula_id": "formula_10", "formula_text": "E s i ∼p(s i |h i ) [log p(s i |h i n )] -E s i ∼p(s i |h i ) [log r(s i )](9)" }, { "formula_coordinates": [ 5, 70.87, 524.5, 218.27, 28.33 ], "formula_id": "formula_11", "formula_text": "i |x n ) = δ s i =s i n where s i n = P runer(z i n , h i n ), z i n = Sampler(h i n )," }, { "formula_coordinates": [ 5, 122.2, 635.89, 166.94, 55.17 ], "formula_id": "formula_12", "formula_text": "ds i p(s i |h i n ) log p(s i |h i n ) = -H(s i |h i n ) (10) = -H(z i |h i n )" }, { "formula_coordinates": [ 5, 94.18, 757.01, 194.95, 14.27 ], "formula_id": "formula_13", "formula_text": "z i l ∼ Bernoulli(π i l ), l = 1, 2, . . . 
, L(11)" }, { "formula_coordinates": [ 5, 317.99, 89.73, 206.42, 90.21 ], "formula_id": "formula_14", "formula_text": "-H(z i |h i n ) = L l=1 -H(z i l |h i n ) (12) = L l=1 π i l log π i l + (1 -π i l ) log(1 -π i l )(13" }, { "formula_coordinates": [ 5, 357.81, 208.84, 166.6, 38.26 ], "formula_id": "formula_15", "formula_text": "ds i p(s i |h i n ) log r(s i ) =E z∼Bernoulli L (π l ) [r(s i )](14)" }, { "formula_coordinates": [ 5, 360.07, 278.89, 159.79, 51.07 ], "formula_id": "formula_16", "formula_text": "i |h i n ) with p(s i |h i n ) ≈ δ s i =s i n s i n = diag{π i n }h i n (15" }, { "formula_coordinates": [ 5, 519.87, 309.38, 4.54, 9.46 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 5, 356.29, 357.84, 168.12, 65.58 ], "formula_id": "formula_18", "formula_text": "ds i p(s i |h i n ) log r(s i ) = log N (s i n ; 0, I) (16) = - L 2 log(2π) - 1 2 s i n 2 F" }, { "formula_coordinates": [ 5, 316.6, 483.85, 207.81, 33.71 ], "formula_id": "formula_19", "formula_text": "J IB = L ce + β( H i=1 -H(z i ) + 1 2 s i n 2 F ) (17)" }, { "formula_coordinates": [ 5, 330.72, 611.53, 193.69, 10.63 ], "formula_id": "formula_20", "formula_text": "L = L ce + γ 1 L entropy + γ 2 L norm (18)" }, { "formula_coordinates": [ 11, 347.42, 455.98, 41.64, 33.58 ], "formula_id": "formula_23", "formula_text": "≈ 1 N N n=1" }, { "formula_coordinates": [ 11, 357.72, 495.12, 166.69, 9.81 ], "formula_id": "formula_24", "formula_text": "-constant(22)" } ]
2023-11-28
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://github.com/HKUST-LongGroup/RECODE Holding: a person having an object in their hands Carrying: a person supporting an object in their hands Which of the pictures are \"holding\" and \"carrying\"? AC are \"holding\", and BD are \"carrying\" holding subject with hands subject is standing object with a handle or a grip that is held by the subject's hand carrying subject with hands, with arms subject in a more engaged position, perhaps walking or running subject with both hands and arms supporting a heavy object CD are \"holding\", and AB are \"carrying\" " }, { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "Pretrained vision-language models, such as CLIP, have demonstrated strong generalization capabilities, making them promising tools in the realm of zero-shot visual recognition. Visual relation detection (VRD) is a typical task that identifies relationship (or interaction) types between object pairs within an image. However, naively utilizing CLIP with prevalent class-based prompts for zero-shot VRD has several weaknesses, e.g., it struggles to distinguish between different fine-grained relation types and it neglects essential spatial information of two objects. To this end, we propose a novel method for zero-shot VRD: RECODE, which solves RElation detection via COmposite DEscription prompts. Specifically, RECODE first decomposes each predicate category into subject, object, and spatial components. Then, it leverages large language models (LLMs) to generate description-based prompts (or visual cues) for each component. Different visual cues enhance the discriminability of similar relation categories from different perspectives, which significantly boosts performance in VRD. To dynamically fuse different cues, we further introduce a chain-of-thought method that prompts LLMs to generate rea- " }, { "figure_ref": [ "fig_0", "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b0", "b0", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15" ], "table_ref": [], "text": "Recent advances in pretrained vision-language models (VLMs) [1,2,3,4] (e.g., CLIP [1]), have shown remarkable generalization ability and achieved impressive performance on zero-shot recognition tasks. Specifically, CLIP employs two encoders: an image encoder that converts images into visual features, and a text encoder that transforms sentences into semantic features. This design allows the encoders to map different modalities into a common semantic space. When the inputs to the text encoder are class-based prompts, such as \"A [CLASS]\", \"A photo of [CLASS]\", CLIP can compare the image and prompts in the shared semantic space, thereby enabling zero-shot recognition of novel categories [1]. Compared to object recognition, visual relation detection (VRD) is much more challenging, which needs to identify the relation types between object pairs within an image in the form of ⟨subject, relation, object⟩ [5,6,7,8,9]. It differs from object recognition in that it requires an understanding of how objects are related to each other. By crafting class-based prompts to describe these relation types, CLIP could potentially be extended to perform zero-shot VRD.\nHowever, this straightforward baseline presents notable challenges. 
Imagine you are a child asked to distinguish relation categories \"holding\" and \"carrying\", both involving a person and an object. Based on the similar concepts of \"holding\" (i.e., a person having an object in their hands) and \"carrying\" (i.e., a person supporting an object in their hands), it would be difficult to determine the correct prediction (cf., Figure 1(a)). In other words, class-based prompts of \"holding\" and \"carrying\" might be projected to adjacent locations in semantic space by CLIP, leading to a relation sensitivity issue: CLIP struggles to differentiate between the subtle nuances of similar relations. Secondly, class-based prompts overlook the unique spatial cues inherent to each relation category, leading to a spatial discriminability issue. The \"holding\" category generally suggests the object being at a certain height and orientation relative to the person, while \"carrying\" implies a different spatial position, typically with the object located lower and possibly supported by the person's entire body. The neglect of spatial cues leads to inaccuracies in distinguishing between such spatial-aware relation categories. Moreover, applying CLIP in this manner brings about a computational efficiency issue. Using CLIP requires cropping each union region of a subject-object pair separately from the original image (i.e., N2 crops for N proposals), leading to computational inefficiencies. Nonetheless, we humans can distinguish relation categories from different visual cues. For example, from the subject's perspective, we could think that in the case of \"holding\", a person might be standing while having an object, such as an umbrella, in their hand. Meanwhile, in the case of \"carrying\", a person should be in a more engaged position, perhaps walking or running with both hands and arms supporting a heavy object, like a suitcase. In addition, spatial cues also play an important role in identifying these relation categories. For example, when a person is carrying an umbrella, the umbrella is usually positioned lower and closer to the person's body compared to when the person is holding an umbrella. Based on these visual cues, we can easily identify scenarios such as \"person-holding-umbrella\" and \"person-carrying-umbrella\" as in Figure 1(c).\nInspired by our humans' ability to extract and utilize different visual cues, we present a novel method for zero-shot VRD: RECODE, which classifies RElation via COmposite DEscriptions. It first uses large language models (LLMs) [10], to generate detailed and informative descriptions 2 for different components of relation categories, such as subject, object, and spatial. These descriptions are then used as description-based prompts for the CLIP model, enabling it to focus on specific visual features that help distinguish between similar relation categories and improve VRD performance. Specifically, for the subject and object components, these prompts include visual cues such as appearance (e.g., with leg), size (e.g., small), and posture (e.g., in a sitting posture). For the spatial component, these prompts include cues related to the spatial relationships between objects, such as relative position and distance. By incorporating different visual cues, RECODE enhances the discriminability of similar relation categories, such as \"riding\" and \"mounted\" based on the different postures of the subject, e.g., \"seated on the back of animal\" for the subject of \"riding\". 
Similarly, spatial visual cues can be used to differentiate between \"laying on\" and \"holding\" based on the relative position between the subject and object, such as \"subject above object\" and \"subject under object\" (cf., Figure 2).\nIn addition, we explore the limitations of several description generation prompts for visual cues, e.g., the relation class description prompt [11], and then design a guided relation component description prompt that utilizes the high-level object categories to generate more accurate visual cues for each relation category. For instance, if the high-level category of the object is \"animal\", the generated object descriptions for relation \"riding\" are tailored to the \"animal\" category, e.g., \"with four legs\", instead of the \"product\" category, e.g., \"with wheels\". Meanwhile, to better fuse the evidence from different visual cues, we further leverage LLMs to predict reasonable weights for different components. Particularly, we design a chain-of-thought (CoT) method [12] to break down this weight assignment problem into smaller, more manageable pieces, and prompt the LLM to generate a series of rationales and weights.\nTo evaluate our RECODE, we conducted experiments on four benchmark datasets: Visual Genome (VG) [13] and GQA [14] datasets for scene graph generation (SGG), and HICO-DET [15] and V-COCO [16] datasets for human-object interaction (HOI) detection. Experimental results prove the generalization and interpretability of our method. In summary, we made three main contributions in this paper: 1) We analyze the weaknesses of the prevalent class-based prompt for zero-shot VRD in detail and propose a novel solution, RECODE. RECODE leverages the power of LLMs to generate description-based prompts (visual cues) for each component of the relation class, enhancing the CLIP model's ability to distinguish between various relation categories. 2) We introduce a chain-of-thought method that breaks down the problem into smaller, more manageable pieces, allowing the LLM to generate a series of rationales for each cue, ultimately leading to reasonable weights for each component. 3) We conduct experiments on four benchmark datasets and demonstrate the effectiveness and interpretability of our method." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [ "b4", "b16", "b0", "b10" ], "table_ref": [], "text": "Typically, VRD comprises two sub-tasks: object detection and relation classification [5]. Since zero-shot object detection has been extensively studied [17,1,11], in this paper, we primarily focus on zero-shot relation classification. Specifically, given the bounding boxes (bboxes) {b i } and object categories {o i } of all objects, our target is to predict the visual relation (or predicate/interaction) categories {r ij } between pairwise objects. To facilitate presentation, we use s, o, and p to denote the subject, object, and their spatial position in a triplet respectively, and r to denote the relation category. A straightforward zero-shot baseline crafts a class-based prompt for each relation category, such as \"[REL-CLS]\" or \"a photo of [REL-CLS]\". Each prompt is then passed through the text encoder T(•) to get semantic embedding t, while the union region of a subject-object pair is passed through the image encoder V(•) to get visual embedding v. The cosine similarity between v and t of different relation categories is calculated and processed by a softmax function to obtain the probability distribution over all relation categories."
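To make this class-based baseline concrete, the following is a minimal sketch of how it could be wired up with the publicly available OpenAI `clip` package and the ViT-B/32 backbone used later in the experiments; the function and variable names (e.g., `classify_relation`, `union_box`, `rel_classes`) are illustrative and not taken from the released code.

```python
# Minimal sketch of the class-based CLIP baseline (not the authors' implementation).
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def classify_relation(image_path, union_box, rel_classes):
    """union_box = (x1, y1, x2, y2) covering the subject-object pair."""
    # Class-based prompts, e.g. "a photo of riding".
    prompts = [f"a photo of {r}" for r in rel_classes]
    tokens = clip.tokenize(prompts).to(device)

    # Crop the union region and encode it with the image encoder V(.).
    region = Image.open(image_path).convert("RGB").crop(union_box)
    image = preprocess(region).unsqueeze(0).to(device)

    with torch.no_grad():
        v = model.encode_image(image)        # visual embedding v
        t = model.encode_text(tokens)        # semantic embeddings t, one per relation
        v = v / v.norm(dim=-1, keepdim=True)
        t = t / t.norm(dim=-1, keepdim=True)
        sims = (v @ t.T).squeeze(0)          # cosine similarities between v and each t
    return sims.softmax(dim=-1)              # distribution over relation categories

# Example: probs = classify_relation("img.jpg", (30, 40, 220, 300), ["riding", "holding", "carrying"])
```

This is exactly the setup whose relation sensitivity, spatial discriminability, and efficiency issues motivate the decomposition introduced next.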
}, { "figure_ref": [ "fig_2" ], "heading": "Zero-shot VRD with Composed Visual Cues", "publication_ref": [], "table_ref": [], "text": "To overcome the limitations of class-based prompts, we propose a novel approach RECODE for zero-shot VRD. It consists of three parts: visual feature decomposing, semantic feature decomposing, and relation classification (cf., Figure 3). In the first two parts, we decompose the visual features of the triplet into subject, object, and spatial features, and then generate semantic features for each component. In the last part, we calculate the similarities between the decomposed visual features and a set of semantic features, and aggregate them to get the final predictions over all relations.\nVisual Feature Decomposing. To enhance spatial discriminability and computational efficiency, we decompose the visual features of a triplet into subject, object, and spatial features. For subject and object features, we crop the regions of the subject and object from the original image using the given bboxes b s and b o , and encode them into visual embeddings v s and v o using the image encoder V (•) of CLIP. For spatial features, we aim to obtain the spatial relationship between the subject and object based on their bounding boxes. However, directly obtaining all spatial images based on the given bounding boxes is computationally expensive due to the diversity of spatial positions (N 2 each image). To address this, we simulate the spatial relationship between the subject and object using a finite set of spatial images, represented by red and green bboxes respectively. We define four attributes (shape, size, relative position, and distance) based on bounding box properties. Each attribute is assigned a finite set of values to construct a finite set of simulated spatial images. For a given triplet, we match the calculated attribute values with the most similar simulated image 3 . The matched spatial image is then encoded into a visual embedding v p using V (•) of CLIP.\nSemantic Feature Decomposing. To improve the CLIP model's ability to distinguish between different relation classes, we incorporate a set of description-based prompts D to augment the original class-based prompt for each relation category. For the subject and object components, we generate a set of description-based prompts D s and D o to provide additional visual cue information, the generation process is described in Sec. 2.2. These prompts contain object categories with specific visual cues that highlight the unique characteristics of the relation being performed, e.g., \"women, with legs\", which enhances the discriminability between similar relation categories. For the spatial component, it only contains a set of description-based prompts D p that include information about the relative position and distance between the subject and object in the image. By incorporating this additional information, we aim to distinguish between relations based on spatial location. After generating these sets of description-based prompts, we obtain semantic embeddings {t ds i }, {t do i }, and {t dp i } using a text encoder T (•), separately. These embeddings, along with the class-based prompt embedding t c , are used for relation classification.\nRelation Classification. In this step, we compute the similarity score between the visual and semantic features to obtain the relation probability distribution. 
We first calculate the cosine similarity ϕ(•, •) between each visual embedding and semantic embedding for each relation category r. The final score incorporates both class-based and description-based prompts, and is calculated as follows:\nS(r) = \underbrace{\phi(v_s, t_c) + \phi(v_o, t_c)}_{\text{class-based prompts}} + \underbrace{\sum_{k \in \{s,o,p\}} \frac{w_k}{|D_k(r)|} \sum_{d_i^k \in D_k(r)} \phi(v_k, t_{d_i^k})}_{\text{description-based prompts}}, \quad (1)\nwhere w_k represents the importance of visual cues for each component k ∈ {s, o, p}, and |D_k(r)| denotes the number of visual cues in D_k(r) for relation category r. We compute the similarity of individual visual cues for each component and then obtain their average. The weights of different components are determined by an LLM, which will be discussed in Sec. 2.2. Finally, we apply a softmax operation to the scores to obtain the probability distribution over all relation categories." }, { "figure_ref": [], "heading": "Visual Cue Descriptions and Weights Generation", "publication_ref": [ "b9" ], "table_ref": [], "text": "LLMs, such as GPT [10], have been shown to contain significant world knowledge. In this section, we present the process of generating the visual cue descriptions D_s, D_o, and D_p, as well as the weights w_s, w_o, and w_p for each component of each relation category using LLMs." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3", "fig_3", "fig_3" ], "heading": "Visual Cue Descriptions", "publication_ref": [ "b10" ], "table_ref": [], "text": "In this section, we explore methods for generating descriptions of visual cues for relation decomposition. Inspired by work on zero-shot image classification [11], we first propose the relation class description prompt, which generates descriptions from a class-level perspective (cf., Figure 4(a)).\nIt has the advantage of producing descriptions that are easy to interpret and understand. However, it may result in overly diverse and information-rich descriptions that could hinder the extraction of meaningful visual cues, e.g., \"speed of the person\" in Figure 4(a).\nTo address this limitation, we then consider another relation component description prompt, which involves decomposing the relation into its subject and object components and generating descriptions of their visual features separately (cf., Figure 4(b)). While this type of prompt allows for more focused and specific descriptions of visual cues, it may not be effective in capturing the variations in visual features between different subject-object category pairs. For example, \"man-riding-horse\" and \"person-riding-bike\" typically have totally different visual features for the object. The visual cues \"reins\" and \"saddle\" of the object in Figure 4(b) are inappropriate for a \"bike\".\nTherefore, we design the guided relation component description prompt. It builds upon the second method by incorporating the high-level category information of the object into the generation process, leading to more accurate and informative descriptions of the visual features of both the subject and object components (cf., Figure 4(c)). To achieve this, we classify the object into high-level classes, such as \"human\", \"animal\", and \"product\", to guide the description generation. For example, \"bike\" is classified as \"product\", and \"horse\" is classified as \"animal\".
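As a small illustration of how the guided, component-wise descriptions could be organized at inference time, the generated cue prompts can be stored per relation and per high-level subject/object class and simply looked up for a given object pair. This is our own sketch rather than the authors' released code; most cue strings are taken or abbreviated from the paper's examples, and the remaining entries are illustrative placeholders.

```python
# Sketch: description-based prompts keyed by (relation, high-level subject class,
# high-level object class). The fallback to "product" for unlisted objects is an assumption.
HIGH_LEVEL = {"man": "human", "woman": "human", "horse": "animal",
              "dog": "animal", "bike": "product", "surfboard": "product"}

DESCRIPTIONS = {
    ("riding", "human", "animal"): {
        "subject": ["person, seated on the back of the animal"],
        "object":  ["animal, with four legs", "animal, with a harness or saddle on its body"],
        "spatial": ["square subject above horizontal object with a small distance"],
    },
    ("riding", "human", "product"): {
        "subject": ["person, with hands on the handlebars"],   # illustrative placeholder
        "object":  ["product, with wheels"],
        "spatial": ["square subject above horizontal object with a small distance"],
    },
}

def cue_prompts(relation, subj_cls, obj_cls):
    key = (relation, HIGH_LEVEL.get(subj_cls, "product"), HIGH_LEVEL.get(obj_cls, "product"))
    return DESCRIPTIONS.get(key, {"subject": [], "object": [], "spatial": []})

# cue_prompts("riding", "man", "horse") yields animal-specific cues,
# while cue_prompts("riding", "man", "bike") yields product-specific ones.
```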
This allows for the separate generation of visual feature descriptions for each high-level object class, e.g., \"a harness or saddle on its body\" for \"animal\", resulting in more precise and relevant visual cues for each relation category." }, { "figure_ref": [], "heading": "Visual Cue Weights", "publication_ref": [ "b11", "b17" ], "table_ref": [], "text": "Intuitively, different combinations of visual cues may have varying degrees of importance in relation classification. For example, for relation \"looking at\", the visual cue \"with visible features\" of the object may not be as informative as the visual cue \"with eye\" of the subject. To account for this, we leverage the impressive knowledge and reasoning abilities of LLMs to analyze the discriminative power of different visual cues and dynamically assign weights accordingly. Specifically, we provide each combination of visual cues as input to the LLM and prompt it to determine the appropriate weight for each cue for distinguishing the given predicate. The prompts used for this purpose are shown in Figure 5.\nChain-of-Thought (CoT) Prompting. To ensure the generated weights are reasonable, we utilize a CoT method that has demonstrated remarkable reasoning abilities [12,18]. Specifically, we prompt the LLM to generate rationales by using the stepwise reasoning prompt \"Let's think step by step!\" to break down the problem into smaller, more manageable pieces. The LLM then generates a series of rationales that lead to reasonable weights. For example, in Figure 5, we demonstrate the importance of the CoT method in generating more accurate weights. Without the stepwise reasoning prompt, the LLM generates the same weight for both the subject and object visual cues for \"looking at\", which is clearly unreasonable. However, with the CoT prompt, the LLM is able to analyze each cue step by step, leading to a more accurate assignment of weights, i.e., the cues about the subject are relatively more important. In order to standardize the format of the strings generated by LLMs for extracting different components of visual cues and weights, we make certain modifications to the prompts for descriptions and weights." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "Experiment setup", "publication_ref": [ "b12", "b4", "b18", "b19", "b13", "b20", "b21", "b22" ], "table_ref": [], "text": "Datasets. We evaluated our method on four zero-shot VRD benchmarks: 1) VG [13] contains 26,443 images for testing, each annotated with object and predicate labels to form a scene graph. Following previous works [5], we used the pre-processed VG with 150 object classes. We adopted the 24 semantic predicate classes proposed in [19,20], as they are more informative and challenging to classify. 2) GQA [14] is a large-scale SGG dataset. We used the same split provided by [21], which contains 8,208 images for testing with 200 object classes. As for predicate classes, we selected 26 semantic predicate classes by referring to VG. 3) HICO-DET [15] contains 9,658 testing images annotated with 600 HOI triplets derived from combinations of 117 verb classes and 80 object classes. 4) V-COCO [16] comprises 4,946 testing images annotated with 29 action categories.\nEvaluation Metrics. For SGG datasets (i.e., VG and GQA), we reported Recall@K (R@K), which indicates the proportion of ground-truths that appear among the top-K confident predictions, and mean Recall@K (mR@K), which averages R@K scores calculated for each category separately [22].\nFor HOI datasets (i.e., HICO-DET and V-COCO), we reported mean Average Precision (mAP) [23].\nImplementation Details. For the LLM, we employed GPT-3.5-turbo, a highly performant variant of the GPT model.
As for CLIP, we leveraged OpenAI's publicly accessible resources, specifically opting for the Vision Transformer with a base configuration (ViT-B/32) as the default backbone.\nSettings. The bounding box and category of objects were given in all experiments. We compared our RECODE with two baselines: 1) CLS, which uses relation-CLasS-based prompts (e.g., \"riding\") to compute the similarity between the image and text. 2) CLSDE, which uses prompts of relation CLasS DEscription as shown in Figure 4(a). Each component of the proposed framework can serve as a plug-and-play module for zero-shot VRD. Specifically, 1) Filter, which denotes filtering those unreasonable predictions (e.g., kid-eating-house) with the rules generated by GPT. 2) Cue, which denotes using description-based prompts (Sec. 2.1). 3) Spatial, which denotes using spatial images as additional features. 4) Weight, which denotes using dynamic weights generated by GPT to determine the importance of each feature, i.e., visual cue weights." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "In this work, we evaluated the prediction performance of the proposed framework on two related tasks, i.e., SGG and HOI. The former outputs a list of relation triplets ⟨sub,pred,obj⟩, while the latter fixes the category of sub to human. Overall, our method achieved significant improvement on the two tasks compared to the CLS baseline, which shows the superiority of our method.\nEvaluation on HOI. Since the standard HOI evaluation procedure already filters out unreasonable predictions, RECODE ⋆ was not evaluated here. From the results in Table 2, we can observe that the performance gains were lower than those on SGG, e.g., 0.0% to 0.7% gains on HICO-DET and 0.4% to 0.5% gains on V-COCO. The reasons are two-fold. On the one hand, since the category of the subject is always a human, its features are too similar to be distinguished by CLIP. On the other hand, some of the actions are very similar in appearance. For example, distinguishing between actions like \"person-throw-sports ball\" and \"person-catch-sports ball\" is challenging due to their visual similarity." }, { "figure_ref": [], "heading": "Diagnostic Experiment", "publication_ref": [ "b5" ], "table_ref": [ "tab_3", "tab_4" ], "text": "Architectures. We investigated the impact of changing the architectures of CLIP, as shown in Table 3. From the results, we can observe consistent improvements regardless of the architecture used.\nKey Components. We further analyzed the effect of each key component on the test set of VG, as shown in Table 4. The first row refers to the CLS baseline. Four crucial conclusions can be drawn. First, with the guidance of Cue, consistent improvements can be observed, e.g., 0.2% to 3.4% gains on R@K w/o Filter and 1.3% to 4.1% gains on R@K with Filter. Second, by introducing the spatial feature, the relative position of subject and object is considered, resulting in notable performance gains on R@K (0.8% to 1.7%) and mR@K (0.3% to 1.0%) w/o Filter compared to just using Cue. This is because the spatial feature is of importance for relation detection [6]. Third, benefiting from the impressive reasoning ability of LLMs, the proposed weighting strategy can determine the importance of different cues, thus achieving further improvements, e.g., 0.5% to 1.1% gains on R@K compared to average aggregation. Fourth, by filtering those unreasonable predictions, consistent improvements can be observed.
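For intuition, the Filter step can be thought of as a mask over the score vector of Eq. (1) built from the LLM's yes/no judgments. The sketch below assumes those judgments have already been collected into lookup tables and that masking happens before the softmax; both the storage format and the object-side question wording are our assumptions rather than details stated in the paper.

```python
# Sketch of applying GPT-generated plausibility rules (the "Filter" module).
import torch

def filter_scores(scores, subj_cls, obj_cls, rel_classes, plausible_subj, plausible_obj):
    """scores: float tensor [num_relations] from Eq. (1), before the softmax.
    plausible_subj[s][r] is True if the LLM answered "Yes" to the subject-predicate
    question (e.g., "Can the window be sitting on something?"); plausible_obj is the
    analogous object-side table."""
    keep = torch.tensor([
        plausible_subj.get(subj_cls, {}).get(r, True) and
        plausible_obj.get(obj_cls, {}).get(r, True)
        for r in rel_classes
    ])
    filtered = scores.clone()
    filtered[~keep] = float("-inf")   # e.g., rules out "kid-eating-house"-style triplets
    return filtered.softmax(dim=-1)
```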
The reason may be that the performance of relation detection of CLIP is not accurate enough. Empirically, commonsense knowledge is a feasible way to filter those noise. Combining all components allows for the best overall performance on all evaluation metrics.\nCase study. To investigate the most important regions for distinguishing relations, we visualized the attention map given different images and prompts (cf., Figure 6). From the visualization of class-based prompts, we can observe that CLIP may attend those regions unrelated to the query prompts, e.g., focusing on the body of a person given relation \"growing on\". We attribute this phenomenon to the insufficient information within given prompts, which is also our motivation to introduce visual cue descriptions. As for description-based prompts, CLIP can attend to right regions with the guidance of descriptions, e.g., focusing on colorful patterns on the product given relation \"painted on\"." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed a novel approach for zero-shot Visual Relationship Detection (VRD) that leverages large language models (LLMs) to generate detailed and informative descriptions of visual cues for each relation category. The proposed method addresses the limitations of traditional class-based prompts and enhances the discriminability of similar relation categories by incorporating specific visual cues. Moreover, we introduced a chain-of-thought method that breaks down the problem into smaller, more manageable pieces, allowing the LLM to generate a series of rationales for each visual cue and ultimately leading to reasonable weights. Our experiments on four benchmark datasets demonstrated the effectiveness and interpretability of our method." }, { "figure_ref": [], "heading": "Given: subject belongs to [SUB HL CLS] and object belongs to [OBJ HL CLS]. The visual features of subject: [SUB CUES]. The visual features of object:", "publication_ref": [], "table_ref": [], "text": "[OBJ CUES]. The visual features of position:\n[POS CUES]. Q: How do you weight these visual features (subject, object, position) to determine the predicate is \"REL CLS\"? The sum of weights must be 1.0! A: Let's think step by step!\nThe prompt is also divided into four distinct parts: setting, constraint, example, question. Setting: The setting (i.e., \"Suppose...\") establishes the role and perspective of the model in the task. Constraint: The constraint (i.e., \"The sum of weights must be 1.0!\") provides some limitations or constraints on the output generated by the LLMs. Example: The example (i.e., the example of determining the predicate \"looking at\") serves as a guide for the LLMs to understand the context and expected output. Question: The question (i.e., \"How do...\") prompts the model to determine the weights assigned to visual cues in order to classify the given predicate. Additionally, the stepwise prompt \"Let's think step by step!\" guides the LLMs to incrementally analyze the problem and generate rationales, which lead to more reasonable determination of visual cue weights.\nFilter Prompt. The prompt is used to filter unreasonable sub-pred and obj-pred categories. The prompt for sub-pred is as follows:\nQ: Can the window be sitting on something? After thinking about it, just answer \"Yes\" or \"No\"! A: Let's think step by step! It is possible for a window to be sitting in something, such as a frame or sill. Answer is Yes. 
Q: Can the {SUB CLS} be {REL CLS} something? After thinking about it, just answer \"Yes\" or \"No\"! A: Let's think step by step!\nThe prompt also adopts an in-context learning approach, leveraging illustrative examples to enhance comprehension of the situation. The stepwise prompt stimulates logical reasoning, facilitating the LLMs in rendering more robust judgments by leveraging the provided information." }, { "figure_ref": [], "heading": "H Implementation Details", "publication_ref": [], "table_ref": [], "text": "Our RECODE does not require a training process and can be directly tested on an NVIDIA 2080 Ti GPU. We pre-computed the visual features encoded by CLIP for each bounding box, enabling us to set the batch size to 512. For the LLM, we utilized GPT-3.5-turbo, a highly performant variant of the GPT model. As for CLIP, we leveraged OpenAI's publicly accessible resources, specifically opting for the Vision Transformer with a base configuration (ViT-B/32) as the default backbone." }, { "figure_ref": [], "heading": "I Further Analysis I.1 Comparison with Training-based Methods", "publication_ref": [ "b18", "b39", "b40", "b41", "b18", "b39", "b40", "b41" ], "table_ref": [ "tab_6" ], "text": "In this section, we compared the proposed training-free RECODE framework with those well-designed training-based ones in Table 5. Note that such comparisons are unfair, as training-based frameworks can learn the underlying patterns and data distribution from the training set. For completeness, we still reported the results and investigated the performance gap between training-based frameworks and RECODE. Specifically, we compared the proposed RECODE with several relevant baselines, including triplet-level zero-shot VRD [19,40], few-shot VRD [41], and category-level zero-shot VRD [42]. Since none of them can detect relations without training, we reported Zero-shot Recall@K (zR@K), which only calculates the Recall@K for those unseen triplet categories.\n• Triplet-level zero-shot VRD methods. Motifs [19] is a traditional strong baseline without explicitly modeling the nature of zero-shot. COACHER [40] explicitly models the nature of zero-shot and takes advantage of commonsense knowledge from ConceptNet, resulting in better performance. • Few-shot VRD methods. DPL [41] is a few-shot baseline, which mainly investigates making predictions with a few examples (here we evaluate 1-shot). • Category-level zero-shot VRD methods. CaCao [42] also explicitly models the nature of zero-shot, and leverages language information from captions of CC3M and COCO for enhanced performance.\nSurprisingly, even without training, RECODE still achieves competitive results, with zR@20, zR@50, and zR@100 of 8.2%, 16.1%, and 23.2%, respectively. This signifies its potential in handling unseen categories, due to the effective visual cues and inference mechanisms. We conducted an ablation study to investigate the impact of different class-based prompts on zero-shot VRD performance. The class-based prompts were manually designed to generate text embeddings for relation classification. We compared two types of class-based prompts: 1) the \"[REL-CLS]-ing/ed\" prompt, where [REL-CLS] represents the name of the relation category. For example, for the relation class \"milk\", the prompt would be \"milking\". 2) the \"a photo of [REL-CLS]\" prompt. For example, for the relation class \"riding\", the prompt would be \"a photo of riding\"."
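The two prompt styles can be produced with a few lines of string formatting, sketched below; the inflected "-ing/ed" forms are assumed to be written by hand (as in the "milk" to "milking" example above), since no automatic inflection rule is described.

```python
# Sketch of the two class-based prompt styles compared in this ablation.
INFLECTED = {"milk": "milking", "ride": "riding"}   # hand-written forms (assumption)

def class_prompts(rel_classes, style="photo"):
    if style == "ing/ed":                               # "[REL-CLS]-ing/ed" prompt
        return [INFLECTED.get(r, r) for r in rel_classes]
    return [f"a photo of {r}" for r in rel_classes]     # "a photo of [REL-CLS]" prompt

print(class_prompts(["milk", "riding"], "ing/ed"))      # ['milking', 'riding']
print(class_prompts(["riding"]))                        # ['a photo of riding']
```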
}, { "figure_ref": [], "heading": "I.2 Ablation on Different Class-based Prompts", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Table 6 summarized the results. Our method achieved improved scores across various metrics for both types of prompts. With the \"[REL-CLS]-ing/ed\" prompt, we observed significant gains (3.1% to 5.6%) on R@K. Similarly, when using the \"a photo of [REL-CLS]\" prompt, we achieved the highest R@100 score of 28.8% and mR@100 of 28.4%. These results indicated that our method consistently outperforms the CLS baseline, regardless of the specific prompt type used. The effectiveness of our method suggested a promising solution for zero-shot VRD tasks." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "I.3 Interpretability Analysis", "publication_ref": [], "table_ref": [], "text": "To gain a deeper understanding of the interpretability of our RECODE in VRD, we conducted an in-depth analysis comparing its predictions with CLS baseline that utilizes only class-based prompts. By evaluating the similarity between the description-based prompts and the corresponding visual features, we revealed the underlying reasons for the accuracy of RECODE's predictions and the inaccuracies of the CLIP baseline.\nFigure 7 presented qualitative comparisons of RECODE and the CLS baseline on challenging examples from the VG dataset. Our description-based prompts significantly improve CLIP's understanding of the various relation categories, leading to more accurate predictions. Taking the top image of Figure 7 as an example, RECODE accurately predicts the \"says\" relation category by identifying the presence of visual features associated with \"text or image\". In contrast, the failure case of the \"using\" relationship category predicted by the CLS baseline can be attributed to the absence of distinctive visual features related to \"usable feature\", as highlighted by our description-based prompts." }, { "figure_ref": [], "heading": "J Broader Impacts", "publication_ref": [], "table_ref": [], "text": "Like every coin has two sides, using our method will have both positive and negative impacts.\nPositive Impacts. Firstly, RECODE emphasizes the importance of pairwise recognition, encouraging researchers to develop more diverse and comprehensive recognition models. By focusing on the relationships between object pairs, we inspire the exploration of a broader range of relationship types and promote a deeper understanding of complex interactions between objects (e.g., n-tuple interaction), especially in the zero-shot setting without any extra training stage. Secondly, our method introduces the incorporation of spatial information in visual relation detection. By considering spatial cues and relationships between objects, we highlight the significance of spatial information in understanding object interactions. This not only improves the accuracy of relation detection but also encourages researchers to explore the integration of other useful auxiliary information. This can include incorporating contextual information, temporal relationships, or other relevant cues that can enhance recognition performance. Thirdly, our method promotes the use of Chain-of-Thought prompting with LLMs for weight assignment in recognition tasks. By leveraging the knowledge and capabilities of LLMs, we enable the generation of more informed and reasonable weights for different components of the recognition process. 
This improves the interpretability of the recognition results and opens up new possibilities for utilizing the vast knowledge and capabilities of language models to enhance recognition systems.\nNegative Impacts. However, we also acknowledge that there are potential negative impacts associated with the use of our method. For example, the reliance on LLMs could lead to the perpetuation of biases and inequalities present in the data used to pre-train these models.\nIn conclusion, the proposed method for zero-shot visual relation detection brings about positive impacts by inspiring more complex recognition models under the zero-shot setting, highlighting the significance of contextual cues, and promoting the use of LLMs for weight assignment. It is essential to continue exploring ways to address potential negative impacts and ensure the responsible and ethical use of these advancements in our community." }, { "figure_ref": [], "heading": "K Limitations", "publication_ref": [], "table_ref": [], "text": "As the first zero-shot visual relation detection work using LLMs, our method still has some limitations: 1) Firstly, we did not specifically evaluate spatial relation categories (e.g., \"on\", \"under\") and ownership relation categories (e.g., \"belong to\"). In this work, our method mainly focuses on classifying semantic predicate groups based on visual cue descriptions. However, extensive empirical results show that these spatial and ownership relationships can be easily predicted from spatial positions or object categories alone. 2) Secondly, our framework assumes the availability of ground truth bounding boxes and object categories for relation classification. However, in real-world scenarios, object detection can introduce errors or uncertainties. 3) Thirdly, to avoid excessive queries to LLMs, our approach adopts a trade-off solution and only relies on coarse-grained triplet category descriptions. However, this simplification may not capture fine-grained nuances in different visual relationships. Using more detailed and comprehensive descriptions (with more LLM queries) could potentially further improve the performance. 4) Fourthly, the accuracy and correctness of the visual cue descriptions are not guaranteed. Despite efforts to ensure quality, errors or incomplete information may be present. It is essential to validate and verify cue descriptions for reliable results." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement. This work was supported by the National Key Research & Development Project of China (2021ZD0110700), the National Natural Science Foundation of China (U19B2043, 61976185), and the Fundamental Research Funds for the Central Universities (226-2023-00048). Long Chen is supported by HKUST Special Support for Young Faculty (F0927), and HKUST Sports Science and Technology Research Grant (SSTRG24EG04)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "
" }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b4", "b5", "b21", "b23", "b24", "b25", "b26", "b21", "b5", "b27", "b7", "b28", "b29", "b30", "b31", "b32", "b33", "b34", "b0", "b35", "b36", "b37", "b10", "b10" ], "table_ref": [], "text": "Visual Relation Detection (VRD) aims to predict the relationships of given subject-object pairs, which can be viewed as a pair-wise classification task and have been widely studied in the image domain, e.g., scene graph generation (SGG) [5,6,22,24] and human-object interaction (HOI) detection [25,26,27]. Previous solutions mainly focus on learning representations from the training samples on pre-defined categories, which may suffer from noisy annotations [22] or long-tailed predicate distribution [6,28] and are far from the needs of the real-world scenarios. Recently, some attempts [8,29] adopted prompt-tuning [30] to predict unseen categories during inference. However, since the learnable prompts may be overfitting when trained on seen categories, their performance is sensitive to the split of seen/unseen categories [31]. In contrast, our method can predict the relationships directly without any training samples, and has better interpretability and generalization ability, especially in rare informative relation categories.\nZero-shot Visual Recognition enables the model to recognize new categories that it has never seen during training, which is one of the research hotspots in the vision community. Aligning visual representations to pre-trained word embeddings (e.g., Word2Vec [32] and GloVe [33]) is an intuitive and feasible way to achieve this goal [34]. More recently, VLMs, which use contrastive learning [35] to learn a joint space for vision and language, have demonstrated their impressive zero-shot ability [1]. Therefore, many zero-shot works [36,37,38] adopted such VLMs as their basic component to use the knowledge of the learned joint space. However, most of them only utilized the class name of unseen categories during inference, which makes an over-strong assumption that the text encoder project proper embeddings with only category names [11]. Then, Menon and Vondrick [11] proposed to query LLMs for the rich context of additional information. Nonetheless, it is non-trivial to apply such paradigms to VRD as discussed in Sec. 1. To the best of our knowledge, we are the first to leverage both LLMs and VLMs for VRD in an efficient, effective, and explainable way." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "This supplementary document is organized as follows:\n• The details about stimulated spatial images generation mentioned in Sec. " }, { "figure_ref": [], "heading": "F Stimulated Spatial Images Generation", "publication_ref": [], "table_ref": [], "text": "We propose to simulate the spatial relationship between the subject and object by generating a finite set of spatial images, as mentioned in Sec. 2.1. Each spatial image represents the bboxes of the subject and object, where the subject's bounding box is visually denoted by a red box, and the object's bounding box is denoted by a green box. We define four essential attributes, namely shape, size, relative position, and distance, to describe the spatial relationships between the subject and object. These attributes are calculated based on various characteristics, including the aspect ratio ρ and area A of the bounding boxes, the cosine similarity sim(•, •), and the Euclidean distance d between their centers. 
By assigning different values to these attributes, we can generate a diverse set of simulated spatial images. Given a specific triplet, we calculate the value of each attribute based on the characteristics of the subject and object. Next, we search for the most suitable spatial image in the simulated set by matching these attribute values. This matching process involves comparing the calculated attributes of the triplet with the corresponding attribute ranges in the simulated set. For instance, the aspect ratios and areas of the subject and object bounding boxes determine their shape and size attributes, while the cosine similarity and Euclidean distance between their centers contribute to the relative position and distance attributes. By utilizing this approach, we can effectively simulate various spatial relationships between the subject and object to improve computing efficiency. The detailed procedures of the algorithm are provided in Algorithm 1. Step 1: Generate simulated spatial image set 2: Define the attributes: shape, size, relative position, and distance.\n3: Specify the corresponding value intervals for each attribute.\n• Shape: horizontal, vertical, and square denoted as {H, V, Q}.\n• Size: small, medium, and large denoted as {S, M, L}.\n• Relative position: above (↑), below (↓), left (←), right (→), top-left (↖), top-right (↗), bottom-left (↙), and bottom-right (↘). • Distance: small, medium, large denoted as {S, M, L}. " }, { "figure_ref": [], "heading": "G Prompts", "publication_ref": [ "b9", "b38" ], "table_ref": [], "text": "In this section, we present prompts for high-level object category generation (cf., Sec. 2.2.1), visual cue description (cf., Sec. 2.2.1), visual weight determination (cf., Sec. 2.2.2), and unreasonable predicate filtering (cf., Sec. 3.1).\nHigh-level Object Class Generation Prompt. To facilitate the classification of low-level object categories into high-level object categories, we provide the following prompt:\nGiven the low-level object categories: [ALL OBJ CLS]. Please classify each low-level object category into high-level object categories [\"human\", \"animal\", \"product\"] based on their most common semantics in visual relation detection. Ensure that body parts and similar categories are not classified as \"human\". Note that human beings engaged in certain activities must be classified as \"human\"! In this prompt, we provide a list of low-level object categories: [ALL OBJ CLS] that need to be categorized into high-level object categories(i.e., \"human\", \"animal\", and \"product\"). The prompt instructs the LLMs to assign the low-level object categories to the most appropriate high-level object category based on their prevalent semantics in visual relation detection. This prompt guides the model to understand the distinctive characteristics and visual cues associated with different object categories, contributing to accurate descriptions of visual cue for each relation category.\nVisual Cue Description Prompt. We present guided relation component description prompt for generating the descriptions of visual cue, specifically designed for the relation class \"REL CLS\" when provided with the High-Level (HL) categories of the subject and object, i.e., \"SUB HL CLS\" and \"OBJ HL CLS\". The prompt is structured as follows:\nKnown: a visual triplet is formulated as [subject, predicate, object]. Note that:\n[position] must not include nouns other than subject and object! 
[position] must contain [orientation: (\"above\", \"below\", \"left\", \"right\", \"inside\"), shape: (\"horizontal\", \"vertical\", \"square\"), distance: (\"small distance\", \"mid distance\", \"large distance\")]! Describe the visual features of the predicate \"sitting on\" in a photo, when subject belongs to [human], object belongs to [product]:\n[subject]:\n-with legs.\n-with hip.\n[object]:\n-with flat surface.\n[position]:\n-square subject above horizontal object with a small distance.\nDescribe the visual features of the predicate \"REL CLS\" in a photo, when subject belongs to [SUB HL CLS], object belongs to [OBJ HL CLS]:\nThe prompt is divided into four distinct parts: setting, constraint, example, and question. Setting:\nThe setting (i.e., \"Known...\") provides specific roles and known conditions for the LLMs to operate within. Constraint: The constraint (i.e., \"Note that...\") outlines some limitations or constraints on the output generated by the LLMs. Example: The example (i.e., the example of \"sitting on\") serves as a guide for the model to produce similar output in an in-context learning [10,39] manner, which is also generated by LLM. Question: Finally, the question (i.e., \"Describe...\") prompts the model to generate a description of the visual features that are specific to the relation being considered. This comprehensive prompt structure aids in more reasonable and standardized generation of visual cue descriptions for subject, object, and spatial components for each relation category.\nVisual Cue Weight Prompt. The prompt is designed to determine the visual cue weights for subject (SUB CUES), object (OBJ CUES), and spatial (POS CUES) in relation classification. It is structured as follows:\nSuppose you are a relation classification model." }, { "figure_ref": [], "heading": "Given: subject belongs to [human] and object belongs to [product].", "publication_ref": [], "table_ref": [], "text": "The visual features of subject:\n[\"with eyes directed towards the object\", \"with head upright\"]. The visual features of object:\n[\"with visible features such as front, display, or screen\"]. The visual features of position:\n[\"subject positioned either above, below, left or right of the object at a mid distance\"]. Q: How do you weight these visual features (subject, object, position) to determine the predicate is \"looking at\"? The sum of weights must be 1.0! A: Let's think step by step! First, we need to consider the importance of the subject's visual features. Since the direction of the eyes and head position strongly indicate the focus of attention, we will give them a weight of 0.6. Next, we need to consider the importance of the object's visual features. Since the visible features such as front, display, or screen indicate that the object is something that can be looked at, we will give them a weight of 0.3. Finally, we need to consider the importance of the position visual features. Since the relative position of the subject and object at a mid-distance helps us understand that the subjects are looking at the object in question, we will give them a weight of 0.1. Therefore, we can weight these visual features as follows: Weight(\"looking at\") = 0.6 * Weight(visual features of subject) + 0.3 * Weight(visual features of object) + 0.1 * Weight(visual features of position)." } ]
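Since the answer format is standardized, the three weights can be pulled out of the final weighted-sum line and plugged straight into the aggregation of Eq. (1); the regular expression and the uniform-weight fallback below are an illustrative assumption rather than the authors' actual parser.

```python
# Sketch: extract component weights from the LLM's answer and apply Eq. (1).
import re

def parse_weights(answer):
    nums = [float(x) for x in re.findall(r"(\d*\.\d+)\s*\*", answer)]
    if len(nums) >= 3:
        w_s, w_o, w_p = nums[:3]
    else:                                # fall back to uniform weights
        w_s = w_o = w_p = 1.0 / 3.0
    return {"s": w_s, "o": w_o, "p": w_p}

def relation_score(phi_class, phi_cues, weights):
    """phi_class: {'s': ..., 'o': ...} similarities to the class-based prompt t_c;
    phi_cues: {'s': [...], 'o': [...], 'p': [...]} similarities to each cue prompt."""
    score = phi_class["s"] + phi_class["o"]
    for k in ("s", "o", "p"):
        if phi_cues[k]:
            score += weights[k] * sum(phi_cues[k]) / len(phi_cues[k])
    return score

w = parse_weights('... = 0.6 * Weight(visual features of subject) + 0.3 * '
                  'Weight(visual features of object) + 0.1 * Weight(visual features of position).')
# w == {'s': 0.6, 'o': 0.3, 'p': 0.1}
```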
Zero-shot Visual Relation Detection via Composite Visual Cues from Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the challenges of VRD with similar relation categories holding and carrying. Four images and their ground-truths are on the left. The subject and object for each triplet are denoted by blue and pink boxes, respectively. (a) A child may incorrectly identify these two relations only based on similar concepts alone. (b) Using class-based prompts, CLIP always maps these two relations to adjacent locations in the semantic space. (c) We humans always utilize composite visual cues to correctly distinguish between different relations. (d) Our proposed RECODE uses LLM (e.g., GPT) to generate composite descriptions that aid the CLIP model in distinguishing between them.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: A comparative analysis of predictions made by RECODE and baseline CLIP using classbased prompts. It illustrates how our method offers interpretability to the relation classification results through the similarity ϕ between the image and the description-based prompts.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The framework of RECODE. 1) Visual feature decomposing decomposes the triplet into subject, object, and spatial features. 2) Semantic feature decomposing decomposes relation categories into subject, object, and spatial descriptions. 3) Relation classification calculates similarities between decomposed visual and semantic features and applies softmax to obtain the probability distribution.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Examples of different prompts used for generating descriptions of visual cues. (a) Relation class description generates descriptions for each relation class directly. (b) Relation component description generates descriptions for each component of the relation separately. (c) Guided relation component description incorporates high-level object category guide generation process.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "1 Figure 5 :15Figure 5: Illustration of the effectiveness of CoT method in generating reasonable visual cue weights. (a) Prompt without CoT. LLM assigns same weights for subject and object. (b) Prompt with CoT. 
LLM analyzes the importance of each cue step by step and assigns more reasonable weights.", "figure_data": "", "figure_id": "fig_4", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "bottom surface (for stability) a type of vehicle with a flat surface that is designed for something to park made subject positioned near or around the object, with possible physical contact or connection subject and object oriented towards each other subject above object, the object parallel or perpendicular to the ground horizontal subject above horizontal object with a small distance vertical subject above horizontal object with a mid to large distance horizontal or vertical subject on horizontal object with a small distance on horizontal object with a mid distance subject attached to the bottom of object or suspended from a part of the object.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: A comparative analysis of predictions made by RECODE and a baseline with class-based prompts on the test set of VG. It illustrates how our method offers interpretability to the VRD through the similarity ϕ between the image and the description-based prompts.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Evaluation results on the test set of VG and GQA datasets. † denotes removing the guidance from high-level object category. ⋆ denotes integrated with Filter strategy. @20 △ R@50 △ R@100 △ mR@20 △ mR@50 △ mR@100 △", "figure_data": "Predicate ClassificationData MethodCLS7.2-10.9 -13.2-9.4-14.0-17.6-CLSDE7.0 -0.2 10.6 -0.3 12.9 -0.38.5 -0.9 13.6 -0.416.9 -0.7VGRECODE † 7.3 0.1 11.2 0.3 15.4 2.2 RECODE 9.7 2.5 14.9 4.0 19.3 6.1 10.2 0.8 16.4 2.4 8.2 -1.2 13.5 -0.518.3 22.70.7 5.1RECODE ⋆ 10.6 3.4 18.3 7.4 25.0 11.8 10.7 1.3 18.7 4.727.8 10.2CLS5.6-7.7-9.9-6.3-9.5-12.2-GQACLSDE RECODE † 5.2 -0.4 7.8 0.1 10.2 0.3 5.4 -0.2 7.2 -0.5 9.3 -0.6 RECODE 6.3 0.7 9.4 1.7 11.8 1.96.0 -0.3 5.8 -0.5 7.8 1.5 11.9 2.4 8.8 -0.7 8.9 -0.611.5 -0.7 11.3 -0.9 15.1 2.9RECODE ⋆ 7.0 1.4 11.1 3.4 15.4 5.59.43.1 14.8 5.320.48.226 semantic predicate classes by referring to VG. 3) HICO-DET [15] contains 9,658 testing imagesannotated with 600 HOI triplets derived from combinations of 117 verb classes and 80 object classes.4) V-COCO [16] comprises 4,946 testing images annotated with 29 action categories.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation results on the test set of HICO-DET and V-COCO datasets.", "figure_data": "HICO-DETV-COCOMethodFull Rare Non-Rare Role 1 Role 2CLS32.3 33.2 31.825.5 28.6CLSDE32.5 33.1 32.225.6 28.8RECODE † 32.5 33.0 32.425.7 28.8RECODE 32.7 33.2 32.526.0 29.0", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation studies on different architectures of CLIP. 
The official released weights are used.", "figure_data": "Predicate ClassificationArchitecture ViT-L/14R@20 R@50 R@100 mR@20 mR@50 mR@100 8.3 15.0 21.5 7.6 14.2 24.2 RECODE ⋆ 11.2 19.9 28.0 Method CLS ⋆ 9.1 18.5 28.1ViT-L/14@336pxCLS ⋆ RECODE ⋆ 12.1 21.1 29.2 8.6 15.4 21.87.7 9.713.9 19.523.0 28.2ViT-B/32CLS ⋆ RECODE ⋆ 10.6 18.3 25.0 7.5 13.7 19.49.1 10.715.9 18.724.0 27.8ViT-B/16CLS ⋆ RECODE ⋆ 12.6 21.0 28.5 8.6 15.5 22.19.8 12.517.2 20.225.2 30.0", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Analysis of key components on the test set of VG.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison with SOTA VRD methods on the VG dataset. Note that none of these methods can be applied in the training-free zero-shot setting.", "figure_data": "NoUnseenTrainingPredicate ClassificationModelTrainingRelationData SourcezR@20 zR@50 zR@100Motifs [19]VG8.9 15.2 18.5COACHER [40]VG& ConceptNet 28.2 34.1 37.2DPL [41]VG6.07.79.3CaCao [42]VG&CC3M&COCO 17.2 21.3 23.1RECODE-8.2 16.1 23.2", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation studies of different class-based prompts on the test set of VG", "figure_data": "Predicate ClassificationClass-based PromptMethodR@20 R@50 R@100 mR@20 mR@50 mR@100CLS ⋆7.5 13.7 19.49.115.924.0[REL CLS]-ing/edRECODE ⋆ 10.6 18.3 25.010.718.727.8CLS ⋆11.7 19.2 26.210.919.427.1a photo of [REL CLS]RECODE ⋆ 13.5 21.8 28.812.219.628.4", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "may have a shape or design that suggests its function or purpose", "figure_data": "CLIPusing54.6%says45.3%laying on0.1%lying oncarrying0.0%CLIP with Visual Cueswith text or symbolsays using23.7%63.6%subjectwith a 28.5 27.0laying on lying on carrying0.0% 12.1% 0.6%saysobject24.1 22.7spatialCLIPhanging from47.2%standing on16.3%sitting on15.3%eating13.5%holdingCLIP with Visual Cuesin a sitting positionsitting on standing on hanging from eating20.2% 13.3% 14.0%51.9%sitting onsubject objectwith legs with a body of an animal with aholding0.4%spatial", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Whywith relevant appendages for operationsubjectwith appropriate interface to connect with something26.9 26.7usingobjectwith usable features with interface for connection to something𝜙23.3 23.7spatialWhysign-says-letter𝜙Whywith limbshangingsubjectwith body with handle or hook𝜙26.1 26.8 23.5fromobject18.66.8%spatialWhy28.327.2bird-sitting on-branch𝜙24.6 24.920.1CLIPstanding on33.9%using30.8%parked on28.1%covering7.1%sitting onCLIP with Visual Cuesparked on94.3%coveringusingsitting onstanding on", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" } ]
Lin Li; Jun Xiao; Guikun Chen; Jian Shao; Yueting Zhuang; Long Chen
[ { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b0", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Mitchell Wortsman; Gabriel Ilharco; Jong Wook Kim; Mike Li; Simon Kornblith; Rebecca Roelofs; Raphael Gontijo Lopes; Hannaneh Hajishirzi; Ali Farhadi; Hongseok Namkoong", "journal": "", "ref_id": "b1", "title": "Robust fine-tuning of zero-shot models", "year": "2022" }, { "authors": "Jaemin Cho; Seunghyun Yoon; Ajinkya Kale; Franck Dernoncourt; Trung Bui; Mohit Bansal", "journal": "", "ref_id": "b2", "title": "Fine-grained image captioning with clip reward", "year": "2022" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "", "ref_id": "b3", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Danfei Xu; Yuke Zhu; Christopher B Choy; Li Fei-Fei", "journal": "", "ref_id": "b4", "title": "Scene graph generation by iterative message passing", "year": "2017" }, { "authors": "Kaihua Tang; Yulei Niu; Jianqiang Huang; Jiaxin Shi; Hanwang Zhang", "journal": "", "ref_id": "b5", "title": "Unbiased scene graph generation from biased training", "year": "2020" }, { "authors": "Xingchen Li; Long Chen; Jian Shao; Shaoning Xiao; Songyang Zhang; Jun Xiao", "journal": "", "ref_id": "b6", "title": "Rethinking the evaluation of unbiased scene graph generation", "year": "2022" }, { "authors": "Tao He; Lianli Gao; Jingkuan Song; Yuan-Fang Li", "journal": "", "ref_id": "b7", "title": "Towards open-vocabulary scene graph generation with prompt-based finetuning", "year": "2022" }, { "authors": "Lin Li; Guikun Chen; Jun Xiao; Yi Yang; Chunping Wang; Long Chen", "journal": "", "ref_id": "b8", "title": "Compositional feature augmentation for unbiased scene graph generation", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "", "ref_id": "b9", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Sachit Menon; Carl Vondrick", "journal": "", "ref_id": "b10", "title": "Visual classification via description from large language models", "year": "2022" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b11", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Ranjay Krishna; Yuke Zhu; Oliver Groth; Justin Johnson; Kenji Hata; Joshua Kravitz; Stephanie Chen; Yannis Kalantidis; Li-Jia Li; David A Shamma", "journal": "IJCV", "ref_id": "b12", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "year": "2017" }, { "authors": "A Drew; Christopher D Hudson; Manning", "journal": "", "ref_id": "b13", "title": "Gqa: A new dataset for real-world visual reasoning and compositional question answering", "year": "2019" }, { "authors": "Yu-Wei Chao; Zhan Wang; Yugeng He; Jiaxuan Wang; Jia Deng", "journal": "", "ref_id": "b14", "title": "Hico: A benchmark for recognizing human-object interactions in images", "year": "2015" }, { "authors": "Saurabh Gupta; Jitendra Malik", "journal": "", "ref_id": "b15", "title": "Visual semantic 
role labeling", "year": "2015" }, { "authors": "Caixia Yan; Xiaojun Chang; Minnan Luo; Huan Liu; Xiaoqin Zhang; Qinghua Zheng", "journal": "TPAMI", "ref_id": "b16", "title": "Semantics-guided contrastive network for zero-shot object detection", "year": "2022" }, { "authors": "Zhuosheng Zhang; Aston Zhang; Mu Li; Alex Smola", "journal": "", "ref_id": "b17", "title": "Automatic chain of thought prompting in large language models", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b18", "title": "Neural motifs: Scene graph parsing with global context", "year": "2018" }, { "authors": "Anh Duc Bui; Soyeon ; Caren Han; Josiah Poon", "journal": "Springer", "ref_id": "b19", "title": "Sg-shuffle: Multi-aspect shuffle transformer for scene graph generation", "year": "2022" }, { "authors": "Xingning Dong; Tian Gan; Xuemeng Song; Jianlong Wu; Yuan Cheng; Liqiang Nie", "journal": "", "ref_id": "b20", "title": "Stacked hybrid-attention and group collaborative learning for unbiased scene graph generation", "year": "2022" }, { "authors": "Lin Li; Long Chen; Yifeng Huang; Zhimeng Zhang; Songyang Zhang; Jun Xiao", "journal": "", "ref_id": "b21", "title": "The devil is in the labels: Noisy label correction for robust scene graph generation", "year": "2022" }, { "authors": "Yu-Wei Chao; Yunfan Liu; Xieyang Liu; Huayi Zeng; Jia Deng", "journal": "", "ref_id": "b22", "title": "Learning to detect human-object interactions", "year": "2018" }, { "authors": "Yao Teng; Limin Wang", "journal": "", "ref_id": "b23", "title": "Structured sparse r-cnn for direct scene graph generation", "year": "2022" }, { "authors": "Keizo Kato; Yin Li; Abhinav Gupta", "journal": "", "ref_id": "b24", "title": "Compositional learning for human object interaction", "year": "2018" }, { "authors": "Dong-Jin Kim; Xiao Sun; Jinsoo Choi; Stephen Lin; In So Kweon", "journal": "", "ref_id": "b25", "title": "Detecting human-object interactions with action co-occurrence priors", "year": "2020" }, { "authors": "Yue Liao; Aixi Zhang; Miao Lu; Yongliang Wang; Xiaobo Li; Si Liu", "journal": "", "ref_id": "b26", "title": "Gen-vlkt: Simplify association and enhance interaction understanding for hoi detection", "year": "2022" }, { "authors": "Guikun Chen; Lin Li; Yawei Luo; Jun Xiao", "journal": "IEEE", "ref_id": "b27", "title": "Addressing predicate overlap in scene graph generation with semantic granularity controller", "year": "2023" }, { "authors": "Kaifeng Gao; Long Chen; Hanwang Zhang; Jun Xiao; Qianru Sun", "journal": "", "ref_id": "b28", "title": "Compositional prompt tuning with motion cues for open-vocabulary video relation detection", "year": "2023" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "ACM Computing Surveys", "ref_id": "b29", "title": "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2023" }, { "authors": "Mingfei Gao; Chen Xing; Juan Carlos Niebles; Junnan Li; Ran Xu; Wenhao Liu; Caiming Xiong", "journal": "", "ref_id": "b30", "title": "Open vocabulary object detection with pseudo bounding-box labels", "year": "2022" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b31", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b32", "title": "Glove: Global vectors for word representation", "year": "2014" }, { 
"authors": "Xiaolong Wang; Yufei Ye; Abhinav Gupta", "journal": "", "ref_id": "b33", "title": "Zero-shot recognition via semantic embeddings and knowledge graphs", "year": "2018" }, { "authors": "Ching-Yao Chuang; Joshua Robinson; Yen-Chen Lin; Antonio Torralba; Stefanie Jegelka", "journal": "", "ref_id": "b34", "title": "Debiased contrastive learning", "year": "2020" }, { "authors": "Alireza Zareian; Kevin Dela Rosa; Derek Hao Hu; Shih-Fu Chang", "journal": "", "ref_id": "b35", "title": "Open-vocabulary object detection using captions", "year": "2021" }, { "authors": "Xiuye Gu; Tsung-Yi Lin; Weicheng Kuo; Yin Cui", "journal": "", "ref_id": "b36", "title": "Open-vocabulary object detection via vision and language knowledge distillation", "year": "2022" }, { "authors": "Tao Wang; Nan Li", "journal": "", "ref_id": "b37", "title": "Learning to detect and segment for open vocabulary object detection", "year": "2023" }, { "authors": "Jiachang Liu; Dinghan Shen; Yizhe Zhang; Bill Dolan; Lawrence Carin; Weizhu Chen", "journal": "", "ref_id": "b38", "title": "What makes good in-context examples for gpt-3?", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b39", "title": "Zero-shot scene graph relation prediction through commonsense knowledge integration", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b40", "title": "Decomposed prototype learning for few-shot scene graph generation", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b41", "title": "Visually-prompted language model for fine-grained scene graph generation in an open world", "year": "2023" } ]
[ { "formula_coordinates": [ 5, 167.64, 369.33, 337.03, 40.33 ], "formula_id": "formula_0", "formula_text": "S(r) = ϕ(vs, tc) + ϕ(vo, tc) class-based prompts + k∈{s,o,p} w k |D k (r)| d k i ∈D k (r) ϕ(v k , t d k i ) description-based prompts ,(1)" } ]
10.18653/v1/2020.emnlp-main.479
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b20", "b1", "b19", "b22", "b12", "b28", "b21", "b15", "b29", "b18", "b29", "b5", "b8", "b9", "b14", "b7", "b3", "b4", "b6", "b25", "b16", "b26", "b27", "b2", "b14" ], "table_ref": [], "text": "Dialog systems (chatbots) have made great progress and have achieved close-to-human performances in many scenarios (Su et al., 2020;Adiwardana et al., 2020;Shuster et al., 2022;Thoppilan et al., 2022;Liu et al., 2023) state-of-the-art approaches rely on huge amounts of training data, which is only available in English and a few high-resource languages-such as Chinese (Zhang et al., 2020), Japanese (Sugiyama et al., 2021) and German (Schweter, 2020).\nTypically each language develops its own chatbot individually without cross-lingual resource sharing. Repeating this process for all languages is infeasible, as most low-resource languages do not have enough conversational data to support this type of training (Zhao et al., 2020;Shen et al., 2022). Even for high-resource languages, collecting sufficient amount of high-quality data to cover various domains is still costly (Xu et al., 2020;Chang et al., 2021). Therefore, we believe crosslingual transfer is crucial for efficiently developing chatbots in multiple languages, through which the same resource can be reused across languages. Figure 1 illustrates the scenario that we are targeting at in this paper. This is a common scenario for most low-resource languages since usually we can only afford collecting high-quality dialogs for one specific domain.\nThere have been many studies on cross-lingual arXiv:2305.12480v1 [cs.CL] 21 May 2023 transfer for classification tasks (Hu et al., 2020;Jiang et al., 2020;Ruder et al., 2021;Ding et al., 2021). For generation tasks, however, much less attention has been paid to it and the results are far from satisfactory (Cao et al., 2020;Chang et al., 2020;Chen et al., 2021;Žagar and Robnik-Šikonja, 2021;Shen et al., 2023). The challenge is especially prominent in dialog generation, as different language users have different habits of conversing. For example, the typical conversation \"-How are you? -Fine, and you?\" in English can be very unnatural when translated into other languages such as Chinese or Japanese, because their speakers do not usually greet each other in this way (Zhang et al., 2021(Zhang et al., , 2022)). This is usually not a big problem for understanding tasks but crucial for dialog generation if we would like to produce human-like, culturally grounded conversations.\nIn this work, we investigate the performance of several baseline methods for cross-lingual transfer in dialog generation. To simulate a low-resource scenario, we collect limited Chinese conversational data related to the movie domain and large amounts of English conversational data related to various domains as our training data. The test data cover three additional domains-music, books and technologyso that we can test the domain transferability of developed models2 . 
We construct this benchmark dataset in order to see how we can effectively leverage the English data to benefit us in developing a good Chinese chatbot.\nWe compare three types of baseline cross-lingual transfer techniques: (1) translate-train, which translates the English training data into Chinese first and finetunes a Chinese-centric chatbot on it;\n(2) translate-test, which trains an English-centric chatbot first and uses a translator at inference time; and (3) multilingual finetune, which simply finetunes on English data followed by Chinese data regardless of their vocabulary difference. Multilingual finetune has been a common practice in classification tasks but is rarely applied to generation tasks (Alabi et al., 2020;Ruder et al., 2021). We find that translate-train consistently outperforms translate-test but both suffer from the translationese problem. Multilingual finetune, surprisingly, performs the best with as few as 500 Chinese dialogs available for training. The advantage further grows with increasing Chinese dialogs.\nOur contributions can be summarized as follows: (1) We construct a benchmark dataset covering various domains for studying cross-lingual transfer in dialog generation, which can be used for further studies. (2) We compare baseline models through comprehensive human evaluations for both in-domain and out-of-domain performances. (3) We conduct extensive experiments to study the effects of various factors such as the translation quality and the training set size. Results and analysis are shared to benefit future research." }, { "figure_ref": [ "fig_0" ], "heading": "Data Collection", "publication_ref": [], "table_ref": [], "text": "We collect a benchmark to simulate the scenario in Figure 1. As mentioned, we choose English as the source high-resource language and Chinese as the target low-resource language. Source. We collect English dialogs from Reddit3 and Chinese ones from Douban4 , both being popular social forums in the US and China, respectively. Domain. We choose four domains that are shared between Reddit and Douban: movies, music, books, and technology. The English dialogs are collected from these four subreddits, and Chinese ones are from the corresponding Douban groups. To simulate the scenario where the English corpus is large enough to provide various domains whereas the Chinese corpus has only limited data in one domain, we collect an equal number of dialogs from each domain for English. For Chinese, we collect the training set only from the movie domain, and the test set from all the four domains. Preprocessing. We filter out the sentences that fulfill any of the following conditions: (1) too short (less than 5 words for English and 6 characters for Chinese); (2) too long (more than 128 words);\n(3) contains URLs or offensive words identified by phrase matching against a large blocklist; (4) from a known bot; (5) the response contains words that repeat over 3 times. Size. In our base setting, we use 400k/20k English dialogs and 500/50 Chinese dialogs for training/validation. The test set contains 500 Chinese dialogs from each of the 4 domains. " }, { "figure_ref": [], "heading": "Approaches", "publication_ref": [], "table_ref": [], "text": "We implement three popular types of methods for cross-lingual transfer: (1) translate-train, (2) translate-test and (3) multilingual-finetune. The first two are further tested in the zero-shot setting without Chinese dialogs, and the few-shot setting with limited Chinese dialogs for finetuning." }, { "figure_ref": [], "heading": "Translate-Train", "publication_ref": [], "table_ref": [], "text": "The translate-train approach first translates the English training corpora into Chinese to train a Chinese-centric chatbot. In the zero-shot setting (Train_Zero), the model is only trained on the translated corpora. 
In the few-shot setting (Train_Few), the model is trained on the translated corpora and then finetuned on the Chinese corpora.\nTranslate-Test The translate-test approach trains an English-centric chatbot. At inference time, we translate the Chinese context into English, generate its response, then translate the response back into Chinese. In the zero-shot setting (Test_Zero), the model is only trained on the original English corpora. In the few-shot setting (Test_Few), the model is trained on the original corpora followed by the translated Chinese corpora.\nMultilingual-finetune The multilingual-finetune (multi-FT) approach trains the model on the original English corpora and then finetunes on the Chinese corpora, without leveraging any external translators. This approach only applies to the few-shot setting. In the zero-shot setting, the model will only generate English responses as it is trained only on English responses. We further compare with two more methods: (1) Chinese-only finetune (FT), which only finetunes on the Chinese dialog corpora without cross-lingual transfer and (2) GPT-Chinese, which finetunes a pretrained Chinese GPT-2 chatbot5 (https://github.com/yangjianxin1/GPT2-chitchat) using the Chinese corpora. This can serve as an upper bound of cross-lingual performance since it has accessed large amounts of dialogs in the target language." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Settings", "publication_ref": [ "b24", "b10" ], "table_ref": [], "text": "We initialize all approaches with the pretrained MT5-base model, which is a multilingual model that supports 101 languages (Xue et al., 2021), to keep the comparison fair. We use MarianMT as the basic translation method (Junczys-Dowmunt et al., 2018), as it is a widely used machine translation tool that provides more than 1000 translation models. Hyperparameter details are in the appendix." }, { "figure_ref": [], "heading": "Evaluation Metric", "publication_ref": [ "b13", "b11", "b17" ], "table_ref": [], "text": "We employ both automatic and human evaluations to assess the performance of the compared methods. We use BLEU, Distinct-1 and Distinct-2 as the automatic evaluation metrics.\nBLEU measures the n-gram overlap between the predicted response and the target response (Papineni et al., 2002). We report the bigram BLEU-2 score using the sacreBLEU toolkit6 .\nDistinct-1/2 measure the generation diversity, i.e., the percentage of distinct uni- or bi-grams in generated words (Li et al., 2016;Shen et al., 2018).\nFor human evaluation, we randomly select 200 dialogue contexts from the test set and generate responses using the compared methods. The annotators are asked to rate, using a score of 1 to 5, the response quality from four perspectives-Naturalness, Diversity, Coherence, and Overall. A higher score indicates better quality." }, { "figure_ref": [ "fig_2" ], "heading": "Analysis", "publication_ref": [ "b2" ], "table_ref": [ "tab_1" ], "text": "Overall Result The results of the seven approaches evaluated in the movie domain are shown in Table 1. According to the human evaluation, GPT-Chinese performs the best and excels especially on the naturalness score. This is expected since it is pretrained on 500k Chinese dialogs and has learnt more about how to produce natural responses. 
The diversity of the method FT is worse than the others, which suggests that finetuning only on a small Chinese corpus may fail to generate diverse responses.\nThe translate-test methods perform worse than the translate-train methods, because relying on an external translator at inference time can amplify error propagation. In most metrics, translate-test methods are even worse than the FT baseline, suggesting they might not be a good choice for cross-lingual transfer.\nMulti-FT outperforms all other cross-lingual transfer approaches, especially on the naturalness score. This is interesting since its first-stage training does not update any Chinese word embeddings but only the upper-level encoder-decoders. Only in the second-stage finetuning do the Chinese word embeddings get updated to adapt to the upper-level encoder-decoders. This suggests the upper-level parameters might be more important and can learn universal conversational knowledge beyond one fixed language. Further finetuning on a small target-language corpus (500 dialogs) is enough to adapt to new vocabularies. Similar findings have been reported for classification tasks (Alabi et al., 2020). Translation Quality To simulate translators with different qualities, we collect different sizes of English/Chinese data from WMT177 to train different translation models (all models are initialized from MT5-base). Fig 2a shows the BLEU-2 scores with different translation qualities. When we replace MarianMT with other translation models that are not well trained, the BLEU-2 scores for all the translation-based methods decrease by a large margin. We conclude that translation quality influences the performance of the methods considerably." }, { "figure_ref": [], "heading": "Cross-Domain", "publication_ref": [ "b0" ], "table_ref": [], "text": "If the quality of the translator is bad, the model can even underperform the FT baseline without any cross-lingual transfer. Considering that most low-resource languages do not have high-quality MT systems yet (Adelani et al., 2022), this further implies we should rather focus on translation-free approaches for this task.\nTraining Set Size Fig 2b and 2c further show the BLEU-2 scores with varying sizes of English and Chinese training data. As the English training corpus becomes larger, all cross-lingual-transfer methods perform consistently better. Zero-shot approaches are affected more than few-shot ones as they rely solely on the English corpora to train. When increasing the Chinese corpus, all models perform better except Test-Few. This is likely because Test-Few is trained on the translated Chinese corpus, which is not guaranteed to be cycle-consistent when translated back. Therefore, its training objective does not fully align with our inference-time objective and increasing the Chinese corpus size might not help. The advantage of Multi-FT over the other translation-based methods also improves with more Chinese data. Considering further the cost of translating the corpus, Multi-FT seems to be a better baseline for cross-lingual transfer than translation-based methods." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we construct a benchmark to systematically study the task of cross-lingual transfer for dialog generation. We conduct extensive experiments and ablation studies to understand the performance of popular baseline methods. 
The results suggest that directly training on high-resource-language data and then finetuning on low-resource-language data yields a very strong baseline, improving naturalness, relevance and domain transferability. An external translator might not be necessary." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "As we concluded, by training on the original English corpora, the naturalness and relevance of the generated Chinese responses can be improved. However, when training models on English corpora, the Chinese embeddings are not updated, and only encoder/decoder layers are updated. Thus, the Chinese embeddings might not be compatible with the encoder/decoder layers after training. We plan to investigate how to alleviate this problem during training in the future. Furthermore, we only studied two languages for the 4 considered domains.\nTo what extent the results drawn from this study also apply to other languages and domains is still uncertain." }, { "figure_ref": [], "heading": "A Hyperparameter Details", "publication_ref": [], "table_ref": [], "text": "The learning rate is 1e-4 for the large English training set and 1e-5 for the small Chinese training set. The maximum sequence length of context and response is set to 128. The batch size is 16 for MT5-base models. The training epoch is set to 3 for datasets larger than 400k and 9 for smaller datasets. The ADAM optimizer is used. We use top-k top-p sampling for decoding, because it is often used in real generation scenarios in order to improve diversity. Top k is set to 3, and top p to 0.9. For each experiment, we run three times to get the mean scores of the automatic evaluation metrics." }, { "figure_ref": [], "heading": "B Generated Dialog Samples", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We show some generated dialog samples for the best three methods-Train Few, Multi FT, and GPT-Chinese-in Table 2. As we can see, the response that Train Few generated contains a grammatical error \"那片电影\" (that piece of film), which might be introduced by the translation step during training. The response generated by Multi FT is natural and relevant. The response of GPT-Chinese is more fluent for the dialog scenario than the other two." }, { "figure_ref": [], "heading": "C The Human Annotation Instructions", "publication_ref": [], "table_ref": [], "text": "We present the brief definitions of the four perspectives for human annotation here. Naturalness Score 1: The response includes totally unreadable sentences.\nScore 2: The response is readable, but has some grammatical errors, or translationese problems. Coherence Score 1: The response is not related to the context at all. Score 2: The response is only a little related to the context, or conflicts with the context. Score 3: The response is related, and does not conflict with the context. Score 4: The response is related to the context and is the continuation of the topic in the context.\nScore 5: The response is closely related to the context and is all about the topic in the context." }, { "figure_ref": [], "heading": "Overall", "publication_ref": [], "table_ref": [], "text": "Score 1: The response is not related to the context at all, or it is unreadable, or it contains repeated words.\nScore 2: The response is not related, or it is unnatural, or it is quite general.\nScore 3: The response is related, natural, and not general.\nScore 4: The response is related or closely related, and quite natural, and not general. 
Score 5: The response is closely related, very fluent, and diverse." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://github.com/lorashen/cross_lingual_transfer_dialog_generation" } ]
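The Evaluation Metric subsection above defines Distinct-1/2 as the percentage of distinct uni- or bi-grams among the generated words, while BLEU-2 is delegated to the sacreBLEU toolkit. The snippet below is a minimal sketch of that Distinct-n computation; the whitespace tokenization and the corpus-level pooling of n-grams are assumptions made for illustration, since the paper does not spell out these details.

```python
# Minimal sketch of the Distinct-1/2 diversity metric described in the
# Evaluation Metric subsection: the ratio of distinct n-grams to all n-grams
# pooled over the generated responses. Whitespace tokenization is an
# assumption (for Chinese, a character- or word-level segmenter would be
# substituted); the paper does not specify the tokenizer.
from typing import List


def distinct_n(responses: List[List[str]], n: int) -> float:
    """Fraction of distinct n-grams among all n-grams in the responses."""
    all_ngrams = []
    for tokens in responses:
        all_ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    if not all_ngrams:
        return 0.0
    return len(set(all_ngrams)) / len(all_ngrams)


if __name__ == "__main__":
    generated = ["i like this movie a lot", "i like this book very much"]
    tokenized = [response.split() for response in generated]
    print("Distinct-1:", round(distinct_n(tokenized, 1), 3))
    print("Distinct-2:", round(distinct_n(tokenized, 2), 3))
```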
Cross-lingual transfer is important for developing high-quality chatbots in multiple languages due to the strongly imbalanced distribution of language resources. A typical approach is to leverage off-the-shelf machine translation (MT) systems to utilize either the training corpus or developed models from highresource languages. In this work, we investigate whether it is helpful to utilize MT at all in this task. To do so, we simulate a low-resource scenario assuming access to limited Chinese dialog data in the movie domain and large amounts of English dialog data from multiple domains. Experiments show that leveraging English dialog corpora can indeed improve the naturalness, relevance and cross-domain transferability in Chinese. However, directly using English dialog corpora in its original form, surprisingly, is better than using its translated version. As the topics and wording habits in daily conversations are strongly culture-dependent, MT can reinforce the bias from high-resource languages, yielding unnatural generations in the target language. Considering the cost of translating large amounts of text and the strong effects of the translation quality, we suggest future research should rather focus on utilizing the original English data for cross-lingual transfer in dialog generation. We perform extensive human evaluations and ablation studies. The analysis results, together with the collected dataset, are presented to draw attention towards this area and benefit future research 1 .
Is Translation Helpful? An Empirical Analysis of Cross-Lingual Transfer in Low-Resource Dialog Generation
[ { "figure_caption": "Figure 1 :1Figure1: Scenario that requires cross-lingual transfer for dialog generation: There is large amounts of dialog data from various domains in a high-resource langauge, but only limited dialog data from one domain in a low-resource language.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: BLEU-2 results by varying the translation qualities of translator models, as well as the training sizes.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overall Human Scores for Four Domains. The grey color indicates the drop compared with the movie domain.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": ". However, current", "figure_data": "high-resourcelanguage,domain Bcross lingual transferhigh-resource language, domain Alow-resource language, domain Ahigh-resourcelanguage,domain C", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The results for seven methods. For this table, we fix the training set size as 400k for English, and 500 for Chinese. The best score is in bold, and the one with underline is the second best.", "figure_data": "Automatic EvaluationHuman EvaluationBLEU2 Distinct-1 Distinct-2 naturalness diversity relevance OverallFT4.120.9390.8952.672.692.242.24Train Zero5.330.8870.9462.562.912.582.34Train Few5.370.9310.9232.762.862.732.69Test Zero4.900.8760.9352.282.982.302.26Test Few4.580.9060.9222.453.002.112.17Multi-FT5.370.9360.9263.062.932.782.95GPT-Chinese5.490.9740.9183.402.942.763.073 ApproachesWe implement three popular types of methodsfor cross-lingual transfer: (1) translate-train, (2)translate-test and (3) multilingual-finetune. Thefirst two are further tested in the zero-shot settingwithout Chinese dialogs, and the few-shot settingwith limitd Chinese dialogs for finetuning.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Generated dialog samples for method Train Few, Multi FT, and GPT-Chinese. The colour red represents grammatical error.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" } ]
Lei Shen; Shuai Yu; Xiaoyu Shen
[ { "authors": "David Adelani; Jesujoba Alabi; Angela Fan; Julia Kreutzer; Xiaoyu Shen; Machel Reid; Dana Ruiter; Dietrich Klakow; Peter Nabende; Ernie Chang", "journal": "", "ref_id": "b0", "title": "A few thousand translations go a long way! leveraging pre-trained models for african news translation", "year": "2022" }, { "authors": "Daniel Adiwardana; Minh-Thang Luong; David R So; Jamie Hall; Noah Fiedel; Romal Thoppilan; Zi Yang; Apoorv Kulshreshtha; Gaurav Nemade; Yifeng Lu", "journal": "", "ref_id": "b1", "title": "Towards a human-like open-domain chatbot", "year": "2020" }, { "authors": "Jesujoba Alabi; Kwabena Amponsah-Kaakyire; David Adelani; Cristina España-Bonet ", "journal": "European Language Resources Association", "ref_id": "b2", "title": "Massive vs. curated embeddings for low-resourced languages: the case of Yorùbá and Twi", "year": "2020" }, { "authors": "Yue Cao; Hui Liu; Xiaojun Wan", "journal": "", "ref_id": "b3", "title": "Jointly learning to align and summarize for neural crosslingual summarization", "year": "2020" }, { "authors": "Ernie Chang; David Ifeoluwa Adelani; Xiaoyu Shen; Vera Demberg", "journal": "", "ref_id": "b4", "title": "Unsupervised pidgin text generation by pivoting english data and self-training", "year": "2020" }, { "authors": "Ernie Chang; Xiaoyu Shen; Alex Marin; Vera Demberg", "journal": "", "ref_id": "b5", "title": "The selectgen challenge: Finding the best training samples for few-shot neural text generation", "year": "2021" }, { "authors": "Yiran Chen; Zhenqiao Song; Xianze Wu; Danqing Wang; Jingjing Xu; Jiaze Chen; Hao Zhou; Lei Li", "journal": "", "ref_id": "b6", "title": "Mtg: A benchmarking suite for multilingual text generation", "year": "2021" }, { "authors": "Bosheng Ding; Junjie Hu; Lidong Bing; Sharifah Mahani Aljunied; Shafiq Joty; Luo Si; Chunyan Miao", "journal": "", "ref_id": "b7", "title": "Globalwoz: Globalizing multiwoz to develop multilingual task-oriented dialogue systems", "year": "2021" }, { "authors": "Junjie Hu; Sebastian Ruder; Aditya Siddhant; Graham Neubig; Orhan Firat; Melvin Johnson", "journal": "", "ref_id": "b8", "title": "Xtreme: A massively multilingual multitask benchmark for evaluating cross-lingual generalization", "year": "2020" }, { "authors": "Zhengbao Jiang; Antonios Anastasopoulos; Jun Araki; Haibo Ding; Graham Neubig", "journal": "", "ref_id": "b9", "title": "X-factr: Multilingual factual knowledge retrieval from pretrained language models", "year": "2020" }, { "authors": "Marcin Junczys-Dowmunt; Roman Grundkiewicz; Tomasz Dwojak; Hieu Hoang; Kenneth Heafield; Tom Neckermann; Frank Seide; Ulrich Germann; Alham Fikri Aji; Nikolay Bogoychev; F T André; Alexandra Martins; Birch", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Marian: Fast neural machine translation in C++", "year": "2018" }, { "authors": "Jiwei Li; Michel Galley; Chris Brockett; Jianfeng Gao; Bill Dolan", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "A diversity-promoting objective function for neural conversation models", "year": "2016" }, { "authors": "Yiheng Liu; Tianle Han; Siyuan Ma; Jiayue Zhang; Yuanyuan Yang; Jiaming Tian; Hao He; Antong Li; Mengshen He; Zhengliang Liu", "journal": "", "ref_id": "b12", "title": "Summary of chatgpt/gpt-4 research and perspective towards the future of large language models", "year": "2023" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b13", "title": "Bleu: a method for 
automatic evaluation of machine translation", "year": "2002" }, { "authors": "Sebastian Ruder; Noah Constant; Jan Botha; Aditya Siddhant; Orhan Firat; Jinlan Fu; Pengfei Liu; Junjie Hu; Dan Garrette; Graham Neubig; Melvin Johnson", "journal": "", "ref_id": "b14", "title": "Xtreme-r: Towards more challenging and nuanced multilingual evaluation", "year": "2021" }, { "authors": "Stefan Schweter", "journal": "", "ref_id": "b15", "title": "German gpt-2 model", "year": "2020" }, { "authors": "Xiaoyu Shen; Akari Asai; Bill Byrne; Adrià De; Gispert ", "journal": "", "ref_id": "b16", "title": "xpqa: Cross-lingual product question answering across 12 languages", "year": "2023" }, { "authors": "Xiaoyu Shen; Hui Su; Wenjie Li; Dietrich Klakow", "journal": "", "ref_id": "b17", "title": "Nexus network: Connecting the preceding and the following in dialogue generation", "year": "2018" }, { "authors": "Xiaoyu Shen; Svitlana Vakulenko; Marco Del Tredici; Gianni Barlacchi; Bill Byrne; Adrià De; Gispert ", "journal": "", "ref_id": "b18", "title": "Low-resource dense retrieval for opendomain question answering: A comprehensive survey", "year": "2022" }, { "authors": "Kurt Shuster; Jing Xu; Mojtaba Komeili; Da Ju; Eric Michael Smith; Stephen Roller; Megan Ung; Moya Chen; Kushal Arora; Joshua Lane", "journal": "", "ref_id": "b19", "title": "Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage", "year": "2022" }, { "authors": "Hui Su; Xiaoyu Shen; Zhou Xiao; Zheng Zhang; Ernie Chang; Cheng Zhang; Cheng Niu; Jie Zhou", "journal": "", "ref_id": "b20", "title": "Moviechats: Chat like humans in a closed domain", "year": "2020" }, { "authors": "Hiroaki Sugiyama; Masahiro Mizukami; Tsunehiro Arimoto; Hiromi Narimatsu; Yuya Chiba; Hideharu Nakajima; Toyomi Meguro", "journal": "", "ref_id": "b21", "title": "Empirical analysis of training strategies of transformer-based japanese chit-chat systems", "year": "2021" }, { "authors": "Romal Thoppilan; Daniel De Freitas; Jamie Hall; Noam Shazeer; Apoorv Kulshreshtha; Heng-Tze; Alicia Cheng; Taylor Jin; Leslie Bos; Yu Baker; Du", "journal": "", "ref_id": "b22", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "Binxia Xu; Siyuan Qiu; Jie Zhang; Yafang Wang; Xiaoyu Shen; Gerard De; Melo ", "journal": "", "ref_id": "b23", "title": "Data augmentation for multiclass utterance classification-a systematic study", "year": "2020" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Aleš Žagar; Marko Robnik-Šikonja", "journal": "Journal of Intelligent Information Systems", "ref_id": "b25", "title": "Crosslingual transfer of abstractive summarizer to lessresource language", "year": "2021" }, { "authors": "Mozhi Zhang; Wei Wang; Budhaditya Deb; Guoqing Zheng; Milad Shokouhi; Ahmed Hassan", "journal": "", "ref_id": "b26", "title": "A dataset and baselines for multilingual reply suggestion", "year": "2021" }, { "authors": "Qingyu Zhang; Xiaoyu Shen; Ernie Chang; Jidong Ge; Pengke Chen", "journal": "", "ref_id": "b27", "title": "Mdia: A benchmark for multilingual dialogue generation in 46 languages", "year": "2022" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; Bill Dolan", 
"journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "DIALOGPT : Largescale generative pre-training for conversational response generation", "year": "2020" }, { "authors": "Xueliang Zhao; Wei Wu; Chongyang Tao; Can Xu; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b29", "title": "Low-resource knowledge-grounded dialogue generation", "year": "2020" } ]
[]
2023-06-05
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b32", "b57", "b56", "b58", "b56", "b51", "b60", "b54", "b47", "b2", "b0", "b59", "b58", "b56", "b40" ], "table_ref": [], "text": "Single-view indoor scene understanding from a single RGB image is an essential yet challenging problem and has important applications such as augmented reality and service robotics. Most of the existing works solve room layout estimation, object detection, and reconstruction separately. Some recent works, including CooP [17], Total3D [33], and IM3D [58], show that learning these tasks jointly helps to improve the performance on each subtask by exploiting context information. In addition, panoramic image with a 360 • field-of-view (FOV) contains much richer information than a regular perspective image, whose FOV is nor-Figure 1. Given a single RGB panorama, we simultaneously estimate the room layout, oriented object bounding boxes (left), and full scene meshes (right). The first and second rows are examples from the iGibson-Synthetic [57] and ReplicaPano datasets. mally around 60 • . PanoContext [59] and DeepPanoContext [57] prove that the context becomes significantly more robust and powerful with a larger FOV, which further improves the performance and enables accurate holistic scene understanding. Despite recent progress, the indoor scene understanding problem remains challenging since predicting object pose and shape from a single RGB image can be ambiguous without any 3D prior information in a real indoor environment with occlusion and clutter.\nThis paper proposes a new method for end-to-end total 3D scene understanding from a single panorama ( Fig. 1). Our approach has two important features. Firstly, we incorporate a monocular depth estimation sub-model to exploit 3D information to facilitate indoor scene understanding tasks. In this way, a point cloud based 3D object detector can be naturally applied to predict not only the 3D object boxes with semantic category labels but also the object shape codes. Our experiments show that integrating the estimated depth as a prior in a scene understanding framework can boost performance remarkably. We learn shape codes using an encoder that maps an object shape into an embedding representation, and then a decoder is used to recover the 3D shape of an object given its embedding vector. The observation is that the object features that are used to estimate boxes should contain information on object ge-ometries; therefore, it is unnecessary to add an additional sub-model to predict object mesh.\nSecondly, in order to better capture the global context in the scene, we unify different tasks together and propose a novel transformer-based context model for simultaneously predicting object shapes, oriented bounding boxes and 3D room layout. The key idea of this context model is to take all tokens as input to compute features for each task, in which the contribution of each token can be learned automatically by the attention mechanism. In addition, we also employ physical violation loss and random token masking strategy to strengthen the interactions across objects and room layout. Based on this idea, this model learns to discover context information among object-object and object-layout.\nWhen it comes to the panoramic datasets for holistic scene understanding, more efforts should be put into this area. 
Existing panoramic datasets are either for single application [52,61,55,48] or missing critic 3D ground truth such as object boxes [3,1] and object shapes [60,59]. Compared with annotating the oriented object boxes and 3D shapes which is extremely labor-costing, it could be easier to generate ground truth from a simulator. Zhang et al. [57] release the first holistic panoramic scene dataset with complete ground truth, rendering from synthetic scenes, while the panoramas lack realism and may set the barrier to deploy the algorithm into real-world. To minimize the domain gap between synthetic and real data, we render gravity aligned panoramas and depth images based on high-fidelity scene scan [41], then label layout, 3D object boxes and shapes accurately.\nIn general, the main contributions of our work can be summarized as follows:\n• We propose a new method using depth prior for simultaneously estimating object bounding boxes, shapes, and 3D room layout from a single RGB panorama, followed by a novel transformer-based context model. To our best knowledge, it is the first work using a transformer to enable the network to capture context information efficiently for holistic 3D scene understanding.\n• We introduce ReplicaPano, a real-world panoramic dataset comprising oriented bounding boxes, room layouts, and object meshes for panoramic 3D scene understanding.\n• The proposed method achieves state-of-the-art performance on both synthetic and real-world datasets." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b7", "b14", "b23", "b30", "b61", "b54", "b41", "b42", "b48", "b35", "b16", "b10", "b4", "b45", "b6", "b13", "b17", "b21", "b18", "b5", "b12", "b33", "b31", "b34", "b16", "b32", "b57", "b26", "b58", "b56", "b46", "b8", "b27", "b9", "b44", "b55", "b50", "b28", "b1", "b52", "b36", "b15", "b29", "b49", "b51", "b58", "b0", "b2", "b47", "b60", "b54", "b59", "b56", "b37", "b40" ], "table_ref": [], "text": "Single-View Scene Understanding Scene understanding from a single image is highly ill-posed and ambiguous because of the unknown scale and severe occlusion in the scene. Many works have been proposed to study room layout estimation, 3D object detection and pose estimation, and 3D object reconstruction. Early room layout estimation works often make cuboid assumption [8,15,24,31] or Manhattan assumption [62,55,42,43,49], while Pintore et al. [36] model room structure as a 3D mesh to exploit the possibility of estimating arbitrary room layout. Object detection works [17,11,5,46] aim to infer 3D bounding boxes and object poses from 2D representation, with a 2D object detection [7,14] stage. In terms of object reconstruction, CAD models are selected from a large dataset to match the 2D object proposals in [18,22,19], while [6,13,34,32,35] demonstrated that implicit neural representations outperform grid, point, and mesh-based representations in parameterizing geometry and seamlessly allow for learning priors over shapes. Some recent works start to solve multiple tasks together to exploit context information. CooP [17] introduces the target parameterizing and cooperative training scheme to solve for object poses and the layout of the indoor scene, but object shapes are absent. Total3D [33] is the first work to solve layout, 3D object detection and pose estimation, and object reconstruction jointly. Zhang et al. [58] proposes to improve the performance of all three tasks via implicit neural functions and graph convolutional networks. Liu et al. 
[27] further improves the visual quality of indoor scene reconstruction using implicit representation. All these aforementioned methods only work on perspective images, which lack enough information to better parse the entire scene. Zhang et al. [59] first introduced parsing indoor scenes from a 360 • full-view panorama. Then, the follow-up work [57] utilizes a deep learning-based framework that leverages image information and scene context to estimate objects' shapes, 3D poses and the room layout from a single panorama. Instead, we propose to incorporate a depth prior and design a transformer-based context module for the panoramic scene understanding task, which can fully explore spatial context information among different components in an indoor scene.\nTransformer Transformer [47,9,28] has been the dominant network in the field of NLP for a few years. Inspired by ViT [10], researchers have designed many efficient networks [45,56,51,29,2] to combine the advantages of both CNNs and transformers.\nThe review [53] shows that the transformer structure can better learn context information among multi-modal input data. CLIP [37] jointly trains the image encoder and text encoder at the pretraining stage and converts an image classification task into a text retrieval task at test time. Hu and Singh [16] combined image and text to conduct multi-modal multi-task training and achieved good results in 7 visual and text tasks. Liu et al. [30] utilize the attention mechanism in transformers to fuse the object features and point features iteratively to generate more accurate object detection results from a point cloud. Similarly, a conditional object query was used in [50] to fuse point cloud and image features to obtain better results on the 3D detection task. Given the notable advantage of transformers for multi-modal tasks, in this paper we introduce a transformer-based context module to facilitate holistic indoor scene understanding.\nPanoramic Dataset SUN360 [52] is the first real-world panoramic dataset used for place recognition; it was later annotated by Zhang et al. [59] for indoor scene understanding, but only room layouts and objects' axis-aligned bounding boxes are provided. 2D-3D-S [1] and Matterport3D [3] are published concurrently with real-world panoramas, but oriented object boxes and meshes are absent. In addition, there are some datasets [48,61,55] published recently for the purpose of depth estimation or layout estimation on panorama. Zheng et al. [60] propose a large photo-realistic panoramic dataset for structured 3D modeling, namely Structured3D, but the mesh ground truths of scenes and objects are not published. To tackle the lack of a panorama dataset with complete ground truths, the authors of [57] use iGibson [38] to synthesize 1500 panoramas with detailed 3D shapes, poses, semantics as well as room layout. However, a real-world panoramic indoor scene dataset containing all ground truth is still missing. To minimize the gap between synthetic and real-world data, we introduce a panoramic dataset rendered from real scans [41], containing 2,700 photo-realistic panoramas and high-fidelity depth images, accurately annotated room layouts, and object bounding boxes and shapes. To our best knowledge, it is the first real-world image dataset with full ground truth for holistic scene understanding."
}, { "figure_ref": [ "fig_0" ], "heading": "Our Method", "publication_ref": [], "table_ref": [], "text": "The proposed pipeline simultaneously predicts the room layout, 3D object bounding boxes, and shapes with a depth estimation sub-model. As shown in Fig. 2, we first estimate the whole-room depth map from the input panorama to facilitate the following modules. And the depth map will be converted into a point cloud, which can be used in the Object Detection Network (ODN) to jointly predict 3D object boxes and shape codes. In the meantime, the layout is recovered as a triangle mesh from a single panorama through the Layout Estimation Network (LEN). In this paper, we exploit the transformer's intrinsic advantages and scalability in modeling different modalities and tasks, making it easier to learn appropriate spatial relationships among objects and layout. Features from layout, image, and 3D objects are fed into the context model to better estimate representations and relations among objects and layout. Finally, the room layout and object shapes are recovered as mesh, then scaled and placed into appropriate locations to reconstruct the full scene. We elaborate on the details of each module in this section." }, { "figure_ref": [], "heading": "Layout Estimation", "publication_ref": [ "b35", "b35", "b54", "b3", "b24", "b25" ], "table_ref": [], "text": "As we want to relax the geometrical constraints applied to the output layout model (e.g., forcing vertical walls and/or planar walls and ceilings), we follow Pintore et. al. [36] to map panoramic image to a triangle mesh representation (V, E, F ), where V (n, 3) is the set of n = 642 vertices, E(m, 2) is the set of m edges, each connecting two vertices, and F (n, d) are the image feature vectors of dimension d = 288 associated to vertices, denoted as F layout in the following. Two Graph Convolution Network(GCN) blocks deform an initial tessellated sphere by offsetting its vertices, driven by associating image features to mesh vertices in a coarse-to-fine form. Unlike [36] only extracts features from equirectangular view, we additionally extract features from perspective views (e.g., ceiling and flooring views) through Equirectangular-to-Perspective (E2P) conversion. Then, E2P-based feature fusion [55] is employed to fuse two types of features and get gravity aligned features. Specifically, we use ResNet-18 as the architecture for both equirectangular view and perspective views, the input dimension of image I is 3 × 512 × 1024, the output dimension of fused global image feature F image is 512 × 16 × 32. The ablation experiment in Sec. 4.3 shows that the accuracy of room layout benefits from perspective features.\nDrawing on the previous multi-modal transformer models [25,26], in order to fully associate the layout feature with the image feature, we inject the global image feature F image and layout features F layout from the first GCN block into the Context module, which will be elaborated in Sec. 3.3. Then the refined layout representation is sent into the layout head (the second GCN block). As a result, the second block returns the final deformed vertices V * (4n -6, 3)." }, { "figure_ref": [], "heading": "3D Object Detection and Mesh Generation", "publication_ref": [ "b29", "b22", "b29", "b19", "b20", "b31", "b3", "b56" ], "table_ref": [], "text": "Our ODN adopts a similar structure of Group-Free [30] to accurately detect 3D objects in a point cloud. 
We first employ Unifuse [23] as our panoramic depth estimation network to generate a spherical depth map of the scene, then convert it into a dense point cloud that is rapidly downsampled through Fibonacci sampling. The following steps are the same as [30]: we feed the downsampled point cloud S ∈ R N ×3 into the backbone network and the Initial Object Sampling module to get point cloud features and K initial object candidates, denoted as F point ∈ R do×M and F object ∈ R do×K respectively, where K = 256, M = 1024 and the feature dimension d o = 288. To automatically learn the contribution of all points to each object, these intermediate results will serve as point tokens and object tokens in the next subsection.\nInspired by [20,21], we observe that shape information is embedded in the object feature in the process of 3D object detection. Thus, in addition to the existing object prediction head, we add a shape prediction head to jointly predict the shape latent code and bounding box of each candidate object. The shape latent code is supervised by a pretrained autoencoder of object meshes; here we choose ONet [32] to serve as the autoencoder because of the computation-friendly size of its object shape latent code (a 1D vector of size 512), which can be easily used to construct the shape loss during training. The ONet is pre-trained on ShapeNet [4] and refined on iGibson-Synthetic [57] with data augmentation." }, { "figure_ref": [ "fig_1" ], "heading": "Transformer-based Context Module", "publication_ref": [], "table_ref": [], "text": "Given a single panorama, our goal is to further explore the intrinsic relationships among different components of the indoor scene. We designed the transformer-based context module with a multi-layer encoder structure to extract better representations of objects and room layouts from different features. As shown in Fig. 3, the position embeddings of point, object, layout, and global image are computed by applying independent linear layers on the parameterization vector of a point (x, y, z), a 3D box (x, y, z, l, h, w), a layout vertex (x, y, z), and a unit spherical coordinate (cos φ sin θ, sin φ, cos φ cos θ), respectively. The global image feature F image along with the point feature F point , object feature F object , and layout feature F layout are pointwise summed with their position embeddings and then concatenated together to act as the input for the context module:\nZ = [F_{image}, F_{layout}, F_{point}, F_{object}]. \quad (1)\nThe context module is composed of 6 stacked transformer encoder layers; each layer includes a multi-head self-attention (MHSA) layer and a feed-forward network (FFN). MHSA is the foundation of a transformer, allowing the model to jointly attend to information from different representation subspaces. In a self-attention module, the embedding Z goes through three projection matrices (W_Q, W_K, W_V) to generate three embeddings Q (query), K (key) and V (value):\nQ = Z W_Q, \quad K = Z W_K, \quad V = Z W_V. \quad (2)\nThe output of self-attention is the aggregation of the values weighted by the attention weights. In our case, we propose a random token masking scheme to help the encoder be robust and effective in handling situations with heavy occlusions, formulated as:\n\mathrm{MSA}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^T}{\sqrt{d}} \odot M\right) V, \quad (3)\nwhere d is the dimension of the query embedding and M is the specific masking matrix. Multiple self-attention layers are stacked and their concatenated outputs are fused by the weighting matrices W_h to form MHSA:\n\mathrm{MHSA}(Q, K, V) = \sum_{h=1}^{H} \mathrm{MSA}(Q, K, V)\, W_h. \quad (4)\nAfter iterative refinement by MHSA, the resulting embeddings of different stages are fed into different prediction heads to generate the results of each task, which are ensembled to produce superior results. " }, { "figure_ref": [ "fig_2" ], "heading": "Loss Function", "publication_ref": [ "b35", "b29" ], "table_ref": [], "text": "In this section, we summarize the learning targets with the corresponding loss functions, and describe our joint loss for end-to-end training. Layout Loss At first, we adopt the loss function from Pintore et al. [36] to define the layout loss, which measures the prediction with respect to the ground truth layout:\nL_{layout} = \lambda_p L_{pos} + \lambda_n L_{norm} + \lambda_e L_{sharp}, \quad (5)\nwhere L_* and λ_* are the losses and coefficients for vertex position, surface normal, and edge sharpness, respectively. Object Loss The loss for ODN is similar to [30], including the sampling loss L_{samp}, objectness loss L_{objness}, classification loss L_{cls}, center offset loss L_{cen}, size classification loss L_{size cls}, and size offset loss L_{size off}. Additionally, 1) since we aim to estimate the oriented bounding box of the object, the box's heading prediction with a cross-entropy loss L_{head cls} and a smooth-L1 loss L_{head off} is included; 2) the shape code prediction loss L_{shape} is added. Let \hat{θ} denote the estimated shape codes; we use a smooth-L1 loss to minimize the errors between predictions and ground truth:\nL_{shape} = \frac{1}{K} \sum_{k=1}^{K} \ell_1(\hat{θ} - θ), \quad (6)\nwhere the ground truths θ are given by the pre-trained autoencoder. For the sake of brevity, these losses will be referred to as a set {L_{object loss}}. We define the object estimation losses on all encoder layers in the context module, which are averaged to form the final loss:\nL_{object} = \frac{1}{L} \sum_{l=1}^{L} L^{l}_{obj}, \quad L^{l}_{obj} = \sum_{x \in \{L_{object loss}\}} \beta_x L_x. \quad (7)\nEach β_x is the loss weight corresponding to the specific object loss.\nPhysical Violation Loss In order to produce a physically plausible scene and regularize the relationships between objects and layout, we add a physical violation loss as part of the joint loss. As shown in Fig. 4, when the bounding box of an object intersects with the layout (i.e., walls, ceiling, or floor), the physical violation loss is calculated with the Manhattan distance to the layout. Some types of objects do intersect with the layout, such as windows and doors, so the physical constraints are only applied to categories that should never intersect with the layout. The physical violation loss is defined as:\nL_{physic} = \frac{1}{K} \sum_{k=1}^{K} \mathbb{1}_{ins}\, L_{3d\,violation}, \quad L_{3d\,violation} = \sum_{i=1}^{8} \big(\mathrm{relu}(x^k_i - \max(X_L)) + \mathrm{relu}(\min(X_L) - x^k_i)\big), \quad (8)\nwhere x^k_i is the i-th corner of the k-th object bounding box and X_L is the set of vertices of the layout mesh. The relu is applied to consider only the outside corners. \mathbb{1}_{ins} has a value of 1 if the bounding box is not completely outside of the layout, and a value of 0 otherwise. All the loss functions in joint training are combined as:\nL = \sigma_l L_{layout} + \sigma_o L_{object} + \sigma_p L_{physic}. \quad (9)" }, { "figure_ref": [], "heading": "Panoramic Dataset", "publication_ref": [ "b40", "b53", "b39", "b3", "b11", "b43" ], "table_ref": [], "text": "For now, a realistic panoramic dataset with all ground truth is still missing. To benefit the community, we publish ReplicaPano, a real-world panoramic scene understanding dataset with full ground truth. With the help of the high-fidelity textured mesh provided by the Replica dataset [41], we render photo-realistic panoramas from 27 rooms diversely furnished by 3D objects. 
For each room, we randomly render 100 pairs of equirectangular RGB and depth images, all the images are gravity aligned and the height of the camera center is 1.6m. Given a panorama, we utilize PanoAnnotator [54] to accurately label the room layout. Based on the colored point cloud and semantic segmentation information provided by Replica, we semi-automatically annotate the bounding box for each object in each room. Following the NYU-37 object labels [40], we select 25 categories of objects that are commonly seen in indoor scenes. Because the complete object mesh is not given in Replica, we look through large-scale 3D shape datasets ShapeNet [4], 3D-FUTURE [12] and ReplicaCAD [44] to match the object observed in the image. Finally, we get 2,700 photorealistic panoramas with depth images, room layouts, 3D object bounding boxes, and object meshes. More samples of ReplicaPano can be found in the supplementary files." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "In this section, we compare our model with both holistic scene understanding and single-task methods and perform ablation studies to analyze the effectiveness of the key components." }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [ "b37", "b56", "b32", "b57", "b56", "b41", "b42", "b35", "b22", "b31" ], "table_ref": [], "text": "Dataset. We use two panoramic datasets in our experiments. 1) iGibson-Synthetic. The panoramic images are synthesized using the iGibson simulator [38]. Same as the setting in DeepPanoContext [57], we use 10 scenes for training and 5 scenes for testing. 2) ReplicaPano. To demonstrate our work's efficiency in real-world scenes, among 27 rooms, we use 16 for training, 4 for validation, and 7 for testing. Metrics. The results of each sub-task are evaluated with the metrics used in previous works [33,58,57]. Object detection is measured using mean average precision (mAP) with the threshold of 3D bounding box IoU set at 0.15. The room layout estimation error is tested by standard metrics for indoor layout reconstruction (i.e., 2D-IoU and 3D-IoU) followed by Pintore et.al [42,43,36]. Since the object mesh generation in our method is significantly different from other scene understanding work, we only compare the result with that of others qualitatively. Implementation. The borrowed monocular depth estimation network (i.e., Unifuse [23]) and 3D auto-encoder network (i.e., ONet [32]) are finetuned individually on each dataset from the weights pretrained on Matterport3D and ShapeNet, respectively. The input point cloud for the object detection network is sampled to 50K by Fibonacci sampling from the estimated depth. The auto-encoder network takes 300 points from the surface of each watertight model as input and embeds each sample as a vector of size 512. In the context model, ten percent of tokens are randomly masked. We trained object detection, layout estimation, and mesh generation jointly with randomly initialized parameters on a single NVIDIA V100 GPU. More training details are given in the supplementary files." }, { "figure_ref": [ "fig_3" ], "heading": "Comparisons with State-of-the-art Methods", "publication_ref": [ "b56", "b32", "b57", "b29", "b41", "b42", "b35", "b48", "b35" ], "table_ref": [], "text": "Object Detection We compare our 3D object detection results with previous state-of-the-art holistic scene understanding and single-task learning methods. 
DeepPanoContext [57] is the only method to achieve total 3D scene understanding directly on panoramic image. Total3D [33] and IM3D [58] that work with perspective image are extended to panorama for comparison on iGibson-Synthetic dataset. In order to show the effectiveness of depth prior in the scene understanding task, We extend DeepPanoContext with an estimated depth map as follows: we use PointNet++ to extract the object geometry feature and concatenate this feature with other appearance features to estimate 3D bounding boxes. As for the single task comparison, the point-based object detection method Group-Free [30] is chosen as baseline. The results of each method on iGibson-Synthetic are shown in Tab. 1. Since DeepPanoContext shows higher performance than Total3D and IM3D, we only compare it and its extension on ReplicaPano, results can be found in Tab. 2.\nAs shown in Tab. 1 and Tab. 2, our proposed method consistently outperforms both holistic understanding methods and the point-based detection baseline on most categories and the average mAP. We can see that DeepPanoContext has been significantly improved by integrating the estimated depth map, which indicates the depth prior is absolutely necessary. The table shows our method gains better results for categories that are closely related to room layout, such as door and rug, since the transformer-based context model encourages rational spatial relationships among objects and room layout. For a few categories such as floor lamp and chair, DeepPanoContext-depth performs better, the gap between these categories owns to two factors: 1) The depth estimation model failed to recover tiny structure, for example, the pole of a floor lamp, which deteriorates the performance of our method. 2) DeepPanoContext-depth uses a finetuned 2D detector to initialize the estimation and achieve good performance for heavily occluded objects (e.g., chairs are occluded behind a table). Improving depth quality and introducing a 2D detector into our method may help to improve the accuracy further. Layout Estimation Previous panoramic scene understanding work does not give quantitative analysis in terms of the layout estimation, thus we only compare our method with recent state-of-the-art layout estimation methods [42, Evaluation metrics include 2D and 3D intersection-over-union (IoU) following [42,43,36]. 49,36]. As shown in Tab. 3, our method achieves the best performance among other baselines, indicating joint training with the context model helps to improve the layout estimation from a single panorama. Holistic Scene Reconstruction Qualitative comparison with DeepPanoContext and DeepPanoContext-depth are demonstrated in Fig. 5, our method obtains the best indoor scene reconstruction, including the object pose, room layout, and object shape reconstruction." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b22", "b38" ], "table_ref": [], "text": "In this section, we conduct some ablation studies on iGibson-Synthetic to clarify the importance of each component in our method. Impact of depth quality We first investigate how the accuracy of the depth map impacts the final 3D object detection. Two depth estimation networks, Unifuse [23] and PanoFormer [39] are involved in Tab. 4, which reveal that object detection results benefit from higher depth quality. In addition, we observe that even if the proposed method uses a depth estimator without finetuning, the performance still slightly outpasses that of DeepPanoContext (Tab. 
1), which employed a 2D detector for initialization.
Effect of architecture and loss To figure out the effect of each module, we provide detailed ablation experiments on object detection and layout estimation. The results are summarized in Tab. 5. The first two rows show that the room layout estimation benefits from perspective features. The third row indicates that introducing joint training and the physical violation loss consistently improves the results of both object detection and layout estimation. From the fourth and fifth rows, we can conclude that, with the help of the global image tokens and the token masking strategy, our method generates better representations and relationships among objects and the room layout, thus obtaining better results on each task." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a new method for end-to-end 3D indoor scene understanding from a single RGB panoramic image with a depth prior. To better learn the context information in the panorama, we use a Transformer-based context model to learn the relationship between objects and the room layout. In addition, we introduce a new real-world dataset for panoramic holistic scene understanding.
Experiments demonstrate that our method achieves state-of-the-art performance on both synthetic and real-world datasets." } ]
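To make the Transformer-based context model mentioned above more concrete, the NumPy sketch below mirrors the masked self-attention of Eqs. (2)-(4): tokens Z are projected to Q, K, V, the softmaxed attention map is element-wise multiplied by a 0/1 mask M (the token masking strategy), and the per-head outputs are aggregated. The head aggregation in Eq. (4) is ambiguous in the extracted formula (sum versus concatenation), so a sum of projected heads is assumed here; this is an illustrative reading, not the authors' implementation.

```python
import numpy as np

def masked_self_attention(Z, Wq, Wk, Wv, M):
    """One head of Eqs. (2)-(3): Q = Z Wq, K = Z Wk, V = Z Wv, then
    softmax(Q K^T / sqrt(d)) is element-wise multiplied by the 0/1 mask M."""
    Q, K, V = Z @ Wq, Z @ Wk, Z @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (N, N) token-to-token scores
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)                # row-wise softmax
    return (attn * M) @ V                              # masking as written in Eq. (3)

def multi_head_self_attention(Z, heads, M):
    """Eq. (4): aggregate H heads; each entry of `heads` is (Wq, Wk, Wv, Wh)."""
    return sum(masked_self_attention(Z, Wq, Wk, Wv, M) @ Wh
               for Wq, Wk, Wv, Wh in heads)

# Toy usage: 6 tokens of width 16, 2 heads, one token hidden from all queries.
rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 16))
heads = [tuple(rng.normal(size=(16, 16)) for _ in range(4)) for _ in range(2)]
M = np.ones((6, 6))
M[:, 5] = 0
print(multi_head_self_attention(Z, heads, M).shape)   # (6, 16)
```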
Panoramic images enable a deeper understanding and more holistic perception of the 360° surrounding environment, and they naturally encode richer scene context information than standard perspective images. Previous work has made considerable effort to solve the scene understanding task in a bottom-up manner, so each sub-task is processed separately and few correlations among the sub-tasks are explored. In this paper, we propose a novel method using a depth prior for holistic indoor scene understanding, which recovers the objects' shapes, oriented bounding boxes and the 3D room layout simultaneously from a single panorama. In order to fully utilize the rich context information, we design a transformer-based context module to predict the representations of, and relationships among, the components of the scene. In addition, we introduce a real-world dataset for scene understanding, including photo-realistic panoramas, high-fidelity depth images, accurately annotated room layouts, and oriented object bounding boxes and shapes. Experiments on the synthetic and real-world datasets demonstrate that our method outperforms previous panoramic scene understanding methods in terms of both layout estimation and 3D object detection.
PanoContext-Former: Panoramic Total Scene Understanding with a Transformer
[ { "figure_caption": "Figure 2 .2Figure 2. The framework of the proposed holistic scene understanding pipeline. (a) The LEN module maps an panorama to a watertight 3D mesh of the room layout. (b) The ODN module jointly solves the oriented object bounding box and shape based on the estimated depth map of the indoor scene. (c) The Context module integrates various embeddings from LEN and ODN modules to fully explore the relationship among each component of the scene. Finally, refined features go through different heads, and the layout, oriented object bounding boxes, and shapes are recovered to reconstruct the full scene.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Architecture of the Context module.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Object-Layout physical violation example. The physical violation loss is calculated only when the object intersection with layout (b). There is no physical constraint when the object is completely inside (a) or outside the layout (c).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative comparisons on 3D object detection and scene reconstruction. In the top four rows, we compare our object detection results with DeepPanoContext (DPC), DeepPanoContext with depth map (DPC-depth), and ground truth in the panoramic view. The color of the bounding boxes represents their categories. The bottom four rows show the results of scene reconstruction, with two magnified object reconstruction results presented on the right-hand side. Note that the first three columns are the results on iGibson-Synthetic, and the last three columns are the results on ReplicaPano.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": ".65 31.79 43.13 68.42 10.27 16.42 34.42 20.83 62.38 33.78 37.45 Im3D-Pano 33.08 72.15 37.43 70.45 75.20 11.58 6.06 43.28 18.99 78.46 41.02 44.34 DeepPanoContext 27.78 73.96 46.85 74.22 75.29 21.43 20.69 52.03 50.39 77.09 59.91 52.69 DeepPanoContext-depth 39.41 78.03 51.44 75.24 81.46 51.97 60.01 55.56 42.58 79.99 60.07 61.43 Group-Free 27.83 96.04 61.57 84.69 87.69 82.20 27.20 56.46 77.99 79.21 8.29 62.65 Ours 38.47 98.15 66.61 82.77 89.55 87.49 40.31 59.53 80.71 83.42 13.83 67.35 Comparisons of object detection on iGibson-Synthetic with state-of-the-art. We use mean average precisions with 3D IoU threshold 0.15 and evaluate 11 common object categories following [33, 58, 57]. DeepPanoContext-depth is the extended version with depth map. 
.49 11.42 70.39 32.38 20.02 9.10 30.13 82.24 63.22 12.19 38.36 Group-Free 59.56 42.21 52.83 34.07 19.65 32.90 80.59 51.47 44.64 52.76 47.07 Ours 63.69 46.74 54.02 30.41 20.04 48.53 80.96 46.42 51.53 47.82 49.02", "figure_data": "43,", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparisons of object detection on ReplicPano.", "figure_data": "MethodiGibson-Synthetic 2D-IoU↑ 3D-IoU↑ 2D-IoU↑ 3D-IoU↑ ReplicaPanoHorizionNet89.2289.1884.5683.59HoHoNet90.1389.9784.7684.05Led2Net90.3990.3084.6283.91Deep3dLayout 90.6590.4084.8783.50Ours92.2492.0485.9884.58", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparisons of layout estimation on iGibson-Synthetic and ReplicaPano.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The impact of depth accuracy. Evaluation metrics include absolute relative error (Abs. Rel.) and root mean square error (RMSE) for depth and mAP for object detection. Panoformerpretrain is pre-trained on Matterport3D[3], while *-finetune means the depth estimator gets finetuned on iGibson-Synthetic.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The ablation studies on iGibson-Synthetic dataset, demonstrates how our proposed designs improve the accuracy on object detection and layout estimation. We show in the last row the full architecture setup.", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
Yuan Dong; Chuan Fang; Liefeng Bo; Zilong Dong; Ping Tan
[ { "authors": "Iro Armeni; Sasha Sax; Silvio Amir R Zamir; Savarese", "journal": "", "ref_id": "b0", "title": "Joint 2d-3d-semantic data for indoor scene understanding", "year": "2017" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b1", "title": "End-toend object detection with transformers", "year": "2020" }, { "authors": "Angel Chang; Angela Dai; Thomas Funkhouser; Maciej Halber; Matthias Niessner; Manolis Savva; Shuran Song; Andy Zeng; Yinda Zhang", "journal": "", "ref_id": "b2", "title": "Matterport3d: Learning from rgb-d data in indoor environments", "year": "2017" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b3", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Yixin Chen; Siyuan Huang; Tao Yuan; Siyuan Qi; Yixin Zhu; Song-Chun Zhu", "journal": "", "ref_id": "b4", "title": "Holistic++ scene understanding: Single-view 3d holistic scene parsing and human pose estimation with human-object interaction and physical commonsense", "year": "2019" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "", "ref_id": "b5", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "Jifeng Dai; Haozhi Qi; Yuwen Xiong; Yi Li; Guodong Zhang; Han Hu; Yichen Wei", "journal": "", "ref_id": "b6", "title": "Deformable convolutional networks", "year": "2017" }, { "authors": "Saumitro Dasgupta; Kuan Fang; Kevin Chen; Silvio Savarese", "journal": "", "ref_id": "b7", "title": "Delay: Robust spatial layout estimation for cluttered indoor scenes", "year": "2016" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b9", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Yilun Du; Zhijian Liu; Hector Basevi; Ales Leonardis; Bill Freeman; Josh Tenenbaum; Jiajun Wu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b10", "title": "Learning to exploit stability for 3d scene parsing", "year": "2018" }, { "authors": "Huan Fu; Rongfei Jia; Lin Gao; Mingming Gong; Binqiang Zhao; Steve Maybank; Dacheng Tao", "journal": "International Journal of Computer Vision", "ref_id": "b11", "title": "3d-future: 3d furniture shape with texture", "year": "2021" }, { "authors": "Thibault Groueix; Matthew Fisher; Vladimir G Kim; Bryan C Russell; Mathieu Aubry", "journal": "", "ref_id": "b12", "title": "A papier-mâché approach to learning 3d surface generation", "year": "2018" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b13", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Varsha Hedau; Derek Hoiem; David Forsyth", "journal": "IEEE", "ref_id": "b14", "title": "Recovering the spatial layout of cluttered rooms", "year": "2009" }, { "authors": "Ronghang Hu; Amanpreet Singh", "journal": "", "ref_id": "b15", "title": "Unit: Multimodal multitask learning with a unified 
transformer", "year": "2021" }, { "authors": "Siyuan Huang; Siyuan Qi; Yinxue Xiao; Yixin Zhu; Ying Nian Wu; Song-Chun Zhu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Cooperative holistic scene understanding: Unifying 3d object, layout, and camera pose estimation", "year": "2018" }, { "authors": "Siyuan Huang; Siyuan Qi; Yixin Zhu; Yinxue Xiao; Yuanlu Xu; Song-Chun Zhu", "journal": "", "ref_id": "b17", "title": "Holistic 3d scene parsing and reconstruction from a single rgb image", "year": "2018" }, { "authors": "Moos Hueting; Pradyumna Reddy; Vladimir Kim; Ersin Yumer; Nathan Carr; Niloy Mitra", "journal": "", "ref_id": "b18", "title": "Seethrough: finding chairs in heavily occluded indoor scene images", "year": "2017" }, { "authors": "Muhammad Zubair; Irshad ; Thomas Kollar; Michael Laskey; Kevin Stone; Zsolt Kira", "journal": "IEEE", "ref_id": "b19", "title": "Centersnap: Single-shot multi-object 3d shape reconstruction and categorical 6d pose and size estimation", "year": "2022" }, { "authors": "Muhammad Zubair Irshad; Sergey Zakharov; Rares Ambrus; Thomas Kollar; Zsolt Kira; Adrien Gaidon", "journal": "Springer", "ref_id": "b20", "title": "Shapo: Implicit representations for multi-object shape, appearance, and pose optimization", "year": "2022" }, { "authors": "Hamid Izadinia; Qi Shan; Steven M Seitz", "journal": "", "ref_id": "b21", "title": "Im2cad", "year": "2017" }, { "authors": "Hualie Jiang; Zhe Sheng; Siyu Zhu; Zilong Dong; Rui Huang", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b22", "title": "Unifuse: Unidirectional fusion for 360 panorama depth estimation", "year": "2021" }, { "authors": "Martial David C Lee; Takeo Hebert; Kanade", "journal": "IEEE", "ref_id": "b23", "title": "Geometric reasoning for single image structure recovery", "year": "2009" }, { "authors": "Kevin Lin; Lijuan Wang; Zicheng Liu", "journal": "", "ref_id": "b24", "title": "End-to-end human pose and mesh reconstruction with transformers", "year": "2021" }, { "authors": "Kevin Lin; Lijuan Wang; Zicheng Liu", "journal": "", "ref_id": "b25", "title": "Mesh graphormer", "year": "2021" }, { "authors": "Haolin Liu; Yujian Zheng; Guanying Chen; Shuguang Cui; Xiaoguang Han", "journal": "Springer", "ref_id": "b26", "title": "Towards high-fidelity single-view holistic reconstruction of indoor scenes", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b27", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b28", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Ze Liu; Zheng Zhang; Yue Cao; Han Hu; Xin Tong", "journal": "", "ref_id": "b29", "title": "Group-free 3d object detection via transformers", "year": "2021" }, { "authors": "Arun Mallya; Svetlana Lazebnik", "journal": "", "ref_id": "b30", "title": "Learning informative edge maps for indoor scene layout prediction", "year": "2015" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b31", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Yinyu Nie; Xiaoguang Han; Shihui Guo; Yujian Zheng; Jian 
Chang; Jian Jun Zhang", "journal": "", "ref_id": "b32", "title": "Total3dunderstanding: Joint layout, object pose and mesh reconstruction for indoor scenes from a single image", "year": "2020" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b33", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Songyou Peng; Michael Niemeyer; Lars Mescheder; Marc Pollefeys; Andreas Geiger", "journal": "Springer", "ref_id": "b34", "title": "Convolutional occupancy networks", "year": "2020" }, { "authors": "Giovanni Pintore; Eva Almansa; Marco Agus; Enrico Gobbetti", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b35", "title": "Deep3dlayout: 3d reconstruction of an indoor layout from a spherical panoramic image", "year": "2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b36", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Bokui Shen; Fei Xia; Chengshu Li; Roberto Martín-Martín; Linxi Fan; Guanzhi Wang; Claudia Pérez-D'arpino; Shyamal Buch; Sanjana Srivastava; Lyne Tchapmi", "journal": "IEEE", "ref_id": "b37", "title": "igibson 1.0: A simulation environment for interactive tasks in large realistic scenes", "year": "2021" }, { "authors": "Zhijie Shen; Chunyu Lin; Kang Liao; Lang Nie; Zishuo Zheng; Yao Zhao", "journal": "", "ref_id": "b38", "title": "Panoformer: Panorama transformer for indoor 360 {\\deg} depth estimation", "year": "2022" }, { "authors": "Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus", "journal": "ECCV", "ref_id": "b39", "title": "Indoor segmentation and support inference from rgbd images", "year": "2012" }, { "authors": "Julian Straub; Thomas Whelan; Lingni Ma; Yufan Chen; Erik Wijmans; Simon Green; Jakob J Engel; Raul Mur-Artal; Carl Ren; Shobhit Verma", "journal": "", "ref_id": "b40", "title": "The replica dataset: A digital replica of indoor spaces", "year": "2019" }, { "authors": "Cheng Sun; Chi-Wei Hsiao; Min Sun; Hwann-Tzong Chen", "journal": "", "ref_id": "b41", "title": "Horizonnet: Learning room layout with 1d representation and pano stretch data augmentation", "year": "2019" }, { "authors": "Cheng Sun; Min Sun; Hwann-Tzong Chen", "journal": "", "ref_id": "b42", "title": "Hohonet: 360 indoor holistic understanding with latent horizontal features", "year": "2021" }, { "authors": "Andrew Szot; Alex Clegg; Eric Undersander; Erik Wijmans; Yili Zhao; John Turner; Noah Maestre; Mustafa Mukadam; Devendra Chaplot; Oleksandr Maksymets; Aaron Gokaslan; Vladimir Vondrus; Sameer Dharur; Franziska Meier; Wojciech Galuba; Angel Chang; Zsolt Kira; Vladlen Koltun; Jitendra Malik; Manolis Savva; Dhruv Batra", "journal": "", "ref_id": "b43", "title": "Habitat 2.0: Training home assistants to rearrange their habitat", "year": "2021" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "PMLR", "ref_id": "b44", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "Jonathan Tremblay; Thang To; Balakumar Sundaralingam; Yu Xiang; Dieter Fox; Stan Birchfield", "journal": "", "ref_id": "b45", "title": "Deep object pose estimation for semantic robotic grasping of household 
objects", "year": "2018" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b46", "title": "Attention is all you need", "year": "2017" }, { "authors": " Fu-En; Hou-Ning Wang; Hsien-Tzu Hu; Juan-Ting Cheng; Shang-Ta Lin; Meng-Li Yang; Hung-Kuo Shih; Min Chu; Sun", "journal": "", "ref_id": "b47", "title": "Self-supervised learning of depth and camera motion from 360 {\\deg} videos", "year": "2018" }, { "authors": " Fu-En; Yu-Hsuan Wang; Min Yeh; Wei-Chen Sun; Yi-Hsuan Chiu; Tsai", "journal": "", "ref_id": "b48", "title": "Led2-net: Monocular 360deg layout estimation via differentiable depth rendering", "year": "2021" }, { "authors": "Yikai Wang; Tengqi Ye; Lele Cao; Wenbing Huang; Fuchun Sun; Fengxiang He; Dacheng Tao", "journal": "", "ref_id": "b49", "title": "Bridged transformer for vision and point cloud 3d object detection", "year": "2022" }, { "authors": "Haiping Wu; Bin Xiao; Noel Codella; Mengchen Liu; Xiyang Dai; Lu Yuan; Lei Zhang", "journal": "", "ref_id": "b50", "title": "Cvt: Introducing convolutions to vision transformers", "year": "2021" }, { "authors": "Jianxiong Xiao; Krista A Ehinger; Aude Oliva; Antonio Torralba", "journal": "IEEE", "ref_id": "b51", "title": "Recognizing scene viewpoint using panoramic place representation", "year": "2012" }, { "authors": "Peng Xu; Xiatian Zhu; David A Clifton", "journal": "", "ref_id": "b52", "title": "Multimodal learning with transformers: A survey", "year": "2022" }, { "authors": "Shang-Ta Yang; Chi-Han Peng; Peter Wonka; Hung-Kuo Chu", "journal": "", "ref_id": "b53", "title": "Panoannotator: A semi-automatic tool for indoor panorama layout annotation", "year": "2018" }, { "authors": "Shang-Ta Yang; Fu-En Wang; Chi-Han Peng; Peter Wonka; Min Sun; Hung-Kuo Chu", "journal": "", "ref_id": "b54", "title": "Dula-net: A dual-projection network for estimating room layouts from a single rgb panorama", "year": "2019" }, { "authors": "Kun Yuan; Shaopeng Guo; Ziwei Liu; Aojun Zhou; Fengwei Yu; Wei Wu", "journal": "", "ref_id": "b55", "title": "Incorporating convolution designs into visual transformers", "year": "2021" }, { "authors": "Cheng Zhang; Zhaopeng Cui; Cai Chen; Shuaicheng Liu; Bing Zeng; Hujun Bao; Yinda Zhang", "journal": "", "ref_id": "b56", "title": "Deeppanocontext: Panoramic 3d scene understanding with holistic scene context graph and relation-based optimization", "year": "2021" }, { "authors": "Cheng Zhang; Zhaopeng Cui; Yinda Zhang; Bing Zeng; Marc Pollefeys; Shuaicheng Liu", "journal": "", "ref_id": "b57", "title": "Holistic 3d scene understanding from a single image with implicit representation", "year": "2021" }, { "authors": "Yinda Zhang; Shuran Song; Ping Tan; Jianxiong Xiao", "journal": "Springer", "ref_id": "b58", "title": "Panocontext: A whole-room 3d context model for panoramic scene understanding", "year": "2014" }, { "authors": "Jia Zheng; Junfei Zhang; Jing Li; Rui Tang; Shenghua Gao; Zihan Zhou", "journal": "Springer", "ref_id": "b59", "title": "Structured3d: A large photo-realistic dataset for structured 3d modeling", "year": "2020" }, { "authors": "Nikolaos Zioulis; Antonis Karakottas; Dimitrios Zarpalas; Petros Daras", "journal": "", "ref_id": "b60", "title": "Omnidepth: Dense depth estimation for indoors spherical panoramas", "year": "2018" }, { "authors": "Chuhang Zou; Alex Colburn; Qi Shan; Derek Hoiem", "journal": "", "ref_id": "b61", "title": 
"Layoutnet: Reconstructing the 3d room layout from a single rgb image", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 357.91, 312.75, 187.2, 9.83 ], "formula_id": "formula_0", "formula_text": "Z = [F image , F layout , F point , F object ].(1)" }, { "formula_coordinates": [ 4, 348.48, 448.14, 192.76, 11.05 ], "formula_id": "formula_1", "formula_text": "Q = ZW Q , K = ZW K , V = ZW V . (2" }, { "formula_coordinates": [ 4, 541.24, 450.56, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 322.31, 537.84, 222.8, 25.24 ], "formula_id": "formula_3", "formula_text": "MSA(Q, K, V) = sof tmax QK T √ d ⊙ M V,(3)" }, { "formula_coordinates": [ 4, 333.11, 628.22, 212, 30.55 ], "formula_id": "formula_4", "formula_text": "MHSA(Q, K, V) = H h=1 MSA(Q, K, V)W h . (4)" }, { "formula_coordinates": [ 5, 59.89, 459.43, 226.47, 9.65 ], "formula_id": "formula_5", "formula_text": "L layout = λ p * L pos + λ n * L norm + λ e * L sharp . (5)" }, { "formula_coordinates": [ 5, 112.08, 636.54, 174.28, 30.55 ], "formula_id": "formula_6", "formula_text": "L shape = 1 K K k=1 ℓ 1 θ -θ ,(6)" }, { "formula_coordinates": [ 5, 359.42, 257.1, 185.69, 59.04 ], "formula_id": "formula_7", "formula_text": "L object = 1 L L l=1 L l obj , L l obj = x∈{L object loss } β x * L x .(7)" }, { "formula_coordinates": [ 5, 341.82, 484.94, 203.29, 81.78 ], "formula_id": "formula_8", "formula_text": "L physic = 1 K K k=1 1 ins L 3d violation , L 3d violation = 8 i=1 (relu(x k i -max(X L )) + relu(min(X L ) -x k i )),(8)" }, { "formula_coordinates": [ 5, 324.03, 655.19, 221.08, 9.65 ], "formula_id": "formula_9", "formula_text": "L = σ l * L layout + σ o * L object + σ p * L physic .(9)" } ]
2023-05-21
[ { "figure_ref": [ "fig_33" ], "heading": "List of Tables ", "publication_ref": [ "b23", "b32", "b28", "b23", "b32", "b28", "b11", "b23", "b32", "b28" ], "table_ref": [ "tab_6", "tab_1", "tab_1", "tab_1", "tab_1", "tab_1" ], "text": "Table 5.\n1 Comparison results of [18], [27], [23], and our method on Recall, Precision, and F-Score. . . . . . . . . . . . . . . . . . . . . . . . . Table 5.2 Comparison results of [18], [27], [23], and our method based on the number of dendrite cores detected. . . . . . . . . . . . . . . . . Table 5.3 Ablation study on ESD, HSD, and HSR. We show the result for ESD in the first row, ESD + HSD in the second row and ESD + HSD + HSR in the third row. In the ablation study, we use the deviation distance 10 pixels. . . . . . . . . . . . . . . . . . . . Table 5. 4 Ablation study on ESD, HSD, and HSR based on the number of dendrite cores detected. We show the result for ESD at the first row, ESD + HSD at the second row and ESD + HSD + HSR at the third row. In the ablation study, we use the deviation distance 10 pixels. . . . . . . . . . . . . . . . . . . . . . . . . . . . Table 5.5 Exploiting the optimal crop size after ESD when the deviation distance is 10 pixels. In this work, we try the crop size: 40*40, 60*60, 80*80, 100*100, and no crop, respectively. . . . . . . . . . . Table 5. 6 Exploiting the optimal intensity in the cropped areas when deviation distance is 10 pixels and cropped size is 80*80. 'Gaussian' means destroy the structure of easy samples by only using Gaussian smoothing. '0 + Gaussian' means first filling the cropped areas with intensity 0 and then processing the cropped boundary by Gaussian smoothing. . . . . . . . . . . . . . . . . . . . . . . . . Figure 5.1 Three visualization results of [18], [27], [23], and our method." }, { "figure_ref": [], "heading": "List of Figures", "publication_ref": [], "table_ref": [], "text": "The upper points of the red triangles and the lower points of green triangles denote the ground truth and predicted dendrite cores respectively. We highlight the hard samples of dendrite cores detected by our method by the yellow boxes. . . . . . . . . Chapter 1" }, { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b39", "b35", "b22", "b38", "b27", "b46", "b17", "b23", "b31", "b25", "b32", "b28", "b28", "b18" ], "table_ref": [], "text": "Dendrites are tree-like structures of crystals which are formed during some materials' solidification [34]. In the procedure of solidification, the morphology of dendrite can has a huge change which leads to a significant effect on the properties of materials [30] [6] [3] [17]. For example, [33] claims that learning the growth mechanism of metal dendrites in the electrochemical procedure can extend battery life significantly and [22] mentions the study of dendrites structure is helpful to predict solidification As Deep Neural Network (DNN) has achieved significant success in the natural image process, material scientists are exploiting DNN to solve the problems in their field. For example, [41] proposes to use DNN to reconstruct 3D objects by detecting 2D objects on each cross-section, and [12] proposes to use DNN to perform X-ray CT segmentation task. However, there are few kinds of research focusing on dendrite core detection. Different from the typical detection problems in the computer vision field, detecting the dendrite core aims to detect a single point location instead of the bounding-box. 
Therefore, the existing regressing bounding-box based detection methods such as [18], [26], [20], and [27] cannot work well on this task. From Fig. 1.3 we can see that the regressing bounding-box based methods try to tightly cover the dendrite instead of focusing on the dendrite core. Besides, the appearance of the dendrites varies a lot; therefore, the center point location calculated from the upper-left and lower-right corners of the bounding box is usually inaccurate when used as the estimate of the dendrite core. The key point detection algorithms also cannot work well on this task because of the complex properties of the dendrites mentioned above. For example, the key point detection method in [23], which is designed for detecting human key points, cannot be used to detect the dendrite cores, even after increasing the network complexity, due to the blurriness or serious incompleteness of the dendrites.
Inspired by [23], in this work we formulate the dendrite core detection problem as a segmentation task and propose a novel detection method to detect the dendrite core directly. Our whole pipeline contains three steps: Easy Sample Detection (ESD), Hard Sample Detection (HSD), and Hard Sample Refinement (HSR). Specifically, ESD and HSD focus on the easy samples and hard samples of dendrite cores, respectively. Both of them employ the same Central Point Detection Network (CPDN) but do not share parameters. To make HSD focus only on the features of hard samples of dendrite cores, we destroy the structure of the easy samples of dendrites which are detected by ESD and force HSD to learn the features of hard samples. HSR is a binary classifier which is used to filter out the false positive predictions of HSD.
Our main contribution is twofold.
We propose a novel detection method to detect the dendrite cores directly.
We conduct a series of experiments to exploit the optimal crop size and crop intensity for destroying the structure of the easy samples of dendrites.
The Convolutional Neural Network (CNN) is similar to the fully connected neural network. Both CNN and the fully connected neural network are composed of neurons and can optimize their parameters through the learning process. However, compared with the fully connected neural network, CNN has several salient advantages. First, CNN is much more computationally efficient because of parameter sharing. Second, the sparsity of connections in CNN makes each output of the convolution layer depend on only a small number of inputs, which efficiently prevents overfitting. Third, CNN can keep the position information of the 2D images. Since AlexNet [13] was proposed in 2012, CNN has become almost the most popular deep learning architecture in the computer vision field. Usually, a whole CNN contains several components such as the convolution operation, pooling, activation function, stride, and padding. We will elaborate on each of these components in the following paragraphs." }, { "figure_ref": [ "fig_4" ], "heading": "Convolution", "publication_ref": [], "table_ref": [], "text": "The convolution operation is the most important component in CNN. Specifically, the convolution operation is a mathematical operation between the input vector and the corresponding filter or kernel. As shown in Fig. 2.1, it performs the convolution operation at the red circle location of the input vector: it makes an element-wise multiplication and sums the results.
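A minimal NumPy sketch of this single-location operation follows. The input and kernel values are only loosely reconstructed from the data of Fig. 2.1, so treat them as illustrative stand-ins rather than the exact figure contents.

```python
import numpy as np

# 5*5 binary input and 3*3 kernel in the spirit of Fig. 2.1 (illustrative values).
image = np.array([[1, 1, 0, 1, 0],
                  [0, 1, 1, 0, 1],
                  [0, 1, 0, 1, 0],
                  [1, 1, 0, 1, 0],
                  [1, 0, 1, 1, 1]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

def conv_at(image, kernel, row, col):
    """Convolution response at one location: take the patch whose top-left corner
    is (row, col), multiply it element-wise with the kernel, and sum."""
    kh, kw = kernel.shape
    patch = image[row:row + kh, col:col + kw]
    return int((patch * kernel).sum())

# Responses for every valid placement of the 3*3 window (stride 1, no padding).
responses = [[conv_at(image, kernel, r, c) for c in range(3)] for r in range(3)]
print(responses)
```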
In this case, the kernel size is 3*3; therefore, it takes a 3*3 area at each location of the input vector. In practice, each convolution layer uses many different kernels to perform the convolution operation, because each of these kernels can extract different feature information from the input vector.
Based on previous studies, the shallow convolution layers of a CNN extract low-level features such as edges from the input vector, while the deeper layers can extract possible objects such as faces or even more complex features. (Fig. 2.1 shows: (a) the 5*5 binary input, (b) the 3*3 kernel, and (c) the convolution result, 2, at the marked location.)" }, { "figure_ref": [], "heading": "Activation function, stride, and padding", "publication_ref": [], "table_ref": [], "text": "The convolution operation is linear. In order to make the CNN more powerful, the result of the convolution operation is passed through a non-linear function such as ReLU, shown in Fig. 2.2, which drops the values smaller than zero." }, { "figure_ref": [ "fig_4" ], "heading": "Residual learning framework", "publication_ref": [ "b14" ], "table_ref": [], "text": "It is difficult to train a deep neural network when it contains a lot of layers. As the depth of the deep neural network increases, the accuracy degrades quickly. In order to solve the degradation problem, [9] proposes the \"deep residual learning framework\". In Fig. 2.4, x and F(x) denote the input feature and the residual, respectively, and H(x) denotes the desired underlying mapping, with the residual defined in Eq. (2.1): F(x) = H(x) - x. (2.1) Instead of making the layers fit H(x) directly, the residual learning framework makes the layers fit H(x) - x. In this case, if the input feature x already contains all the useful information, the residual learning framework will push F(x) to zero. By this mechanism, the deep residual network can be optimized easily and trained end-to-end. " }, { "figure_ref": [], "heading": "Introduction of Stacked-Hourglass Network", "publication_ref": [ "b28", "b13", "b12", "b32", "b31", "b23", "b31", "b23", "b31", "b40", "b25", "b28", "b33", "b45" ], "table_ref": [], "text": "The Stacked-Hourglass Network [23] is designed for predicting human key points. The single Hourglass module in Fig. 2.5 (a) is stacked several times to form the whole network in Fig. 2.5 (b). Regressing bounding-box based methods regress the coordinates of the bounding-boxes corresponding to the objects. For instance, as discussed in [8], it first generates the potential bounding-boxes based on the region proposals and then performs the classification and refines the bounding-boxes. To enhance the speed of detection, [7] and [27] improve the proposal stage by computing the proposals with a deep convolutional neural network. In order to further increase the detection speed, [26] and [18] propose one-stage detection, which spatially separates bounding-boxes and associates them with class probabilities. Different from [26], [18] applies multiple aspect ratios and scales over different feature maps.
Unlike [26][18], [35][25] regress the locations of the bounding-boxes directly at the image level instead of the feature level. Regarding the flexibility of the detected bounding-boxes, [20] suggests generating inclined proposals with angle information and then adjusts the angle of the predicted bounding-boxes to make the detected result more accurate. However, the methods mentioned above focus more on the edges of the objects' bounding boxes than on the center point locations; as a result, the center point locations calculated from the upper-left and lower-right corners of the bounding-boxes are not precise. Segmentation based methods classify the pixels in the input image into different groups, and each group corresponds to one category. Different from the regressing bounding-box based methods, the segmentation based methods can detect the center point locations of the objects directly. For example, for the human key points detection task, the pixels in the input image are classified into background, mouth, nose, and so on. In [23], it concatenates multiple UNet [28] structures together that repeatedly downsample and upsample the features and catch the human key points information at different scales.
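As a concrete illustration of Eq. (2.1), a minimal residual block can be written as below. This is a PyTorch-style sketch; the channel width, kernel size, and placement of ReLU are assumptions for illustration, not the exact block used in [9] or in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Minimal residual block in the spirit of Fig. 2.4 / Eq. (2.1):
    the two weight (convolution) layers learn the residual F(x),
    and the block outputs F(x) + x = H(x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.conv2(F.relu(self.conv1(x)))  # F(x)
        return F.relu(residual + x)                   # H(x) = F(x) + x

# Example: a 1x16x32x32 feature map keeps its shape through the block.
y = ResidualBlock(16)(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])
```

If x already carries all the useful information, training can drive both convolutions toward producing a near-zero residual, which is exactly the mechanism described above.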
To reduce the model parameter and speed up the detection, [40] pay attention to the context information and propose a method called Cascaded Context Mixer which can integrate the spatial level and channel level information together and refine them step by step." }, { "figure_ref": [], "heading": "CNN based detection and segmentation on the material science images", "publication_ref": [ "b43", "b26", "b34", "b7", "b46", "b24", "b6", "b29" ], "table_ref": [], "text": "Detection. In order to improve the detection speed and accuracy, in recent years, many works are trying to solve the detection problem in the material science field by using the deep learning based method. For example, [38] proposed to use Faster R-CNN based method to detect the internal defects from the CT scanning image of the metal three-dimensional lattice structure. To prevent overfitting to a small training dataset, it reduces the number of convolution layers and pooling layers. In [21], a region based approach is implemented by CNN to detect and localize the anomalies in the scanning electron microscope images of nanofibrous materials. It also uses CNN-extracted features to evaluate the degree of difference between the anomaly samples and anomaly-free samples. In [29], CNN and a Deep Feed Forward Network are combined to detect the anomaly of Carbon Fiber Reinforced Polymer thermograms. It can detect the anomaly in thermograms in real-time without any manual intervention. To speed up the detection and reduce the model parameters, [2] proposed a method called WDD-Net which applies depthwise separable convolution and global pooling to detect the wafer structural defects. Besides the detection of anomalies and defects, in [41], CNN is applied to detect the fiber cross-section in the 2D microscopic images, and the detected 2D fiber cross-sections are then used to reconstruct the 3D fiber structure.\nSegmentation. Besides detection, CNN is also widely used for the segmentation of various material science images. For example, [19] proposed to train a CNN model based on DeepLab to perform the segmentation of AL-LA alloy microscopic images. It takes advantage of the local symmetric information and applies symmetric rectification to enhance the segmentation accuracy. In [4], FibeR-CNN is\nproposed to analyze the fiber images based on the segmentation result, which employs R-CNN as the backbone and inserts additional convolution layers to help the prediction. To avoid labeling a large number of ground truth samples manually, [1] designed a semi-supervised learning method that applies UNet as the backbone to solve the segmentation problem of aluminum alloy metallographic image. This semi-supervised method only requires labeling a small number of images and can get better segmentation results than the traditional segmentation methods. To further improve the segmentation performance, [24] proposed the use of the Generative Adversarial Network (GAN) based method to perform the segmentation on carbon steel microstructure images. " }, { "figure_ref": [], "heading": "Easy Sample Detection (ESD)", "publication_ref": [], "table_ref": [], "text": "In this step, it only focuses on detecting the easy samples of dendrite cores from the input image. We denote the input image I 1 and ground truth heatmap H 1 in ESD where I 1 ∈ R H×W ×3 and H 1 ∈ R H×W ×1 . The ground truth heatmap H 1 is binarized that one denotes the dendrite core and zero denotes the background. 
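As a small illustration of this labeling convention, a binary ground-truth heatmap can be assembled from annotated core coordinates as follows. The (row, col) annotation format is an assumption for illustration; the dendrite dataset's actual annotation files are not described here.

```python
import numpy as np

def make_core_heatmap(height, width, core_points):
    """Binary ground-truth heatmap H1: 1 at annotated dendrite-core pixels,
    0 everywhere else. `core_points` is a list of (row, col) annotations."""
    heatmap = np.zeros((height, width), dtype=np.float32)
    for r, c in core_points:
        heatmap[r, c] = 1.0
    return heatmap

H1 = make_core_heatmap(256, 256, [(40, 50), (120, 200)])
print(H1.sum())  # 2.0 -> two annotated cores
```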
D 1 will produce a heatmap Ĥ1 to denote the detected location of each dendrite core based on the input image I 1: Ĥ1 = D 1 (I 1 ). (4.1) Here Ĥ1 has the same dimension as H 1 . We aim to make ESD detect the easy samples of dendrite cores with very high confidence to improve the Precision; the dendrite cores that cannot be detected here will be processed in HSD. Therefore, we set a relatively higher confidence threshold α to binarize the predicted heatmap Ĥ1 . If the value at location (i, j) in Ĥ1 is greater than α, we set Ĥ1 (i, j) to one; otherwise we set the value to zero, i.e., Ĥ1 (i, j) = 1 if Ĥ1 (i, j) > α, and 0 otherwise. (4.2)" }, { "figure_ref": [], "heading": "Hard Samples Detection (HSD)", "publication_ref": [], "table_ref": [], "text": "In this step, we only focus on the hard samples of dendrite cores which cannot be detected in ESD, such as blurred or incomplete dendrites. We denote the input image I 2 and the ground truth heatmap H 2 in HSD, where I 2 and H 2 have the same dimensions as I 1 and H 1 . To make D 2 focus only on the features of hard samples of dendrite cores, I 2 shall only contain the hard samples of dendrites. We can construct I 2 by cropping out the structure of the easy samples of dendrites which are detected by D 1 from I 1 , i.e., I 1 (i - s : i + s, j - s : j + s) = 0 if Ĥ1 (i, j) = 1, (4.3) where s decides the crop size, which will be discussed in Chapter 5.4. The ground truth H 2 is generated by H 2 (i, j) = 0 if Ĥ1 (i, j) = 1, and H 1 (i, j) otherwise. (4.4) D 2 will produce another heatmap Ĥ2 which denotes the detected locations of hard samples of dendrite cores based on I 2 , i.e., Ĥ2 = D 2 (I 2 ). (4.5) In HSD, the hard samples of dendrite cores are usually detected with a relatively lower confidence score. In order to improve the Recall, we set a relatively lower threshold β to binarize the predicted heatmap Ĥ2: Ĥ2 (i, j) = 1 if Ĥ2 (i, j) > β, and 0 otherwise. (4.6)" }, { "figure_ref": [], "heading": "Hard Samples Refinement (HSR)", "publication_ref": [], "table_ref": [], "text": "In order to improve the Recall in HSD, we set a relatively lower confidence threshold β, which leads to many false positive predictions of dendrite cores. To solve this problem, we add HSR after HSD. HSR is a binary classifier obtained by fine-tuning a pretrained ResNet-50 model on the dendrite dataset. We denote the input of HSR by I 3 , a small patch of size 80*80 given by I 3 (i,j) = I 2 (i - 40 : i + 40, j - 40 : j + 40) if Ĥ2 (i, j) = 1. (4.7) HSR outputs one for the input I 3 (i,j) if Ĥ2 (i, j) is a true positive prediction, and zero otherwise. We denote by H̄2 the refined result of Ĥ2 , which is obtained by H̄2 (i, j) = 1 if HSR(I 3 (i,j) ) = 1, and 0 otherwise. (4.8) The effectiveness of HSR will be discussed later in Chapter 5.3. Finally, we get the predicted heatmap Ĥ by Ĥ(i, j) = 1 if Ĥ1 (i, j) = 1 or H̄2 (i, j) = 1, and 0 otherwise. (4.9)" }, { "figure_ref": [], "heading": "Loss Function", "publication_ref": [], "table_ref": [], "text": "We use the L 2 loss as the objective function to train D 1 and D 2 separately: L 1 = λ 1 * || Ĥ1 - H 1 || 2 , (4.10) L 2 = λ 2 * || Ĥ2 - H 2 || 2 . (4.11)" }, { "figure_ref": [ "fig_33" ], "heading": "Implementation details", "publication_ref": [ "b23", "b32", "b28" ], "table_ref": [ "tab_1" ], "text": "We train D 1 in ESD with a learning rate of 0.0004. Then, we train D 2 in HSD with a learning rate of 0.0001 while fixing the parameters of D 1 . We train both D 1 and D 2 for 100 epochs. Then, we train HSR for 50 epochs with a learning rate of 0.0001.
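Putting Eqs. (4.1)-(4.9) together, the inference-time flow can be sketched as follows. This is a simplified illustration: D1, D2, and HSR are treated as black-box callables, the crop half-size s and the 80*80 patch size follow the values discussed in the text, and the tensor layouts are assumptions rather than the thesis implementation.

```python
import numpy as np

def binarize(hm, thresh):
    """Eqs. (4.2)/(4.6): keep only detections whose confidence exceeds the threshold."""
    return (hm > thresh).astype(np.uint8)

def detect_dendrite_cores(image, D1, D2, HSR, alpha=0.4, beta=0.1, s=40):
    """Sketch of the ESD -> HSD -> HSR inference flow.
    D1 and D2 map an image to a heatmap; HSR maps an 80x80 patch to {0, 1}."""
    h1 = binarize(D1(image), alpha)                     # easy cores, high confidence

    # Eq. (4.3): erase (2s x 2s) regions around easy detections so that D2
    # only sees the remaining, harder dendrites.
    masked = image.copy()
    for i, j in zip(*np.nonzero(h1)):
        masked[max(i - s, 0):i + s, max(j - s, 0):j + s] = 0

    h2 = binarize(D2(masked), beta)                     # hard-core candidates, low threshold

    # Eqs. (4.7)-(4.8): keep a candidate only if the patch classifier accepts it.
    refined = np.zeros_like(h2)
    for i, j in zip(*np.nonzero(h2)):
        patch = masked[max(i - 40, 0):i + 40, max(j - 40, 0):j + 40]
        refined[i, j] = HSR(patch)

    return np.clip(h1 + refined, 0, 1)                  # Eq. (4.9): union of both stages
```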
We set the threshold α to 0.4, β to 0.1, and λ 1 and λ 2 in the loss function to 0.5.
The experiments are conducted on the same platform with two NVIDIA Tesla V100 GPUs.
Table 5.1 Comparison results of [18], [27], [23], and our method on Recall, Precision, and F-Score. We display the visualization results on three samples taken from the dendrite dataset in Fig. 5.1. From the visualization results, we can find that our method has the ability to detect more hard samples of dendrite cores, such as the blurred or incomplete dendrites inside the yellow boxes, while the other methods cannot." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [], "table_ref": [], "text": "To validate the effectiveness of our method, we consider three variants: ESD (Chapter 4.1), HSD (Chapter 4.2), and HSR (Chapter 4.4). From Table 5.3 and Table 5.4, we can observe that using only ESD gives a lower Recall, which leads to a worse F-score. HSD is effective in increasing the Recall; specifically, it increases the Recall by 3.52% compared to ESD. HSR is effective in filtering out the false positive predictions of HSD and increasing the Precision; specifically, it increases the Precision by 1.54% compared to HSD. As a result, by taking advantage of both HSD and HSR, our method achieves the best F-score." }, { "figure_ref": [], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "We show the visualization results in Fig. 5.2 and Fig. 5.3 to prove the effectiveness of each component." }, { "figure_ref": [], "heading": "Exploiting the crop size", "publication_ref": [], "table_ref": [], "text": "During the procedure of destroying the structure of the easy samples of dendrites detected by ESD, the crop size is an important factor that can affect the final detection result. In order to find a reasonable crop size, we have conducted a series of experiments with different crop sizes: 40*40, 60*60, 80*80, 100*100, and no crop, respectively; the visualization results of the cropped images are displayed in Fig. 5.4, and the quantitative results are reported in Table 5.5 and Fig. 5.5. In addition to the crop size, the crop intensity also has a big influence on the prediction result of HSD. In this work, we conduct a series of experiments to exploit the optimal crop intensity on the dendrite dataset. Specifically, we try the fixed intensities of 0, 128, and 255, and find that with the intensity 0, HSD can get the best F-score. Finally, we choose the intensity 0 in this work." }, { "figure_ref": [], "heading": "Processing the cropped area with Gaussian smoothing", "publication_ref": [ "b6", "b25" ], "table_ref": [], "text": "In addition to filling the cropped area with a fixed intensity, we also explore using Gaussian smoothing to process the cropped area. The standard deviation used to generate the kernel of the Gaussian smoothing is in the range [1,20] and the kernel size is 11*11. One potential limitation of our method is that we use different components to focus on improving the Recall and the Precision separately, which increases the difficulty of training the whole pipeline. E.g., we use Hard Sample Detection to increase the Recall and Hard Sample Refinement to increase the Precision. In future work, we will exploit new approaches to combine all the components into a single network." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Metrics. We follow the previous object detection methods and use Recall, Precision, and F-score to evaluate our method.
Baselines. We compare our method with three state-of-the-art object detection algorithms: Faster R-CNN, SSD, and Stacked-Hourglass." }, { "figure_ref": [], "heading": "Comparison Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [ "b32", "b28" ], "table_ref": [], "text": "We compare our method with three state-of-the-art object detection methods and show the comparison results in Table 5.1 and Table 5.2.
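Before the numbers are discussed, here is a generic sketch of how Recall, Precision, and F-score can be computed for point detections under a deviation-distance criterion (10 pixels in the experiments below). It uses a simple greedy nearest-match rule and is not the authors' exact evaluation script.

```python
import numpy as np

def point_detection_metrics(pred_points, gt_points, deviation=10):
    """A prediction counts as a true positive if an unmatched ground-truth core
    lies within `deviation` pixels of it (greedy nearest-first matching)."""
    matched = set()
    tp = 0
    for p in pred_points:
        dists = [np.hypot(p[0] - g[0], p[1] - g[1]) for g in gt_points]
        for idx in np.argsort(dists):
            if dists[idx] <= deviation and idx not in matched:
                matched.add(idx)
                tp += 1
                break
    precision = tp / max(len(pred_points), 1)
    recall = tp / max(len(gt_points), 1)
    f_score = 2 * precision * recall / max(precision + recall, 1e-8)
    return recall, precision, f_score

# Two predictions, three ground-truth cores -> recall 2/3, precision 1.0.
print(point_detection_metrics([(10, 10), (60, 62)], [(12, 9), (60, 70), (100, 100)]))
```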
From Table 5.1 and Table 5.2, we can observe that, with a deviation distance (the distance between the predicted location and the ground truth) of 10 pixels, our method reaches the best results on Recall, Precision, and F-score. Specifically, compared to the regressing bounding-box based detection [27], our method achieves relative improvements of 5.77% in Recall, 7.29% in Precision, and 6.53% in F-score, and compared to the segmentation based detection [23], it achieves relative improvements of 0.77% in Recall, 1.58% in Precision, and 1.18% in F-score." } ]
Dendrite core is the center point of the dendrite. The information of dendrite core is very helpful for material scientists to analyze the properties of materials. Therefore, detecting the dendrite core is a very important task in the material science field. Meanwhile, because of some special properties of the dendrites, this task is also very challenging. Different from the typical detection problems in the computer vision field, detecting the dendrite core aims to detect a single point location instead of the bounding-box. As a result, the existing regressing bounding-box based detection methods can not work well on this task because the calculated center point location based on the upper-left and lower-right corners of the bounding-box is usually not precise. In this work, we formulate the dendrite core detection problem as a segmentation task and proposed a novel detection method to detect the dendrite core directly. Our whole pipeline contains three steps: Easy Sample Detection (ESD), Hard Sample Detection(HSD), and Hard Sample Refinement (HSR). Specifically, ESD and HSD focus on the easy samples and hard samples of dendrite cores respectively. Both of them employ the same Central Point Detection Network (CPDN) but do not share parameters. To make HSD only focus on the feature of hard samples of dendrite cores, we destroy the structure of the easy samples of dendrites which are detected by ESD and force HSD to learn the feature of hard samples. HSR is a binary classifier which is used to filter out the false positive prediction of HSD. We evaluate our method on the dendrite dataset. Our method outperforms the state-of-the-art baselines on three metrics, i.e., Recall, Precision, and F-score.
[ { "figure_caption": "Figure 1 .1Figure 1.1 An illustration of dendrite samples. The red points denote the dendrite cores. . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "11Figure 1.1 An illustration of dendrite samples. The red points denote the dendrite cores. . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 1 . 212Figure 1.2 Examples of microscopic images with blurry and noise in (a)(b) and (c)(d) respectively and the incomplete dendrites in (e)(f). . .", "figure_data": "", "figure_id": "fig_2", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 1 . 313Figure 1.3 Visualization results of the regressing bounding-box based method.The red boxes denote the ground truth and the green boxes denote the predictions. The yellow arrows denote the edges that the predicted boxes try to fit. . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_3", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2.1 (a) denotes the input vector, (b) denotes the filter or kernel, and (c) denotes the result of convolution operation at the red circle location of the input vector. . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 222Figure 2.2 ReLU function. . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_5", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 323Figure 2.3 Max pooling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_6", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 424Figure 2.4 Residual learning block. The weight layer denotes the convolution layer. The figure comes from [9]. . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_7", "figure_label": "24", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 525Figure 2.5 The figures come from [23]. (a) is the single Hourglass module and (b) is the whole Stacked-Hourglass Network. . . . . . . . . .", "figure_data": "", "figure_id": "fig_8", "figure_label": "25", "figure_type": "figure" }, { "figure_caption": "Figure 4 .Figure 4 . 2442Figure 4.1 (a) denotes our whole pipeline, (b) denotes the Central Point Detection Network (CPDN), and (c) denotes the Bottleneck Block. Both D 1 and D 2 employ CPDN but not sharing parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_9", "figure_label": "442", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 252Figure 5.2 Two visualization results of the ESD and ESD + HSD. The upper points of the red triangles and the lower points of green triangles denote the ground truth and the predicted dendrite cores respectively. The yellow boxes denote the hard samples of dendrite cores detected by the HSD. . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_10", "figure_label": "52", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 353Figure 5.3 Two visualization results of the ESD + HSD and ESD + HSD + HSR. 
The upper points of the red triangles and the lower points of green triangles denote the ground truth and the predicted dendrite cores respectively. The yellow boxes denote the false positive predictions filtered out by the HSR. . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_11", "figure_label": "53", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 454Figure 5.4 The visualization of different crop sizes. (a) denotes the crop size 40*40, (b) denotes the crop size 60*60, (c) denotes the crop size 80*80, and (d) denotes the crop size 100*100. . . . . . . . . .", "figure_data": "", "figure_id": "fig_12", "figure_label": "54", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 555Figure 5.5 Exploiting the optimal crop size after ESD. Red points denote the Recall, green points denote the Precision, and blue points denote the F-score. When the crop size is equal to 80*80, it gets the best F-score. . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_13", "figure_label": "55", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 656Figure 5.6 Exploiting the optimal intensity to fill the cropped area. The intensity in the cropped areas is 0 for (a), 255 for (b), and 128 for (c). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_14", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 7 Figure 5 . 85758Figure 5.7 Exploiting the optimal intensity to fill the cropped area. In this work, we try three intensities 0, 128, and 255, respectively. Red points denote the Recall, green points denote the Precision, and blue points denote the F-score. When the intensity is equal to 0, it gets the best F-score. . . . . . . . . . . . . . . . . . . . . . .", "figure_data": "", "figure_id": "fig_15", "figure_label": "5758", "figure_type": "figure" }, { "figure_caption": "defects. The dendrite core is the center point of the dendrite. Several dendrite samples are shown in Fig.1.1 where red points denote the dendrite cores. The information of dendrite core is very helpful for the material scientists to analyze the properties of materials. Therefore, detecting the dendrite core is a very important task in the material science field. Meanwhile, because of some special properties of the dendrites, this task is also very challenging. First, the microscopic images of the dendrites can be very blurred as shown in Fig.1.2 (a)(b), and it can be very difficult to locate the dendrite cores from the microscopic image. Second, besides the blurry, there are many different noises in the microscopic images such as the dark and bright points in Fig.1.2 (c) and the black spots in Fig.1.2 (d). These noises will increase the difficulty of locating the dendrites. Third, some of the dendrites in the microscopic image are incomplete such as the dendrites in the yellow boxes in Fig.1.2 (e)(f). The appearance of these incomplete dendrites is totally different from that of the complete ones and only the materials scientists with expertise can recognize them. As a result, locating the dendrites and annotating the dendrite cores are very time-consuming and expensive.", "figure_data": "", "figure_id": "fig_16", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 . 111Figure 1.1 An illustration of dendrite samples. 
The red points denote the dendrite cores.", "figure_data": "", "figure_id": "fig_17", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 1 . 212Figure 1.2 Examples of microscopic images with blurry and noise in (a)(b) and (c)(d) respectively and the incomplete dendrites in (e)(f).", "figure_data": "", "figure_id": "fig_18", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 1 . 3 2 Background 2 . 113221Figure 1.3 Visualization results of the regressing bounding-box based method. The red boxes denote the ground truth and the green boxes denote the predictions. The yellow arrows denote the edges that the predicted boxes try to fit.", "figure_data": "", "figure_id": "fig_19", "figure_label": "13221", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2.1 (a) denotes the input vector, (b) denotes the filter or kernel, and (c) denotes the result of convolution operation at the red circle location of the input vector.", "figure_data": "", "figure_id": "fig_20", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 2 . 2 . 0 Figure 2 . 222022Fig.2.2. The ReLU function drops the values smaller than zero. The stride specifies the distance it moves the filter or kernel at each step. We can choose a big stride if we want to have less overlap between adjacent convolution operations. In order to keep the output feature having the same dimension as the input feature, we also pad zeros around the input vector.", "figure_data": "", "figure_id": "fig_21", "figure_label": "22022", "figure_type": "figure" }, { "figure_caption": "2. 1 . 3 Pooling13The pooling operation reduces the feature's dimension which can save the training time and prevent overfitting. There are no trainable parameters in the pooling operation. For the max-pooling in Fig.2.3, it keeps the maximum value around the 2*2 window. In this case, the max-pooling downsamples the feature dimension to 2*2 from 4*4.", "figure_data": "", "figure_id": "fig_22", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 323Figure 2.3 Max pooling.", "figure_data": "", "figure_id": "fig_23", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 424Figure 2.4 Residual learning block. The weight layer denotes the convolution layer. The figure comes from [9].", "figure_data": "", "figure_id": "fig_24", "figure_label": "24", "figure_type": "figure" }, { "figure_caption": "Figure 2 . 525Figure 2.5 The figures come from [23]. (a) is the single Hourglass module and (b) is the whole Stacked-Hourglass Network.", "figure_data": "", "figure_id": "fig_25", "figure_label": "25", "figure_type": "figure" }, { "figure_caption": "objects' bounding boxes instead of the center point locations. As a result, the calculated center point locations based on the upper-left and lower-right corners of the bounding-boxes are not precise. Segmentation based methods. The segmentation based methods classify the pixels in the input image into different groups and each group corresponds to one category. Different from the regressing bounding-box based methods, the segmentation based methods can detect the center point locations of the objects directly. 
For example, for the human key points detection task, the pixels in the input image are", "figure_data": "", "figure_id": "fig_26", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Chapter 4 Methodology4In this work, we formulate the dendrite core detection problem as a segmentation task. The whole pipeline contains three steps: Easy Sample Detection (ESD), Hard Sample Detection(HSD), and Hard Sample Refinement (HSR). Specifically, ESD and HSD focus on the easy samples and hard samples of dendrite cores respectively. Both of them employ the same Central Point Detection Network (CPDN), which is shown in Fig. 4.1 (b), but not sharing parameters. To make it clear, we denote the CPDN D 1 in ESD and D 2 in HSD. HSR is used to improve the Precision of HSD. The whole pipeline is shown in Fig. 4.1 (a). We will elaborate on ESD in Chapter 4.1, HSD in Chapter 4.2, CPDN in Chapter 4.3, and HSR in Chapter 4.4 respectively.", "figure_data": "", "figure_id": "fig_27", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4.1 (a) denotes our whole pipeline, (b) denotes the Central Point Detection Network (CPDN), and (c) denotes the Bottleneck Block. Both D 1 and D 2 employ CPDN but not sharing parameters.", "figure_data": "", "figure_id": "fig_29", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(4. 4 )D 242will produce another heatmap Ĥ2 which denotes the detected locations of hard samples of dendrite cores based on I 2 , i.e., Ĥ2 = D 2 (I 2 ) (4.5)", "figure_data": "", "figure_id": "fig_30", "figure_label": "42", "figure_type": "figure" }, { "figure_caption": "4 . 343Central Point detection network (CPDN)Both D 1 and D 2 employ the same CPDN but not sharing parameters. CPDN is implemented based on the architecture of U-Net. The encoder of CPDN consists of a convolution layer followed by several bottleneck layers and max-pooling layers. The bottleneck block consists of three group normalization layers and three convolution layers with kernel size 1*1, 3*3, and 1*1 respectively, followed by a ReLU operation as shown in Fig.4.1 (c). The decoder of CPDN consists of several upsample layers and bottleneck layers, followed by a convolution layer with kernel size 1*1. D 1 and D 2 are trained separately and optimized with L 2 loss.", "figure_data": "", "figure_id": "fig_31", "figure_label": "43", "figure_type": "figure" }, { "figure_caption": "(4. 9 )Figure 4 . 2942Figure 4.2 Building the training set for HSR. The positive samples are cropped around the randomly selected points inside circle A, the negative samples are cropped around the randomly selected points inside the green area of circle B and the blue area of circle C. The circles A, B, and C are around the dendrite core with radius 4, 15, 40 pixels respectively. (b) is an example of the positive sample and (c), (d) are the examples of negative samples.", "figure_data": "", "figure_id": "fig_32", "figure_label": "942", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .52 and Fig. 5.3 to prove the effectiveness of each component. From the visualization result, we can find that HSD has the ability to detect more hard samples of dendrite cores such as the dendrite cores inside the yellow boxes in Fig. 5.2. HSR can filter out the false-positive detection such as the dendrite cores inside the yellow boxes in Fig. 5.3. 
As a result, ESD, HSD, and HSR can work together to increase the F-score.", "figure_data": "", "figure_id": "fig_33", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 5 . 4 . 5 . 55455Fig. 5.4. We evaluate the Recall, Precision, and F-score on the dendrite dataset which is discussed in Chapter 5.1 with different crop sizes and show the results in Table 5.5 and Fig. 5.5. From Fig. 5.5, we can see that Without destroying the structure of easy samples of dendrites detected by ESD, the final F-score is very low. It reaches the best F-score when the crop size is 80*80. When continuing to increase the crop size to 100*100, the F-score decreases. The reason for this result is that the smaller crop size or without cropping can not destroy the structure of dendrites detected by ESD and the larger crop size may destroy the potential dendrites which should be detected by HSD. Based on these observations, we finally choose the crop size 80*80.", "figure_data": "", "figure_id": "fig_34", "figure_label": "5455", "figure_type": "figure" }, { "figure_caption": "is 11 * 11 .1111Based on the discussion in Chapter 5.5.1, using the intensity 0, the HSD can get the best F-score. Therefore, we use the Gaussian smoothing to process the boundaries of the cropped areas with intensity 0. The visualization results are shown in Fig. 5.8 (a). Besides, we also try to destroy the structure of easy samples by only using Gaussian smoothing. The visualization results are shown in Fig. 5.8 (b). From", "figure_data": "", "figure_id": "fig_35", "figure_label": "1111", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 1 Figure 5 . 2 Figure 5 . 3515253Figure 5.1 Three visualization results of[18],[27],[23], and our method. The upper points of the red triangles and the lower points of green triangles denote the ground truth and predicted dendrite cores respectively. We highlight the hard samples of dendrite cores detected by our method by the yellow boxes.", "figure_data": "", "figure_id": "fig_36", "figure_label": "515253", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 454Figure 5.4 The visualization of different crop sizes. (a) denotes the crop size 40*40, (b) denotes the crop size 60*60, (c) denotes the crop size 80*80, and (d) denotes the crop size 100*100.", "figure_data": "", "figure_id": "fig_38", "figure_label": "54", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 555Figure 5.5 Exploiting the optimal crop size after ESD. Red points denote the Recall, green points denote the Precision, and blue points denote the F-score. When the crop size is equal to 80*80, it gets the best F-score.", "figure_data": "", "figure_id": "fig_39", "figure_label": "55", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 656Figure 5.6 Exploiting the optimal intensity to fill the cropped area. The intensity in the cropped areas is 0 for (a), 255 for (b), and 128 for (c).", "figure_data": "", "figure_id": "fig_40", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Figure 5 . 7 Figure 5 . 8 6 Conclusions57586Figure 5.7 Exploiting the optimal intensity to fill the cropped area. In this work, we try three intensities 0, 128, and 255, respectively. Red points denote the Recall, green points denote the Precision, and blue points denote the F-score. 
When the intensity is equal to 0, it gets the best F-score.", "figure_data": "", "figure_id": "fig_41", "figure_label": "57586", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "To reduce the number of training images, it also proposed a CNN based framework to generate the training data.Detecting the dendrite core is a very challenging and important task that can help material scientists, there are very few prior researchs that are exactly focused on this problem. Most of the above CNN based detection methods for the materials science images treat all the detection targets equally without distinguishing the easy and hard samples. Besides, without considering the unique and complex properties of dendrites, general CNN based detection methods for the natural images may not work well for this specific task. In this work, we formulate the dendrite core detection problem as a segmentation task and propose a novel detection method to detect the easy samples and hard samples of dendrite cores separately.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "2 Comparison results of[18],[27],[23], and our method based on the number of dendrite cores detected.", "figure_data": "Deviation Recall↑ Precision↑ F-Score↑SSD100.95850.94150.9499Faster-RCNN100.91120.90500.9081Stacked-Hourglass 100.95640.95590.9561Our100.96380.97100.9674MethodsDeviation Total T-Positive↑ F-Positive↓SSD1018821804112Faster-RCNN1018821715180Stacked-Hourglass 101882180083Ours1018821814545.2.2 Qualitative Results", "figure_id": "tab_1", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "4.1), HSD (Chapter 4.2), and HSR (Chapter 4.4). From Table5.3 and Table5.4, 3 Ablation study on ESD, HSD, and HSR. We show the result for ESD in the first row, ESD + HSD in the second row and ESD + HSD + HSR in the third row. In the ablation study, we use the deviation distance 10 pixels. Only using ESD, it gets a lower Recall which leads to a worse F-score. The HSD is effective to increase the Recall. Specifically, it can increase 3.52% of Recall compared to ESD. HSR is effective to filter out the false positive prediction of HSD and increase Precision. Specifically, it can increase 1.54%", "figure_data": "Methods Deviation Recall↑ Precision↑ F-Score↑ESD100.93410.97450.9538+HSD100.96700.95630.9616+HSR100.96380.97100.9674we can observe that:", "figure_id": "tab_2", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "4 Ablation study on ESD, HSD, and HSR based on the number of dendrite cores detected. We show the result for ESD at the first row, ESD + HSD at the second row and ESD + HSD + HSR at the third row. In the ablation study, we use the deviation distance 10 pixels.", "figure_data": "Methods Deviation Total T-Positive↑ F-Positive↓ESD101882180083+HSD101882182083+HSR101882181454experiments. We take different crop sizes to destroy the structure of easy samplesof dendrites. Specifically, we use the crop size 40*40, 60*60, 80*80, 100*100, and nocrop, respectively. The visualization results of the cropped images are displayed in", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "5 Exploiting the optimal crop size after ESD when the deviation distance is 10 pixels. 
In this work, we try the crop size: 40*40, 60*60, 80*80, 100*100, and no crop, respectively.", "figure_data": "Crop size Recall↑ Precision↑ F-Score↑no crop0.96330.82290.887640*400.96280.95460.958760*600.96750.95440.960980*800.96700.95630.9616100*1000.96540.95230.9588", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "6 Exploiting the optimal intensity in the cropped areas when deviation distance is 10 pixels and cropped size is 80*80. 'Gaussian' means destroy the structure of easy samples by only using Gaussian smoothing. '0 + Gaussian' means first filling the cropped areas with intensity 0 and then processing the cropped boundary by Gaussian smoothing. Based on the discussion in Chapter 5.4, using crop size 80*80 can reach the best F-score. Therefore, we only consider the crop size 80*80 here. The visualization results of different crop intensity are shown in Fig.5.6. We also evaluate the Recall, Precision, and F-score with the different intensities. From Fig.5.7 we find", "figure_data": "IntensityRecall↑ Precision↑ F-Score↑00.96700.95630.96161280.96590.94780.95682550.96750.95490.9612Gaussian0.96170.95970.96070 + Gaussian 0.96170.95410.9579", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "6 we can see that with the fixed intensity 0, it can get the best F-score.", "figure_data": "SSDFasterSGOursCase1Case2Case3", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "5 Loss Function", "year": "" }, { "authors": "", "journal": "Setups", "ref_id": "b1", "title": "", "year": "" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "2 Comparison Results", "year": "" }, { "authors": "", "journal": "", "ref_id": "b3", "title": "3 Ablation Study", "year": "" }, { "authors": "", "journal": "", "ref_id": "b4", "title": "4 Exploiting the crop size", "year": "" }, { "authors": "", "journal": "", "ref_id": "b5", "title": "Exploiting the intensity", "year": "" }, { "authors": "Dali Chen", "journal": "IEEE Access", "ref_id": "b6", "title": "Semi-supervised learning framework for aluminum alloy metallographic image segmentation", "year": "2021" }, { "authors": "Xiaoyan Chen", "journal": "IEEE Access", "ref_id": "b7", "title": "A light-weighted CNN model for wafer structural defect detection", "year": "2020" }, { "authors": " Daudin", "journal": "Acta Materialia", "ref_id": "b8", "title": "Particle-induced morphological modification of Al alloy equiaxed dendrites revealed by sub-second in situ microtomography", "year": "2017" }, { "authors": "Max Frei; Frank Einar; Kruis ", "journal": "Powder Technology", "ref_id": "b9", "title": "FibeR-CNN: Expanding Mask R-CNN to improve image-based fiber analysis", "year": "2021" }, { "authors": "Golnaz Ghiasi", "journal": "", "ref_id": "b10", "title": "Simple copy-paste is a strong data augmentation method for instance segmentation", "year": "2021" }, { "authors": "John W Gibbs", "journal": "Scientific Reports", "ref_id": "b11", "title": "The three-dimensional morphology of growing dendrites", "year": "2015" }, { "authors": "Ross Girshick", "journal": "", "ref_id": "b12", "title": "Fast r-cnn", "year": "2015" }, { "authors": "Ross Girshick", "journal": "", "ref_id": "b13", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "year": "2014" }, { "authors": "Kaiming He", "journal": "", "ref_id": "b14", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Kaiming He", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Spatial pyramid pooling in deep convolutional networks for visual recognition", "year": "2015" }, { "authors": "Lei Ke; Yu-Wing Tai; Chi-Keung Tang", "journal": "", "ref_id": "b16", "title": "Deep occlusion-aware instance segmentation with overlapping bilayers", "year": "2021" }, { "authors": "Tomasz Kazimierz; Konopczynski ", "journal": "", "ref_id": "b17", "title": "Deep Learning Segmentation Algorithms for X-ray CT data", "year": "2021" }, { "authors": "Alex Krizhevsky", "journal": "", "ref_id": "b18", "title": "One weird trick for parallelizing convolutional neural networks", "year": "2014" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "Jack Lanchantin", "journal": "", "ref_id": "b20", "title": "General multi-label image classification with transformers", "year": "2021" }, { "authors": "Yann Lecun", "journal": "Neural Computation", "ref_id": "b21", "title": "Backpropagation applied to handwritten zip code recognition", "year": "1989" }, { "authors": " Li; Brody; Kazimirov", "journal": "Physical Review E", "ref_id": "b22", "title": "Real-time observation of dendrite coarsening in Sn-13% Bi alloy by synchrotron 
microradiography", "year": "2004" }, { "authors": "Wei Liu", "journal": "Springer", "ref_id": "b23", "title": "Ssd: Single shot multibox detector", "year": "2016" }, { "authors": "Boyuan Ma", "journal": "Symmetry", "ref_id": "b24", "title": "Deep learning-based image segmentation for al-la alloy microscopic images", "year": "2018" }, { "authors": "Jianqi Ma", "journal": "IEEE Transactions on Multimedia", "ref_id": "b25", "title": "Arbitrary-oriented scene text detection via rotation proposals", "year": "2018" }, { "authors": "Paolo Napoletano; Flavio Piccoli; Raimondo Schettini", "journal": "Sensors", "ref_id": "b26", "title": "Anomaly detection in nanofibrous materials by CNN-based self-similarity", "year": "2018" }, { "authors": "H Neumann-Heyme; K Eckert; C Beckermann", "journal": "Acta Materialia", "ref_id": "b27", "title": "General evolution equation for the specific interface area of dendrites during alloy solidification", "year": "2017" }, { "authors": "Alejandro Newell; Kaiyu Yang; Jia Deng", "journal": "Springer", "ref_id": "b28", "title": "Stacked hourglass networks for human pose estimation", "year": "2016" }, { "authors": "Aditi Panda; Ruchira Naskar; Snehanshu Pal", "journal": "IET Image Processing", "ref_id": "b29", "title": "Deep learning approach for segmentation of plain carbon steel microstructure images", "year": "2019" }, { "authors": "Lu Qi", "journal": "", "ref_id": "b30", "title": "Multi-Scale Aligned Distillation for Low-Resolution Detection", "year": "2021" }, { "authors": "Joseph Redmon", "journal": "", "ref_id": "b31", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "Shaoqing Ren", "journal": "", "ref_id": "b32", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "Springer", "ref_id": "b33", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Numan Saeed", "journal": "Infrared Physics Technology", "ref_id": "b34", "title": "Automatic defects detection in CFRP thermograms, using convolutional neural networks and transfer learning", "year": "2019" }, { "authors": "Shinji Sakane", "journal": "Journal of Crystal Growth", "ref_id": "b35", "title": "Three-dimensional morphologies of inclined equiaxed dendrites growing under forced convection by phase-field-lattice Boltzmann method", "year": "2018" }, { "authors": "Pierre Sermanet", "journal": "", "ref_id": "b36", "title": "Overfeat: Integrated recognition, localization and detection using convolutional networks", "year": "2013" }, { "authors": "Mohammad Javad; Shafiee ", "journal": "", "ref_id": "b37", "title": "Fast YOLO: A fast you only look once system for real-time embedded object detection in video", "year": "2017" }, { "authors": "Minghua Sun", "journal": "Scientific Reports", "ref_id": "b38", "title": "Structural and morphological evolution of lead dendrites during electrochemical migration", "year": "2013" }, { "authors": "Tomohiro Takaki", "journal": "Journal of Crystal Growth", "ref_id": "b39", "title": "Unexpected selection of growing dendrites by verylarge-scale phase-field simulation", "year": "2013" }, { "authors": "Zhi Tian", "journal": "", "ref_id": "b40", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": "Peng Wang", "journal": "", "ref_id": "b41", "title": "Contrastive learning based hybrid networks for long-tailed image 
classification", "year": "2021" }, { "authors": "Xinyi Wu", "journal": "", "ref_id": "b42", "title": "Dannet: A one-stage domain adaptation network for unsupervised nighttime semantic segmentation", "year": "2021" }, { "authors": " Zhang Yuyan", "journal": "Acta Armamentarii", "ref_id": "b43", "title": "Internal Defect Detection of Metal Three-dimensional Multi-layer Lattice Structure Based on Faster R-CNN", "year": "2019" }, { "authors": "Gang Zhang", "journal": "", "ref_id": "b44", "title": "Refinemask: Towards high-quality instance segmentation with fine-grained features", "year": "2021" }, { "authors": "Jing Zhang; Zhe Chen; Dacheng Tao", "journal": "International Journal of Computer Vision", "ref_id": "b45", "title": "Towards high performance human keypoint detection", "year": "2021" }, { "authors": "Youjie Zhou", "journal": "IEEE Transactions on Image Processing", "ref_id": "b46", "title": "Large-scale fiber tracking through sparsely sampled image sequences of composite materials", "year": "2016" }, { "authors": "Yukun Zhu", "journal": "", "ref_id": "b47", "title": "segdeepm: Exploiting segmentation and context in deep neural networks for object detection", "year": "2015" } ]
[ { "formula_coordinates": [ 16, 107.81, 215.13, 337.06, 169.72 ], "formula_id": "formula_0", "formula_text": "1 1 0 1 0 0 1 1 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 1 1 (a) 1 0 1 0 1 0 10 1 (b) 2 (c)" }, { "formula_coordinates": [ 18, 261.03, 241.4, 260.29, 10.68 ], "formula_id": "formula_1", "formula_text": "F(x) = H(x) -x (2.1)" }, { "formula_coordinates": [ 24, 123.75, 611.15, 397.57, 47.38 ], "formula_id": "formula_2", "formula_text": "I 1 Ĥ1 = D 1 (I 1 ) (4.1)" }, { "formula_coordinates": [ 25, 232.53, 570.26, 288.79, 45.23 ], "formula_id": "formula_3", "formula_text": "Ĥ1 (i, j) =        1, if Ĥ1 (i, j)>α, 0, otherwise. (4.2)" }, { "formula_coordinates": [ 26, 159.62, 202.93, 361.7, 14.6 ], "formula_id": "formula_4", "formula_text": "I 1 (i -s : i + s, j -s : j + s) = 0, if Ĥ1 (i, j) = 1 (4.3)" }, { "formula_coordinates": [ 26, 201.87, 286.31, 206.32, 45.23 ], "formula_id": "formula_5", "formula_text": "H 2 (i, j) =        0, if Ĥ1 (i, j)=1, H 1 (i, j), otherwise." }, { "formula_coordinates": [ 26, 89.93, 487.14, 289.17, 70.11 ], "formula_id": "formula_6", "formula_text": "β to binarize the predicted heatmap Ĥ2 Ĥ2 (i, j) =        1, if Ĥ2 (i, j)>β," }, { "formula_coordinates": [ 27, 145.41, 329.33, 375.91, 15.76 ], "formula_id": "formula_7", "formula_text": "I (i,j) 3 = I 2 (i -40 : i + 40, j -40 : j + 40), if Ĥ2 (i, j) = 1(4.7)" }, { "formula_coordinates": [ 27, 271.06, 365.2, 70.53, 15.76 ], "formula_id": "formula_8", "formula_text": "I (i,j) 3 if Ĥ2 (i, j" }, { "formula_coordinates": [ 27, 216.04, 439.49, 305.28, 45.23 ], "formula_id": "formula_9", "formula_text": "H 2 (i, j) =        1, if HSR(I (i,j) 3 )=1, 0, otherwise. (4.8)" }, { "formula_coordinates": [ 27, 199.78, 547.68, 212.83, 45.23 ], "formula_id": "formula_10", "formula_text": "Ĥ(i, j) =        1, if Ĥ1 (i, j)=1 or H 2 (i, j)=1,0, otherwise." }, { "formula_coordinates": [ 29, 247.69, 168.21, 273.62, 62.42 ], "formula_id": "formula_11", "formula_text": "L 1 = λ 1 * || Ĥ1 -H 1 || 2 (4.10) L 2 = λ 2 * || Ĥ2 -H 2 || 2 (4.11)" } ]
2023-05-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b1", "b22", "b0", "b26", "b28", "b21", "b24", "b7", "b0", "b23", "b36", "b18", "b12", "b2", "b3", "b10", "b40", "b38", "b38", "b40" ], "table_ref": [], "text": "Generative Adversarial Networks (GANs) have been a powerful tool for generating complex data distributions, e.g., image data. The original GAN suffers from optimisation instability and mode collapse, partially remedied later by an alternative training scheme using integral probability metric (IPM) in lieu of Jensen-Shannon divergence. The IPMs, e.g., metrics based on Wasserstein distances or Maximum Mean Discrepancy (MMD), consistently yield good measures between generated and real data distributions, thus resulting in more powerful GANs on empirical data (Gulrajani et al. [2017], Arjovsky and Bottou [2017], Li et al. [2017]).\nMore recently, Ansari et al. [2020] proposed an IPM based on the characteristic function (CF) of measures on R d , which has the characteristic property, boundedness, and differentiability. Such properties enable the GAN constructed using this IPM as discriminator (\"CF-GAN\") to stabilise training and improve generative performance. However, ineffective in capturing the temporal dependency of sequential data, such CF-metric fails to address high-frequency cases due to the curse of dimensionality. To tackle this issue, we take the continuous time perspective of time series and lift discrete time series to the path space (Lyons [1998], Lyons et al. [2007], Levin et al. [2013]). This allows us to treat time series of variable length, unequal sampling, and high frequency in a unified approach. We propose a path characteristic function (PCF) distance to characterise distributions on the path space, and propose the corresponding PCF distance as a novel IPM to quantify the distance between measures on the path space.\nBuilt on top of the unitary feature of paths (Lou et al. [2022]), our proposed PCF has theoretical foundations deeply rooted in the rough path theory (Chevyrev et al. [2016]), which exploits the non-commutativity and the group structure of the unitary feature to encode information on order of paths. The CF may be regarded as the special case of PCF with linear random path and 1 × 1 unitary matrix. We show that the PCF distance (PCFD) possesses favourable analytic properties, including boundedness and differentiability in model parameters, and we establish the linkages between PCFD and MMD. These results vastly generalise classical theorems on measures on R d (Ansari et al. [2020]), with much more technically involved proofs due to the infinite-dimensionality of path space.\nOn the numerical side, we design an efficient algorithm which, by optimising the trainable parameters of PCFD, maximises the discriminative power and improves the stability and efficiency of GAN training. Inspired by Li et al. [2020], Srivastava et al. [2017], we integrate the proposed PCF into the IPM-GAN framework, utilising an auto-encoder architecture specifically tailored to sequential data. This model design enables our algorithm to generate and reconstruct realistic time series simultaneously, which has advantages in diverse applications such as dimension reduction in downstream tasks and preservation of data privacy (Kieu et al. [2018], Gilpin [2020]). 
To assess the efficacy of our PCF-GAN, we conduct extensive numerical experiments on several standard time series benchmarking datasets for both generation and reconstruction tasks.\nWe summarize key contributions of this work below:\n• proposing a new metric for the distributions on the path space via PCF; • providing theoretical proofs for analytic properties of the proposed loss metric which benefit GAN training; • introducing a novel PCF-GAN to generate & reconstruct time series simultaneously; and • reporting substantial empirical results validating the out-performance of our approach, compared with several state-of-the-art GANs with different loss functions on various time series generation and reconstruction tasks.\nRelated work. Given the wide practical usages of, and challenges for, synthesising realistic time series (Assefa et al. [2020], Bellovin et al. [2019]), various approaches are proposed to improve the quality of GANs for synthetic time series generation (see, e.g., Esteban et al. [2017], Yoon et al. [2019], Xu et al. [2020]). COT-GAN in Xu et al. [2020] shares a similar philosophy with PCF-GAN by introducing a novel discriminator based on causal optimal transport, which can be seen as an improved variant of the Sinkhorn divergence tailored to sequential data. TimeGAN (Yoon et al. [2019]) shares a similar auto-encoder structure, which improves the quality of the generator and enables time series reconstruction. However, the reconstruction and generation modules of TimeGAN are separated in contrast to the PCF-GAN, whereas it has the additional stepwise supervised loss and the discriminative loss." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "The characteristic function of a measure on R d , namely that the Fourier transform, plays a central role in probability theory and analysis. The path characteristic function (PCF) is a natural extension of the characteristic function to the path space." }, { "figure_ref": [], "heading": "Characteristic function distance (CFD) between random variables in R d", "publication_ref": [ "b23", "b0" ], "table_ref": [], "text": "Let X be an R d -valued random variable with the law µ = P • X -1 . The characteristic function of X, denoted as Φ X : R d → C, maps each λ ∈ R d to the expectation of its complex unitary transform:\nΦ X : λ -→ E X∼µ e i λ,X . Here U λ : R d → C, x → e i λ,\nx is the solution to the linear controlled differential equation:\ndU λ (x) = iU λ (x) λ, dx , U λ (0) = 1, (1\n)\nwhere 0 is the zero vector in R d and •, • is the Euclidean inner product on R d .\nIt is proved in Li et al. [2020], Ansari et al. [2020] that if the support of Λ is R d , then CFD Λ is a distance metric, so that CFD 2 Λ (X, Y ) = 0 if and only if X and Y have the same distribution. This justifies the usage of CFD 2 Λ as a discriminator for GAN training to learn finite-dimensional random variables from data." }, { "figure_ref": [], "heading": "Unitary feature of a path", "publication_ref": [ "b25", "b24", "b21", "b16", "b29", "b5", "b27", "b7", "b24" ], "table_ref": [], "text": "Let BV [0, T ]; R d be the space of R d -valued paths of bounded variation over [0, T ]. Consider\nX := x : [0, T ] → R d+1 : x(t) = (t, x(t)) for t ∈ [0, T ]; x ∈ BV [0, T ]; R d ; x(0) = 0 . (3) For a discrete time series x = (t i , x i ) N i=0 , where 0 = t 0 < t 1 < • • • < t N = T and x i ∈ R d (i ∈ {0, • • • , N }),\nwe can embed it into some x ∈ X whose evaluation at (t i ) N i=1 coincides with x. 
This is well suited for sequence-valued data in the high-frequency limit with finer time-discretisation and is often robust in practice (Lyons [2014], Lou et al. [2022]). Such embeddings are not unique. In this work, we adopt the linear interpolation for embedding, following Levin et al. [2013], Kidger et al. [2019], Ni et al. [2020].\nLet C m×m := {m × m complex matrices}, I m be the identity matrix, and * be conjugate transpose. Write U (m) and u(m) for the Lie group of m × m unitary matrices and its Lie algebra, resp.:\nU (m) = {A ∈ C m×m : A * A = I m }, u(m) := {A ∈ C m×m : A * + A = 0}.\nDefinition 2.1. Let x ∈ BV [0, T ]; R d be a continuous path and M : R d → u(m) be a linear map. The unitary feature of x under M is the solution y : [0, T ] → U (m) to the following equation:\ndy t = y t • M (dx t ), x 0 = I m .(4)\nWe write U M (x) := y T , i.e., the endpoint of the solution path.\nBy a slight abuse of notations, U M (x) is also called the unitary feature of x under M . Unitary feature is a special case of the Cartan/path development, for which one may consider paths taking values in any Lie group G. We take only G = U (m) here; m = d in general (Boedihardjo and Geng [2020], Lyons and Xu [2017]).\nExample 2.2. x0) . In particular, when m = 1, u(1) is reduced to iR and M (y) = i λ M , y for some λ M ∈ R d .\nFor M ∈ L R d , u(m) and x ∈ BV [0, T ]; R d linear, U M (X) = e M (x T -\nMotivated by the universality and characteristic property of unitary features (Chevyrev et al. [2016], see Appendix A.3), we constructed a unitary layer which transforms any d-dimensional time series\nx = (x 0 , • • • , x N )\nto the unitary feature of its piecewise linear interpolation X. It is a special case of the path development layer Lou et al. [2022], when Lie algebra is chosen as u(m). In fact, the explicit formula holds:\nU M (X) = N +1 i=1 exp (M (∆x i )), where ∆x i := x i -x i-1 and exp is the matrix exponential. Convention 2.3. The space L R d , u(m) in which M of Eq. (4) resides is isomorphic to u(m) d , where u(m) is Lie algebra isomorphic to R m(m-1) 2\n. For each θ ∈ u(m) d given by anti-Hermitian matrices θ (i) d i=1 , a linear map M is uniquely induced:\nM (x) = d i=1 θ (i) x, e i , ∀x ∈ R d .\n3 Path characteristic function loss" }, { "figure_ref": [], "heading": "Path characteristic function (PCF)", "publication_ref": [ "b7" ], "table_ref": [], "text": "The unitary feature of a path x ∈ X plays a role similar to that played by e i x,λ to an R d -valued random variable. Thus, for a random path X, the expected unitary feature can be viewed as the characteristic function for measures on the path space (Chevyrev et al. [2016]).\nDefinition 3.1. Let X be an X -valued random variable and P X be its measure. The path characteristic function (PCF) of X of order m ∈ N is the map Φ (m)\nX : L R d , u(m) → C m×m given by Φ X (M ) := E[U M (X)] = X U M (x) dP X (x).\nThe path characteristic function (PCF) Φ X :\n∞ m=0 L R d , u(m) → ∞ m=0 C m×m is defined by the natural grading: Φ X L(R d ,u(m)) = Φ (m)\nX for each m ∈ N.\nIn the above, U M (x) ∈ U (m) is the unitary feature of the path x under M . See Definition 2.1.\nSimilarly to the characteristic function of R d -valued random variables, the PCF always exists. Moreover, we have the following important result, whose proof is presented in Appendix A. Theorem 3.2 (Characteristicity). Let X and Y be X -valued random variables. They have the same distribution (denoted as\nX d = Y) if and only if Φ X = Φ Y ." 
}, { "figure_ref": [], "heading": "A new distance measure via PCF", "publication_ref": [ "b1", "b22", "b35" ], "table_ref": [], "text": "We now introduce a novel and natural distance metric, which measures the discrepancy between distributions on the path space via comparing their PCFs. Throughout, d HS denotes the metric associated with the Hilbert-Schmidt norm • HS on C m×m :\nd HS (A, B) := A -B 2 HS = tr [(A -B)(A -B) * ]. Definition 3.3. Let X, Y : [0, T ] → R d be\nstochastic processes and P M be a probability distribution on u(m) d := L R d , u(m) (recall Convention 2.3). Define the squared PCF-based distance (PCFD) between X and Y with respect to P M as\nPCFD 2 M (X, Y) = E M ∼P M d 2 HS Φ X (M ), Φ Y (M ) .(5)\nWe shall not distinguish between M and P M for simplicity.\nPCFD exhibits several mathematical properties, which provide the theoretical justification for its efficacy as the discriminator on the space of measures on the path space, leading to empirical performance boost. First, PCFD has the characteristic property. Lemma 3.4 (Separation of points). Let X, Y ∈ P(X ) and X = Y. Then there exists m ∈ N, such that if M is a u(m) d -valued random variable with full support, then PCFD M (X, Y) = 0.\nFurthermore, PCFD M has a simple uniform upper bound for any fixed m ∈ N: Lemma 3.5. Let M be a u(m) d -valued random variable. Then, for any BV [0, T ]; R d -valued random variables X and Y, it holds that PCFD 2 M (X, Y) ≤ 2m 2 .\nUnder mild conditions, PCFD is a.e. differentiable with respect to a continuous parameter, thus ensuring the feasibility of gradient descent in training. Theorem 3.6 (Lipschitz dependence on continuous parameter). Let X and Z be subsets of BV [0, T ]; R d , (Θ, ρ) be a metric space, Q be a Borel probability measure on Z, and M be a Borel probability measure on u(m\n) d . Assume that g : Θ × Z → X , (θ, Z) → g θ (Z) is Lipschitz in θ such that Tot.Var. [g θ (Z) -g θ (Z)] ≤ ω(Z)ρ (θ, θ ). In addition, suppose that E M ∼P M | M | 2 < ∞ and E Z∼Q [ω(Z)] < ∞. Then PCFD M (g θ (Z), X) is Lipschitz in θ. Moreover, it holds that |PCFD M (g θ (Z), X) -PCFD M (g θ (Z), X)| ≤ E M ∼P M [| M | 2 ] E Z∼Q [ω(Z)] ρ (θ, θ )\nfor any θ, θ ∈ Θ, Z ∈ Z, X ∈ X , and M ∈ P u(m) d .\nRemark 3.7. The parameter space (Θ, ρ) is usually taken to be R d for some d ∈ N. In this case, by Rademacher's theorem PCFD M (g θ (Z), X) is a.e. differentiable in θ.\nSimilarly to metrics on measures over R d (cf. Arjovsky and Bottou [2017], Li et al. [2017]), we construct a metric based on PCFD, denoted as PCFD, on the space P(X ) of Borel probability measures over the path space, and we prove that it metrises the weak-star topology on P(X ). Throughout, d → denotes the convergence in law.\nTheorem 3.8 (Informal, convergence in law). Let {X n } n∈N and X be X -valued random variables with measures supported in a compact subset of X . Then PCFD(X n , X) → 0 ⇐⇒ X n d → X.\nThe formal statement and proof can be found in Lemma B.2 and Theorem B.8 in the Appendix.\nSimilar to Sriperumbudur et al. [2010] for R d , we prove that PCFD can be interpreted as an MMD with a specific kernel κ (see Appendix B.3). Example B.12 illustrates that the PCFD has the superior test power for hypothesis testing on stochastic processes compared with CF distance on the flattened time series." 
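As a sanity check on Definition 3.3 and Lemma 3.5, the self-contained sketch below estimates PCFD²_M(X, Y) by averaging the squared Hilbert–Schmidt distance between the two PCFs over k sampled linear maps and verifies the uniform bound 2m². Random unitary matrices stand in for the unitary features U_M(x) purely for illustration, and all names are ours; the trainable empirical estimator actually used in training is introduced in the next subsection.

```python
import numpy as np

def hs_dist_sq(A, B):
    """Squared Hilbert-Schmidt distance d_HS(A, B)^2 = tr[(A - B)(A - B)^*]."""
    D = A - B
    return float(np.sum(np.abs(D) ** 2))

def pcfd_sq(pcfs_x, pcfs_y):
    """Monte-Carlo estimate of PCFD^2_M(X, Y) in Eq. (5): pcfs_x[j], pcfs_y[j] are
    the PCFs Phi_X(M_j), Phi_Y(M_j) under the same sampled maps M_j ~ P_M."""
    return float(np.mean([hs_dist_sq(A, B) for A, B in zip(pcfs_x, pcfs_y)]))

def random_unitary(m, rng):
    """Random unitary matrix via QR, standing in here for a unitary feature U_M(x)."""
    Z = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    Q, _ = np.linalg.qr(Z)
    return Q

rng = np.random.default_rng(0)
m, k, n = 5, 8, 32
# Each PCF is an expectation (here: a sample mean) of unitary features, cf. Definition 3.1.
pcfs_x = [np.mean([random_unitary(m, rng) for _ in range(n)], axis=0) for _ in range(k)]
pcfs_y = [np.mean([random_unitary(m, rng) for _ in range(n)], axis=0) for _ in range(k)]
assert pcfd_sq(pcfs_x, pcfs_y) <= 2 * m ** 2        # uniform bound of Lemma 3.5
```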
}, { "figure_ref": [], "heading": "Computing PCFD under empirical measures", "publication_ref": [ "b23", "b23" ], "table_ref": [], "text": "Now, we shall illustrate how to compute the PCFD on the path space.\nLet X := {x i } n i=1 and Ȳ := {y i } n i=1 be i.i.d. drawn respectively from X -valued random variables X and Y. First, for any linear map M ∈ u(m) d , the empirical estimator of Φ X (M ) is the average of unitary features of all observations\nX = {x i } n i=1 , i.e., Φ X(M ) = 1 n n i=1 U M (x i ). We then parameterise the u(m) d -valued random variable M via the empirical measure M θ M , i.e., M θ M = k i=1 δ Mi , where θ M := {M i } k i=1 ∈ u(m)\nd×k are the trainable model parameters. Finally, define the corresponding empirical path characteristic function distance (EPCFD) as\nEPCFD θ M X, Ȳ = 1 k k i=1 Φ X(M i ) -Φ Ȳ (M i ) 2 HS . (6\n)\nFigure 1: Flowchart of calculating the PCF Φ X (M θ ).\nOur approach to approximating M via the empirical distribution differs from that in Li et al. [2020], where M is parameterised by mixture of Gaussian distributions. In §4.1 and §5, it is shown that, by optimising the empirical distribution, a moderately sized k is sufficient for achieving superior performance, in contrast to a larger sample size required by Li et al. [2020]." }, { "figure_ref": [ "fig_0" ], "heading": "PCF-GAN for time series generation 4.1 Training of the EPCFD", "publication_ref": [ "b17" ], "table_ref": [], "text": "In this subsection, we apply the EPCFD to GAN training for time series generation as the discriminator.\nWe train the generator to minimise the EPCFD between true and synthetic data distribution, whereas the empirical distribution of M characterised by θ M ∈ u(m) d×k is optimised by maximising EPCFD.\nBy an abuse of notation, let X := R d×n T (Z := R e×n T , resp.) denote the data (noise, resp.) space, composed of R d (R e , resp.) time series of length n T . As discussed in §2.2, X and Z can be viewed as path spaces via linear interpolation. Like the standard GANs, our model is comprised of a generator G θg : Z → R d×n T and the discriminator EPCFD θ M : P(X ) × P(X ) → R + , where θ M ∈ u(m) k×d is the model parameter of the discriminator, which fully characterises the empirical measure of M.\nThe pre-specified noise random variable\nZ = (Z ti ) n T -1 i=0\nis the discretised Brownian motion on [0, 1] with time mesh 1 n T . The induced distribution of the fake data is given by G θg (Z). Hence, the min-max objective of our basic version PCF-GAN is\nmin θg max θ M EPCFD θ M (G θg (Z), X).\nWe apply mini-batch gradient descent to optimise the model parameters of the generator and discriminator in an alternative manner. In particular, to compute gradients of the discriminator parameter θ M , we use the efficient backpropagation algorithm through time introduced in Lou et al. [2022], which effectively leverages the Lie group-valued outputs and the recurrence structure of the unitary feature. The initialisation of θ M for the optimisation is outlined in the Appendix B.4.1.\nLearning time-dependent Ornstein-Uhlenbeck process Following Kidger et al. [2021], we apply the proposed PCF-GAN to the toy example of learning the distribution of synthetic time series data simulated via the time-dependent Ornstein-Uhlenbeck (OU) process. Let (X t ) t∈[0,T ] be an R-valued stochastic process described by the SDE, i.e., dX t = (µt -θX t ) dt + σdB t with X 0 ∼ N (0, 1), where (B t ) t∈[0,T] is 1D Brownian motion and N (0, 1) is the standard normal distribution. 
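For reference, such training data can be generated with a standard Euler–Maruyama discretisation of the SDE above, as in the minimal sketch below; the function name and array layout are ours, and we plug in the experimental values quoted in the next sentence.

```python
import numpy as np

def simulate_time_dependent_ou(n_paths, n_steps, mu, theta, sigma, dt, rng):
    """Euler-Maruyama scheme for dX_t = (mu * t - theta * X_t) dt + sigma dB_t,
    with X_0 ~ N(0, 1). Returns an array of shape (n_paths, n_steps + 1)."""
    x = np.empty((n_paths, n_steps + 1))
    x[:, 0] = rng.standard_normal(n_paths)               # X_0 ~ N(0, 1)
    for i in range(n_steps):
        t = i * dt
        dB = np.sqrt(dt) * rng.standard_normal(n_paths)  # Brownian increments
        x[:, i + 1] = x[:, i] + (mu * t - theta * x[:, i]) * dt + sigma * dB
    return x

rng = np.random.default_rng(0)
paths = simulate_time_dependent_ou(n_paths=10_000, n_steps=630, mu=0.01,
                                   theta=0.02, sigma=0.4, dt=0.1, rng=rng)
data = paths[:, ::10]     # keep every 10th step, i.e. down-sample at integer times
```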
We set µ = 0.01, θ = 0.02, σ = 0.4 and time discretisation δt = 0.1. We generate 10000 samples from t = 0 to t = 63, down-sampled at each integer time point. Figure 2 shows that the synthetic data generated by our GAN model, which uses the EPCFD discriminator, is visually indistinguishable from true data. Also, our model accurately captures the marginal distribution at various time points. " }, { "figure_ref": [], "heading": "PCF-GAN: learning with PCFD and sequential embedding", "publication_ref": [ "b36", "b23", "b36" ], "table_ref": [], "text": "In order to effectively learn the distribution of high-dimensional or complex time series, using solely the EPCF loss as the GAN discriminator fails to be the best approach, due to the computational limitations imposed by the sample size k and the order m of EPCFD. To overcome this issue, we adopt the approach Srivastava et al. [2017], Li et al. [2020], and train a generator that matches the distribution of the embedding of time series via the auto-encoder structure. Figure 3 illustrates the mechanics of our model.\nTo proceed, let us first recall the generator G θg : Z → X and introduce the embedding layer F θ f , which maps X to Z (the noise space). Here θ f is the model parameters of the embedding layer and will be learned from data. To this end, it is natural to optimize the model parameters θ g of the generator by minimising the generative loss L generator , which is the EPCFD distance of the embedding between true distribution X and synthetic distribution G θg (Z); in formula,\nL generator (θ g , θ M , θ f ) = EPCFD θ M (F θ f (G θg (Z)), F θ f (X))). (7\n)\nFigure 3: Visualization of the PCF-GAN architecture\nEncoder(F θ f )-decoder(G θg ) struc- ture:\nThe motivation to consider the auto-encoder structure is based on the observation that the embedding might be degenerated when optimizing L generator . For example, no matter whether true and synthetic distribution agrees or not, F θ f could be simply a constant function to achieve the perfect generator loss 0. Such a degeneracy can be prohibited if F θ f is injective. In heuristic terms, the \"good\" embedding should capture essential information about real time series of X and allows the reconstruction of time series X from its embedding F θ f (X). This motivates us to train the embedding F θ f such that F θ f • G θg is close to the identity map. If this condition is satisfied, it implies that F θ f and G θg are pseudo-inverses of each other, thereby ensuring the desired injectivity. In this way, F θ f and G θg serve as the encoder and decoder of raw data, respectively.\nTo impose the injectivity of F θ f , we consider two additional loss functions for training θ f as follows:\nReconstruction loss L recovery : It is defined as the l 2 samplewise distance between the original and reconstructed noise by\nF θ f • G θg , i.e., L recovery = E[|Z -F θ f (G θg (Z))| 2 ]. Note that L recovery = 0 implies that F θ f (G θg (z)) = z,\nfor any sample z in the support of Z almost surely.\nRegularization loss L regularization : It is proposed to match the distribution of the original noise variable Z and embedding of true distribution X. It is motivated by the observation that if the perfect generator G θ (Z) = X and F θ f • G θg is the identity map, then Z = F θ f (X). Specifically,\nL regularization = EPCFD θ M (Z, F θ f (X)),(8)\nwhere we distinguish θ M from θ M in L generator . The regularization loss effectively stabilises the training and resolves the mode collapse Srivastava et al. 
[2017] due to the lack of infectivity of the embedding.\nTraining the embedding parameters θ f : The embedding layer F θ f aims to not only discriminate the real and fake data distributions as a critic, but also preserve injectivity. Hence we optimise the embedding parameter θ f by the following hybrid loss function:\nmax θ f (L generator -λ 1 L recovery -λ 2 L regularization ) ,(9)\nwhere λ 1 and λ 2 are hyper-parameters that balance the three losses.\nTraining the EPCFD parameters (θ M , θ M ): Note that L generator and L regularization have trainable parameters of EPCFD, i.e., θ M and θ M . Similar to the basic PCF-GAN, we optimize θ M and θ M by maximising the EPCFD to improve the discriminative power.\nmax θ M L generator , max θ M L regularization(10)\nBy doing so, we enhance the discriminative power of EPCFD θ M and EPCFD θ M . Consequently, this facilitates the training of the generator such that the embedding of the true data aligns with both the noise distribution and the reconstructed noise distribution.\nIt is important to emphasise two key advantages of our proposed PCF-GAN. First, it possesses the ability to generate synthetic time series with reconstruction functionality, thanks to the auto-encoder structure of IGM. Second, by virtue of the uniform boundedness of PCFD shown in Lemma 3.5, our PCF-GAN does not require any additional gradient constraints of the embedding layer and EPCFD parameters, in contrast to other MMD-based GANs and Wasserstein-GAN. It helps with the training efficiency and alleviates the vanishing gradient problem in training sequential networks like RNNs.\nWe provide the pseudo-code for the proposed PCF-GAN in Algorithm 1." }, { "figure_ref": [], "heading": "Numerical Experiments", "publication_ref": [], "table_ref": [], "text": "To validate its efficacy, we apply our proposed PCF-GAN to a broad range of time series data and benchmark with state-of-the-art GANs for time series generation using various test metrics. # train the unitary linear maps in EPCFD 5:\nSample from distributions: X ∼ P d , Z ∼ P z ." }, { "figure_ref": [], "heading": "6:", "publication_ref": [], "table_ref": [], "text": "Generator Loss:\nL generator = EPCFD θ M (F θ f (X), F θ f (G θg (Z))) 7: Update: θ M ← θ M + η • θ M L generator 8: Regularization Loss: L regularization = EPCFD θ M (Z, F θ f (X)) 9: Update: θ M ← θ M + η • θ M (L regularization ) 10:\n# train the embedding 11:\nReconstruction Loss:\nL recovery = E[|Z -F θ f (G θg (Z))| 2 ] 12:\nLoss on critic:\nL c = L generator -λ 1 • L recovery -λ 2 • L regularization 13: Update: θ f ← θ f + η • θc L c 14:\nend for 15:\n# train the generator 16:\nSample from distributions: X ∼ P d , Z ∼ P z . 17:\nGenerator Loss: Smaller the values, indicating closer the distributions, are better. To compute three evaluation metrics, we randomly generated 10,000 samples of true and synthetic (reconstructed) distribution resp. The mean and standard deviation of each metric based on 10 repeated random sampling are reported.\nL generator = EPCFD M (F θ f (X), F θ f (G θg (Z))) 18: Update: θ g ← θ g -η • θg L g 19: end while" }, { "figure_ref": [], "heading": "Time series generation", "publication_ref": [], "table_ref": [ "tab_2", "tab_2" ], "text": "Table 2 indicates that PCF-GAN consistently outperforms the other baselines across all datasets, as demonstrated by all three test metrics. 
Specifically, in terms of the discriminative score, PCF-GAN achieves a remarkable performance with values of 0.0108 and 0.0784 on the Rough volatility and Stock datasets, respectively. These values are 61% and 39% lower than those achieved by the second-best model. Regarding the predictive score, PCF-GAN achieves the best result across all four datasets. While COT-GAN surpasses PCF-GAN in terms of the Sig-MMD metric on the EEG dataset, PCF-GAN consistently outperforms the other models in the remaining three datasets. For a qualitative analysis of generative quality, we provide the visualizations of generated samples for all models and datasets in Appendix D without selective bias. Furthermore, to showcase the effectiveness of our auto-encoder architecture for the generation task, we present an ablation study in Appendix D. Visually, the PCF-GAN achieves better reconstruction results than TimeGAN by producing more accurate reconstructed time series samples. Notably, the reconstructed samples from PCF-GAN preserve the temporal dependency of original time series for all four datasets, while some reconstructed samples from TimeGAN in EEG and Stock datasets are completely mismatched. This is further quantified in Table 2 on the reconstruction task, where the reconstructed samples from PCF-GAN consistently outperform those from TimeGAN in terms of all test metrics. By leveraging the effective critic F θ f , we achieve enhanced performance with a moderate increase in parameters (ranging from 1200 to 6400) within θ M of EPCFD. The training of these additional parameters is highly efficient in PCF-GAN, while still outperforming all baseline models. Specifically, our algorithm is approximately twice as fast as TimeGAN (using three extra critic modules) and three times as fast as COT-GAN (with one additional critic module and the Sinkhorn algorithm). However, it takes 1.5 times as long as RGAN due to the extra training required on θ M ." }, { "figure_ref": [], "heading": "Training stability and efficiency", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion & Broader impact", "publication_ref": [], "table_ref": [], "text": "Conclusion We introduce a novel, principled and efficient PCF-GAN model based on PCF for generating high-fidelity sequential data. With theoretical support, it achieves state-of-the-art generative performance with additional reconstruction functionality in various tasks of time series generation.\nLimitation and future work In this work, we use LSTM-based networks for the autoencoder and do not explore other sequential models (e.g., transformers). The suitable choice of network architecture for the autoencoder may further improve the efficacy of the proposed PCF-GAN, which merits further investigation. Additionally, although we establish the link between PCFD and MMD, it is interesting to design efficent algorithms to compute the kernel specified in Appendix B.3.\nBroader impact Like other GAN models, this model has the potential to aid data-hungry algorithms by augmenting small datasets. Additionally, it can enable data sharing in domains such as finance and healthcare, where sensitive time series data is plentiful. However, it is important to acknowledge that the generation of synthetic data also carries the risk of potential misuse (e.g. generating fake news).\nIn Appendix A, we collect some notations and properties for paths and unitary feature of a path. 
Appendix B gives a thorough introduction to the distance function via the path characteristics function.\nDetailed proofs for the theoretical results on PCFD are provided. Appendix C discusses experimental details and Appendix D presents supplementary numerical results." }, { "figure_ref": [], "heading": "A Preliminaries", "publication_ref": [ "b28" ], "table_ref": [], "text": "A.1 Paths with bounded variation Definition A.1. Let X : [0, T ] → R d be a continuous path. The total variation of X on the interval [0, T ] is defined by Tot.Var.(X) := sup\nD⊂[0,T ] X t -X t -1 (11)\nwhere the supremum is taken over all finite partitions\nD = {t } N =0 of [0, T ]. When Tot.Var.(X) is finite, say that X is a path of bounded variation (BV-path) on [0, T ] and denote X ∈ BV [0, T ]; R d .\nBV-paths can be defined without the continuity assumption, but we shall not seek for greater generality in this work. It is well-known that X BV := X C 0 ([0,T ]) + Tot.Var.(X) defines a norm (the BV-norm). There is a more general notion of paths of finite p-variation for p ≥ 1 (see Lyons et al. [2007]), where the case p = 1 corresponds to BV-paths discussed above. We restrict ourselves to p = 1, as this is sufficient for the study of sequential data in practice as piecewise linear approximations of continuous paths.\nDefinition A.2. (Concatenation of paths) Let X : [0, s] → R d and Y : [s, t] → R d be two continuous paths. Their concatenation denoted as the path X Y : [0, t] → R d is defined by (X Y ) u = X u , u ∈ [0, s], Y u -Y s + X s , u ∈ [s, t].\nDefinition A.3 (Tree-like equivalence). A continuous path X : [0, T ] → R d is called tree-like if there is an R-tree T , a continuous function φ : [0, T ] → T , and a function ψ : T → R d such that φ(0) = φ(T ) and X = ψ • φ." }, { "figure_ref": [], "heading": "Let", "publication_ref": [], "table_ref": [], "text": "← -X : [0, T ] → R d denote the time-reversal of continuous path X, namely that ← -X (t) = X(T -t). We say that X and Y are in tree-like equivalence (denoted as\nX ∼ τ Y ) if X ← - Y is tree-like.\nAn important example is when path X is a time re-parameterisation of Y . That is, for X ∈ BV [0, T ]; R d , take a nondecreasing surjection λ : [0, T ] → [T 1 , T 2 ], and take X(t) = Y (λ(t))." }, { "figure_ref": [], "heading": "A.2 Matrix groups and algebras", "publication_ref": [], "table_ref": [], "text": "The unitary group and symplectic group are subsets of the space of m × m matrices:\nU (m) := A ∈ C m×m : A * A = AA * = I m , Sp(2m, C) := A ∈ C 2m×2m : A * J m A = J m .\nwhere J m := 0 I m -I m 0 and I m ∈ C m×m is the identity. Their corresponding Lie algebras are\nu(m) := A ∈ C m×m : A * + A = 0 , sp(2m, C) := A ∈ C 2m×2m : A * J m + J m A = 0 .\nThe unitary group is compact and is a group of isometries of matrix multiplication with respect to the Hilbert-Schmidt norm. Such properties are crucial for establishing theorems and properties related to the path characteristic function (PCF), as discussed in subsequent sections.\nThe compact symplectic group Sp(m) is the simply-connected maximal compact real Lie subgroup of Sp(2m, C). It is the real form of Sp(2n, C), and satisfies Sp(m) = Sp(2m, C) ∩ U (2m).\nNote that U (m) and Sp(m) are both real Lie groups, albeit they have complex entries in general." }, { "figure_ref": [], "heading": "A.3 Unitary feature of a path", "publication_ref": [ "b24", "b14", "b8" ], "table_ref": [], "text": "Recall Definition 2.1 for the unitary feature, reproduced below:\nDefinition A.4. 
Let M : R d → u(m) be a linear map and let X ∈ BV [0, T ]; R d be a BV-path.\nThe unitary feature [a.k.a. the path development on the unitary group U (m)] of x under M is the solution to the equation\ndy t = y t • M (dx t ) for all t ∈ [0, T ] with Y 0 = I m .\nWe write U M (x) := y T .\nDefinition 2.1 is motivated by [Chevyrev et al., 2016, §4]. Consider M ∈ L R d , u(H fd ) with H fd ranging over all finite-dimensional complex Hilbert spaces. Extend M by naturality to the tensor algebra over R d ; that is, define\nM : T R d ≡ ∞ k=0 R d ⊗k → u(H fd\n) by linearity and the following rule:\nM (v 1 ⊗ . . . ⊗ v k ) := M (v 1 ) . . . M (v k ) for any k ∈ N and v 1 , . . . , v k ∈ R d .\nThen denote by A R d the totality of such M . Any element in A R d is a unitary representation of the Lie group G R d := group-like elements in T R d\n. See [Chevyrev et al., 2016, p.4059].\nThe following two lemmas are contained in Lou et al. [2022].\nLemma A.5. [Multiplicativity] Let X ∈ BV [0, s], R d and Y ∈ BV [s, t], R d . Denote by X * Y their concatenation: (X * Y )(v) = X(v) for v ∈ [0, s] and Y (v) -Y (s) + X(s) for v ∈ [s, t]. Then U M (X * Y ) = U M (X) • U M (Y ) for all M ∈ L R d , u(m) .\nWe shall compute by Lemma A.5 and Example 2.2 the unitary feature of piecewise linear paths.\nLemma A.6 (Invariance under time-reparametrisation). Let X ∈ BV([0, T ], R d ) and let λ : t → λ t be a non-decreasing C 1 -diffeomorphism from [0, T ] onto [0, S]. Define X λ t := X λt for t ∈ [0, T ].\nThen, for all M ∈ L R d , u(m) and for every s, t ∈ [0, T ], it holds that U M (X λs,λt ) = U M X λ s,t .\nA key property of the unitary feature is that it completely determines the law of random paths:\nTheorem A.7 (Uniqueness of unitary feature). For any two paths X 1 = X 2 in X , there exists an\nM ∈ L R d , u(m) with some m ∈ N such that U M (X 1 ) = U M (X 2 ).\nProof. For X 1 = X 2 in X , by uniqueness of signature over BV-paths (cf. Hambly and Lyons [2010]) one has Sig(X 1 ) = Sig(X 2 ) in G R d . Here we use the fact that the signatures of BV-paths are group-like elements in the tensor algebra. Then, as A R d separates points over G R d (cf. [Chevyrev et al., 2016, Theorem 4.8]), there is\nM ∈ L R d , u(m) such that M [Sig(X 1 )] = M [Sig(X 2 )]; hence M (X 1 ) = M (X 2 ). Therefore, by considering the U (m)-valued equation dY t = Y t • M (dX t ) with Y 0 = I m , we conclude that U M (X 1 ) = U M (X 2 ).\nTheorem A.8 (Universality of unitary feature). Let K ⊂ BV [0, T ]; R d be a compact subset. For any continuous function f : K → C and any > 0, there exists an m ∈ N and finitely many\nM 1 , • • • , M N ∈ L R d , u(m ) as well as L 1 , . . . , L N ∈ L (U (m ); C), such that sup X∈K f (X) - N i=1 L i • U Mi (X) < .(12)\nProof. It follows from [Lou et al., 2022, Theorem A.4] and the universality of signature in Chevyrev et al. [2018] that Eq. ( 12) holds with M j ∈ L R d , m∈N u(m) and L j ∈ L m∈N U (m); C and /2 in place of . By a simple approximation via restricting the ranges of M j and domains of L j , we may obtain (without relabelling) M j ∈ L R d , m m=0 u(m) and L j ∈ L ( m m=0 U (m); C) that verify Eq. ( 12). We conclude by the flag structure of U (1) ⊂ U (2) ⊂ U (3) ⊂ . . . and u(1) ⊂ u(2) ⊂ u(3) ⊂ . . .." }, { "figure_ref": [], "heading": "B Path Characteristic loss B.1 Path Characteristic function", "publication_ref": [ "b14" ], "table_ref": [], "text": "Theorem B.1. Let X be a X -valued random variable with associated probability measure P X . 
The path characteristic function Φ X uniquely characterises P X .\nProof. Assume that P X1 = P X2 . Then Sig(X 1 ) = Sig(X 2 ) by the uniqueness of signature over BVpaths (cf. Hambly and Lyons [2010]). It is proved in [Lou et al., 2022, Lemma 2.6\n] that U M (X i ) = M (Sig(X i )) for any M ∈ L R d , u(m) ; i ∈ {1, 2}. Hence Φ Xi = X M (Sig(x)) dP Xi (x).\nBut as in the proof of Theorem A.7, A R d separates points over G R d (cf. [Chevyrev et al., 2016, Theorem 4.8]) and the signature of BV-paths lies in G R d . Therefore, there is an\nM ∈ L R d , u(m) such that Φ X = Φ Y ." }, { "figure_ref": [], "heading": "B.2 Distance metric via path characteristic function", "publication_ref": [ "b31", "b37", "b23", "b24", "b40", "b38", "b19" ], "table_ref": [], "text": "Lemma B.2. PCFD M in Eq. (5) defines a pseudometric on the path space X for any m ∈ N and M ∈ P L R d , u(m) . In addition, suppose that {M j } j∈N is a countable dense subset in P L R d , m∈N u(m) . Then the following defines a metric on X :\nPCFD(X, Y) := ∞ j=1 min 1, PCFD Mj (X, Y) 2 j . (13\n) In Lemma B.2 above, L R d , m∈N u(m) ∼ = R d * ⊗ π m∈N u(m)\nwhere ⊗ π is the completion of the projective tensor product and m∈N u(m) is a Banach space under the norm T := m∈N T (m) HS < ∞. Here T (m) denotes the m th -projection of T on u(m). Therefore, such a sequence {M j } j∈N always exists since P L R d , m∈N u(m) , being the space of Borel probability measures over a Polish space, is itself a Polish space. See Parthasarathy [1967].\nProof. Non-negativity, symmetry, and that PCFD M (X, X) = 0 are clear. That PCFD M (X, Y) ≤ PCFD M (X, Z) + PCFD M (Z, Y) follows from the triangle inequality of the Hilbert-Schmidt norm and the linearity of expectation. This shows that PCFD M is a pseudometric for each M.\nIn addition, PCFD M (X, Y) = 0 implies that The result below is formulated in terms of the Hilbert-Schmidt norm of matrices in C m×m . Any other norm on C m×m is equivalent to that, modulo a constant depending on m only. In fact, the strict inequality T op ≤ T HS for T ∈ C m×m holds. See, e.g., [Zimmer, 1990, Lemma 3.1.10, p.55]. e (1-r)Γ(t,1-s) ∂Γ ∂s (t, s)e rΓ(t,s) dr ds HS , thanks to an identity for differentiation of matrix exponential and the inequality T 1 T 2 HS ≤ T 1 op T 2 HS . Here e (1-r)Γ(t,1-s) and e rΓ(t,s) take values in U (m), hence of operator norm 1 for any parameters t, s, r. So we infer that\nL(R d ,u(m)) d 2 HS Φ X (M ), Φ Y (M ) dP M (M ) = L(R d ,u(m)) E [U M (X)] -E [U M (Y)] 2 HS dP M (M ) = 0. So, if P M is supported on the whole of L R d , u(m) , then Φ X (M ) = Φ Y (M )\ne M L(t) -e M L(t) HS ≤ 1 0 1 0 ∂Γ ∂s (t, s) HS dr ds = M L(t) -L(t) HS ≤ |M | L(t) -L(t) e ,\nwhere the first inequality holds for Bochner integrals. See [Yosida, 1980, Corollary 1, p.133].\nLemma B.5 (Subadditivity of unitary feature). Let X, Y ∈ BV [0, T ]; R d be BV-paths, and U M be the unitary feature associated with M ∈ L R d , u(m) = u(m) d . For any 0 < t < T we have\nU M (X) -U M (Y ) HS ≤ U M (X 0,t ) -U M (Y 0,t ) HS + U M (X t,T ) -U M (Y t,T ) HS .\nProof. We apply the multiplicative property of unitary feature in Lemma A.5, the triangle inequality, and the unitary invariance of the Hilbert-Schmidt norm to estimate that\nU M (X) -U M (Y ) HS = U M (X 0,t ) • U M (X t,T ) -U M (Y 0,t ) • U M (Y t,T ) HS ≤ (U M (X 0,t ) -U M (Y 0,t )) • U M (X t,T ) HS + U M (Y 0,t )(U M (X t,T ) -U M (Y t,T )) HS = U M (X 0,t ) -U M (Y 0,t ) HS + U M (X t,T ) -U M (Y t,T ) HS . Proposition B.6. 
For X, Y ∈ X , the unitary feature U M (X) with M ∈ L R d , u(m) = u(m) d satisfies U M (X) -U M (Y) HS ≤ |M | Tot.Var.[X -Y],\nwhere Tot.Var.[X -Y] denotes the total variation over [0, T ] of the path X -Y.\nProof. Given BV-paths X, Y with the same initial point, consider arbitrary piecewise linear approximations {X n }, {Y n } with partition points 0 = t 0 < t 1 < • • • < t n = T . Applying Lemma B.5 recursively, we obtain that\nU M (X n ) -U M (Y n ) HS ≤ n-1 i=0 U M (X n ti,ti+1 ) -U M (Y n ti,ti+1 )\nHS By definition of unitary feature and Lemma B.4, one deduces that\nU M (X n ) -U M (Y n ) HS ≤ n-1 i=0 |M | X n ti,ti+1 -Y n ti,ti+1 e .\nWe may now conclude by taking supremum over all partitions and sending n → ∞.\nTheorem B.7 (Dependence on continuous parameter). Let X and Z be subsets of BV [0, T ]; R d , (Θ, ρ) be a metric space, Q be a Borel probability measure on Z, and M be a Borel probability measure on u(m) d . Assume that g :\nΘ × Z → X , (θ, Z) → g θ (Z) is Lipschitz in θ such that Tot.Var. [g θ (Z) -g θ (Z)] ≤ ω(Z)ρ (θ, θ ). In addition, suppose that E M ∼P M | M | 2 < ∞ and E Z∼Q [ω(Z)] < ∞. Then PCFD M (g θ (Z), X) is Lipschitz in θ. Moreover, it holds that |PCFD M (g θ (Z), X) -PCFD M (g θ (Z), X)| ≤ E M ∼P M | M | 2 E Z∼Q [ω(Z)] ρ (θ, θ )\nfor any θ, θ ∈ Θ, Z ∈ Z, X ∈ X , and M ∈ P u(m) d .\nProof. As PCFD M is a pseudometric (Lemma B.2), we have\n|PCFD M (g θ (Z), X) -PCFD M (g θ (Z), X)| ≤ PCFD M (g θ (Z), g θ (Z)) .\nWe may control the right-hand side as follows, using subsequentially the definitions of PCFD and PCF, Proposition B.6, and the assumptions in this theorem:\nPCFD M (g θ (Z), g θ (Z)) = u(m) d Φ g θ (Z) (M ) -Φ g θ (Z) (M ) 2 HS dP M 1 2 = u(m) d Z [U M (g θ (Z)) -U M (g θ (Z))] dQ(Z) 2 HS dP M 1 2 ≤ u(m) d |M | 2 Z Tot.Var. [g θ (Z) -g θ (Z)] dQ(Z) 2 dP M 1 2 ≤ E M ∼P M | M | 2 Z ω(Z)ρ (θ, θ ) dQ(Z) .\nThis completes the proof.\nThe unitary feature is universal in the spirit of the Stone-Weierstrass theorem; i.e., continuous functions on paths can be uniformly approximated by linear functionals on unitary features.\nAs PCFD metrises the weak topology on the space of path-valued random variables, it emerges as a more sensible distance metric for training time series generations than metrics without this property; e.g., the Jensen-Shannon divergence. Theorem B.8 (Metrisation of weak-star topology). Let K ⊂ X be a compact subset. Suppose that {M j } j∈N is a countable dense subset in P L R d , m∈N u(m) . Then PCFD defined by Eqn. 13 metrises the weak-star topology on P\n(K). That is, PCFD(X n , X) → 0 ⇐⇒ X n d → X as n → ∞,\nwhere d → denotes convergence in distribution of random variables.\nThe metrisability of P(K) follows from general theorems in functional analysis: K is a compact metric space, hence C 0 (K) is separable ([Fabian et al., 2001, Lemma 3.23]). Then, viewing P(K) as the unit circle in C 0 (K) * via Riesz representation, we infer from [Fabian et al., 2001, Proposition 3.24] that P(K) is metrisable in the weak-star topology, which is equivalent to the distributional convergence of random variables.\nProof. The backward direction is straightforward. By the Riesz representation theorem of Radon measures, the distributional convergence is equivalent to that\nK f dP Xn → K f dP X for all continu- ous f ∈ C(K). Thus K U M dP Xn -K U M dP X HS → 0, namely that Φ Xn [M ] → Φ X [M ] for each M ∈ L R d ; u(m)\n. 
The unitary feature U_M is bounded as it is U(m)-valued for some m, so we deduce from the dominated convergence theorem that PCFD(X_n, X) → 0.

We used the Ksig library Toth and Oberhauser [2020] to calculate the Sig-MMD metrics. The codes in Li et al. [2020] were used to compute the characteristic function distance in Example B.12.

Computing infrastructure. The experiments were performed on a computational system running Ubuntu 22.04.2 LTS, comprising three Quadro RTX 8000 and two RTX A6000 GPUs. Each experiment was run independently on a single GPU, with the training phase taking between 6 hours and 3 days, depending on the dataset and models used.

Architectures. To ensure a fair comparison, we employed identical network architectures, with two layers of LSTMs having 32 hidden units, for both the generator and discriminator across all models. For the generator, the output of the LSTM (full sequence) was passed through a Tanh activation function and a linear output layer. All generative models take a multi-dimensional discretized Brownian motion as the noise distribution, scaled so that values remain within the range [-1, 1]. The dimension and scaling factor varied with the dataset and are specified in the individual sections below.

The PCF-GAN uses the development layers on the unitary group Lou et al. [2022] to calculate the PCFD distance. For all experiments, we fixed the unitary matrix size and the coefficient λ2 for the regularization loss to 10 and 1, respectively. The number of unitary linear maps and the coefficient λ1 of the recovery loss were determined via hyper-parameter tuning, which varied depending on the dataset (see the individual sections for details).

Regarding TimeGAN, we followed the approach described in Yoon et al. [2019] and employed embedding, supervisor, and recovery modules. Each of these modules had two layers of LSTMs with 32 hidden units. For COT-GAN, we used two separate modules for discriminators, each with two layers of LSTMs with 32 hidden units. Based on the recommendations from COT-GAN Xu et al. [2020] and informal hyperparameter tuning, we set λ = 10 and ε = 1 for all experiments.

Optimisation & training. We used the ADAM optimizer for all experiments Kingma and Ba [2014], with a learning rate of 0.001 for both generators and discriminators. The learning rate for the unitary development network is 0.005. The initial decay rates in the ADAM optimizer are set to β1 = 0 and β2 = 0.9. The discriminator was trained for two iterations per iteration of the generator's training.

Notably, the inclusion of the two additional losses significantly improved model performance on high-dimensional time series datasets, such as Air Quality and EEG, indicating that the proposed auto-encoder architecture effectively learns meaningful low-dimensional sequential embeddings. Conversely, the exclusive use of the reconstruction loss led to a notable decrease in model performance, suggesting that the l2 samplewise distance might not be suitable for time series data. However, the additional regularization loss helped overcome this issue by ensuring that the sequential embedding space is confined to a predetermined noise space, such as the discretized Brownian motion. As a result, the regularization loss mitigates the problems that arise when relying solely on the reconstruction loss.
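To make the interplay of these losses concrete, the following minimal PyTorch-style sketch combines the adversarial, recovery, and regularization losses in the spirit of Eq. (9) and Algorithm 1. The toy `epcfd` placeholder (a fixed linear projection rather than the trainable unitary-development PCFD), the toy noise, and the network shapes are our own simplifications, not the released PCF-GAN implementation.

```python
# Minimal sketch (not the released PCF-GAN code) of how the three losses interact.
import torch
import torch.nn as nn

class SeqNet(nn.Module):
    """Two-layer LSTM mapping a sequence to a sequence (generator G or critic F)."""
    def __init__(self, in_dim, hidden=32, out_dim=5):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)
    def forward(self, x):
        h, _ = self.lstm(x)
        return torch.tanh(self.head(h))

def epcfd(x, y, proj):
    # Placeholder distance between embedded batches: the real model uses the
    # empirical PCFD built from unitary developments; here a frozen linear
    # projection of the batch means is compared, purely for illustration.
    return ((proj(x).mean(0) - proj(y).mean(0)) ** 2).sum().sqrt()

B, T, d_x, d_z = 64, 20, 5, 5
G, F = SeqNet(d_z, out_dim=d_x), SeqNet(d_x, out_dim=d_z)
proj = nn.Linear(d_z, 8)            # stand-in for the unitary development layer
lam1, lam2 = 50.0, 1.0              # recovery / regularisation weights (dataset dependent)

x_real = torch.randn(B, T, d_x)               # toy batch of real time series
z = torch.randn(B, T, d_z).cumsum(1) * 0.1    # toy discretised Brownian-motion noise

x_fake = G(z)
l_gen = epcfd(F(x_real), F(x_fake), proj)     # adversarial loss in the embedding space
l_rec = ((z - F(G(z))) ** 2).mean()           # reconstruction loss on the noise
l_reg = epcfd(z, F(x_real), proj)             # regularisation loss
critic_loss = -(l_gen - lam1 * l_rec - lam2 * l_reg)  # critic maximises Eq. (9)
generator_loss = l_gen                                 # generator minimises L_generator
print(float(critic_loss), float(generator_loss))
```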
" }, { "figure_ref": [], "heading": "D.2 Generated samples", "publication_ref": [], "table_ref": [], "text": "In this section, we present random samples from the four benchmark datasets generated by PCF-GAN, TimeGAN, RGAN, and COT-GAN. Although interpreting the sample plots of the generated time series poses a challenge, our observations reveal that PCF-GAN successfully generates time series that capture the temporal dependencies exhibited in the original time series across all datasets. Conversely, COT-GAN generates trajectories that are relatively smoother compared to the real time series samples, demonstrated on Stock and EEG datasets, by Figure 8 and Figure 10 respectively. Figure 10 shows that TimeGAN occasionally produces samples with higher oscillations than those found in the real samples. " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "The research of SL is supported by NSFC (National Natural Science Foundation of China) Grant No. 12201399, and the Shanghai Frontiers Science Center of Modern Analysis. This research project is also supported by SL's visiting scholarship at New York University-Shanghai. HN is supported by the EPSRC under the program grant EP/S026347/1, the Alan Turing Institute under the EPSRC grant EP/N510129/1. SL and HN are supported by the SJTU-UCL joint seed fund WH610160507/067. LH is supported by University College London and the China Scholarship Council under the UCL-CSC scholarship (No. 201908060002)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Conversely, suppose that PCFD(X n , X) → 0. Then\nfor any m ∈ N and M ∈ P L R d ; u(m) , in particular for those with full support. In view of the universality Theorem A.8 proved above, for any fixed > 0 and any continuous function f ∈ C 0 (K), by approximating f with sum of finitely many L i • U Mi (the notations are as in Theorem A.8), one infers that for n and m sufficiently large, it holds that\nBy considering those measures with spt(M ) = L R d ; u(m ) , we deduce that\nThis is tantamount to the distributional convergence." }, { "figure_ref": [], "heading": "B.3 Relation with MMD", "publication_ref": [ "b35" ], "table_ref": [], "text": "We now discuss linkages between PCFD and MMD (maximum mean discrepancy) defined over P(X ), the space of Borel probability measures (equivalently, probability distributions) on X . Definition B.9. Given a kernel function κ : X × X → R, the MMD associated to κ is the function MMD κ : P(X ) × P(X ) → R + given as follows: for independent random variables X, Y on X , set MMD 2 κ (P X , P\nThe PCFD can be interpreted as an MMD on measures of the path space with a specific kernel.\nCompare with Sriperumbudur et al. [2010] for the case of R d . Proposition B.10 (PCFD as MMD). Given M ∈ P u(m) d and X -valued random variables X and Y with induced distributions P X and P Y , resp. Then\nThroughout, designates concatenation of paths and ←y is the path obtained by running y backwards. The operation x ←y on the path space is analogous to xy on R d . If y = x, then x ←y is the null path. See the Appendix for proofs and further discussions. Remark B.11 (Computational cost complexity). By Proposition B.10, PCFD is an MMD. However, to compute EPCFD, we may directly calculate the expected distance between the PCFs, without going over the kernel calculations in the MMD approach. Our method is significantly more efficient, especially for large datasets. 
The computational complexity of EPCFD is linear in sample size, whereas the MMD approach is quadratic.\nProof. By definition of PCFD, we have\nrespectively. Using Fubini's theorem and observing that Φ X (M ), Φ Y (M ) HS ∈ L 2 (P M ) (as Φ X (M ) and Φ Y (M ) are U (m)valued, they indeed lie in L ∞ (C m×m ; P M ) as U (m) is a compact Lie group under the Hilbert-Schmidt metric), we deduce that\n]. The first equality then follows from the identification κ(x, y) = E M ∼P M [ U M (x), U M (y) HS ] and the definition of MMD κ .\nOn the other hand, by Lemma A.5 and the definition of the Hilbert-Schmidt inner product on U (m), one may rewrite the kernel function as follows:\n, where denotes the concatenation of paths. The second equality now follows." }, { "figure_ref": [], "heading": "B.4 Empirical PCFD B.4.1 Initialisation of M", "publication_ref": [], "table_ref": [], "text": "A linear map M ∈ L R d , u(m) can be canonically represented by d independent anti-Hermitian matrices M 1 , . . . , M d ⊂ u(m) ∈ C m×m . To sample empiracal distribution of M ∈ P L R d , u(m) from P M , we propose a sampling scheme over u(m). This can also be used as an effective initialisation of model parameters θ M ∈ u(m) d×k for the empirical measure of M.\nIn practice, when working with the Lie algebra u(m), i.e., the vector space of m × m complex-valued matrices that are anti-Hermitian (A * + A = 0, where A * is the transpose conjugate of A), we view each anti-Hermitian matrix as an 2m × 2m real matrix via the isomorphism of R-vector spaces R 2m×2m ∼ = C m×m . Under the above identification, we have the decomposition\nwhere o(m) is the Lie algebra of anti-symmetric m × m real matrices, Sym m×m is the space of m × m real symmetric matrices, z(m) consists of m × m real diagonal matrices and Sym m×m /z(m) denotes the quotient space of real symmetric matrices by the real diagonal matrices.\nThe sampling procedure of P M , is given as follows. First, we simulate R m×m valued and i.i.d random variables A and B, whose elements are i.i.d and satisfy the pre-specified distribution in P(R). We have the decomposition B = D ⊕ E, where D and E are a diagonal random matrix and a off-diagonal random matrix respectively. Then we construct the anti-symmetric matrix R = 1 √ 2 (A T -A) and matrix in the quotient space Sym m×m /z(m), C = 1 √ 2 (E T + E), and diagonal matrix D. Correspondingly, we simulate u(m)-valued random variables by virtue of Eq. ( 14). As the empirical measure of the M can be fully characterised by the model parameters θ M ∈ u(m) d×k , we sample d × k i.i.d. samples which take values in u(m)." }, { "figure_ref": [], "heading": "B.4.2 Hypothesis test", "publication_ref": [ "b23", "b20", "b39" ], "table_ref": [], "text": "In the following, we illustrate the efficacy of the proposed trainable EPCFD metric in the context of the hypothesis test on stochastic processes.\nExample B.12 (Hypothesis testing on fractional Brownian motion). Consider the 3-dimensional Brownian motion B := (B t ) t∈[0,T ] and the fraction Brownian motion B h := (B h t ) t∈[0,T ] with the Hurst parameter h. We simulated 5000 sample paths for both B and B h with 50 discretized time steps. We apply the proposed optimized EPCFD metric to the two-sample testing problem: the null hypothesis H 0 :\nWe compare the optimized EPCFD metric with EPCFD metric with the prespecified distribution (PCF) and the characteristic function distance (CF) on the flattened time series Li et al. [2020]. 
The optimized PCFs are trained on a separate set of 5000 training samples to maximise the PCFD. The details of training can be found in Appendix C.2.

We conduct the permutation test to compute the power of the test (i.e., the probability of correctly rejecting the null H0) and the Type I error (i.e., the probability of falsely rejecting the null H0 when it holds) for varying h ∈ {0.2 + 0.1 · i : i = 0, 1, . . . , 6}. Note that when h = 0.5, B and B^h have the same distribution and hence are indistinguishable. Therefore, the better the test metric is, the closer the test power should be to 0 when h is close to 0.5, and the closer it should be to 1 when h is away from 0.5. We refer to Lehmann et al. [2005] for more in-depth information on hypothesis testing and permutation test statistics.

The plot of the test power and Type I error in Figure 6 shows that CF fails in the two-sample tests, whilst both EPCFD and optimised EPCFD can distinguish samples of the two stochastic processes when h ≠ 0.5. This indicates that the EPCFD captures the distribution of time series much more effectively than the conventional CF metric. Moreover, optimization of EPCFD increases the test power while decreasing the Type I error, particularly when h is closer to 0.5.

For TimeGAN, we followed the training scheme for each module as suggested in the original paper. The batch size was 64 for all experiments. These hyperparameters do not substantially affect the results.

To improve the training stability of the GANs, we employed three techniques. Firstly, we applied a constant exponential decay rate of 0.97 to the learning rate every 500 generator training iterations. Secondly, we clipped the norm of the gradients in both the generator and the discriminator to 10. Thirdly, we used the Cesàro mean of the generator weights after a certain number of iterations to improve the performance of the final model, as suggested by Yazıcı et al. [2019]. In all cases, we selected the number of training iterations such that all methods could produce stable generative samples. The optimal number of training iterations and the weight averaging scheme varied for each dataset. More details can be found in the respective sections.

Test metrics. Discriminative score. The network architecture of the post-hoc classifier consists of two layers of LSTMs with 16 hidden units. The dataset was split into equal proportions of real and generated time series with labels 0 and 1, with an 80% / 20% train/test split for training and evaluation. The discriminative model was trained for 30 epochs using Adam with a learning rate of 0.001 and a batch size of 64. The best classification error on the test set was reported.

Predictive score. The network architecture of the post-hoc sequence-to-sequence regressor consists of two layers of LSTMs with 16 hidden units. The model was trained on the generated time series and evaluated on the real time series, using the first 80% of the time series to predict the last 20%. The predictive model was trained for 50 epochs using Adam with a learning rate of 0.001 and a batch size of 64. The best mean squared error on the test set was reported.

Sig-MMD. We directly computed the Sig-MMD by taking as inputs the real time series samples and the generated time series samples. We used the radial basis function kernel applied to the truncated signature features up to depth 5."
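For reference, a minimal sketch of how such a discriminative score can be computed is given below. The data handling is simplified, and the final reporting convention (raw error versus its gap from the 50% chance level) is an assumption on our part; this is not the authors' evaluation code.

```python
# Minimal sketch of the post-hoc discriminative score: a small LSTM classifier
# is trained to tell real from generated sequences; its held-out error is reported.
import torch
import torch.nn as nn

class PostHocClassifier(nn.Module):
    def __init__(self, in_dim, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        _, (h, _) = self.lstm(x)            # final hidden state of the top layer
        return self.out(h[-1]).squeeze(-1)  # one logit per sequence

def discriminative_score(real, fake, epochs=30, lr=1e-3, batch=64):
    x = torch.cat([real, fake])
    y = torch.cat([torch.zeros(len(real)), torch.ones(len(fake))])
    idx = torch.randperm(len(x))
    x, y = x[idx], y[idx]
    n_train = int(0.8 * len(x))
    x_tr, y_tr, x_te, y_te = x[:n_train], y[:n_train], x[n_train:], y[n_train:]
    clf = PostHocClassifier(real.shape[-1])
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    best_err = 1.0
    for _ in range(epochs):
        for i in range(0, n_train, batch):
            opt.zero_grad()
            loss_fn(clf(x_tr[i:i + batch]), y_tr[i:i + batch]).backward()
            opt.step()
        with torch.no_grad():
            err = ((clf(x_te) > 0).float() != y_te).float().mean().item()
        best_err = min(best_err, err)
    # The tabulated score is often expressed relative to the 50% chance level
    # (e.g. |error - 0.5| in the TimeGAN convention) -- an assumption here.
    return best_err

# toy usage with random tensors standing in for real / generated batches
real, fake = torch.randn(256, 20, 5), torch.randn(256, 20, 5)
print(discriminative_score(real, fake, epochs=2))
```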
}, { "figure_ref": [], "heading": "C.2 Time dependent Ornstein-Uhlenbeck process", "publication_ref": [], "table_ref": [], "text": "On this dataset, we experimented with the basic version of PCF-GAN, which only utilized the EPCFD as the discriminator without the autoencoder structure. The batch size is 256. The model are trained with 20000 generator training iterations and weight averaging on the generator was performed over the final 4000 generator training iterations. We used the 2-dimensional discretized Brownian motion as the noise distribution." }, { "figure_ref": [], "heading": "C.2.1 Rough volatility model", "publication_ref": [ "b30" ], "table_ref": [], "text": "We followed Ni et al. [2021] considering a rough stochastic volatility model for an asset price process (S t ) t∈[0,1] , which satisfies the below stochastic differential equation,\nwhere ξ(t) denotes the forward variance and B H t denotes the frational Brownian motion (fBM) given by\nwhere (Z t ) t∈[0,1] , (B t ) t∈[0,1] are (possibly correlated) Brownian motions. In our experiments, the synthetic dataset is sampled from Equation (15) with t ∈ [0, 1], H = 0.25, ξ(t) ∼ N (0.1, 0.01), η = 0.5 and initial condition log(S 0 ) ∼ N (0, 0.05). Each sample path is sampled uniformly from [0, 1] with the time discretization δt = 0.005, which consists of 200 time steps. We train the generators to learn the joint distribution of the log price and log volatility.\nAll methods are trained with 30000 generator training iterations and weight averaging on the generator was performed over the final 5000 generator training iterations. The input noise vectors have 5 dimension and 200 time steps.\nFor PCF-GAN, the coefficient λ 1 for the recovery loss was 50, and the number of unitary linear maps was 6." }, { "figure_ref": [], "heading": "C.2.2 Stocks", "publication_ref": [ "b23" ], "table_ref": [], "text": "We selected 10 large market cap stocks, which are Google, Apple, Amazon, Tesla, Meta, Microsoft, Nvidia, JP Morgan, Visa and P&G, from 2013 to 2021. The dataset consists of 5 features, including daily open, close, high, low prices and volume, available on https://finance.yahoo.com/ lookup. We truncated the long stock time series into 20 days. The data were normalized with standard Min-Max normalisation on each feature channel. The Stock dataset used in our study is similar to the one employed in Li et al. [2020] but with a broader range of assets. Unlike the previous approach, we avoided sampling the time series using rolling windows with a stride of 1 to mitigate the presence of strong dependencies between samples.\nAll methods are trained with 30000 generator training iterations and weight averaging on the generator was performed over the final 5000 generator training iterations. The input noise vectors have 5 feature dimensions and 20 time steps.\nFor PCF-GAN, the coefficient λ 1 for the recovery loss was 400, and the number of unitary linear maps was 6." }, { "figure_ref": [], "heading": "C.2.3 Beijing Air Quality", "publication_ref": [ "b42" ], "table_ref": [], "text": "We used a dataset of the air quality in Beijing from the UCI repository Zhang et al. [2017] and available on https://archive.ics.uci.edu/ml/datasets/Beijing+ Multi-Site+Air-Quality+Data. Each sample is a 10-dimensional time series of the SO2, NO2, CO, O3, PM2.5, PM10 concentrations, temperature, pressure, dew point temperature and wind speed. Each time series is recorded hourly over the course of a day. 
The data were normalized with standard Min-Max normalisation on each feature channel.\nAll methods are trained with 20000 generator training iterations and weight averaging on the generator was performed over the final 4000 generator training iterations. The input noise vectors have 5 dimensions and 24 time steps.\nFor PCF-GAN, the coefficient λ 1 for the recovery loss was 50, and the number of unitary linear maps was 6." }, { "figure_ref": [], "heading": "C.2.4 EEG", "publication_ref": [], "table_ref": [], "text": "We obtained the EEG eye state dataset from https://archive.ics.uci.edu/ml/datasets/ EEG+Eye+State. The data is from one continuous EEG measurement on 14 variables with 14980 time steps. We truncated the long time series into smaller ones with 20 time steps. The data are subtracted by channel-wise mean, divided by three times the channel-wise standard deviation, and then passed through a tanh nonlinearity.\nAll methods are trained with 30000 generator training iterations and weight averaging on the generator was performed over the final 5000 generator training iterations. The input noise vectors have the 8 dimensional and 20 time steps.\nFor PCF-GAN, the coefficient λ 1 for the recovery loss was 50, and the number of unitary linear maps was 8." }, { "figure_ref": [], "heading": "D Supplementary results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1 Ablation study", "publication_ref": [], "table_ref": [], "text": "An ablation study was conducted on the PCF-GAN model to evaluate the importance of its various components. Specifically, the reconstruction loss and regularization loss were disabled in order to assess their impact on model performance across benchmark datasets and various test metrics. Table 3 consistently demonstrated that the PCF-GAN model outperformed the ablated versions, confirming the significance of these two losses in the overall model performance." }, { "figure_ref": [], "heading": "D.3 Reconstructed samples", "publication_ref": [], "table_ref": [], "text": "In this section, we present additional reconstructed time series samples generated by PCF-GAN and TimeGAN. Figure 11 illustrates that PCF-GAN consistently outperforms TimeGAN by producing higher-quality reconstructed samples across all datasets. " } ]
Generating high-fidelity time series data using generative adversarial networks (GANs) remains a challenging task, as it is difficult to capture the temporal dependence of joint probability distributions induced by time-series data. Towards this goal, a key step is the development of an effective discriminator to distinguish between time series distributions. We propose the so-called PCF-GAN, a novel GAN that incorporates the path characteristic function (PCF) as the principled representation of time series distribution into the discriminator to enhance its generative performance. On the one hand, we establish theoretical foundations of the PCF distance by proving its characteristicity, boundedness, differentiability with respect to generator parameters, and weak continuity, which ensure the stability and feasibility of training the PCF-GAN. On the other hand, we design efficient initialisation and optimisation schemes for PCFs to strengthen the discriminative power and accelerate training efficiency. To further boost the capabilities of complex time series generation, we integrate the auto-encoder structure via sequential embedding into the PCF-GAN, which provides additional reconstruction functionality. Extensive numerical experiments on various datasets demonstrate the consistently superior performance of PCF-GAN over state-of-the-art baselines, in both generation and reconstruction quality.
PCF-GAN: generating sequential data via the characteristic function of measures on the path space
[ { "figure_caption": "Figure 2 :2Figure 2: Left: Sample paths generated from the time-dependent OU process and synthetic paths from PCF-GAN. Right: The marginal distribution comparison at t ∈ {10, 20, 30, 40, 50, 60}.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Evaluation metrics: The following three metrics are used to assess the quality of generative models. For time series generation/reconstruction, we compare the true and fake/reconstructed distribution by G θg • F θ f via the below test metrics. (1) Discriminative scoreYoon et al. [2019]: We train a post-hoc classifier to distinguish true and fake data. We report the classification error on the test data. The better generative model yields a lower classification error, as it means that the classifier struggles to differentiate between true and fake data. (2) Predictive scoreYoon et al. [2019],Esteban et al. [2017]: We train a post-hoc sequence-to-sequence regression model to predict the latter part of a time series given the first part from the generated data. We then evaluate and report the mean square error (MSE) on the true time series data. The lower MSE indicates better the generated data can be used to train a predictive model. (3) Sig-MMD Chevyrev and Oberhauser [2022],Toth and Oberhauser [2020]: We use MMD with the signature feature as a generic metric on time series distribution.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "5. 22Time series reconstruction As TimeGAN is the only baseline model incorporating reconstruction capability, for reconstruction tasks we only compare with TimeGAN. The reconstructed examples of time series using both PCF-GAN and TimeGAN are shown in Figure 4; see Appendix D for more samples.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Examples of time series reconstruction using PCF-GAN and TimeGAN.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Training curves for PCF-GAN of real (left) and generated (right) time series distributions on the Rough Volatility dataset at different training iterations. Plotted by an moving average over a window with 500 iterations.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 55Figure5demonstrates the training progress of the PCF-GAN on RV dataset. Compared to the fluctuating generator loss typically observed in traditional GANs, the PCF-GAN yields better convergence by leveraging the autoencoder structure. This is achieved by minimising reconstruction and regularisation losses, which ensures the injectivity of F θ f and enables production of a semantic embedding throughout the training process. The decay of generator loss in the embedding space directly reflects the improvement in the quality of the generated time series. This is particularly useful for debugging and conducting hyperparameter searches. Furthermore, decay in both recovery and regularisation loss signifies the enhanced performance of the autoencoder.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "for any M ∼ P M . Now, by density of {M j } j∈N in P L R d , m∈N u(m) , there exists a subsequence M j(m)such that M j(m) has full support on L R d , u(m) for each m ∈ N. 
Thus, PCFD(X, Y) = 0 implies that Φ X = Φ Y on a dense subset of L R d , u(m)for every m ∈ N. We conclude by the characteristicity Theorem 3.2 and a continuity argument. Lemma B.3 (Lemma 3.5). Let M be an L R d , u(m) -valued random variable. Then for any BV [0, T ]; R d -valued random variables X and Y, it holds that PCFD M (X, Y) ≤ 2m 2 . Proof. As (C m×m , • HS ) is a Hilbert space, from the Pythagorean theorem one deduces that d 2 HS (Φ X (M ), Φ Y (M )) ≤ Φ X (M ) 2 HS + Φ Y (M ) 2 HS . Both Φ X (M ), Φ Y (M ) are expectations of U (m)-valued random variables, and U HS := tr(U U * ) = tr(I m ) = √ m for U ∈ U (m). Thus d HS (Φ X (M ), Φ Y (M )) ≤ √ 2m. We take expectation over P M to conclude.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Lemma B. 4 .4Let L, L : [a, b] → R d be two linear paths, and let M ∈ u(m) d := L R d , u(m) as before. Denote by • e the usual Euclidean norm on R d and |M | the operator norm of M : R d , • e → (u(m), • HS ). Then we have e M (L(t)) -e M ( L(t)) HS ≤ |M | L(t) -L(t) e for each t ∈ [a, b].Proof. Let Γ(t, s) := M (1 -s)L(t) + s L(t) with t ∈ [a, b] and s ∈ [0, 1]. This is the linear interpolation between Γ(t, 0) = M • L(t) and Γ(t, 1) = M • L(t). Then we have e M L(t) -e M L", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Plots of the test power (Left) and the Type-I error (Right) against the Hurst parameter h ∈ [0.2, 0.8] on the two sample tests for the Brownian motion B against Fractional Brownian motions (B h ) by using three metrics, i.e., PCFD, optimized EPCFD and CFD.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Generated samples from all models on Rough volatility dataset", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Full details on numerics (dataset, evaluation metrics, and hyperparameter choices) are in Appendix C. Additional ablation studies and visualisations of generated samples are reported in Appendix D. We take Recurrent GAN (RGAN)Esteban et al. [2017], TimeGANYoon et al. [2019], and COT-GAN Xu et al. [2020] as benchmarking models. These are representatives of GANs exhibiting strong empirical performance for time series generation. For fairness, we compare our model to the baselines while fixing the generators and embedding/discriminator to be the common sequential neural network (2 layers of LSTMs). We benchmark our model on four different time series datasets with various characteristics: dimensions, sample frequency, periodicity, noise level, and correlation. (1) Rough Volatility: Highfrequency synthetic time series data with low noise-to-signal. 
(2) Stock: The daily historical data on ten publicly traded stocks from 2013 to 2021, including as features the volume and high, low, Algorithm 1 PCF-GAN.1: Input: P d (real time series distribution), P z (noise distribution), θ M ,θ M ,θ f , θ g (model parameters for EPCFD, critic F and generator G), λ 1 , λ 2 ∈ R + (penalty weights), b (batch size), η ∈ R (learning rate), n c the iteration number of discriminator per generator update, . 2: while θ M , θ M , θ M , θ c , θ g not converge do 3:for i ∈ {1, . . . , n c } do", "figure_data": "Dataset: 4:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Summuary statistics for four datasets", "figure_data": "Dataset Dimension Length Sample rate Auto-cor (lag 1) Auto-cor (lag 5) Cross-corRV2200-0.9670.916-0.014Stock5201day0.9580.9220.604Air10241hour0.9470.7520.0487EEG14208ms0.5170.4570.418", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparison of PCF-GAN and baselines. Best for each task shown in bold.", "figure_data": "TaskGenerationReconstructionDatasetTest MetricsRGANCOT-GANTimeGANPCF-GANTimeGAN (R) PCF-GAN(R)Discriminative .0271±.048 .0499±.068 .0327±.019 .0108±.006.5000±.000.2820±.082RVPredictive.0393±.000 .0395±.000 .0395±.001 .0390±.000.0590±.003.0398±.001Sig-MMD.0163±.004 .0116±.003 .0027±.004 .0024±.0013.308±1.34.0960±.050Discriminative .1283±.015 .4966±.002 .3286±.063 .0784±.028.4943±.002.3181±.038StockPredictive.0132±.000 .0144±.000 .0139±.000 .0125±.000.1180±.012.0127±.000Sig-MMD.0248±.008 .0029± .000 .0272±.006 .0017±.000.7587±.186.0078±.004Discriminative .4549±.012 .4992±.002 .3460±.025 .2326±.058.4999±.000.4140±.013AirPredictive.0261±.001 .0260±.001 .0256±.000 .0237±.000.0619±.004.0289±.000Sig-MMD.0456±.015 .0128±.002 .0146±.026 .0126±.006.4141±.078.0359±.012Discriminative .4908±.003 .4931±.007 .4771±.008 .3660±.025.5000±.000.4959±.003EEGPredictive.0315±.000 .0304±.000 .0342±.001 .0246±.000.0499±.001.0328±.001Sig-MMD.0602±.010 .0102±.002 .0640±.025 .0180±.004.0700±.021.0641±.019", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study of PCF-GAN Dataset Test Metrics PCF-GAN w/o L recovery w/o L regularization w/o L regularization &L recovery", "figure_data": "Discriminative .0108±.006 .0178±.017.0152±.020.0101±.007RVPredictive.0390±.000 .0389±.000.0390±.003.0391±.001Sig-MMD.0024±.001 .0037±.001.0036 ±.002.0027±.001Discriminative .0784±.028 .0963±.011.2538±.052.0815±.001StockPredictive.0125±.000 .0123±.000.0127±.000.0126±.001Sig-MMD.0017±.000 .0062±.002.0024±.001.0021±.001Discriminative .2326±.058 .3940±.068.4783±.029.3875±.009AirPredictive.0237±.000 .0239±.000.0283±.001.0240±.000Sig-MMD.0126±.005 .0111±.003.0232±.004.0163±.004Discriminative .3660±.025 .4942±.010.5000±.000.4649±.015EEGPredictive.0246±.000 .0299±.000.0636±.007.0248±.000Sig-MMD.0180±.004 .0296±.0081.197±.234.0278±007", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Hang Lou; Siran Li; Hao Ni
[ { "authors": "A F Ansari; J Scarlett; H Soh", "journal": "", "ref_id": "b0", "title": "A characteristic function approach to deep implicit generative modeling", "year": "2020" }, { "authors": "M Arjovsky; L Bottou", "journal": "", "ref_id": "b1", "title": "Towards principled methods for training generative adversarial networks", "year": "2017" }, { "authors": "S A Assefa; D Dervovic; M Mahfouz; R E Tillman; P Reddy; M Veloso", "journal": "", "ref_id": "b2", "title": "Generating synthetic data in finance: opportunities, challenges and pitfalls", "year": "2020" }, { "authors": "S M Bellovin; P K Dutta; N Reitinger", "journal": "Stan. Tech. L. Rev", "ref_id": "b3", "title": "Privacy and synthetic datasets", "year": "2019" }, { "authors": "L Biewald", "journal": "", "ref_id": "b4", "title": "Experiment tracking with weights and biases", "year": "2020" }, { "authors": "H Boedihardjo; X Geng", "journal": "", "ref_id": "b5", "title": "Sl_2 (r)-developments and signature asymptotics for planar paths with bounded variation", "year": "2020" }, { "authors": "I Chevyrev; H Oberhauser", "journal": "Journal of Machine Learning Research", "ref_id": "b6", "title": "Signature moments to characterize laws of stochastic processes", "year": "2022" }, { "authors": "I Chevyrev; T Lyons", "journal": "The Annals of Probability", "ref_id": "b7", "title": "Characteristic functions of measures on geometric rough paths", "year": "2016" }, { "authors": "I Chevyrev; V Nanda; H Oberhauser", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b8", "title": "Persistence paths and signature features in topological data analysis", "year": "2018" }, { "authors": "K P Chwialkowski; A Ramdas; D Sejdinovic; A Gretton", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "Fast two-sample testing with analytic representations of probability measures", "year": "2015" }, { "authors": "C Esteban; S L Hyland; G Rätsch", "journal": "", "ref_id": "b10", "title": "Real-valued (medical) time series generation with recurrent conditional gans", "year": "2017" }, { "authors": "M Fabian; P Habala; P Hájek; V Montesinos Santalucía; J Pelant; V Zizler", "journal": "Springer-Verlag", "ref_id": "b11", "title": "Functional analysis and infinite-dimensional geometry", "year": "2001" }, { "authors": "W Gilpin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Deep reconstruction of strange attractors from time series", "year": "2020" }, { "authors": "I Gulrajani; F Ahmed; M Arjovsky; V Dumoulin; A C Courville", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b13", "title": "Improved training of Wasserstein GANs", "year": "2017" }, { "authors": "B Hambly; T Lyons", "journal": "Annals of Mathematics", "ref_id": "b14", "title": "Uniqueness for the signature of a path of bounded variation and the reduced path group", "year": "2010" }, { "authors": "C R Heathcote", "journal": "Biometrika", "ref_id": "b15", "title": "The integrated squared error estimation of parameters", "year": "1977" }, { "authors": "P Kidger; P Bonnier; I Perez Arribas; C Salvi; T Lyons", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Deep signature transforms", "year": "2019" }, { "authors": "P Kidger; J Foster; X Li; T J Lyons", "journal": "PMLR", "ref_id": "b17", "title": "Neural SDEs as infinite-dimensional GANs", "year": "2021" }, { "authors": "T Kieu; B Yang; C S Jensen", "journal": "IEEE", 
"ref_id": "b18", "title": "Outlier detection for multidimensional time series using deep neural networks", "year": "2018" }, { "authors": "D P Kingma; J Ba; Adam", "journal": "", "ref_id": "b19", "title": "A method for stochastic optimization", "year": "2014" }, { "authors": "E L Lehmann; J P Romano; G Casella", "journal": "Springer", "ref_id": "b20", "title": "Testing statistical hypotheses", "year": "2005" }, { "authors": "D Levin; T Lyons; H Ni", "journal": "", "ref_id": "b21", "title": "Learning from the past, predicting the statistics for the future, learning an evolving system", "year": "2013" }, { "authors": "C.-L Li; W.-C Chang; Y Cheng; Y Yang; B Póczos; Gan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Towards deeper understanding of moment matching network", "year": "2017" }, { "authors": "S Li; Z Yu; M Xiang; D Mandic", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b23", "title": "Reciprocal adversarial learning via characteristic functions", "year": "2020" }, { "authors": "H Lou; S Li; H Ni", "journal": "", "ref_id": "b24", "title": "Path development network with finite-dimensional Lie group representation", "year": "2022" }, { "authors": "T Lyons", "journal": "", "ref_id": "b25", "title": "Rough paths, signatures and the modelling of functions on streams", "year": "2014" }, { "authors": "T J Lyons", "journal": "Revista Matemática Iberoamericana", "ref_id": "b26", "title": "Differential equations driven by rough signals", "year": "1998" }, { "authors": "T J Lyons; W Xu", "journal": "Journal of Functional Analysis", "ref_id": "b27", "title": "Hyperbolic development and inversion of signature", "year": "2017" }, { "authors": "T J Lyons; M Caruana; T Lévy", "journal": "Springer", "ref_id": "b28", "title": "Differential equations driven by rough paths", "year": "2007" }, { "authors": "H Ni; L Szpruch; M Wiese; S Liao; B Xiao", "journal": "", "ref_id": "b29", "title": "Conditional sig-wasserstein gans for time series generation", "year": "2020" }, { "authors": "H Ni; L Szpruch; M Sabate-Vidales; B Xiao; M Wiese; S Liao", "journal": "", "ref_id": "b30", "title": "Sig-wasserstein gans for time series generation", "year": "2021" }, { "authors": "K R Parthasarathy", "journal": "Academic Press, Inc", "ref_id": "b31", "title": "Probability measures on metric spaces, volume 3 of Probability and Mathematical Statistics", "year": "1967" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Kopf; E Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chintala", "journal": "", "ref_id": "b32", "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b33", "title": "", "year": "2019" }, { "authors": "O Roesler; L Bader; J Forster; Y Hayashi; S Heßler; D Suendermann-Oeft", "journal": "", "ref_id": "b34", "title": "Comparison of eeg devices for eye state classification", "year": "2014" }, { "authors": "B K Sriperumbudur; A Gretton; K Fukumizu; B Schölkopf; G R Lanckriet", "journal": "The Journal of Machine Learning Research", "ref_id": "b35", "title": "Hilbert space embeddings and metrics on probability measures", "year": "2010" }, { "authors": "A Srivastava; L Valkov; C Russell; M U Gutmann; C Sutton", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "VEEGAN: Reducing mode 
collapse in GANs using implicit variational learning", "year": "2017" }, { "authors": "C Toth; H Oberhauser", "journal": "PMLR", "ref_id": "b37", "title": "Bayesian learning from sequential data using Gaussian processes with signature covariances", "year": "2020" }, { "authors": "T Xu; L K Wenliang; M Munn; B Acciaio; Cot-Gan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Generating sequential data via causal optimal transport", "year": "2020" }, { "authors": "Y Yazıcı; C.-S Foo; S Winkler; K.-H Yap; G Piliouras; V Chandrasekhar", "journal": "", "ref_id": "b39", "title": "The unusual effectiveness of averaging in GAN training", "year": "2019" }, { "authors": "J Yoon; D Jarrett; M Van Der Schaar", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b40", "title": "Time-series generative adversarial networks", "year": "2019" }, { "authors": "K Yosida", "journal": "Springer-Verlag", "ref_id": "b41", "title": "Functional analysis", "year": "1980" }, { "authors": "S Zhang; B Guo; A Dong; J He; Z Xu; S X Chen", "journal": "Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences", "ref_id": "b42", "title": "Cautionary tales on air-quality improvement in beijing", "year": "2017" }, { "authors": "R J Zimmer", "journal": "University of Chicago Press", "ref_id": "b43", "title": "Essential results of functional analysis", "year": "1990" } ]
[ { "formula_coordinates": [ 2, 107.64, 664.57, 240.16, 11.23 ], "formula_id": "formula_0", "formula_text": "Φ X : λ -→ E X∼µ e i λ,X . Here U λ : R d → C, x → e i λ," }, { "formula_coordinates": [ 2, 209.39, 694.61, 290.74, 9.65 ], "formula_id": "formula_1", "formula_text": "dU λ (x) = iU λ (x) λ, dx , U λ (0) = 1, (1" }, { "formula_coordinates": [ 2, 500.13, 694.93, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 3, 107.67, 233.27, 396.33, 41.65 ], "formula_id": "formula_3", "formula_text": "X := x : [0, T ] → R d+1 : x(t) = (t, x(t)) for t ∈ [0, T ]; x ∈ BV [0, T ]; R d ; x(0) = 0 . (3) For a discrete time series x = (t i , x i ) N i=0 , where 0 = t 0 < t 1 < • • • < t N = T and x i ∈ R d (i ∈ {0, • • • , N })," }, { "formula_coordinates": [ 3, 141.42, 352.44, 329.15, 11.72 ], "formula_id": "formula_4", "formula_text": "U (m) = {A ∈ C m×m : A * A = I m }, u(m) := {A ∈ C m×m : A * + A = 0}." }, { "formula_coordinates": [ 3, 225.25, 401.5, 278.75, 9.68 ], "formula_id": "formula_5", "formula_text": "dy t = y t • M (dx t ), x 0 = I m .(4)" }, { "formula_coordinates": [ 3, 168.41, 484.64, 307.67, 11.23 ], "formula_id": "formula_6", "formula_text": "For M ∈ L R d , u(m) and x ∈ BV [0, T ]; R d linear, U M (X) = e M (x T -" }, { "formula_coordinates": [ 3, 108, 540.84, 73.94, 9.65 ], "formula_id": "formula_7", "formula_text": "x = (x 0 , • • • , x N )" }, { "formula_coordinates": [ 3, 108, 562.04, 397.74, 52.89 ], "formula_id": "formula_8", "formula_text": "U M (X) = N +1 i=1 exp (M (∆x i )), where ∆x i := x i -x i-1 and exp is the matrix exponential. Convention 2.3. The space L R d , u(m) in which M of Eq. (4) resides is isomorphic to u(m) d , where u(m) is Lie algebra isomorphic to R m(m-1) 2" }, { "formula_coordinates": [ 3, 339.84, 617.38, 148.87, 14.11 ], "formula_id": "formula_9", "formula_text": "M (x) = d i=1 θ (i) x, e i , ∀x ∈ R d ." }, { "formula_coordinates": [ 4, 202.33, 87.14, 264.71, 40.8 ], "formula_id": "formula_10", "formula_text": "X : L R d , u(m) → C m×m given by Φ X (M ) := E[U M (X)] = X U M (x) dP X (x)." }, { "formula_coordinates": [ 4, 108, 130.22, 396.35, 30 ], "formula_id": "formula_11", "formula_text": "∞ m=0 L R d , u(m) → ∞ m=0 C m×m is defined by the natural grading: Φ X L(R d ,u(m)) = Φ (m)" }, { "formula_coordinates": [ 4, 205.49, 223.56, 136.88, 13.16 ], "formula_id": "formula_12", "formula_text": "X d = Y) if and only if Φ X = Φ Y ." }, { "formula_coordinates": [ 4, 108, 310.8, 316.13, 30.44 ], "formula_id": "formula_13", "formula_text": "d HS (A, B) := A -B 2 HS = tr [(A -B)(A -B) * ]. Definition 3.3. Let X, Y : [0, T ] → R d be" }, { "formula_coordinates": [ 4, 192.94, 368.44, 311.06, 13.08 ], "formula_id": "formula_14", "formula_text": "PCFD 2 M (X, Y) = E M ∼P M d 2 HS Φ X (M ), Φ Y (M ) .(5)" }, { "formula_coordinates": [ 4, 108, 567.11, 396, 53.83 ], "formula_id": "formula_15", "formula_text": ") d . Assume that g : Θ × Z → X , (θ, Z) → g θ (Z) is Lipschitz in θ such that Tot.Var. [g θ (Z) -g θ (Z)] ≤ ω(Z)ρ (θ, θ ). In addition, suppose that E M ∼P M | M | 2 < ∞ and E Z∼Q [ω(Z)] < ∞. Then PCFD M (g θ (Z), X) is Lipschitz in θ. Moreover, it holds that |PCFD M (g θ (Z), X) -PCFD M (g θ (Z), X)| ≤ E M ∼P M [| M | 2 ] E Z∼Q [ω(Z)] ρ (θ, θ )" }, { "formula_coordinates": [ 5, 108, 241.4, 396, 40.35 ], "formula_id": "formula_16", "formula_text": "X = {x i } n i=1 , i.e., Φ X(M ) = 1 n n i=1 U M (x i ). 
We then parameterise the u(m) d -valued random variable M via the empirical measure M θ M , i.e., M θ M = k i=1 δ Mi , where θ M := {M i } k i=1 ∈ u(m)" }, { "formula_coordinates": [ 5, 186.55, 302.31, 313.58, 30.32 ], "formula_id": "formula_17", "formula_text": "EPCFD θ M X, Ȳ = 1 k k i=1 Φ X(M i ) -Φ Ȳ (M i ) 2 HS . (6" }, { "formula_coordinates": [ 5, 500.13, 313.04, 3.87, 8.64 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 5, 277.35, 624.43, 65.04, 13.33 ], "formula_id": "formula_19", "formula_text": "Z = (Z ti ) n T -1 i=0" }, { "formula_coordinates": [ 5, 226.87, 668.11, 138.34, 15.31 ], "formula_id": "formula_20", "formula_text": "min θg max θ M EPCFD θ M (G θg (Z), X)." }, { "formula_coordinates": [ 6, 174.38, 508.17, 325.75, 10.32 ], "formula_id": "formula_21", "formula_text": "L generator (θ g , θ M , θ f ) = EPCFD θ M (F θ f (G θg (Z)), F θ f (X))). (7" }, { "formula_coordinates": [ 6, 500.13, 508.49, 3.87, 8.64 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 6, 108, 527.67, 150.09, 19.87 ], "formula_id": "formula_23", "formula_text": "Encoder(F θ f )-decoder(G θg ) struc- ture:" }, { "formula_coordinates": [ 7, 108, 103.71, 396.24, 23.88 ], "formula_id": "formula_24", "formula_text": "F θ f • G θg , i.e., L recovery = E[|Z -F θ f (G θg (Z))| 2 ]. Note that L recovery = 0 implies that F θ f (G θg (z)) = z," }, { "formula_coordinates": [ 7, 228.26, 174.16, 275.74, 11.87 ], "formula_id": "formula_25", "formula_text": "L regularization = EPCFD θ M (Z, F θ f (X)),(8)" }, { "formula_coordinates": [ 7, 212.69, 271.07, 291.31, 15.33 ], "formula_id": "formula_26", "formula_text": "max θ f (L generator -λ 1 L recovery -λ 2 L regularization ) ,(9)" }, { "formula_coordinates": [ 7, 239.67, 351.02, 264.33, 16.96 ], "formula_id": "formula_27", "formula_text": "max θ M L generator , max θ M L regularization(10)" }, { "formula_coordinates": [ 8, 108.5, 164.27, 310.99, 55.09 ], "formula_id": "formula_28", "formula_text": "L generator = EPCFD θ M (F θ f (X), F θ f (G θg (Z))) 7: Update: θ M ← θ M + η • θ M L generator 8: Regularization Loss: L regularization = EPCFD θ M (Z, F θ f (X)) 9: Update: θ M ← θ M + η • θ M (L regularization ) 10:" }, { "formula_coordinates": [ 8, 108.5, 219.95, 274.93, 21.22 ], "formula_id": "formula_29", "formula_text": "L recovery = E[|Z -F θ f (G θg (Z))| 2 ] 12:" }, { "formula_coordinates": [ 8, 108.5, 232.44, 300.33, 30.56 ], "formula_id": "formula_30", "formula_text": "L c = L generator -λ 1 • L recovery -λ 2 • L regularization 13: Update: θ f ← θ f + η • θc L c 14:" }, { "formula_coordinates": [ 8, 108.5, 286.98, 284.79, 30.7 ], "formula_id": "formula_31", "formula_text": "L generator = EPCFD M (F θ f (X), F θ f (G θg (Z))) 18: Update: θ g ← θ g -η • θg L g 19: end while" }, { "formula_coordinates": [ 13, 280.01, 209.5, 223.99, 16.99 ], "formula_id": "formula_32", "formula_text": "D⊂[0,T ] X t -X t -1 (11)" }, { "formula_coordinates": [ 13, 108, 231.43, 397.74, 23.16 ], "formula_id": "formula_33", "formula_text": "D = {t } N =0 of [0, T ]. When Tot.Var.(X) is finite, say that X is a path of bounded variation (BV-path) on [0, T ] and denote X ∈ BV [0, T ]; R d ." }, { "formula_coordinates": [ 13, 108, 345.14, 396, 51.16 ], "formula_id": "formula_34", "formula_text": "Definition A.2. (Concatenation of paths) Let X : [0, s] → R d and Y : [s, t] → R d be two continuous paths. 
Their concatenation denoted as the path X Y : [0, t] → R d is defined by (X Y ) u = X u , u ∈ [0, s], Y u -Y s + X s , u ∈ [s, t]." }, { "formula_coordinates": [ 13, 353.35, 451.7, 125.3, 16.46 ], "formula_id": "formula_35", "formula_text": "X ∼ τ Y ) if X ← - Y is tree-like." }, { "formula_coordinates": [ 13, 205.25, 542.56, 201.5, 27.8 ], "formula_id": "formula_36", "formula_text": "U (m) := A ∈ C m×m : A * A = AA * = I m , Sp(2m, C) := A ∈ C 2m×2m : A * J m A = J m ." }, { "formula_coordinates": [ 13, 197.82, 600.99, 216.36, 27.8 ], "formula_id": "formula_37", "formula_text": "u(m) := A ∈ C m×m : A * + A = 0 , sp(2m, C) := A ∈ C 2m×2m : A * J m + J m A = 0 ." }, { "formula_coordinates": [ 14, 191.28, 152.67, 229.44, 9.68 ], "formula_id": "formula_38", "formula_text": "dy t = y t • M (dx t ) for all t ∈ [0, T ] with Y 0 = I m ." }, { "formula_coordinates": [ 14, 238.67, 219.91, 177.51, 15.11 ], "formula_id": "formula_39", "formula_text": "M : T R d ≡ ∞ k=0 R d ⊗k → u(H fd" }, { "formula_coordinates": [ 14, 149.24, 253.81, 313.52, 11.72 ], "formula_id": "formula_40", "formula_text": "M (v 1 ⊗ . . . ⊗ v k ) := M (v 1 ) . . . M (v k ) for any k ∈ N and v 1 , . . . , v k ∈ R d ." }, { "formula_coordinates": [ 14, 108, 321.26, 396, 35.14 ], "formula_id": "formula_41", "formula_text": "Lemma A.5. [Multiplicativity] Let X ∈ BV [0, s], R d and Y ∈ BV [s, t], R d . Denote by X * Y their concatenation: (X * Y )(v) = X(v) for v ∈ [0, s] and Y (v) -Y (s) + X(s) for v ∈ [s, t]. Then U M (X * Y ) = U M (X) • U M (Y ) for all M ∈ L R d , u(m) ." }, { "formula_coordinates": [ 14, 108, 381.42, 397.74, 24.14 ], "formula_id": "formula_42", "formula_text": "Lemma A.6 (Invariance under time-reparametrisation). Let X ∈ BV([0, T ], R d ) and let λ : t → λ t be a non-decreasing C 1 -diffeomorphism from [0, T ] onto [0, S]. Define X λ t := X λt for t ∈ [0, T ]." }, { "formula_coordinates": [ 14, 108, 452.49, 280.51, 11.23 ], "formula_id": "formula_43", "formula_text": "M ∈ L R d , u(m) with some m ∈ N such that U M (X 1 ) = U M (X 2 )." }, { "formula_coordinates": [ 14, 108, 517.39, 396, 36.61 ], "formula_id": "formula_44", "formula_text": "M ∈ L R d , u(m) such that M [Sig(X 1 )] = M [Sig(X 2 )]; hence M (X 1 ) = M (X 2 ). Therefore, by considering the U (m)-valued equation dY t = Y t • M (dX t ) with Y 0 = I m , we conclude that U M (X 1 ) = U M (X 2 )." }, { "formula_coordinates": [ 14, 108, 586.73, 396, 52.14 ], "formula_id": "formula_45", "formula_text": "M 1 , • • • , M N ∈ L R d , u(m ) as well as L 1 , . . . , L N ∈ L (U (m ); C), such that sup X∈K f (X) - N i=1 L i • U Mi (X) < .(12)" }, { "formula_coordinates": [ 15, 108, 163.32, 397.74, 25.18 ], "formula_id": "formula_46", "formula_text": "] that U M (X i ) = M (Sig(X i )) for any M ∈ L R d , u(m) ; i ∈ {1, 2}. Hence Φ Xi = X M (Sig(x)) dP Xi (x)." }, { "formula_coordinates": [ 15, 108, 202.76, 396, 22.6 ], "formula_id": "formula_47", "formula_text": "M ∈ L R d , u(m) such that Φ X = Φ Y ." }, { "formula_coordinates": [ 15, 202.22, 302.3, 297.63, 30.32 ], "formula_id": "formula_48", "formula_text": "PCFD(X, Y) := ∞ j=1 min 1, PCFD Mj (X, Y) 2 j . (13" }, { "formula_coordinates": [ 15, 108, 313.03, 396, 47.26 ], "formula_id": "formula_49", "formula_text": ") In Lemma B.2 above, L R d , m∈N u(m) ∼ = R d * ⊗ π m∈N u(m)" }, { "formula_coordinates": [ 15, 108, 483.49, 342.98, 66.97 ], "formula_id": "formula_50", "formula_text": "L(R d ,u(m)) d 2 HS Φ X (M ), Φ Y (M ) dP M (M ) = L(R d ,u(m)) E [U M (X)] -E [U M (Y)] 2 HS dP M (M ) = 0. 
So, if P M is supported on the whole of L R d , u(m) , then Φ X (M ) = Φ Y (M )" }, { "formula_coordinates": [ 16, 157.82, 313.5, 301.89, 49.59 ], "formula_id": "formula_51", "formula_text": "e M L(t) -e M L(t) HS ≤ 1 0 1 0 ∂Γ ∂s (t, s) HS dr ds = M L(t) -L(t) HS ≤ |M | L(t) -L(t) e ," }, { "formula_coordinates": [ 16, 124.17, 411.16, 348.73, 11.5 ], "formula_id": "formula_52", "formula_text": "U M (X) -U M (Y ) HS ≤ U M (X 0,t ) -U M (Y 0,t ) HS + U M (X t,T ) -U M (Y t,T ) HS ." }, { "formula_coordinates": [ 16, 108, 458.98, 395.5, 111.28 ], "formula_id": "formula_53", "formula_text": "U M (X) -U M (Y ) HS = U M (X 0,t ) • U M (X t,T ) -U M (Y 0,t ) • U M (Y t,T ) HS ≤ (U M (X 0,t ) -U M (Y 0,t )) • U M (X t,T ) HS + U M (Y 0,t )(U M (X t,T ) -U M (Y t,T )) HS = U M (X 0,t ) -U M (Y 0,t ) HS + U M (X t,T ) -U M (Y t,T ) HS . Proposition B.6. For X, Y ∈ X , the unitary feature U M (X) with M ∈ L R d , u(m) = u(m) d satisfies U M (X) -U M (Y) HS ≤ |M | Tot.Var.[X -Y]," }, { "formula_coordinates": [ 16, 170.23, 631.57, 260.21, 30.32 ], "formula_id": "formula_54", "formula_text": "U M (X n ) -U M (Y n ) HS ≤ n-1 i=0 U M (X n ti,ti+1 ) -U M (Y n ti,ti+1 )" }, { "formula_coordinates": [ 16, 180.26, 679.24, 256.47, 30.32 ], "formula_id": "formula_55", "formula_text": "U M (X n ) -U M (Y n ) HS ≤ n-1 i=0 |M | X n ti,ti+1 -Y n ti,ti+1 e ." }, { "formula_coordinates": [ 17, 107.58, 99.03, 396.42, 73.56 ], "formula_id": "formula_56", "formula_text": "Θ × Z → X , (θ, Z) → g θ (Z) is Lipschitz in θ such that Tot.Var. [g θ (Z) -g θ (Z)] ≤ ω(Z)ρ (θ, θ ). In addition, suppose that E M ∼P M | M | 2 < ∞ and E Z∼Q [ω(Z)] < ∞. Then PCFD M (g θ (Z), X) is Lipschitz in θ. Moreover, it holds that |PCFD M (g θ (Z), X) -PCFD M (g θ (Z), X)| ≤ E M ∼P M | M | 2 E Z∼Q [ω(Z)] ρ (θ, θ )" }, { "formula_coordinates": [ 17, 150.98, 226.01, 310.04, 10.98 ], "formula_id": "formula_57", "formula_text": "|PCFD M (g θ (Z), X) -PCFD M (g θ (Z), X)| ≤ PCFD M (g θ (Z), g θ (Z)) ." }, { "formula_coordinates": [ 17, 149.33, 273.16, 311.64, 150.83 ], "formula_id": "formula_58", "formula_text": "PCFD M (g θ (Z), g θ (Z)) = u(m) d Φ g θ (Z) (M ) -Φ g θ (Z) (M ) 2 HS dP M 1 2 = u(m) d Z [U M (g θ (Z)) -U M (g θ (Z))] dQ(Z) 2 HS dP M 1 2 ≤ u(m) d |M | 2 Z Tot.Var. [g θ (Z) -g θ (Z)] dQ(Z) 2 dP M 1 2 ≤ E M ∼P M | M | 2 Z ω(Z)ρ (θ, θ ) dQ(Z) ." }, { "formula_coordinates": [ 17, 252, 550.67, 253.75, 13.16 ], "formula_id": "formula_59", "formula_text": "(K). That is, PCFD(X n , X) → 0 ⇐⇒ X n d → X as n → ∞," }, { "formula_coordinates": [ 17, 108, 671.04, 397.65, 35.81 ], "formula_id": "formula_60", "formula_text": "K f dP Xn → K f dP X for all continu- ous f ∈ C(K). Thus K U M dP Xn -K U M dP X HS → 0, namely that Φ Xn [M ] → Φ X [M ] for each M ∈ L R d ; u(m)" } ]
10.18653/v1/2020.iwslt-1.3
2023-05-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Speech-to-Speech Machine Translation (SSMT) is the task of automatically translating spoken utterances of one language into spoken utterances of another language; similarly, Text-to-Text Machine Translation (TTMT) is the task of translating the text of one language into that of another. SSMT and TTMT have many important applications. 1 https://www.education.gov.in/sites/upload_ files/mhrd/files/NEP_Final_English_0.pdf India's new National Education Policy 1 (NEP) includes a new language policy that states the preferred medium of instruction should be the mother tongue, local or regional language till Class 5 or even Class 8. Our system will aid the teachers by providing a platform for the live translation of lectures in vernacular Indian languages. The tourism industry in India handled about 700 million tourists (677M domestic and 7M foreign) in 20212 , and it requires translation on a day-to-day basis. About 43.5 million3 cases are pending in Indian Judiciary. A major reason for this is that documents, such as FIR, police charge sheets, and also court judgments are in vernacular languages, and for these appeals to be listed for hearing in higher courts, these documents need to be translated into English4 . TTMT is crucial in Indian Judiciary. The healthcare sector is also a major application of translation as translating healthcare information of diseases such as Covid-195 plays an important role in controlling the spread of diseases. The SSMT system can also be used by doctors to efficiently communicate the diagnosis to patients in their native language.\nOur English-Hindi, English-Marathi, and Hindi-Marathi SSMT system is developed by cascading the Automatic Speech Recognition (ASR), Disfluency Correction (DC), Machine Translation (MT), and Text-to-Speech (TTS) models. The system is efficiently deployed on our servers to serve multiple concurrent users with very low latency. We have also created a public web service through which users can easily access our SSMT and TTMT systems. We have collected feedback on our SSMT system from various stakeholders.\nOur contributions are: 1. Deployment of scalable speech-to-speech and text-to-text machine translation systems for English-Hindi, English-Marathi, and Hindi-Marathi (salient linguistic properties of the languages are mentioned in Appendix A.1) language pairs. 2. Demonstrating a corpus filtering toolkit that can be used to extract high-quality parallel corpus from the noisy pseudo parallel corpus and misaligned parallel corpus." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b30", "b4", "b10", "b15", "b10", "b19", "b8", "b17", "b40", "b41", "b57", "b20", "b21", "b25", "b22", "b33", "b36", "b46", "b9", "b53", "b5", "b55", "b28", "b48", "b14", "b6", "b23", "b54" ], "table_ref": [], "text": "We find two popular SSMT approaches in the literature. Recently an end-to-end speech translation system (Lee et al., 2022) that uses a single neural network is developed. But, it requires a huge amount of high-quality speech-to-speech parallel corpus.\nWe follow the pipeline-based approach, which does not require the speech-to-speech parallel corpus, and involves connecting different components in a cascade to form the SSMT pipeline (Bahar et al., 2020). The literature for each component of our SSMT system is discussed below. 
Hidden Markov Models improved the performance of traditional ASR (Dahl et al., 2012) by associating every phoneme of a given language to an HMM model with transition and emission probabilities obtained from the training corpus (Gales and Young, 2008). Over the last 25 years, the amount of labeled training data has increased in many languages, which has allowed deep learning-based systems to leverage recorded and transcribed speech. From replacing acoustic models in traditional ASR (Dahl et al., 2012) to functioning as an end-to-end speech recognition system (Hannun et al., 2014;Chan et al., 2016;Graves et al., 2013;Rao et al., 2017), deep learning has played a pivotal role in transforming ASR across many languages. Appendix A.2 describes the mathematical formulation and evaluation metrics of ASR.\nDisfluency correction is an essential preprocessing step to clean disfluent sentences before passing the text through downstream tasks like machine translation (Rao et al., 2007;Wang et al., 2010). There are three main approaches in developing DC systems: noisy channel-based techniques (Honal and Schultz, 2004;Jamshid Lou andJohnson, 2017), parsing-based techniques (Honnibal andJohnson, 2014;Jamshid Lou and Johnson, 2020), and sequence tagging-based techniques (Hough and Schlangen, 2015;Ostendorf and Hahn, 2013). Synthetic disfluent data generation by infusing disfluent elements in fluent sentences has received attention recently to compensate for the lack of annotated data in low resource languages (Passali et al., 2022;Saini et al., 2020). Appendix A.3 discusses the surface structure of disfluencies and their various types.\nNMT models are data hungry (Cho et al., 2014;Sutskever et al., 2014;Bahdanau et al., 2015;Vaswani et al., 2017). Kim et al. (2019) proposed pivot-language-based transfer learning techniques for NMT in which the encoder and decoder of source-pivot and pivot-target NMT models are used to initialize the source-target model. Sennrich et al. (2016a) proposed the backtranslation technique in which the synthetic data is created by translating monolingual data. Sen et al. (2021) proposed the phrase pair injection technique in which source-target phrase pairs generated from the source-target parallel corpus using SMT are augmented with source-target parallel corpus. The bad-quality phrase pairs can be filtered out using LaBSE-based (Feng et al., 2022) corpus filtering techniques (Batheja and Bhattacharyya, 2022).\nText-to-Speech generates intelligible and naturalsounding speech using only the input text prompt. Statistical frameworks model speech synthesis through a transition network optimizing a defined cost function by concatenating different phonemes (Hunt and Black, 1996). Deep learningbased systems directly learn the mapping between phoneme-sequence and mel spectrograms, which accurately represent acoustic and prosodic information. WaveNet (van den Oord et al., 2016) and Tacotron (Shen et al., 2018b) are examples of such black box architectures." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss in detail all components and their linking that form the SSMT pipeline." }, { "figure_ref": [ "fig_0" ], "heading": "System Overview", "publication_ref": [], "table_ref": [], "text": "The SSMT system consists of a cascade of four components as demonstrated by Figure 1: The input speech is passed to the ASR system, which transcribes the input speech in the source language. 
This transcription may also contain disfluencies from the speech, which are removed using the DC system. This source language text is translated into the target language text using the MT system. The TTS system then generates the corresponding target language speech." }, { "figure_ref": [], "heading": "Automatic Speech Recognition", "publication_ref": [ "b47", "b45", "b38", "b18" ], "table_ref": [], "text": "Recent deep learning techniques utilize unlabelled speech data using self-supervision and masked language modeling (Schneider et al., 2019;Baevski et al., 2020a). These methods pre-train large transformer models on vast quantities of unlabelled speech data to learn high-quality speech representations, followed by finetuning on limited labeled data to generate transcripts of a spoken utterance. Ruder et al. (2019) extends this technique for multilingual training facilitating finetuning in many low-resource languages. More recently, the Whisper ASR system (Radford et al., 2022) promises robust speech recognition leveraging vast quantities of weakly supervised parallel data to train large transformers for speech recognition and translation. Our ASR system is inspired by Gupta et al. (2021), which pre-trains a wav2vec 2.0 model (Appendix A.4.1) using unlabelled speech data in Indian languages followed by finetuning on labeled data in English and Hindi respectively. We further train this model on Signal to Noise Ratio (SNR) modulated audio samples to make the transcription quality more robust in noisy environments." }, { "figure_ref": [], "heading": "Disfluency Correction", "publication_ref": [ "b29", "b27" ], "table_ref": [], "text": "Our disfluency correction system is based on Kundu et al. (2022) which trains a large multilingual transformer model using real and synthetic data in English and Hindi. We use Google's MuRIL transformer (Khanuja et al., 2021) model since it has better representations for Indian languages and is trained on annotated data for token classification." }, { "figure_ref": [], "heading": "Machine Translation", "publication_ref": [], "table_ref": [], "text": "The TTMT is a task of automatically translating source language text into target-language text. In this section, we explain the data-preprocessing and the system development phases." }, { "figure_ref": [], "heading": "LABSE-based Corpus Filtering", "publication_ref": [ "b13", "b6", "b11", "b34" ], "table_ref": [], "text": "The task of Parallel Corpus Filtering aims to provide a scoring mechanism that helps extract good-quality parallel corpus from a noisy pseudo-parallel corpus. Feng et al. (2020) proposed the LaBSE model, which is a multilingual sentence embedding model trained on 109 languages, including some Indic languages. Batheja and Bhattacharyya (2022) proposed a combined approach of Phrase Pair Injection and LaBSEbased corpus filtering that helps improve the quality of MT systems. We have developed a LaBSEtoolkit that performs the following three tasks:\n1 This limitation creates a hurdle in training good-quality NMT models for lowresource language pairs. In such cases, a related pivot language can be used as an assisting language for the low-resource language pair. The naive cascade pivoting (De Gispert and Marino, 2006) approach suffers from the problems of double decoding time and propagating errors. To avoid these problems we need to train a single source-target NMT model that utilizes the resources of the pivot language. 
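For illustration, the naive cascade pivoting baseline can be sketched as follows; the two model objects and their translate method are hypothetical placeholders. The two decoding passes it performs at inference time, and the propagation of first-pass errors into the second pass, are exactly what the transfer learning approach described next avoids.

# Hypothetical sketch of naive cascade pivoting (source -> pivot -> target).
# Each call is a full decoding pass, so inference time roughly doubles and any
# error in the pivot translation is carried into the target translation.
def pivot_cascade_translate(src_sentence, src_to_pivot_model, pivot_to_tgt_model):
    pivot_sentence = src_to_pivot_model.translate(src_sentence)   # first decoding pass
    return pivot_to_tgt_model.translate(pivot_sentence)           # second decoding pass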
We use the pivot-based transfer learning approach in which we first train a source-pivot and pivot-target NMT model. Then we initialize the source-target model NMT with the encoder and decoder of the source-pivot and pivot-target NMT models. Then we finetune the source-target NMT model on the source-target parallel data. We use the transformer architecture for all the NMT models. The models are trained using the fairseq (Ott et al., 2019) library. We use the ctrans-late26 library for fast and efficient inference of the NMT models in deployment." }, { "figure_ref": [], "heading": "Text-To-Speech", "publication_ref": [ "b54" ], "table_ref": [], "text": "Models like WaveNet (van den Oord et al., 2016) and Tacotron (Shen et al., 2018b) are autoregressive speech synthesis frameworks that predict speech frames one-time step at a time. Since such recurrent prediction frameworks result in slow inference during deployment, non-autoregressive deep learning models like FastSpeech (Ren et al., 2019a) and Forward Tacotron7 were developed which generate speech frames in a single run. Our TTS model is an adaptation of the Forward Tacotron architecture trained on high-quality speech synthesis datasets." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the datasets and models that we used for all the experiments." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "Language Pair # of Sentence Pairs English-Hindi 9.4M English-Marathi 6.2M Hindi-Marathi 2.55M " }, { "figure_ref": [], "heading": "ASR", "publication_ref": [ "b0", "b42", "b16", "b35", "b29", "b39", "b32", "b1" ], "table_ref": [ "tab_1" ], "text": "We choose the CommonVoice dataset where speakers read text prompts and record audio using Computers, mobiles, etc., without professional recording instruments (Ardila et al., 2020). This method infuses noise due to the speaker's environmental conditions. We further augment noise to a part of this dataset to retrain the baseline model (Reddy et al., 2019). Due to resource constraints, we use approximately 25 hours of aligned speech data for English and Hindi, with 20 hours for training and 5 hours for testing. For evaluating English ASR, we use data from the test set of NPTEL2020 -Indian English speech dataset 8 . DC Our DC models are trained on Switchboard, the largest English annotated disfluency correction dataset (Godfrey et al., 1992). For English, we combine the Switchboard dataset with labeled synthetic disfluent data from Passali et al. (2021) to create an equitable distribution of samples from all disfluency types for training and testing. For Hindi, we utilize parallel disfluent-fluent sentences from Switchboard and augment them with synthetic disfluent sentences in Hindi created from transcribed fluent utterances by rule-based disfluency injection methods (Kundu et al., 2022). The Hindi DC model is evaluated on a gold standard dataset created by human-transcribed speech samples from YouTube podcasts and interviews. MT We have used various open-source parallel corpora such as Samanantar (Ramesh et al., 2022), Anuvaad, LoResMT (Ortega et al., 2021) workshop dataset, ILCI (Jha, 2010), and Spoken Tutorial dataset. We also create a parallel corpus for En-Mr of 120K sentences with the help of translation startups. The detailed dataset statistics of the parallel corpora used are mentioned in Table 1. 
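Noisy pseudo-parallel data is screened with the LaBSE-based corpus filtering described in the Machine Translation part of Section 3 before NMT training. A minimal sketch of the sentence-pair scoring it relies on is given below; the sentence-transformers package, the public "sentence-transformers/LaBSE" checkpoint, and the similarity threshold are assumptions of this sketch rather than details of our toolkit, and the phrase pair injection step is omitted.

from sentence_transformers import SentenceTransformer

# Minimal sketch of LaBSE-based parallel corpus filtering: embed both sides,
# score each aligned pair by cosine similarity, and keep pairs above a threshold.
def filter_parallel_corpus(src_sentences, tgt_sentences, threshold=0.8):
    model = SentenceTransformer("sentence-transformers/LaBSE")
    src_emb = model.encode(src_sentences, convert_to_tensor=True, normalize_embeddings=True)
    tgt_emb = model.encode(tgt_sentences, convert_to_tensor=True, normalize_embeddings=True)
    scores = (src_emb * tgt_emb).sum(dim=1)   # cosine similarity of each aligned pair
    return [(s, t) for s, t, sc in zip(src_sentences, tgt_sentences, scores.tolist())
            if sc >= threshold]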
TTS To train our TTS models, we use the In-dicTTS dataset (Baby et al., 2016) containing noisefree read speech. The corpus consists of 4.57 and 4.82 hours of speech data in Hindi and Marathi, respectively. We re-sample speech files to 22.05 kHz and remove terminal silence. We use the e-speak" }, { "figure_ref": [], "heading": "Stakeholder Feedback", "publication_ref": [ "b7" ], "table_ref": [ "tab_2" ], "text": "Co-founder of a voice services start-up We want to build a conversation bot for payment through UPI123Pay. We would like to integrate MT with our system. Doctors from a Mental Health Hospital SSMT can be used in tele-counseling for national telementor health programs. An employee of an agricultural technology company\nWe would like to explore the integration of the SSMT with our application for farmers. An employee of a stock trading startup Interested in collaboration. An employee of an electric vehicle charging station solutions company Use SSMT in providing voice support for EV charging stations. An employee of an e-commerce company Very nice initiative and approach. Requirement of Translation of e-commerce website. phonemizer (Bernard and Titeux, 2021) to convert graphemes to phonemes during training and inference. Dataset details are specified in Table 2 and training details are mentioned in Appendix A.5.4." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b27" ], "table_ref": [], "text": "ASR We use the Hugging Face checkpoint of Vakyansh English and Hindi ASR models and finetune it on the noisy dataset we created. Our experiments use a 12 GB NVIDIA GeForce RTX GPU, which significantly reduces training time. For more details, please refer to Appendix A.5.1 and A.6.1. DC The MuRIL (Khanuja et al., 2021) checkpoint from Hugging Face is used to finetune our DC models. We use the transformers package to train this BERT-based encoder for binary token classification, i.e., the model needs to predict whether each word in the sentence is disfluent or fluent. Training details and baseline comparisons have been provided in Appendix A.5.2 and A.6.2 respectively. MT The NMT models are based on the Transformer architecture. We use the fairseq library to train all models. The detailed model architecture and training details are mentioned in Appendix A.4.2 and A.5.3. The BLEU scores of the bestperforming NMT models for all language pairs on different test sets containing sentences from different domains are mentioned in Appendix A.6.3. TTS The forward tacotron architecture replaces 12 memory-consuming self-attention transformer layers of FastSpeech (Ren et al., 2019b) with the recurrent prediction framework from (Shen et al., 2018a). The autoregressive nature of training is removed by adding a length regulator to predict mel spectrograms in a single pass. In this section, we discuss the performance of our scalable SSMT system. Since there are no automatic evaluation metrics to evaluate an SSMT system, we perform subjective evaluation by conducting a widescale survey. We asked 101 participants to rate five samples per language pair on three key performance indicators (KPIs): Translation Quality (TQ), Speech Quality (SQ), and Interpretability (I) on a scale of 0 to 5. The participants were asked to listen to human-generated source language speech and SSMT-generated target language speech. Results of this survey are described in Table 4.\nOur SSMT system performs well on all three KPIs for all three translation directions. 
The English-Hindi SSMT system demonstrates the highest translation quality by receiving a TQ score of 4.43. For all three directions, the Speech Quality and Interpretability scores are more than 4.5 out of 5, which shows that the TTS system produces good-quality speech output. The performance of our individual ASR, DC, MT, and TTS systems are described in Appendix A.6. The BLEU scores of the MT models represent the quality of our TTMT system." }, { "figure_ref": [], "heading": "Stakeholder Feedback", "publication_ref": [], "table_ref": [ "tab_3", "tab_7" ], "text": "We showcased our SSMT and TTMT systems at large public events to get exposure and gather feedback. Various potential stakeholders like industry personnel, government officials, professors, students, and individuals interacted with our systems. Table 3 lists remarks made by a few of them.\nAlong with an appreciation for our solutions for their robustness, we also received valuable suggestions for improving our systems and extending our work. It led to improvements like reduced latency for both the MT systems, the addition of features including an automatic pause detection in the SSMT and an automatic language detection in the TTMT, enhanced UI, etc. Discussions with a diverse set of people educated us about unique, sought-after applications of the SSMT and TTMT systems. The eagerness expressed by people from various backgrounds to adopt our systems for their specific use cases has shown new directions to extend our work. We compare the median repose times of the deployed system with the baseline system. The baseline system consists of a single SSMT pipeline running on Nvidia RTX 2080Ti GPU. The deployed system consists of 104 SSMT pipelines running on the Nvidia DGX A100 machine which consists of 8 Nvidia A100 80GB GPUs.\nWe have deployed the SSMT system as a web service using ReactJS for the frontend and FastAPI for the backend. The frontend has the functionality to record the input speech and make API calls to the backend with the input speech as a payload. The backend runs the SSMT pipeline (ASR, DC, MT, and TTS models) and the input speech is passed through the SSMT pipeline to generate the output speech. The outputs of the ASR (input transcript), MT (output sentence), and TTS (output speech) are sent to the frontend which displays the output.\nWe have deployed the ASR, DC, MT, and TTS models on the Nvidia DGX A100 machine. The Nvidia DGX A100 machine has 8 Nvidia A100 GPUs and each GPU has 80GB of GPU memory. Each SSMT pipeline (which consists of ASR, DC, MT, and TTS models) occupies a space of around 6GB on GPU memory. In order to efficiently support multiple users at a time and utilize all the GPU memory, we deploy multiple SSMT pipelines on the machine. On each GPU we deploy 13 SSMT pipelines and in total, we deploy 104 SSMT pipelines across the 8 GPUs in the Nvidia DGX A100 machine. We also perform load testing of the SST system using the Locust tool. Table 5 shows the median response times of the SSMT system for different numbers of concurrent users. For 1000 concurrent users the SSMT system has a median response time of 4.4 sec." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this work, we develop Speech-to-Speech Machine Translation (SSMT) and Text-to-Text Machine Translation (TTMT) systems for English, Hindi, and Marathi languages. 
For SSMT, we follow the cascade-based approach that includes Automatic Speech Recognition (ASR), Disfluency Correction (DC), Machine Translation (MT), and Text-to-Speech Synthesis (TTS) components. We also develop the LaBSE-based parallel corpus filtering tool to extract high-quality parallel sentences from a noisy pseudo-parallel corpus for training the TTMT system. We deploy our SSMT and TTMT systems to be scalable so that multiple concurrent users are able to access them with very low latency. We test our systems in the real world and gathered invaluable feedback from various stakeholders that helped further improve our systems.\nWe are actively working towards incorporating more Indian languages into our SSMT and TTMT systems. We are also focusing on a more robust ASR system that is resistant to different accents, dialects, and noises in speech. Future work in TTS will incorporate more Indian languages as well as modifications in our architecture to generate more human-sounding speech. We are working on deploying our SSMT and TTMT systems on multiple GPU clusters to increase the number of concurrent users that can be served maintaining low latency." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The SSMT system brings certain limitations due to design decisions and the inherent complexity of the task at hand. Some of the major ones are:\n1. Our SSMT system is a cascade of multiple components. This architecture implies that errors from any single component are propagated through the pipeline. Hence each component in the system must be robust to ensure high-quality output. 2. The cascade-based SSMT approach has multiple models which increase the computational requirements and latency of the system. 3. End-to-End SSMT systems can potentially capture all the information in the input speech signal, like emotions and accents of speakers. These abilities are missing in the cascadebased SSMT approach that we follow." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Our work aims to develop and deploy a scalable speech-to-speech machine translation system. For training all the ASR, DC, MT, and TTS models, we have used publicly available datasets, and we have cited their sources as well. The training data for NMT, which also includes the data generated by us earlier as a part of another project has already been submitted to concerned agencies which have put the data in the public domain. No user information was present in the datasets protecting users' privacy and identity. Publicly available datasets can sometimes contain biased data. We understand that every dataset is subject to intrinsic bias and that computational models will inevitably learn biased information from any dataset." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "A.1 Salient Linguistic Properties (Referred from Section 1)" }, { "figure_ref": [], "heading": "A.1.1 English", "publication_ref": [], "table_ref": [], "text": "English belongs to the Indo-European language family and shows major influences from French and Latin. English has around 600 million native speakers and around 2 billion total speakers. English follows the subject-verb-object (SVO) word order. English has largely abandoned the inflectional case system in favor of analytic constructions. 
English distinguishes at least seven major word classes: verbs, nouns, adjectives, adverbs, determiners (including articles), prepositions, and conjunctions. English nouns are only inflected for number and possession. English pronouns conserve many traits of the case and gender inflection." }, { "figure_ref": [], "heading": "A.1.2 Hindi", "publication_ref": [], "table_ref": [], "text": "Hindi belongs to the Indo-Aryan language family and has around 300 million native speakers. Hindi is written in the Devanagari script. Most of the modern Hindi vocabulary is borrowed from Sanskrit. Hindi also has influence from Persian. Hindi follows the subject-object-verb word order.\nIn There are 7000+ languages worldwide, but more than half of the world's population uses only 23. Thus ASR has the potential to break the linguistic and communication barriers among the world population. Like any AI system, the amount of data available is significant in designing a state-of-theart system in ASR. There has been a vast amount of research in popular languages like English, Spanish, and German, known as high resource due to the large-scale availability of data and open-source research in these languages. However, low resource languages such as Hindi, Marathi, and Tamil do not have high-performance systems due to the lack of transcribed data.\nWe treat the acoustic input signal as O = {o 1 , o 2 , o 3 , o 4 , . . . } a series of observations and define a sequence of words as the desired output W = {w 1 , w 2 , w 3 , w 4 , . . . }.\nWe would like to get those sequence of words W from the language L, which maximizes the following condition given the acoustic input O." }, { "figure_ref": [ "fig_1" ], "heading": "Ŵ = arg max", "publication_ref": [], "table_ref": [], "text": "W ∈L P (W |O)(1)\nWe can use Bayes rule to rewrite this as -\nŴ = arg max W ∈L P (O|W )P (W ) P (O)(2)\nFor every possible sequence of words W, the denominator of equation 2 is the same. Since we are dealing with the argmax operator, we can ignore the denominator and write the final expression as -\nŴ = arg max W ∈L P (O|W )P (W )(3)\nEach component in a ASR system plays an important role in calculating the above two probabilities.\nWord Error Rate (WER) is a common metric used to evaluate ASR systems. Derived from the Levenshtein distance, WER calculates ground-truth deviations at the word level instead of the phoneme level. Word error rate can then be computed as:\nW ER = S + D + I N = S + D + I S + D + C (4)\nwhere S is the number of substitutions, D is the number of deletions, I is the number of insertions, C is the number of correct words, N is the number of words in the reference A.3 Background of DC Systems (Referred from Section 2)\nDisfluency correction systems learn the mapping between disfluency structure and types of disfluencies while detecting the presence/absence of disfluent utterances. The structure of any disfluent utterance is composed of three parts -reparandum, interregnum, and repair. The reparandum consists of the words incorrectly uttered by the speaker and will need correction or complete removal. Thus this section consists of one or more words that will be repeated or corrected (in case of Repetition or Correction) or abandoned completely (in case of a False Start). It is often followed by a marker called the interruption point which is the point at which the speaker realizes that they have made a mistake. The interregnum consists of acknowledgment words that the previous utterance may not be correct. 
This part consists of an editing term, a nonlexicalized filler pause like \"uh\" or \"um\", discourse markers like \"well\", or \"you know\", or interjections. Interregnum is followed by the repair, which consists of words spoken to correct previous errors.\nWords from the reparandum are finally corrected or repeated (in case of Repetition or Correction), or a completely new sentence is started (in case of False Start) in the repair section. In many cases, the interruption point and interregnum may be a simple pause in utterance and thus can be empty in the structure. Figure 2 illustrates the surface structure of disfluent utterances through an example. " }, { "figure_ref": [], "heading": "A.3.1 Filled Pause", "publication_ref": [], "table_ref": [], "text": "Filled pauses consist of utterances that have no semantic meaning.\nExample -What about the uh party we have to go to?" }, { "figure_ref": [], "heading": "A.3.2 Interjection", "publication_ref": [], "table_ref": [], "text": "Interjections are similar to filled pauses, but their inclusion in sentences indicates affirmation or negation.\nExample -Ugh, what a night it has been!" }, { "figure_ref": [], "heading": "A.3.3 Discourse Marker", "publication_ref": [], "table_ref": [], "text": "Discourse markers help the speaker begin a conversation or keep a turn while speaking. Just like filled pauses and interjections, these words do not add semantic meaning to the sentence." }, { "figure_ref": [], "heading": "Language", "publication_ref": [ "b45", "b38", "b18", "b45", "b38", "b18", "b35", "b27", "b16", "b29" ], "table_ref": [], "text": "Model Word Error Rate English wav2vec 2.0 -base (Baevski et al., 2020b) 49.20 wav2vec 2.0 -XLSR (Ruder et al., 2019) 31.57 Whisper -small (Radford et al., 2022) 28.40 wav2vec 2.0 -Vakyansh (Gupta et al., 2021) 32.80 wav2vec 2.0 -Vakyansh & Noisy finetuning 28.20 Hindi wav2vec 2.0 -XLSR (Ruder et al., 2019) 44.08 Whisper -small (Radford et al., 2022) 34.60 wav2vec 2.0 -Vakyansh (Gupta et al., 2021) 19.14 wav2vec 2.0 -Vakyansh & Noisy finetuning 16.19 (Passali et al., 2021) improved the performance of our baseline MuRIL transformer (Khanuja et al., 2021) trained on the Switchboard corpus (Godfrey et al., 1992). We get similiar results in Hindi when adding synthetic disfluent sentences from (Kundu et al., 2022) improved the performance over a zero shot baseline.\nExample -Well, we are going to the party." }, { "figure_ref": [], "heading": "A.3.4 Repetition or Correction", "publication_ref": [], "table_ref": [], "text": "This disfluency type covers the repetition of certain words in the sentence and correcting words that were incorrectly uttered.\nExample -If I can't don't go to the party today, it is not going to look good." }, { "figure_ref": [], "heading": "A.3.5 False Start", "publication_ref": [], "table_ref": [], "text": "False starts occur when a previous chain of thought is abandoned, and a new idea is begun.\nExample -Tuesdays don't work for me, how about Wednesday?" }, { "figure_ref": [], "heading": "A.3.6 Edit", "publication_ref": [], "table_ref": [], "text": "The Edit disfluency type refers to the set of words that are uttered to correct previous statements.\nExample -We need two tickets, I'm sorry, three tickets for the flight to New York." 
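These categories also underpin the synthetic data generation mentioned earlier in the Related Work and Dataset sections, where disfluent elements are injected into fluent sentences. A minimal, hypothetical sketch covering two of the types above (filled pauses and repetitions) is given below; the filler inventory and probabilities are illustrative only, and the actual rules used, e.g. for Hindi, follow Kundu et al. (2022) and are not reproduced here.

import random

# Hypothetical sketch of rule-based disfluency injection for synthetic training data.
FILLED_PAUSES = ["uh", "um"]

def inject_disfluencies(tokens, p_pause=0.1, p_repeat=0.1):
    disfluent, labels = [], []            # labels: 1 = disfluent token, 0 = fluent token
    for tok in tokens:
        if random.random() < p_pause:     # insert a filled pause before the word
            disfluent.append(random.choice(FILLED_PAUSES))
            labels.append(1)
        if random.random() < p_repeat:    # repeat the word (Repetition type)
            disfluent.append(tok)
            labels.append(1)
        disfluent.append(tok)
        labels.append(0)
    return disfluent, labels

# e.g. inject_disfluencies("we are going to the party".split())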
}, { "figure_ref": [], "heading": "A.4 Model Architecture", "publication_ref": [ "b55" ], "table_ref": [], "text": "A.4.1 wav2vec 2.0 (Referred from Section 3.2)\nThe wav2vec 2.0 model (Baevski et al., 2020b) consists of a multi-layer convolutional feature encoder that acts on the raw audio input to generate speech representations z 1 , z 2 , . . . , z T and quantized tar-gets q 1 , q 2 , . . . , q T for self-supervision. These representations are fed into a Transformer model (Vaswani et al., 2017) to learn context representations c 1 , c 2 , . . . , c T which capture the information contained in the entire sequence. The masked learning objective hides certain time steps in the speech representation space and the objective during training is to predict the quantized targets for these time steps." }, { "figure_ref": [], "heading": "A.4.2 Neural Machine Translation", "publication_ref": [ "b55" ], "table_ref": [], "text": "We used the Transformer (Vaswani et al., 2017) architecture for all the NMT models in our experiments. We used the Indic NLP library for preprocessing the Indic language data and Moses for preprocessing the English language data. For Indic languages, we normalize and tokenize the data. For English, we lowercase and tokenize the data. We use the byte pair encoding (Sennrich et al., 2016b) technique to convert the words in the data into subwords. We perform byte pair encoding with 24,000 merge operations on the dataset. We used the fairseq library for training all Transformer based NMT models in all our experiments. We used the Adam optimizer with beta values of 0.9 and 0.98. We used the inverse square root learning rate scheduler with 4000 warm-up updates and a learning rate of 5e-4. The dropout probability value used was 0.1. We used label-smoothed cross-entropy loss with a label-smoothing value of 0.1. The batch size used was 4096 tokens. We train all the models for 200,000 steps and pick the models that give the best loss on the validation set as the final model. We train all our models on a single Nvidia A100 80GB GPU." }, { "figure_ref": [], "heading": "A.5.4 Text-to-Speech", "publication_ref": [ "b37" ], "table_ref": [ "tab_10", "tab_11" ], "text": "The Forward Tacotron model is sensitive to learning rate scheduling during training. For every language, learning rates were experimentally determined. While training the Tacotron model for extracting alignments, learning rates had to be changed after fixed optimization steps. This pro-cess took only 25K steps since we only train for attention alignments. Since we use only around 5 hours of data in each language, we reduce the total optimization steps from 300K to 40K which was sufficient for model convergence.\nA.6 Results (Referred from Section 5)\nA.6.1 Automatic Speech Recognition (Referred from Section 4.2) Due to the lack of work in Indian languages disfluency correction, we report the performance of our models compared to a strong baseline with real labeled data. We observe that adding synthetic data in complex disfluencies such as Repetitions and False Starts improves both the precision and recall of our models (Table 7).\nA.6.3 Machine Translation (Referred from Section 4.2)\nTable 8 shows the BLEU scores of the bestperforming NMT models for all the language pairs. These NMT models are used in the SSMT pipeline. We compute the BLEU scores using the sacrebleu (Post, 2018) library." 
}, { "figure_ref": [], "heading": "A.6.4 Text To Speech", "publication_ref": [ "b12" ], "table_ref": [ "tab_12" ], "text": "To compare the performance of our model with a strong baseline, we train the autoregressive Tacotron 2 architecture with the dataset we use to train our Forward Tacotron model. Instead of training from scratch, we use a transliteration module to convert Hindi and Marathi sentences in Devanagari to Roman characters (Debnath et al., 2020). This jumpstarts the training since the model finetunes its grapheme to phoneme embedding without learning them for a new script. TTS systems are evaluated using subjective surveys of speech outputs measuring the scores (out of 5.0) for two parameters -Audio Quality (AQ) and Interpretability (I). The Mean Opinion Score (MOS) is calculated as an average of these two metrics. Table 9 shows the results of our TTS system evaluation." } ]
In this work, we present our deployment-ready Speech-to-Speech Machine Translation (SSMT) system for English-Hindi, English-Marathi, and Hindi-Marathi language pairs. We develop the SSMT system by cascading Automatic Speech Recognition (ASR), Disfluency Correction (DC), Machine Translation (MT), and Text-to-Speech Synthesis (TTS) models. We discuss the challenges faced during the research and development stage and the scalable deployment of the SSMT system as a publicly accessible web service. On the MT part of the pipeline too, we create a Text-to-Text Machine Translation (TTMT) service in all six translation directions involving English, Hindi, and Marathi. To mitigate data scarcity, we develop a LaBSE-based corpus filtering tool to select high-quality parallel sentences from a noisy pseudo-parallel corpus for training the TTMT system. All the data used for training the SSMT and TTMT systems and the best models are being made publicly available. Users of our system are (a) Govt. of India in the context of its new education policy (NEP), (b) tourists who criss-cross the multilingual landscape of India, (c) Indian Judiciary where a leading cause of the pendency of cases (to the order of 10 million as on date) is the translation of case papers, (d) farmers who need weather and price information and so on. We also share the feedback received from various stakeholders when our SSMT and TTMT systems were demonstrated in large public events.
VAKTA-SETU: A Speech-to-Speech Machine Translation Service in Select Indic Languages
[ { "figure_caption": "Figure 1 :1Figure 1: Speech-to-Speech Machine Translation pipeline", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Surface structure of disfluencies There are six types of disfluencies encountered in real life -Filled Pause, Interjection, Discourse Marker, Repetition or Correction, False Start, and Edit. This section describes each type of disfluency and gives some examples in English.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Dataset Statistics for the task of NMT", "figure_data": "LanguageASRTTS(# of hours) (# of hours)English25.57NAHindi24.924.57MarathiNA4.82", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Dataset Statistics for the task of ASR and TTS", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Feedback received from potential stakeholders of our SSMT and TTMT systems.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Human Evaluation Scores of the SSMT system. The number of participants in the survey was 101.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Median response times (in milliseconds) of the SSMT system for different numbers of concurrent users.", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Hindi, nouns are inflected for number, gender, and case. Hindi has two numbers, singular and plural. It has two grammatical genders, masculine and feminine. And it has two cases direct and oblique. The gender of inanimate objects is not predictable from the form or meaning. Pronouns are inflected for numbers and cases. Adjectives are of two types declinable and indeclinable. Verbs are inflected for person, number, gender, tense, mood, and aspect.", "figure_data": "A.1.3 MarathiMarathi belongs to the Indo-Aryan language familyand has around 83 million native speakers. Marathiis written in the Devanagari script. Marathi em-ploys agglutinative, inflectional, and analyticalforms. Marathi has three grammatical genders,masculine, feminine, and neuter. Marathi followsthe subject-object-verb word order. Marathi alsoshares vocabulary and grammar with Dravidian lan-guages. Marathi follows a split-ergative pattern ofverb agreement and case marking. An unusualfeature of Marathi, as compared to other Indo-European languages, is that it displays inclusiveand exclusive we, common to the Dravidian lan-guages.A.2 Background of ASR Systems (Referredfrom Section 2)Speech recognition is often the first task in inter-active & intelligent NLP agents and is a crucialmodule for downstream systems. However, ASRdid not receive much attention till the first half ofthe 20 th century. After the 1950s, corporationsworldwide started investing in recognition tech-nologies, paving the way for high-quality researchand production.", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results of evaluating ASR baselines on chosen English & Hindi test sets. Word Error Rate (WER) is calculated as a percentage and follows an inverse relationship with recognition accuracy. 
Higher WER indicates a worse model and a lower WER indicates a better model.", "figure_data": "LanguageModelPrecision RecallF1 ScoreEnglishMuRIL -SWBD94.9694.3394.64MuRIL -SWBD & LARD97.9295.0996.48HindiMuRIL -SWBD68.2458.4662.97MuRIL -SWBD & Syn Hi85.3879.4182.29", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "F1 scores of DC models in English & Hindi. Adding synthetic disfluent sentences from", "figure_data": "", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "BLEU scores of the NMT models for English-Marathi, English-Hindi, and Hindi-Marathi language pairs on different test sets. We used the sacrebleu(Post, 2018) library to compute the BLEU scores. The FLORES test set consists of 1012 parallel sentences across various domains, so it is a multi-domain test set. The Tico-19 test set consists of 2100 sentences from the healthcare domain. The ILCI test set consists of 2000 sentences from the tourism and healthcare domain.", "figure_data": "The transformer model has 6 encoder layersand 6 decoder layers. The number of encoder atten-tion heads is 8 and the number of decoder attentionheads is 8. The encoder and decoder embeddingdimensions are 512. The encoder and decoder feed-forward layer dimensions are 2048. The number ofhyperparameters of the Transformer model that weused is 75M.A.5 Training DetailsA.5.1 Automatic Speech Recognition(Referred from Section 4.2)The ASR architecture we use is based on thewav2vec 2.0 (Baevski et al., 2020b) framework andconsists of 12 blocks of model dimension 768 and", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Results of the subjective evaluation for our TTS systems; Audio Quality (AQ) provides information about the speech quality generated and general prosody of the system output; Interpretability (I) asks surveyors to evaluate the audio outputs for clarity of speech and understanding of the semantic content; All scores are reported", "figure_data": "out of 5.08 attention blocks. Audio samples are re-sampledA.5.3 Neural Machine Translation (Referredto 16K KHz and cropped to 250,000 audio framesfrom Section 4.2)with a dropout of 0.1. This base model is pre-trained on unlabelled speech data for almost 300Kiterations starting with a learning rate of 5e-1. Weoptimize the pre-training loss function using Adam.During finetuning, a fully connected layer is addedafter the transformer block for character-level pre-diction. For our experiments, we use a pretrainedtransformer encoder so that a limited amount ofdata is only used to finetune the weights of theouter fully connected layer.A.5.2 Disfluency Correction (Referred fromSection 4.2)We use the MuRIL (Khanuja et al., 2021) trans-former model (muril-base-cased) from HuggingFace for our DC experiments. MuRIL consists ofa BERT base encoder model pretrained on textualdata in 17 Indian languages for the masked lan-guage modeling and translated language modelingobjectives. Training is performed for 1M steps witha maximum sequence length of 512 and a globalbatch size of 4096. The AdamW optimizer is usedwith a learning rate of 5e-4. The final model has236M parameters. We utilize this pretrained check-point and finetune it for disfluency correction byadding a subword token classifier on top of the en-coder. 
For each subword identified by the MuRILtokenizer, the model predicts if the token is disflu-ent or fluent.", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Table6provides details about the experimentation we perform with different iterations of the wav2vec 2.0 architecture. We also use the Whisper-small model for comparison. Our experiments show that noisy finetuning by training baseline systems with synthetically injected noisy data improves recognition accuracy in Indian English and Hindi. A similar experiment for the Whisper architecture will be conducted in future work.", "figure_data": "A.6.2 Disfluency Correction (Referred fromSection 4.2)", "figure_id": "tab_13", "figure_label": "", "figure_type": "table" } ]
Shivam Mhaskar; Vineet Bhat; Akshay Batheja; Sourabh Deoghare; Paramveer Choudhary; Pushpak Bhattacharyya (CFILT)
[ { "authors": "Rosana Ardila; Megan Branson; Kelly Davis; Michael Kohler; Josh Meyer; Michael Henretty; Reuben Morais; Lindsay Saunders; Francis Tyers; Gregor Weber", "journal": "European Language Resources Association", "ref_id": "b0", "title": "Common voice: A massivelymultilingual speech corpus", "year": "2020" }, { "authors": "Arun Baby; Anju Thomas; L Nishanthi; Consortium", "journal": "", "ref_id": "b1", "title": "Resources for Indian languages", "year": "2016" }, { "authors": "Alexei Baevski; Henry Zhou; Abdelrahman Mohamed; Michael Auli", "journal": "", "ref_id": "b2", "title": "a. Wav2vec 2.0: A framework for self-supervised learning of speech representations", "year": "2020" }, { "authors": "Alexei Baevski; Henry Zhou; Abdelrahman Mohamed; Michael Auli", "journal": "", "ref_id": "b3", "title": "Wav2vec 2.0: A framework for self-supervised learning of speech representations", "year": "2020" }, { "authors": "Parnia Bahar; Patrick Wilken; Tamer Alkhouli; Andreas Guta; Pavel Golik; Evgeny Matusov; Christian Herold", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Start-before-end and endto-end: Neural speech translation by AppTek and RWTH Aachen University", "year": "2020" }, { "authors": "Dzmitry Bahdanau; Kyung ; Hyun Cho; Yoshua Bengio", "journal": "", "ref_id": "b5", "title": "Neural machine translation by jointly learning to align and translate", "year": "2015" }, { "authors": "Akshay Batheja; Pushpak Bhattacharyya", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Improving machine translation with phrase pair injection and corpus filtering", "year": "2022" }, { "authors": "Mathieu Bernard; Hadrien Titeux", "journal": "Journal of Open Source Software", "ref_id": "b7", "title": "Phonemizer: Text to phones transcription for multiple languages in python", "year": "2021" }, { "authors": "William Chan; Navdeep Jaitly; Quoc Le; Oriol Vinyals", "journal": "", "ref_id": "b8", "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", "year": "2016" }, { "authors": "Kyunghyun Cho; Bart Van Merriënboer; Caglar Gulcehre; Dzmitry Bahdanau; Fethi Bougares; Holger Schwenk; Yoshua Bengio", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "year": "2014" }, { "authors": "George E Dahl; Dong Yu; Li Deng; Alex Acero", "journal": "IEEE Transactions on Audio, Speech, and Language Processing", "ref_id": "b10", "title": "Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition", "year": "2012" }, { "authors": "Adrià De; Gispert ; Jose B Marino", "journal": "", "ref_id": "b11", "title": "Catalanenglish statistical machine translation without parallel corpus: bridging through spanish", "year": "2006" }, { "authors": "Ankur Debnath; S Shridevi; Gangotri Patil; Ramakrishnan Nadiger; Angarai Ganesan", "journal": "", "ref_id": "b12", "title": "Lowresource end-to-end sanskrit tts using tacotron2, waveglow and transfer learning", "year": "2020" }, { "authors": "Fangxiaoyu Feng; Yinfei Yang; Daniel Cer; Naveen Arivazhagan; Wei Wang", "journal": "", "ref_id": "b13", "title": "Languageagnostic bert sentence embedding", "year": "2020" }, { "authors": "Fangxiaoyu Feng; Yinfei Yang; Daniel Cer; Naveen Arivazhagan; Wei Wang", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Languageagnostic BERT sentence 
embedding", "year": "2022" }, { "authors": "Mark Gales; Steve Young", "journal": "Foundations and Trends® in Signal Processing", "ref_id": "b15", "title": "The application of hidden markov models in speech recognition", "year": "2008" }, { "authors": "John J Godfrey; Edward Holliman; J Mcdaniel", "journal": "", "ref_id": "b16", "title": "Switchboard: telephone speech corpus for research and development", "year": "1992" }, { "authors": "Alex Graves; Abdel-Rahman Mohamed; Geoffrey Hinton", "journal": "", "ref_id": "b17", "title": "Speech recognition with deep recurrent neural networks", "year": "2013" }, { "authors": "Anirudh Gupta; Harveen Singh Chadha; Priyanshi Shah; Neeraj Chimmwal; Ankur Dhuriya; Rishabh Gaur; Vivek Raghavan", "journal": "", "ref_id": "b18", "title": "CLSRIL-23: cross lingual speech representations for indic languages", "year": "2021" }, { "authors": "Awni Hannun; Carl Case; Jared Casper; Bryan Catanzaro; Greg Diamos; Erich Elsen; Ryan Prenger; Sanjeev Satheesh; Shubho Sengupta; Adam Coates; Andrew Ng", "journal": "", "ref_id": "b19", "title": "Deepspeech: Scaling up end-toend speech recognition", "year": "2014" }, { "authors": "Matthias Honal; Tanja Schultz", "journal": "", "ref_id": "b20", "title": "Correction of disfluencies in spontaneous speech using a noisychannel approach", "year": "2004" }, { "authors": "Matthew Honnibal; Mark Johnson", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b21", "title": "Joint incremental disfluency detection and dependency parsing", "year": "2014" }, { "authors": "Julian Hough; David Schlangen", "journal": "", "ref_id": "b22", "title": "Recurrent neural networks for incremental disfluency detection", "year": "2015" }, { "authors": "A J Hunt; A W Black", "journal": "", "ref_id": "b23", "title": "Unit selection in a concatenative speech synthesis system using a large speech database", "year": "1996" }, { "authors": "Paria Jamshid; Lou ; Mark Johnson", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Disfluency detection using a noisy channel model and a deep neural language model", "year": "2017" }, { "authors": "Paria Jamshid; Lou ; Mark Johnson", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Improving disfluency detection by self-training a selfattentive model", "year": "2020" }, { "authors": "Girish Nath; Jha ", "journal": "European Language Resources Association (ELRA)", "ref_id": "b26", "title": "The TDIL program and the Indian langauge corpora intitiative (ILCI)", "year": "2010" }, { "authors": "Simran Khanuja; Diksha Bansal; Sarvesh Mehtani; Savya Khosla; Atreyee Dey; Balaji Gopalan; Dilip Margam; Pooja Aggarwal; Rajiv Teja Nagipogu; Shachi Dave; Shruti Gupta; Subhash Gali; Partha Vish Subramanian; Talukdar", "journal": "", "ref_id": "b27", "title": "Muril: Multilingual representations for indian languages", "year": "2021" }, { "authors": "Yunsu Kim; Petre Petrov; Pavel Petrushkov; Shahram Khadivi; Hermann Ney", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Pivot-based transfer learning for neural machine translation between non-English languages", "year": "2019" }, { "authors": "Rohit Kundu; Preethi Jyothi; Pushpak Bhattacharyya", "journal": "International Committee on Computational Linguistics", "ref_id": "b29", "title": "Zero-shot disfluency detection for Indian languages", "year": "2022" }, { "authors": "Ann Lee; Peng-Jen Chen; Changhan Wang; Jiatao Gu; Sravya Popuri; Xutai Ma; 
Adam Polyak; Yossi Adi; Qing He; Yun Tang; Juan Pino; Wei-Ning Hsu", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Direct speech-to-speech translation with discrete units", "year": "2022" }, { "authors": "Max Morrison; Rithesh Kumar; Kundan Kumar; Prem Seetharaman; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b31", "title": "Chunked autoregressive GAN for conditional waveform synthesis", "year": "2022" }, { "authors": "John Ortega; Atul Kr; Katharina Ojha; Chao-Hong Kann; Liu", "journal": "", "ref_id": "b32", "title": "Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages", "year": "2021" }, { "authors": "M Ostendorf; S Hahn", "journal": "", "ref_id": "b33", "title": "A sequential repetition model for improved disfluency detection", "year": "2013" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Tatiana Passali; Alexios Gidiotis; Efstathios Chatzikyriakidis; Grigorios Tsoumakas", "journal": "", "ref_id": "b35", "title": "Towards human-centered summarization: A case study on financial news", "year": "2021" }, { "authors": "Tatiana Passali; Thanassis Mavropoulos; Grigorios Tsoumakas; Georgios Meditskos; Stefanos Vrochidis", "journal": "European Language Resources Association", "ref_id": "b36", "title": "LARD: Large-scale artificial disfluency generation", "year": "2022" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Tao Xu; Greg Brockman; Christine Mcleavey; Ilya Sutskever", "journal": "", "ref_id": "b38", "title": "Robust speech recognition via large-scale weak supervision", "year": "2022" }, { "authors": "Gowtham Ramesh; Sumanth Doddapaneni; Aravinth Bheemaraj; Mayank Jobanputra; A K Raghavan; Ajitesh Sharma; Sujit Sahoo; Harshita Diddee; J Mahalakshmi; Divyanshu Kakwani; Navneet Kumar; Aswin Pradeep; Srihari Nagaraj; Kumar Deepak; Anoop Vivek Raghavan; Pratyush Kunchukuttan; Mitesh Kumar; Khapra Shantadevi", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b39", "title": "Samanantar: The largest publicly available parallel corpora collection for 11 indic languages", "year": "2022" }, { "authors": "Kanishka Rao; Hasim Sak; Rohit Prabhavalkar", "journal": "", "ref_id": "b40", "title": "Exploring architectures, data and units for streaming end-to-end speech recognition with rnntransducer", "year": "2017" }, { "authors": "Sharath Rao; Ian Lane; Tanja Schultz", "journal": "", "ref_id": "b41", "title": "Improving spoken language translation by automatic disfluency removal: evidence from conversational speech transcripts", "year": "2007" }, { "authors": "Chandan Reddy; Ebrahim Beyrami; Jamie Pool; Ross Cutler; Sriram Srinivasan; Johannes Gehrke", "journal": "", "ref_id": "b42", "title": "A scalable noisy speech dataset and online subjective test framework", "year": "2019" }, { "authors": "Yi Ren; Yangjun Ruan; Xu Tan; Tao Qin; Sheng Zhao; Zhou Zhao; Tie-Yan Liu", "journal": "Curran Associates Inc", "ref_id": "b43", "title": "FastSpeech: Fast, Robust and Controllable Text to Speech", "year": "2019" }, { "authors": "Yi Ren; Yangjun Ruan; Xu Tan; Tao Qin; Sheng Zhao; Zhou 
Zhao; Tie-Yan Liu", "journal": "Curran Associates Inc", "ref_id": "b44", "title": "FastSpeech: Fast, Robust and Controllable Text to Speech", "year": "2019" }, { "authors": "Sebastian Ruder; Anders Søgaard; Ivan Vulić", "journal": "", "ref_id": "b45", "title": "Unsupervised cross-lingual representation learning", "year": "2019" }, { "authors": "Nikhil Saini; Jyotsana Khatri; Preethi Jyothi; Pushpak Bhattacharyya", "journal": "", "ref_id": "b46", "title": "Generating fluent translations from disfluent text without access to fluent references: IIT Bombay@IWSLT2020", "year": "2020" }, { "authors": "Steffen Schneider; Alexei Baevski; Ronan Collobert; Michael Auli", "journal": "", "ref_id": "b47", "title": "wav2vec: Unsupervised pre-training for speech recognition", "year": "2019" }, { "authors": "Sukanta Sen; Mohammed Hasanuzzaman; Asif Ekbal; Pushpak Bhattacharyya; Andy Way", "journal": "Natural Language Engineering", "ref_id": "b48", "title": "Neural machine translation of low-resource languages using smt phrase pair injection", "year": "2021" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "Improving neural machine translation models with monolingual data", "year": "2016" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "Neural machine translation of rare words with subword units", "year": "2016" }, { "authors": "Jonathan Shen; Ruoming Pang; Ron Weiss; Mike Schuster; Navdeep Jaitly; Zongheng Yang; Zhifeng Chen; Yu Zhang; Yuxuan Wang; Rj Skerrv-Ryan; Rif Saurous; Yannis Agiomvrgiannakis; Yonghui Wu", "journal": "", "ref_id": "b51", "title": "Natural tts synthesis by conditioning wavenet on mel spectrogram predictions", "year": "2018" }, { "authors": "Jonathan Shen; Ruoming Pang; Ron J Weiss; Mike Schuster; Navdeep Jaitly; Zongheng Yang; Zhifeng Chen; Yu Zhang; Yuxuan Wang; Rj Skerrv-Ryan; Rif A Saurous; Yannis Agiomvrgiannakis; Yonghui Wu", "journal": "", "ref_id": "b52", "title": "Natural tts synthesis by conditioning wavenet on mel spectrogram predictions", "year": "2018" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "Advances in neural information processing systems", "ref_id": "b53", "title": "Sequence to sequence learning with neural networks", "year": "2014" }, { "authors": "Aäron Van Den Oord; Sander Dieleman; Heiga Zen; Karen Simonyan; Oriol Vinyals; Alex Graves; Nal Kalchbrenner; Andrew Senior; Koray Kavukcuoglu", "journal": "", "ref_id": "b54", "title": "WaveNet: A Generative Model for Raw Audio", "year": "2016" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b55", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b56", "title": "", "year": "" }, { "authors": "Wen Wang; Gokhan Tur; Jing Zheng; Necip Fazil Ayan", "journal": "", "ref_id": "b57", "title": "Automatic disfluency removal for improving spoken language translation", "year": "2010" } ]
[ { "formula_coordinates": [ 11, 170.33, 251.02, 118.81, 16.1 ], "formula_id": "formula_0", "formula_text": "W ∈L P (W |O)(1)" }, { "formula_coordinates": [ 11, 114.39, 298.32, 174.75, 24.43 ], "formula_id": "formula_1", "formula_text": "Ŵ = arg max W ∈L P (O|W )P (W ) P (O)(2)" }, { "formula_coordinates": [ 11, 115.58, 396.87, 173.55, 18.86 ], "formula_id": "formula_2", "formula_text": "Ŵ = arg max W ∈L P (O|W )P (W )(3)" }, { "formula_coordinates": [ 11, 98.2, 552, 190.94, 24.43 ], "formula_id": "formula_3", "formula_text": "W ER = S + D + I N = S + D + I S + D + C (4)" } ]
2023-05-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Accessing the scientific literature through the most effective information retrieval (IR or search) technologies is crucial for locating evidence. Hence, rapid implementation of IR systems optimised for such a context and a comparison of their efficacy were required. In consequence, the COVID-19 global health crisis has exacerbated the issue. The immediate need for information on a worldwide scale has resulted in an exponential increase in scientific literature publication. Relevant and dependable information has become an urgent necessity.\nConcerning the application of IR in such a pandemic, there are numerous fundamental research concerns that must be answered, including the identification of essential IR modalities, the development of domain-specific search engines, and the quantitative evaluation of search engine performance. In an effort to combat the pandemic, massive COVID-19-related corpora are being compiled, often containing erroneous data that is no longer amenable to human analysis.\nThe challenge evaluation is a standard method for evaluating IR systems on a wide scale. The Text Retrieval Conference (TREC), organised by the US National Institute of Standards and Technology, is the largest and most well-known approach (NIST). The TREC framework was applied to the COVID-19 Open Research Dataset (CORD-19), a dynamic repository of scientific papers on COVID-19 and historical coronavirus research connected to COVID-19. The dataset is described in full under Section corpus. The major objective of the TREC-COVID Competition was to develop a collection of test data for evaluating search engines' ability to navigate the complicated information landscape during events such as a pandemic. In this research, we evaluate various Information Retrieval frameworks, including BM25, Contriever, Bag of Embeddings, etc., based on their ability to rank documents according to their relevance to input queries.\nIn this study, we begin by utilising document data from the CORD-19 dataset and pre-processing it with its metadata. We opted to examine several IR models because the hand classified data categorising documents as relevant, slightly relevant, or irrelevant was unavailable. Our baseline model is BM25, which compares the relevance of searches to the document by utilising the Abstract and Title. Additionally, we retrieve the list of relevant documents from other faster models and compare them to the standard documents. Finally, we compare our results to those manually labelled according to the TREC-COVID IR Challenge." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b0", "b1", "b2", "b0", "b3", "b4", "b2" ], "table_ref": [], "text": "The COVID-19 pandemic has led to a global surge in scientific literature, and the need for efficient information retrieval (IR) systems has become more critical than ever. The CORD-19 dataset, a large-scale open access resource of scientific papers on COVID-19 and related historical coronavirus research, has emerged as a valuable resource for COVID-19 research. As a result, several studies have been conducted to explore the use of IR systems and natural language processing (NLP) and machine learning (ML) techniques to improve IR performance on the CORD-19 dataset.\nOne notable study conducted by [1] developed the CORD-19 dataset, which has been widely used in COVID-19 research. 
Other studies have used NLP and ML techniques to improve IR performance on the CORD-19 dataset. For example, [2] developed a deep learning-based text mining framework for COVID-19 literature, while [3] used unsupervised topic modeling and machine learning techniques to analyze the COVID-19 \"infodemic\" using the CORD-19 dataset.\nIn addition to improving IR performance, the CORD-19 dataset has been used for drug repurposing and knowledge graph construction. [1] developed a COVID-19 literature knowledge graph construction and drug repurposing report generation system using the CORD-19 dataset, and [4] mined the literature for COVID-19 research to identify potential drug candidates for further investigation.\nStudies have also evaluated the CORD-19 dataset itself and its use in IR systems. [5] provided an overview of the dataset and its potential uses, while [3] applied domain adaptation techniques to improve IR performance on it.\nIn summary, the pandemic has highlighted the need for rapid implementation and evaluation of IR systems, and the CORD-19 dataset has become a crucial component of that effort: it has been extensively studied for developing and evaluating IR systems with NLP and ML techniques, for analyzing the \"infodemic,\" and for identifying potential drug candidates, and it continues to provide valuable insights and to play a significant role in advancing COVID-19 research." }, { "figure_ref": [], "heading": "Document Set Description", "publication_ref": [], "table_ref": [], "text": "TREC-COVID makes use of the CORD-19 document collection. CORD-19 contains fresh articles and preprints on COVID-19, as well as previous studies on coronaviruses such as SARS and MERS. The CORD-19 release of April 10, 2020, which was utilised for the first round of TREC-COVID, has 51K papers, with full text available for 39K. We train and evaluate our models on the full set of 51K documents." }, { "figure_ref": [], "heading": "File Structure:", "publication_ref": [], "table_ref": [], "text": "1. CORD-19 - folder holding the CORD-19 papers and metadata as of May 19, 2020.\n2. topics-rnd3.csv - file containing each topic's topic-id, query, question, and narrative.\n3. docids-rnd3.txt - the set of documents that can be projected to be relevant; these papers do not include those that have been rated in prior rounds for a topic.\n4. qrels.csv - human-annotated relevant and irrelevant document categorization for the 40 queries." }, { "figure_ref": [], "heading": "Topics:", "publication_ref": [], "table_ref": [], "text": "TREC-COVID topics were written by organisers with biomedical training, and were inspired by consumer questions submitted to the National Library of Medicine, discussions by medical influencers on social media, and suggestions solicited on Twitter in late March 2020 via the #COVIDSearch tag. 
They are representative of the pandemic's high-level concerns. An initial set of 30 topics was established, with 5 new topics added for each subsequent round. As a result, we ran our retrieval on 40 topics, which can be accessed in the topics-rnd3.csv file.\nThe topic file is an XML file that contains all of the round's topics. Each topic is composed of three fields:\n1. query: a short keyword query 2. question: a more precise natural language question 3. narrative: a more detailed description that expands on the question, frequently specifying specific types of facts that would fall under the topic 4. average: in addition to these three fields, we generate a further set of query representations by averaging the embeddings of the query, question, and narrative; this is a frequently used and acknowledged technique in NLP." }, { "figure_ref": [], "heading": "Query Question Narrative", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Coronavirus response to weather changes", "publication_ref": [], "table_ref": [], "text": "How does the coronavirus respond to changes in the weather?\nSeeking a variety of information about virus survival in various weather/climate settings, as well as information about virus transmission in various climatic circumstances." }, { "figure_ref": [], "heading": "Coronavirus social distancing impact", "publication_ref": [], "table_ref": [], "text": "Has social distancing had an impact on slowing the spread of COVID-19?\nSeeking particular information on studies that have investigated COVID-19 transmission in one or more social (or non-social) techniques." }, { "figure_ref": [], "heading": "Coronavirus outside body", "publication_ref": [], "table_ref": [], "text": "How long can the coronavirus live outside the body?\nSeeking a variety of information about the virus's survivability outside the human body (surfaces, liquids, etc.) while remaining alive for transmission to another human. " }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Pre-processing", "publication_ref": [], "table_ref": [], "text": "In the dataset, for each document, we are provided with two types of JSON files: pdf_json and pmc_json, which differ only in their format. For simplicity, we used only the pdf_json form. We first removed entries with NaN values. Then we extracted the abstract of each document and stored it separately along with the document ID. Later, we used this data for the various tasks detailed in this report.\nFor the BM25 model as well as the Bag of Embeddings model, the pre-processing was very similar to what we did in our class assignments. The corpus consisted of parsed .json files for each research article, as mentioned above; we leveraged these to read the abstract, the title, and (for BM25 only) the body text, concatenated them into a single string per article, and then moved on to pre-processing and cleaning (a minimal loading sketch is given below). 
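Concretely, the following Python sketch shows how one article can be turned into a single string, assuming the standard CORD-19 pdf_json layout (paper_id, metadata.title, abstract, body_text); the directory path and function name are illustrative, not the exact code used in this study.

import json
from pathlib import Path

def load_article(path: Path, include_body: bool = True) -> tuple[str, str]:
    # Read one CORD-19 pdf_json parse and return (paper_id, concatenated text).
    with open(path, encoding="utf-8") as f:
        doc = json.load(f)
    title = doc.get("metadata", {}).get("title", "")
    abstract = " ".join(p["text"] for p in doc.get("abstract", []))
    body = " ".join(p["text"] for p in doc.get("body_text", [])) if include_body else ""
    # One string per article: title + abstract (+ body text, which we only keep for BM25).
    return doc["paper_id"], " ".join(s for s in (title, abstract, body) if s)

corpus = dict(load_article(p) for p in Path("CORD-19/pdf_json").glob("*.json"))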
The same pre-processing and cleaning steps were applied to the queries before running retrieval." }, { "figure_ref": [], "heading": "Models", "publication_ref": [], "table_ref": [], "text": "We implemented and compared the following IR models:" }, { "figure_ref": [], "heading": "BM25 - The Baseline Model [6]", "publication_ref": [], "table_ref": [], "text": "A type of ranking function that ranks a set of documents based on the query terms that exist in each document, independent of how the query terms inside a document are related. The BM25 term-weighting algorithms have been widely and successfully employed in a variety of collections and search tasks.\nThe ranking function: BM25 is a bag-of-words retrieval algorithm that ranks a set of documents based on the query terms that appear in each document, regardless of how close to each other they are. It refers to a family of scoring functions with somewhat varied components and parameters. The following is one of the more notable instantiations of the function.\nGiven a query Q, containing keywords q_1, . . . , q_n, the BM25 score of a document D is:\nscore(D, Q) = \sum_{i=1}^{n} IDF(q_i) \cdot \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{avgdl}\right)}\nwhere f(q_i, D) is q_i's term frequency in the document D, |D| is the length of the document D in words, and avgdl is the average document length in the text collection from which documents are drawn. k_1 and b are free parameters, usually chosen, in absence of an advanced optimization, as k_1 ∈ [1.2, 2.0] and b = 0.75. IDF(q_i) is the IDF (inverse document frequency) weight of the query term q_i. It is usually computed as:\nIDF(q_i) = \ln\left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5} + 1\right)\nwhere N is the total number of documents in the collection, and n(q_i) is the number of documents containing q_i." }, { "figure_ref": [], "heading": "Contriever [7]", "publication_ref": [], "table_ref": [], "text": "Neural network-based information retrieval has achieved state-of-the-art performance on datasets and problems where extensive training sets are available. Unfortunately, such models do not transfer well to new domains or applications without training data and are frequently surpassed by unsupervised term-frequency approaches like BM25. Contriever is a simple self-supervised IR model based on contrastive learning that is competitive with BM25.\nContrastive Learning: Contrastive learning is a method that takes advantage of the fact that every document is distinctive in some way. This signal is the only information available in the absence of manual supervision. Using a contrastive loss, the algorithm learns by differentiating between documents. This loss compares pairs of document representations that are either positive (from the same document) or negative (from distinct documents). Formally, the contrastive InfoNCE loss is defined as follows:\n\mathcal{L}(q, k_+) = -\log \frac{\exp(s(q, k_+)/\tau)}{\sum_{i=0}^{K} \exp(s(q, k_i)/\tau)}\nThis loss increases the relevance score of comparable examples and decreases the relevance score of dissimilar ones. The loss function can also be interpreted as follows: given the query representation q, the objective is to recover, or retrieve, the representation k_+ corresponding to the positive document from among all the negatives k_i (a minimal numerical sketch of this loss is given below). 
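The following NumPy illustration computes this loss for a single query against K+1 candidate keys; it is a sketch only, not Contriever's training code: a plain dot product stands in for the similarity s, and all names and values are made up.

import numpy as np

def info_nce_loss(q: np.ndarray, keys: np.ndarray, pos_idx: int = 0, tau: float = 0.05) -> float:
    # InfoNCE loss for one query embedding q (shape (d,)) against keys (shape (K+1, d)),
    # where the positive key sits at row pos_idx and every other row is a negative.
    scores = keys @ q / tau            # s(q, k_i) / tau, with a dot-product similarity
    scores -= scores.max()             # subtract the max for numerical stability
    log_prob = scores[pos_idx] - np.log(np.exp(scores).sum())
    return float(-log_prob)            # negative log-softmax probability of the positive key

# Toy example: the positive key (row 0) is a noisy copy of q, the 15 negatives are random.
rng = np.random.default_rng(0)
q = rng.normal(size=128)
keys = np.vstack([q + 0.1 * rng.normal(size=128), rng.normal(size=(15, 128))])
print(info_nce_loss(q, keys))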
The left-hand side representations in the score s are referred to as queries, whereas the right-hand side representations are referred to as keys.\nBuilding positive pairs from a single document: A fundamental question in contrastive learning is how to construct positive pairs from a single input. In computer vision, this step consists of applying two independent data augmentations to the same image, yielding two \"views\" that constitute a positive pair. While Contriever primarily considers similar independent text transformations, the authors also investigate dependent transformations that aim to lessen the correlation across views. They are:\n1. Inverse Cloze Task 2. Independent cropping 3. Further data augmentation.\nBuilding large sets of negative pairs: Maintaining a large number of negative pairs is a crucial feature of contrastive learning. Common frameworks handle negatives differently, and the two mechanisms utilised here are:\n1. Negative pairs within a batch. 2. Negative pairs across batches.\nBag of Embeddings [8]: Instead of a single point in the semantic space, a word is represented as a probability distribution that covers the whole semantic space; specifically, it corresponds to a multivariate normal distribution with spherical covariance.\nGiven a second embedded word, the \"overlap\" between these two distributions can be used to determine the degree of similarity between the two words." }, { "figure_ref": [], "heading": "BERT Embeddings [9]", "publication_ref": [], "table_ref": [], "text": "Google created the state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) technique for natural language processing pre-training. Our corpus is embedded with a pre-trained BERT model." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In this section, we compare the retrieval performance of three distinct models, namely BERT, Contriever, and BM25. To evaluate the models, the top 50 most relevant documents were extracted for each of the 40 topics in the dataset using each of the three models. This procedure generated three distinct lists of document IDs for each query, one list for each model.\nTo establish a fair comparison across the models, we determined the average score for the extracted documents, as described in the Methodology section. We next took the pairwise intersection of the document ID lists from the three models for each query. This analysis yields three unique pairs: BERT and BM25, BERT and Contriever, and BM25 and Contriever.\nThe number of shared documents retrieved for each query by each pair of models provides insight into the degree of overlap between the documents deemed relevant by each pair. Finally, we computed the mean and standard deviation of the number of common documents across all queries for each pair in order to standardise the results and gain a deeper understanding of the consistency and agreement across the models in terms of document retrieval performance.\nThe following table summarizes our findings. The mean and standard deviation are greatest for the BERT-Contriever pair, followed by the BERT-BM25 pair, and then the BM25-Contriever pair. This indicates that BERT and Contriever have the greatest degree of agreement about document retrieval, while BM25 has the least overlap with both BERT and Contriever. 
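As a minimal illustration of this pairwise-overlap computation (assuming each model simply returns a ranked list of top-50 document IDs per query; all names here are illustrative), the mean and standard deviation of shared documents can be computed along these lines:

from itertools import combinations
from statistics import mean, stdev

def overlap_stats(runs: dict[str, dict[str, list[str]]]) -> dict[tuple[str, str], tuple[float, float]]:
    # runs maps model name -> {query_id: list of retrieved doc IDs}.
    # For every pair of models, return the mean and std of shared documents per query.
    stats = {}
    for m1, m2 in combinations(runs, 2):
        shared = [len(set(runs[m1][q]) & set(runs[m2][q])) for q in runs[m1]]
        stats[(m1, m2)] = (mean(shared), stdev(shared))
    return stats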
By analysing the mean and standard deviation of the number of common documents obtained by each pair of models, we can draw inferences regarding the similarity and dissimilarity of their performance in obtaining relevant documents in our research topic." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In conclusion, our study emphasises the significance of information retrieval systems for conveniently gaining access to essential scientific publications during a pandemic. The application of the TREC framework to the COVID-19 Open Research Dataset (CORD-19) allowed for the evaluation and comparison of various information retrieval (IR) frameworks such as BM25, Contriever, and Bag of Embeddings, with the primary objective of constructing a test collection for search engines dealing with the complex information landscape during events such as a pandemic.\nThe CORD-19 dataset, consisting of publications and preprints on COVID-19 and relevant historical coronavirus research, was utilised to train and assess the IR models. Our methodology entailed preprocessing document data, utilising metadata and abstracts to compare the performance of several IR models, with BM25 serving as the baseline.\nIn addition, the outcomes were compared to those manually labelled in the TREC-COVID IR Challenge.\nAccording to the results of our investigation, the performance of the IR models varied, with BERT and Contriever exhibiting the most document overlap, followed by BERT-BM25 and BM25-Contriever. These findings demonstrate the capacity of advanced IR models such as BERT and Contriever to retrieve pertinent information during a pandemic. However, the study also revealed the difficulties associated with processing huge datasets due to limited computer resources, necessitating the employment of tactics such as concentrating on abstracts and summaries. This study highlights the significance of well-tailored IR systems for managing the information deluge during crises such as the COVID-19 pandemic. This study's findings can influence future research and development of IR systems in order to enhance their performance in quickly developing situations, hence facilitating decision-making and response efforts.\n7 Challenges and Future Work\nOur main challenge was the availability of adequate computational resources to process the vast dataset. Despite our best efforts, including taking only the abstract and summary, using a summarised version, or limiting the number of documents processed, we were constrained by the limited computational resources. Moreover, we encountered another challenge when utilizing Sentence-BERT on an entire paragraph. To address this issue, we initially employed Pegasus, followed by S-BERT.\nIn future work, we aim to explore more advanced algorithms and hybrid models to enhance retrieval performance.\nOur current approach relies on vanilla models for all three tasks. Additionally, fine-tuning BERT specifically for the COVID-19 dataset represents another promising avenue for future research." } ]
This research study investigates the efficiency of different information retrieval (IR) systems in accessing relevant information from the scientific literature during the COVID-19 pandemic. The study applies the TREC framework to the COVID-19 Open Research Dataset (CORD-19) and evaluates BM25, Contriever, and Bag of Embeddings IR frameworks. The objective is to build a test collection for search engines that tackle the complex information landscape during a pandemic. The study uses the CORD-19 dataset to train and evaluate the IR models and compares the results to those manually labeled in the TREC-COVID IR Challenge. The results indicate that advanced IR models like BERT and Contriever better retrieve relevant information during a pandemic. However, the study also highlights the challenges in processing large datasets and the need for strategies to focus on abstracts or summaries. Overall, the research highlights the importance of effectively tailored IR systems in dealing with information overload during crises like COVID-19 and can guide future research and development in this field.
IR MODELS AND THE COVID-19 PANDEMIC: A COMPARATIVE STUDY OF PERFORMANCE AND CHALLENGES
[ { "figure_caption": "Figure 1 :1Figure 1: BERT input representation. The input embeddings are the sum of the token embeddings, the segmentation embeddings and the position embeddings", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Few illustrative examples of topics for TREC-COVID task.", "figure_data": "coronavirus asymptomaticWhat is known about those infected with Covid-19 but are asymptomatic?Studies on patients who are known to be Covid-19 infected but show no symptoms?Coronavirus hydroxy-chloroquineWhat evidence is there for the value of hydroxychloroquine in treating Covid-19?Basic scientific or clinical research evaluating the benefits and risks of treating Covid-19 with hydroxychloroquine.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Summary of the results comparing the retrieval performance of BERT, Contriever, and BM25", "figure_data": "Model Comparison BERT-Contriever BERT-BM25 BM25-ContrieverMean36.72518.111.775Standard Deviation17.92713.64711.014", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Moksh Shukla; Nitik Jain; Shubham Gupta
[ { "authors": "Lin Wang; Yu Chen; Xueqiang Liu; Jiajia Li; Pengyuan Chen; Yihao Zhang", "journal": "", "ref_id": "b0", "title": "Covid-19 literature knowledge graph construction and drug repurposing report generation", "year": "2021" }, { "authors": "Hani Joseph G Rizk; Georges G Kalbouneh; Dagher", "journal": "Frontiers in research metrics and analytics", "ref_id": "b1", "title": "Deep learning-based text mining framework for covid-19 literature", "year": "2021" }, { "authors": "Yujie Si; Yong Zhao; Rui Han", "journal": "Journal of Healthcare Engineering", "ref_id": "b2", "title": "An overview of the cord-19 challenge and the covid-19 literature exploration using natural language processing", "year": "2021" }, { "authors": "Shweta Khare; Mahesh Velauthapillai", "journal": "Journal of Healthcare Engineering", "ref_id": "b3", "title": "Mining the literature for covid-19 research", "year": "2021" }, { "authors": "Javaid Tariq; Muhammad Khalid; Atia Khatoon; Atta Adnan", "journal": "Journal of the Pakistan Medical Association", "ref_id": "b4", "title": "Cord-19: The covid-19 open research dataset", "year": "2020" }, { "authors": "Stephen Robertson; Hugo Zaragoza", "journal": "Now Publishers Inc", "ref_id": "b5", "title": "The probabilistic relevance framework: BM25 and beyond", "year": "2009" }, { "authors": "Gautier Izacard; Mathilde Caron; Lucas Hosseini; Sebastian Riedel; Piotr Bojanowski; Armand Joulin; Edouard Grave", "journal": "", "ref_id": "b6", "title": "Towards unsupervised dense information retrieval with contrastive learning", "year": "2021" }, { "authors": "Jin Peng; Yue Zhang; Xingyuan Chen; Yunqing Xia", "journal": "AAAI Press", "ref_id": "b7", "title": "Bag-of-embeddings for text classification", "year": "2016" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 174.55, 573.05, 254.56, 32.37 ], "formula_id": "formula_0", "formula_text": "score(D, Q) = n i=1 DF (q i ) • f (q i , D) • (k 1 + 1) f (q i , D) + k 1 • 1 -b + b • |D| svgdl" }, { "formula_coordinates": [ 4, 225.16, 681.71, 154.34, 23.23 ], "formula_id": "formula_1", "formula_text": "IDF (q i ) = ln N -n (q i ) + 0.5 n (q i ) + 0.5 + 1" }, { "formula_coordinates": [ 5, 231.05, 212.75, 149.91, 26.56 ], "formula_id": "formula_2", "formula_text": "L (q, k + ) = exp (s (q, k + ) /τ ) K i=0 exp (s (q, k i ) /τ ) ," } ]
10.18653/v1/D19-1166
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b35", "b3", "b10", "b13", "b16", "b39", "b7", "b26", "b5", "b13", "b16", "b7", "b15", "b10", "b39", "b14", "b31", "b42" ], "table_ref": [], "text": "Important findings in medicine are typically presented in technical, jargon-laden language in journal articles or reviews, which is difficult for laypeople to understand. This impedes transparent and fair access to critical medical information and ultimately hinders health literacy, which is \"one of the most promising and cost-effective approaches to overcome the Non-Communicable Disease challenge\" (Liu et al., 2020).\nText simplification models which automatically transform complex texts into simpler versions understandable by lay readers (Siddharthan, 2014;Alva-Manchego et al., 2020) have emerged as a promising means of providing wider access to published medical evidence. Recent work on simplification has fine-tuned large pre-trained models (Van Figure 1: Examples of our dataset and system outputs for multilingual medical text simplification. The dataset has simplifications available in four languages (English, Spanish, French, Farsi), and system outputs were analyzed for factual errors (red), fluency errors (blue), and simplicity (green) among other criteria. et al., 2020;Cardon and Grabar, 2020;Devaraj et al., 2021;Guo et al., 2022;Trienes et al., 2022;Basu et al., 2023), explored reinforcement learning (Phatak et al., 2022), and evaluated zero-shot performance via prompting (August et al., 2022).\nHowever, this work has so far exclusively considered monolingual text simplification. Consequently, we do not know how well models perform when the simplified text is not in the same language as the original, complex text. This limits the (potential) availability of simplified information to a few high-resource languages-especially for the medical domain-and leads to equity issues such that individuals who do not speak English will still face a high barrier to information access. 1 Work 1 While machine translation, to a certain extent, may be able arXiv: 2305.12532v4 [cs.CL] 18 Oct 2023 in this direction is impeded by a lack of data. The largest resources in text simplification are not in the medical domain, and parallel corpora for medical simplification only exist in single languages, namely English (Devaraj et al., 2021;Guo et al., 2022;Basu et al., 2023), French (Grabar and Cardon, 2018;Cardon and Grabar, 2020), and German (Trienes et al., 2022).\nThis paper advances the state of multilingual medical simplification, in which complex texts are to be directly simplified into target languages. Our contributions are as follows. We introduce MULTICOCHRANE (Figure 1), the first parallel dataset for medical text simplification across multiple languages: English, Spanish, French, and Farsi. MULTICOCHRANE contains aligned sentence pairs sourced from the Cochrane Library of Systematic Reviews, 2 which is a library of meta-analyses of treatment effectiveness.\nThese review articles include both a technical abstract and a plain-language summary (PLS), from which we derive the two subsets of MULTI-COCHRANE: (1) MC-CLEAN, 101 technical abstracts with expert-annotated manual alignments first derived in English, then semi-automatically aligned to other languages (partially verified by bilingual speakers). MC-CLEAN contains ∼5k sentence pairs across all 4 languages. 
(2) MC-NOISY, a larger but noisier subset created with an automatic sentence alignment model that was trained on MC-CLEAN. MC-NOISY is sourced from around 7.8K medical reviews, with ∼100K sentence pairs across languages.\nMULTICOCHRANE enables systematic evaluations of medical text simplification models. Here we evaluate a range of large pre-trained language models in both zero-shot and fine-tuned settings on the task of text simplification across four languages. In addition to automatic evaluations, we report human assessments covering simplicity, fluency and factuality (Devaraj et al., 2022). We also report the correlation between automatic metrics and these human assessments, which we find to be mostly weak. Our results show that while pre-trained models are effective at simplification in English, their abilities degrade significantly on other languages, where they tend to introduce factuality errors. GPT-3 yields outputs that are comparatively factually accurate, but which tend to be extractive and so not adequately simplified. Outputs from Flan-T5 (Base; zero-shot) significantly degrade in quality when targeting languages other than English, producing many factual and fluency errors. We also analyze the approach of translating English simplifications to other languages, which in many instances is able to bypass these issues.\nWe publicly release MULTICOCHRANE, model outputs, and all human judgments collected for this work (https://github.com/SebaJoe/MultiCochrane), hoping to motivate future work on multilingual medical text simplification to advance model performance across languages." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b49", "b43", "b12", "b18", "b44", "b18", "b19", "b41", "b9", "b13", "b16", "b7", "b15", "b39", "b0", "b21", "b2", "b13", "b33", "b32", "b24", "b1", "b44" ], "table_ref": [], "text": "The largest resources used to train automatic simplification models are two general-domain corpora: the Wikipedia-Simple Wikipedia corpus (Zhu et al., 2010;Woodsend and Lapata, 2011;Coster and Kauchak, 2011;Jiang et al., 2020), and the Newsela corpus (Xu et al., 2015;Jiang et al., 2020). This paper focuses on simplification in the medical domain, which is important if we are to bridge the information gap exemplified by low medical literacy levels worldwide (Kickbusch et al., 2013).\nMedical Simplification Datasets. While there have recently been increasing efforts to create parallel corpora for medical text simplification in English (Van den Bercken et al., 2019;Cao et al., 2020;Devaraj et al., 2021;Guo et al., 2022;Basu et al., 2023), data in other languages remain scarce. Grabar and Cardon (2018) constructed the CLEAR dataset in French, part of which is derived from 13 Cochrane articles; Trienes et al. (2022) introduced a German dataset consisting of clinical notes. Other prior work in non-English medical text simplification has focused on lexical simplification (Abrahamsson et al., 2014;Kloehn et al., 2018;Alfano et al., 2020), where the primary focus was on substituting synonyms and semantically related terms. In terms of data sourcing, our work is most similar to Devaraj et al. (2021) (which is in English only). 
However, their work derived roughly aligned paragraphs consisting of specific sections (methods and conclusions only) of the Cochrane abstracts and PLS; by contrast, we derive manually and automatically aligned sentences for full abstracts.\nMultilingual Simplification Datasets. Multilingual datasets have been introduced for the task of summarization. The MLSUM dataset (Scialom et al., 2020) is a large summarization corpus containing article/summary pairs in five languages, sourced from newspaper articles. General-domain non-English datasets for simplification also exist, as extensively surveyed by Ryan et al. (2023). Notably, MUSS (Martin et al., 2022) used Common Crawl to mine millions of sequence pairs in English, Spanish, and French. The significant difference between MUSS and our dataset is MUSS's lack of cross-lingual alignment, i.e., alignment of sequence pairs from one language to another. Agrawal and Carpuat (2019) derived noisily aligned English-Spanish data from the Newsela corpus (Xu et al., 2015). However, to date, there is no aligned dataset for text simplification where one source sentence is paired with target sentences in multiple languages simultaneously, as in our work." }, { "figure_ref": [], "heading": "The MULTICOCHRANE Corpus", "publication_ref": [ "b13" ], "table_ref": [], "text": "We source MULTICOCHRANE from publicly available technical abstracts and plain-language summaries of Cochrane systematic reviews; these are comparable texts, though not parallel (Devaraj et al., 2021). These abstracts and PLS exist in several languages (detailed in Sections 3.1.2 and 3.2), aligned almost one-to-one with their English counterparts. This allows annotations and alignments to be easily adapted to create a multilingual dataset. In total, we use 7,755 abstract-PLS pairs from the Cochrane Library. We first create the sentence-aligned MC-CLEAN (Section 3.1) of 101 abstract-PLS pairs using a mixture of manual alignments for English and semi-automatic alignments with partial verification for other languages. The rest of the abstracts were used for the creation of the large-scale noisy MC-NOISY (Section 3.2) by automatic alignments and filtering. The total number of sentences in all subsets is shown in Table 1." }, { "figure_ref": [], "heading": "Clean Alignments: MC-CLEAN", "publication_ref": [], "table_ref": [], "text": "For MC-CLEAN, we first create a manually aligned subset of 101 abstracts for English. We then automatically align for other languages by exploiting Cochrane's natural 1-1 sentence alignment from English to other languages; this alignment was verified by annotators on a sizable sample of the data. Our annotation team consists of 5 trained linguistics students with native English proficiency and native or advanced mastery of one or more of the other languages covered in this work. These annotators do not have medical backgrounds, but they do have a strong background in research and in evaluating technical texts. We consider this sufficient for completing the tasks described in this paper." }, { "figure_ref": [], "heading": "English Annotation", "publication_ref": [ "b18", "b18", "b22", "b36" ], "table_ref": [ "tab_2" ], "text": "Annotators were provided with abstract-PLS pairs and were tasked with aligning sentences in the abstract with sentences in the PLS according to their similarity in content. 
This is a challenging task: annotators had to be able to understand complex medical texts, and align sentences by repeatedly searching through text on both the simple and complex sides. To assist with their annotation, we built the ANNO-VIEWER tool inspired by Jiang et al. (2020) to enable annotators to efficiently align sentences (see screenshots in Appendix J). 3 This annotation tool also has functionality allowing the annotator to use existing semantic similarity measures (listed in Section 3.2) to help find sentence alignments faster. When a sentence is selected for alignment, the tool can sort the other document (either abstract or PLS) by which sentences are most similar to the selected sentence. After aggregating all alignments for a document into a bipartite graph for analysis, we processed the data into individual sentence pairs (see Appendix D for more details). Due to the high cognitive load imposed by aligning medical documents, we first selected a set of 25 documents aligned by at least 2 annotators (15 of which aligned by at least 3 annotators). Once we were confident of their inter-annotator agreement (see \"Inter-annotator Agreement\" below), the rest of the 75 documents were then single-annotated. Inter-annotator Agreement. We used the aforementioned 25 articles to calculate inter-annotator agreement based on the F1 score, following prior research on word and sentence alignment (Jiang et al., 2020;Lan et al., 2021). 4 For each annotator, their alignments were evaluated against the majority vote of alignments from other annotators. Overall, the macro-average F1 score across annotators was 0.89, indicating high agreement for the alignment task. Non-overlapping alignments are rare; a major source of disagreement between annotators was related to multiple alignments made to the same simple sentence (i.e., multi-alignments) that were either captured by some annotators but excluded by others. Since the content in the abstracts is highly technical and in some cases contrasted significantly in style and vocabulary with the plainlanguage summary, annotators had difficulties in determining whether some sentences conveyed similar meaning. Nevertheless, annotators were in general conservative, only aligning sentences when they were confident in the alignments. Therefore, there is a low prevalence of incorrect alignments. The overall alignments are high-precision. Annotators typically used ANNO-VIEWER's sentence similarity sorting function to verify their alignments or to identify additional multi-alignments. Dataset Summary. Table 2 presents additional statistics on the aligned and unaligned sentences in both complex and simple texts in the English portion of MC-NOISY. For complex texts (i.e., technical abstracts), an unaligned sentence indicates that its information is not present (i.e., deleted) in the simplified texts (i.e., plain-language summaries). On average, just over 60% of sentences in complex texts are deleted, indicating that most of the core information from complex texts are retained in simple texts. In PLS, an unaligned sentence indicates added information not present in the complex version. We observe that an overwhelming majority of added information elaborates on concepts or defines terms (Srikanth and Li, 2021). 
The average elaboration ratio (the ratio of unaligned to total sentences in a PLS) is less than half of the average deletion ratio (ratio of unaligned to total sentences in a technical abstract), indicating that simplified texts tend to be mostly concise with this core information. The average token compression ratio (ratio of simple to complex tokens) show the presence of longer simplified sentences." }, { "figure_ref": [], "heading": "Multilingual Data", "publication_ref": [ "b30", "b4", "b38" ], "table_ref": [], "text": "The Cochrane Library provides abstracts and PLS in multiple languages via translations that have been produced through a combination of volunteer and professional work, as well as human-verified machine translation (Ried, 2023). To create the multilingual portion of our dataset, i.e., pairs of semantically equivalent (source, target) sentences where source = English abstract, target ∈ PLS in {en, es, fr, ...}, we use the English PLS as a bridge to align the English abstract with PLS in other languages. The PLS of non-English languages mostly correspond 1-1 sentence-wise to the English PLS.\nWe use a sentence aligner that combines LASER multilingual sentence embeddings (Artetxe and Schwenk, 2019) with a dynamic programmingbased alignment algorithm (Thompson and Koehn, 2019) to align sentences in the English versions to those in other languages. We did this for the entire set of 7,755 abstract/PLS pairs. Multilingual sentence pairs in MC-CLEAN consist of these alignments that belong to the 101 articles manually aligned for English. The other 7,654 articles were used to create MC-NOISY (Section 3.2).\nHuman Verification. To verify that the multilingual sentence alignments across different languages of the PLS are valid, we asked 3 of our bilingual annotators to evaluate a random sample of 400 sentence pairs to verify that each alignment is a direct translation. Each annotator was highly proficient in both the languages of the alignments being verified. Annotators found zero instances of misalignment. We also analyzed the number of sentences of the English abstract/PLS and their multilingual counterparts to find any instances where is a huge mismatch. We did not find any such instances. The data did include instances where one or two sentences in the article may have been split or combined in their multilingual versions. The DP-based alignment algorithm that was used took care of these exceptions. These assessments, combined with information about the Cochrane Library's methodology in making these translations, instills high confidence that the derived multilingual sentence alignments are valid. Dataset Summary. While Cochrane provides a large amount of diverse multilingual data, every article is not mapped to every available language. Consequently, the number of articles available for each language varies. The distribution of these articles across language is shown in Figure 2.\nFor this work, we selected the top three most frequent non-English languages -Spanish, French, and Farsi -to use for training and evaluation in Section 5. We report the resulting number of paired sentences in each language in Table 1." }, { "figure_ref": [], "heading": "Fully Automated Alignments: MC-NOISY", "publication_ref": [ "b18" ], "table_ref": [], "text": "While MC-CLEAN provides clean and accurate alignments across multiple languages, human alignment of medical documents is challenging to scale. 
Fortunately, MC-CLEAN enables us to accurately evaluate a variety of automatic sentence alignment methods; a good automatic alignment method will help create silver-standard alignments expanding over the entirety of the Cochrane dataset. This is especially important for multilingual medical simplification, where the number of annotated alignments is significantly lower for non-English languages.\nAutomatic Alignment on English Data The automatic alignments for MC-NOISY were derived using a BERT-based CRF alignment model (Jiang et al., 2020). This method outperformed other alignment methods when evaluated on a subset of MC-CLEAN. Further details about these experiments can be found in Appendix E.\nMultilinguality We use the same 1-1 alignment property between English and other languages in Cochrane abstracts and PLS described above (Section 3.1.2) to derive the multilingual portion of MC-NOISY from the English data described in Section 3.2. As shown in Figure 2, the distribution of the articles and sentence pairs across language in the noisy dataset is similar to that of the humanannotated dataset. " }, { "figure_ref": [], "heading": "Filtering", "publication_ref": [ "b20" ], "table_ref": [], "text": "Although the CRF model handles unaligned sentences by default, the resulting data remains noisy. We therefore adapt the method described in Kim et al. (2021), initially used in the context of sentence-splitting, to further filter misalignments in MC-NOISY. Ultimately, this method was applied to the entire training set, which includes the entirety of MC-NOISY as well as the portion of MC-CLEAN not used for testing or validation. This method can be described as follows: For each sentence pairing in the training set, we derive the set of lemmatized tokens in the complex and simple sentence denoted by L c and L s respectively. If the proportion of the overlapping tokens from the simplified sentence exceeds a threshold r, it is included in the filtered dataset:\nr = |L c ∩ L s |/|L s |.\nWe used r = 0.5 to strike a balance between the number of potential misalignments in the dataset and the extractiveness of the data. Overall, as evident in Table 1, this filtering process reduced the dataset size by about half. Filtering reduced the MC-CLEAN portion by a larger proportion, indicating that human data is less lexically similar as opposed the noisy data." }, { "figure_ref": [], "heading": "Evaluation of Simplification Models", "publication_ref": [], "table_ref": [], "text": "MULTICOCHRANE enables extensive evaluation of multilingual sentence simplification systems simultaneously across 4 languages. In this section we present the first study of this kind (to our knowledge), assessing zero-shot and fine-tuned models (Section 5.1) with both automatic metrics (Sec-tion 5.2) and human evaluation (Section 5.3) for medical text simplification." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b5", "b34", "b11", "b46" ], "table_ref": [ "tab_5" ], "text": "We experiment with zero-shot and fine-tuned models, as well as a simplify-then-translate pipeline.\nZero-shot models:\n• GPT-3 zero-shot. we evaluate zero-shot simplifications generated by the GPT-3-davinci-003 model using a prompt similar to that used in August et al. (2022), adapted for multilinguality:\nMy fifth grader asked me what this sentence means:\n[TEXT TO SIMPLIFY] I rephrased it for him in [LANG], in plain language a fifth grader can understand:\n• GPT-3 simplify-then-translate. 
GPT-3 has been shown to have strong performance on English simplification for medical texts (Shaib et al., 2023). Thus, we evaluate a pipeline model first using GPT-3 for English simplification using the above prompt, and then translating the simplified text to other languages with a strong translation model (Google's Cloud Translation API).\n• Flan-T5 zero-shot. We also evaluate zero-shot Flan-T5 base (Chung et al., 2022) performance.\nWe used a simple prompt, specifically prepending \"Simplify this sentence:\" to the input (complex) sentence. 5 When simplifying to languages other than English, the prompt is changed to \"Simplify this sentence in [LANG]:\". Unfortunately, we were unable to generate simplifications in Farsi using Flan-T5, so we only evaluated this system on English, Spanish, and French.\nFine-tuned models:\n• mT5 We fine-tune separate mT5 base models (Xue et al., 2021) on different language pairs: (English, English), (English, Spanish), (English, French), and (English, Farsi).\n• Flan-T5 fine-tuned We further evaluate finetuned versions of Flan-T5 base for each language pair, following the setup of its zero-shot counterpart. This system also failed to generate Farsi outputs. Training Setup. We evaluate models on train/test/validation splits shown in Table 3. The train split is composed of both the entirety of MC-NOISY and a portion of MC-CLEAN while test and validation splits are subsets of MC-CLEAN. Fine-tuned models (mT5, Flan-T5) were trained over a single epoch with a learning rate of 5e-5 and used the AdamW optimizer. For both mT5 and Flan-T5 (zero-shot and fine-tuned), nucleus sampling with a top-p of 0.95 was used as the decoding strategy. GPT-3 was evaluated using a temperature of 0.7." }, { "figure_ref": [], "heading": "Automatic Evaluation", "publication_ref": [ "b27", "b47", "b45", "b6", "b37", "b13" ], "table_ref": [ "tab_6" ], "text": "Metrics. Model outputs were evaluated through five automatic evaluation metrics: BLEU (Post, 2018), BERTScore (Zhang et al., 2020), SARI (Xu et al., 2016), and the Least-Common Subsequence distance metric (LCS) (Bakkelund, 2009) This dis-tance metric has an inverse relationship with the actual least common subsequence (lower means more extractive).\nWe do not include Flesch-Kincaid as a metric, for reasons stated in Tanprasert and Kauchak (2021), which provides a compelling analysis on how easily it can be misinterpreted, and instead opt for human evaluation of simplicity in Section 5.3.\nResults. Results of these evaluations are presented in Table 4. The data in this table show a common trend in that filtering (Section 3.2) largely improves performance, indicating that filters improve data quality. A very visible trend in this data, is that across the board, with the exception of the GPT results, English simplifications vastly outperform those of other languages. This probably due to the vast resource gap between English and the other languages, as well a possible bias towards English in the pre-trained mT5-base model.\nWe observe exceedingly low BLEU scores when the simplified version is used as reference, compared to the complex version. Since BLEU is lexical-centric, this reveals that models tend to translate rather than simplify. Simplified sentences in the Cochrane dataset frequently featured insertions and elaborations, creating at times an information mismatch between the model outputs and the corresponding reference simplifications. 
Moreover, the style in which PLS were written varied greatly (Devaraj et al., 2021).\nAnother attribute that is of importance when judging simplification is the level of extractiveness, or how much the output copied the input. Based on the LCS metric, it seems that while GPT-3 produces less extractive outputs than English mT5, for the other languages the opposite is true. We discuss this further in Section 5.3.\nExtractiveness and poor multilingual simplifications are also evident in the results for zero-shot Flan-T5 generations. For multilingual simplifications, in particular, fine-tuning significantly increased the LCS distance while improving BLEU and BERTScore metrics, indicating higher-quality and less extractive simplifications. Interestingly, English simplifications generated by both settings have a significantly lower LCS distance compared to those generated by other systems. Why this occurs is unclear to us and requires further analysis.\nThe simplify-translate approach to simplification in different languages elicited similar results as its English counterpart and clearly produces less extractive generations compared to zero-shot multilingual simplification. In terms of LCS and SARI, this approach resulted in similar scores as those produced by fine-tuned models. While it appears that the simplify-translate approach improved upon the zero-shot multilingual performance for GPT, it is difficult to draw conclusions about its performance relative to fine-tuned models using automatic metrics alone.\nTable 5: Summary of human evaluation results across all evaluated systems as well as the reference simplification, showing average factuality (0-2, lower=better), fluency (0-2, lower=better) and simplicity (-2 (oversimplify) to 2 (too hard)) ratings. Best system performance bolded." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [ "b14", "b14" ], "table_ref": [], "text": "We perform human evaluation of the factuality, linguistic quality, and simplicity of model outputs. We also evaluate the reference simplifications. 100 outputs were evaluated for each system. While the alignment may be sound, the references may still contain excess deletions or inconsistencies with the original text; prior work by Devaraj et al. (2022) found similar phenomena with the Newsela and Wiki datasets. Annotators were blinded to the system ID (and to whether they were evaluating the manually aligned pairs from the texts themselves).\nMetrics.\n(1) Factual faithfulness: We measure overall factuality using a 0-2 scale, where 0 indicates no factual issues and 2 indicates severe factual issues.\n(2) Fluency: We rate linguistic fluency of outputs using a scale of 0-2, where 0 indicates fluent output and 2 indicates severe linguistic issues.\n(3) Simplicity: Simplicity is assessed on a scale of -2 to 2, where -2 indicates heavy oversimplification, 0 indicates ideal simplification, and 2 indicates heavy under-simplification. The specific criteria are further described in Appendix A.\nResults. The human evaluation results presented in Table 5 agree with many of the automatic metrics. First, annotators found English outputs from the mT5 models to be more factual and fluent than those from the other languages. Similarly, factuality and fluency also improved with the filtered dataset for the most part (as was also indicated by the automatic metrics). This is probably due to filtering removing less extractive sentence pairs from the training set. 
However, for French and Farsi, filtering slightly worsened the factuality and fluency, perhaps due to the fewer number of examples in the data. For simplicity, with the exception of English, filtering also seems to make outputs simpler.\nThere is a stark difference between the GPT-3 and the mT5 outputs. GPT-3 produced significantly more faithful and fluent text while mT5 outputs were deemed simpler with the exception of English. However, qualitative feedback from annotators suggests that non-English GPT-3 outputs are skewed due to a severe degree of extractiveness; in some cases, the output is simply a word-for-word translation of the input. This inflates the factuality and fluency scores, but this level of extractiveness is manifest in the simplicity scores; GPT-3 does proper simplification only for the English to English case.\nThe trends in the results for Flan-T5 mirror those for GPT-3. When comparing zero-shot generations to fine-tuned generations, it is clear that Flan-T5 is limited in its ability to perform multilingual simplification. Not only are generations less simplified, but they frequently have factual and fluency errors.\nThe simplify-translate approach proved to be fairly strong compared to other systems which directly performed multilingual simplification. The current state of direct multilingual simplification in these large language models seems to be not yet surpassing monolingual simplification combined with machine translation. However, as discussed in Section 1, a pipeline system does come with its drawbacks in terms of cost and robustness.\nAnnotators also evaluated the reference labels of each complex input from the dataset. Overall, with the exception of Farsi, these references were deemed relatively fluent and simplified. However, due to insertions and deletions of information that naturally occur in the data, factuality errors were detected by annotators; these findings echo those in Devaraj et al. (2022).\nExamples of common errors. One common error that was found in outputs from all evaluated systems were mistranslated acronyms. Medical acronyms often differ from one language to another. However, many systems are limited in their ability to properly translate this technical knowledge. Usually, the acronym is either not translated, or translated incorrectly, often randomly. The following example in French was generated by mT5 fine-tuned on the unfiltered version of MC-NOISY.\nInput complex sentence: Stress and a sedentary lifestyle are major determinants of cardiovascular disease (CVD). Output: Le stress et le règlement inactif sont des causes importantes de maladies cardiovasculaires (AVC) les plus courants. Gloss: The stress and inactive regulation are the most common important causes of cardiovascular disease (AVC)\nThe correct translation of CVD, or cardiovascular disease, in French is MCV, not AVC (French acronym for stroke).\nFluency issues were especially common in outputs generated by mT5 and Flan-T5. In the case of Flan-T5, in particular, misspellings and incorrect applications of gendered language are quite prevalent. Consider the following example in Spanish generated by Flan-T5:\nInput complex sentence: The rates were 24.5 for the sponge and 10.9 for the diaphragm in the UK trial. Output: Las tasas de asistencia respiratoria fueron de un 2,4 para la secreción del vejiga y 10,9 para el diafragmata. Gloss: The rates of respiratory assistance were 2.4 for the secretion of the bladder and 10.9 for the diaphragm.\nThis example has a number of errors. 
There are many hallucinations that include the insertion and substitution of irrelevant information. However, with respect to fluency, the main issue is the incorrect gender used for the word \"vejiga\" as well as the misspelling of the word \"diafragma\" as \"diafragmata\". The important thing to note here is that most of these errors are localized to technical words or phrases. More work needs to be done to adapt large language models to these niche vocabularies." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This paper introduces the first human-aligned multilingual text simplification dataset, MC-CLEAN, for the medical domain for the languages of English, Spanish, French, and Farsi. We further expanded this dataset using automatic alignments to create a much more vast but noisier dataset, MC-NOISY, that could be used for training language models. We also performed an evaluation of multilingual simplification for various systems, testing some of their zero-shot capabilities as well their performance when fine-tuned on MC-NOISY." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "MULTICOCHRANE has a few limitations. Firstly, for a given article, there does not exist a consistent set of multilingual versions for that article. As exemplified in Figure 2, the number of articles in which there exists a multilingual version varies depending on the language. This uneven distribution does favor more well resourced languages, with Farsi being a notable exception.\nLooking at the wider scope of text simplification, sentence-level text simplification does have a few drawbacks. Especially within the medical domain, contextual information is helpful for generating good simplifications. In MC-CLEAN elaborative information was often left unaligned. Simplifying at a paragraph-level or a document-level could have made use of this additional data to generate more useful simplifications.\nBecause of limitations with the amount of computing capability and time, the largest models available for Flan-T5 and mT5 were not used for evaluation. The models that we used were the best available models that we could have evaluated within our reasonable constraints." }, { "figure_ref": [], "heading": "Ethical Concerns", "publication_ref": [ "b13", "b17", "b8" ], "table_ref": [], "text": "This work does not use any full texts of Cochrane reviews. The research use of publicly available abstracts, including both technical abstracts and multi-lingual plain language summaries, is considered fair use. Note that the Cochrane dataset, in similar capacity as ours, has already been released publicly by prior work (Devaraj et al., 2021;Guo et al., 2020).\nRegarding compensation for annotators, all the annotators were paid a wage of $15 for every hour of work.\nWe recognize the dual use of language models and the possibility of factual errors and hallucinations in their generations, and we believe that this line of work is necessary both to advance technology for social good while also acknowledging these risks. The inaccessibility of medical information and health illiteracy are some of the leading reasons for real health consequences including more hospital admissions, emergency room visits, and poorer overall health (Berkman et al., 2011). Making medical information accessible is one of the best ways to tackle health literacy, and that is the core of what a simplification system is aimed to do. 
Additionally, medical misinformation is one of the most series issues highlighted in the COVID-19 pandemic; one contributing reason for this is the lack of health literacy among the general public. By simplifying trustworthy evidence, we hope to empower the public with a keener eye for such misinformation. Factual errors is one of the key aspects studied in this work; we perform a thorough evaluation dissecting issues that can come from these models, especially in a multilingual setting. We believe rigorous evaluation, as done in this work, is one of our best tools to demystify language models, and help the community understand the issues at hand. With such understanding, we hope to point to future directions that collectively, we will be able to provide factual, readable, multilingual access to medical texts." }, { "figure_ref": [], "heading": "A Human Evaluation Framework", "publication_ref": [], "table_ref": [], "text": "The following is the exact framework that annotators used for evaluating simplification outputs." }, { "figure_ref": [], "heading": "Factual (faithful to input)", "publication_ref": [], "table_ref": [], "text": "Choices -0-2 Give an overall rating on how factually consistent the model-generated output is on a scale from 0 to 2, where 0 is completely factual, 1 indicates inconsequential factual errors, and 2 indicates severe factual errors.\nExample: Rating 0: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease. Output: We found that many study participants had bad side effects from heart disease treatments.\nRating 1: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease. Output: We discovered some participants had bad side effects from heart disease treatments.\nRating 2: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease. Output: We discovered participants had no side effects from heart disease treatments." }, { "figure_ref": [], "heading": "Fluency", "publication_ref": [], "table_ref": [], "text": "Choices -0-2 Give an overall rating on how well the model-generated output follows grammatical rules and appears as a natural sentence, where 0 indicates no fluency issues, 1 indicates superficial issues, and 2 indicates severe fluency issues that obfuscate the meaning of the sentence.\nExample: Rating 1: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease.\nOutput: We found that many study participants had badly side effects from heart disease treatments.\nRating 2: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease. Output: In heart disease treatments, many participants had effects bad on the side." }, { "figure_ref": [], "heading": "Simplicity", "publication_ref": [], "table_ref": [], "text": "Choices -(-2 -2) Rate the level of simplification that occurred from the original English sentence to the model-generated output. Note: since the input readability is varied, please make judgements independent of the input.\n-2: Output is severely oversimplified to the point where the original intent of the original sentence has been lost.\n-1: Output is oversimplified, missing some important details. 
However, the intent of the original sentence is preserved. 0: Output is ideally simplified. It is readable to the average layman and does not omit important details.\n1: Output should be simplified more and include uncommon technical terms without any elaboration/explanation. 2: Output should be simplified MUCH more. There is little to no change in the style of the sentence from the original sentence, and the output is difficult for the average layman to understand." }, { "figure_ref": [], "heading": "Example:", "publication_ref": [], "table_ref": [], "text": "Rating -2: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease.\nOutput: Study participants had some effects while being treated for some disease.\nRating -1: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease.\nOutput: Study participants had bad effects while being treated for heart disease.\nRating 0: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease.\nOutput: We found that many study participants had bad side effects from heart disease treatments.\nRating 1: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease.\nOutput: We found that many study participants had experienced adverse effects from ischemic heart disease treatments.\nRating 2: Original: We discovered a high percentage of study participants experienced adverse reactions while being treated for ischemic heart disease.\nOutput: We discovered a high percentage of study participants experienced antagonistic reactions while being remedied for ischemic heart disease. " }, { "figure_ref": [ "fig_0" ], "heading": "B Human Evaluation Score Distributions", "publication_ref": [], "table_ref": [], "text": "Figure 3 show the distributions of the human evaluation scores for all evaluated systems." }, { "figure_ref": [], "heading": "C Human Evaluation Agreement", "publication_ref": [ "b28" ], "table_ref": [ "tab_8" ], "text": "In addition to evaluating outputs in their respective languages, all human evaluators also evaluated a set of 100 examples in English. These were collected to estimate how similarly annotators would evaluate outputs.\nTo analyze agreement, we use Randolph's kappa (Randolph, 2005), a free-marginal version of Fleiss' kappa. These values are presented in Table 7 for the different human evaluation criteria along with one for all of them combined. These results show there is moderate agreement among annotators." }, { "figure_ref": [], "heading": "D Alignment Analysis", "publication_ref": [], "table_ref": [], "text": "Since annotators were given total freedom to align sentences, many different alignment relationships are present in MC-CLEAN. To quantify the proportion of alignments that belong to these alignment relationships, two methods were used for analysis. For future reference, 1-1, N-1, 1-N, and N-M alignments refer to number of complex sentence aligned to the number of simple sentences in an alignment group.\nMethod A. This method treats alignments in a document as edges in a bipartite graph, with complex and simple sentences as vertices. Relationships are found by tracking the connected components in this graph through a depth-first search. 
The type of relationship is determined by a 2-tuple of the number of complex sentences and the number of simple sentences in a connected component. Method B. This method is an extension of Method A that specifically targets on breaking up N-M alignment relationships. While Method A counts any connected component as an alignment group, this method requires that each component must be fully connected (every complex sentence is aligned to every simple sentence in the group) as well. As such, an N-M alignment group that is not fully connected is broken up into smaller fully connected components. The results of this analysis is presented in Table 8. The results from Method A can be viewed as a lower bound for 1-1, N-1, and 1-N, alignment types. Method B could produce different results depending on how the N-M alignment groups are broken up. It is difficult to know exactly how to break up an alignment group in Method B without knowing the annotator's intentions, so heuristic methods were used. For these results, the alignment groups were broken down such that simple sentences that were aligned to all complex sentences or vice versa were first grouped together. Then, the rest of the alignments were split into 1-N or N-1 groups." }, { "figure_ref": [], "heading": "E Automatic Alignment Experiments", "publication_ref": [ "b48", "b44", "b44", "b25", "b29", "b47", "b18", "b18" ], "table_ref": [], "text": "While automatic sentence alignment is standard in text simplification (Zhang and Lapata, 2017;Xu et al., 2015), we are not aware of evaluation work for sentence alignment algorithms on medical texts. The construction of MC-NOISY allows us to create a tiered version of MULTICOCHRANE that is large enough for training state-of-the-art neural text simplification models.\nWe evaluate a series of automatic sentence alignment methods to determine which approach is the most compatible with the medical data at hand to derive the English portion of MC-NOISY.\n• Jaccard Similarity. The Jaccard Similarity score of every possible sentence pair was calculated. We consider pairs that score over a threshold of 0.3 (selected on the validation set) to be \"aligned\"; this is the alignment method used in Xu et al. (2015).\n• TF-IDF Similarity. We generated TF-IDF vectors for every possible pair. Pairs were scored based on the cosine similarity of their embeddings. We aligned each sentence in the PLS to the most similar sentence in the corresponding complex version. This is the alignment method used in Paetzold et al. (2017).\n• Sentence-BERT Similarity. Similar to TF-IDF, but here we use Sentence-BERT embeddings (Reimers and Gurevych, 2019) instead.6 \n• BERTScore. The BERTScore (Zhang et al., 2020) between the complex and simplified sentences was calculated for every possible pair, and like for TF-IDF and Sentence-BERT, we considered the highest-scoring pair for each sentence in the PLS to be aligned.\n• Neural CRF Alignment. We also evaluate a BERT-based CRF alignment model (Jiang et al., 2020). We test both the original model trained on the Newsela dataset, and a model fine-tuned on the training set of MC-CLEAN. The hyperparameters used for fine-tuning are the same those used for the original model.\nFor experiments, we used 395 alignments from 22 MC-CLEAN English articles as the test set, and 84 aligned sentence pairs from 5 articles as the validation set. The rest of the 1149 alignments from 74 articles were used for fine-tuning the neural CRF alignment model (Jiang et al., 2020). 
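As a concrete reference point for the baselines above, the following is a minimal sketch of the Jaccard-similarity alignment method (the simplest of the five). Only the 0.3 threshold tuned on the validation set is given in the text; the lowercased whitespace tokenization and the rule of keeping every pair above the threshold are assumptions for illustration.

```python
# Minimal sketch of the Jaccard-similarity alignment baseline described above.
# Assumptions not stated in the text: lowercased whitespace tokenization, and
# that every (complex, simple) pair scoring above the threshold is kept.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two sentences."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a or not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

def align_jaccard(complex_sents, simple_sents, threshold=0.3):
    """Return (complex_index, simple_index) pairs whose score exceeds the threshold."""
    return [
        (i, j)
        for i, c in enumerate(complex_sents)
        for j, s in enumerate(simple_sents)
        if jaccard(c, s) > threshold
    ]

if __name__ == "__main__":
    technical = ["We searched scientific databases until January 2016."]
    plain = ["We searched until January 2016."]
    print(align_jaccard(technical, plain))  # [(0, 0)]
```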
Table 9 reports the precision, recall, and F1 scores of these methods against reference annotated alignments in MC-CLEAN. We observe that the CRF alignment model trained on the Cochrane dataset achieved the best F1 measure; this is unsurprising, as it is the state-of-the-art supervised method for sentence alignment. are the results of this hallucinating behavior. The following is an example of an extreme hallucination found among the generated outputs for the English version." }, { "figure_ref": [], "heading": "F Human Evaluation Correlation", "publication_ref": [], "table_ref": [], "text": "Input complex sentence: Randomised controlled trials of tai chi lasting at least three months involving healthy adults or adults at high risk of" }, { "figure_ref": [], "heading": "CVD.", "publication_ref": [], "table_ref": [], "text": "Output: Another study investigated the effect of physical exercise as a treatment for people with carpal tunnel syndrome.\nWhile the output does mention a \"study\" and the use of \"physical exercise,\" which is somewhat related to a randomized controlled trial about tai chi, it remains wildly off-topic compared to the original sentence." }, { "figure_ref": [], "heading": "H Additional Statistics", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Table 12 shows the statistics for the average number of tokens per sentence in MULTICOCHRANE." }, { "figure_ref": [], "heading": "I Notable Examples I.1 Good Simplifications", "publication_ref": [], "table_ref": [], "text": "Below are some examples of good simplifications made by the systems. GPT3 systems in particular were good at explaining medical terms and information present in parentheses. Meanwhile, other systems like Flan-T5-ft (r=0.5) and Flan-T5 (0-shot) had fluency issues and hallucinations even in the best simplifications and had more difficulty with longer input sentences. mT5 outputs had issues with extractiveness and retained complex medical vocabulary. This sentence is talking about a study that was done to compare two different types of guidance when using a medical tool called a catheter. The study found that if ultrasound guidance was used, it was more successful than other types of guidance. Out of 134 catheters that were used, the success rate was 1.78 times higher when ultrasound guidance was used. This result was reliable because two different tests were done, and the evidence was of moderate quality.\n-Spanish GPT3 (0-shot) Four RCTs compared ultrasound with palpation, and one compared ultrasound with Doppler auditory assistance.\nCuatro estudios de investigación evaluaron la eficacia de usar ultrasonido para detectar problemas médicos, comparándolo con usar la palpación (tocar con las manos) y uno de los estudios comparó el uso del ultrasonido con ayuda de un aparato llamado Doppler para escuchar los sonidos del cuerpo.\nFour investigative studies evaluated the efficacy of using ultrasound to detect medical problems, comparing it to using the palpation (touching with the hands) and one of the studies compared the use of the ultrasound with the help of an apparatus called Doppler to listen to the sounds of the body. 
English mT5 (r=0.5) There were no serious adverse events related to either nitrous oxide or carbon dioxide pneumoperitoneum (three studies; 196 participants; very low quality of evidence).\nThere were no serious side effects related to either nitrous oxide or carbon dioxide pneumoperitoneum.\n-French mT5 (r=0.5) Thiospot was more effective than placebo (SMD -2.61, 95% CI -3.76 to -1.47).\nLa thiospots était plus efficace que le placebo.\nThiospot was more effective than the placebo." }, { "figure_ref": [], "heading": "Spanish GPT3-simp-trans", "publication_ref": [], "table_ref": [], "text": "There were no serious adverse events related to either nitrous oxide or carbon dioxide pneumoperitoneum (three studies; 196 participants; very low quality of evidence).\nLos tres estudios realizados con 196 personas mostraron que no hubo efectos negativos al respirar gas nitrógeno o dióxido de carbono durante un procedimiento médico.\nThe three studies carried out with 196 people showed that there were no negative effects in breathing nitrogen gas or carbon dioxide during a medical procedure. We researched additional published studies until the 16 of January 2018." }, { "figure_ref": [], "heading": "Spanish", "publication_ref": [], "table_ref": [], "text": "Flan-T5-ft (r=0.5) We included seven trials, involving 735 participants, in this review.\nSe incluyeron siete ensayos, que incluyeron 735 participantes.\nSeven trials, involving 735 participants, were included.\nFrench Flan-T5-ft (r=0.5) The quality of the evidence was rated as high.\nLa qualité des preuves était élevée.\nThe quality of the evidence was high." }, { "figure_ref": [], "heading": "Spanish", "publication_ref": [], "table_ref": [], "text": "Flan-T5 (0-shot) We searched until January 2016.\nTambién buscamos hasta enero 2016.\nWe also searched until January 2016. " }, { "figure_ref": [], "heading": "I.2 Fluency", "publication_ref": [], "table_ref": [], "text": "Fluency errors across systems include repetition and grammatical errors, as well as the use of foreign words in the Flan outputs.\nRepetition. This category includes examples of fluency errors involving the repetition of a single phrase or word that confuses the meaning of the output and, in some cases, makes it completely meaningless. 
This error was present in all systems except for the GPT3 (0-shot) and GPT3-simp-trans systems.\nLanguage System Input Output Gloss English mT5 (r=0)\nIn people with symptomatic intracerebral artery stenosis, recurrent stroke was significantly reduced by RIC (risk ratio (RR) 0.32, 95% confidence interval (CI) 0.12 to 0.83; 2 trials, 182 participants, low-quality evidence).\nCompared to other antipsychotic drugs, recurrent stroke was significantly reduced by recurrent stroke.\n-English mT5 (r=0.5) Similarly, our results suggest, but do not confirm, the possibility of a positive influence of expertise in the use of ultrasound on the first attempt success rate.\nA significantly significantly significant difference in the use of ultrasound on the first attempt success rate.\n-Spanish Flan-T5-ft (r=0.5) We located 10 RCTs involving 2961 participating surgeons performing an operation in which the use of blunt needles was compared to the use of sharp needles.\nSe identificaron 10 ECA con 2961 médicos que realizaron una operación que fue realizada con agujas y cuando se comparó agujas tópicas con agujas tópicas y con agujas agujas.\nWe identified 10 RCTs with 2961 doctors who performed an operation that was performed with needles and when topical needles were compared with topical needles and with needles needles." }, { "figure_ref": [], "heading": "Spanish mT5 (r=0)", "publication_ref": [], "table_ref": [], "text": "Results of the ongoing trials will add to the evidence base but additional longer-term, high-quality trials are needed.\nLos resultados de los ensayos en curso adicionales a corto plazo, adicionales, de alta calidad, se necesitan más estudios de alta calidad.\nThe results of the ongoing, additional short-term trials, additional [trials], of high quality, more highquality studies are needed." }, { "figure_ref": [], "heading": "French mT5 (r=0)", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "Smoking prevalence in Indigenous youth is twice that of the non-Indigenous population, with tobacco experimentation commencing at an early age.\nLa prévalence du tabagisme chez les jeunes jeunes jeunes développent des problèmes d'éstomac à l'avenir et présentent une amélioration des symptômes.\nThe prevalence of smoking in the youth youth youth developed stomach problems in the future and presents an improvement of the symptoms.\nFrench mT5 (r=0.5) The rates were 24.5 for the sponge and 10.9 for the diaphragm in the UK trial.\nLes taux étaient étaient 25 pour le tube de sponge et 10,9 pour le diaphragme au Royaume-Uni.\nThe rate was was 25 for the sponge tube and 10.9 for the diaphragm in the UK.\nTable 14 Grammar. This category includes examples of grammatical issues in the output sentences, such as the overuse of prepositions, spelling errors, and disagreement between articles and nouns. Multiple prepositions appeared sequentially in a way that was incoherent, and there were instances where articles did not agree with nouns in gender or number. Concerning the modifications of the manual technique of the selection methods of bové(?) models, the majority of participants in their study didn't show a significant difference between the training methods in the student model of the evaluation model in terms of improvement of informatic skills for the different training methods in the blockage model." 
}, { "figure_ref": [], "heading": "Spanish", "publication_ref": [], "table_ref": [], "text": "Flan-T5-ft (r=0.5) carbon dioxide, helium, argon, nitrogen, nitrous oxide, and room air) used for establishing pneumoperitoneum in participants undergoing laparoscopic general abdominal or gynaecological pelvic surgery.\nEl uso de los dióxidos carbonosas, el alto carbono, la argón, la náusea y el fármaco nitroso (es decir, los flujos náuseos y el fármaco nitroso).\nThe use of the carbon dioxides, the high carbon, the argon, the nausea and the nitrous drug (which is to say, the nausea flows and the nitrous drug)." }, { "figure_ref": [], "heading": "Spanish", "publication_ref": [], "table_ref": [], "text": "Flan-T5-ft (r=0.5) Indigenous youth in particular commence smoking at an early age, and a disproportionate burden of substance-related morbidity and mortality exists as a result.\nEs probable que la morbilidad de los nios de ingesta a un término temprano o un pas a tener un empeoramiento de salud y muerte debido a una pérdida del tratamiento.\nIt is likely that the morbidity of the chiren of intake to an early term or a ste to have a worsening of health and death due to a loss of the treatment.\nTable 15 Foreign words (Flan-T5). This category includes examples of fluency errors involving the use of foreign words in the output sentences. For example, Flan-T5 (0-shot) produced a French output that started with the English phrase 'in four studies', and Flan-T5-ft (r=0.5) produced an English output that started with the Spanish word 'otros'. This type of error was mostly found in the Flan-T5-ft (r=0.5) and Flan-T5 (0-shot) systems." }, { "figure_ref": [], "heading": "Language System Input Output Gloss", "publication_ref": [], "table_ref": [], "text": "French Flan-T5-ft (r=0.5) It usually involves restricting blood flow to limbs and then releasing the ischaemic blood to promote a neuroprotective effect.\nL'exercice consiste généralement à la restriction du flux sanguin de la membres des limbs et ensuite à libérer les bloods ischémies pour promouvoir un effet neuroprotecteur.\nThe exercise generally involves the restriction of the blood flow to the members of the limbs and then to free the ischemic blood to promote a neuroprotective effect." }, { "figure_ref": [], "heading": "French", "publication_ref": [], "table_ref": [], "text": "Flan-T5 (0-shot) In four studies, the use of blunt needles reduced the number of self-reported needle stick injuries with a RR of 0.31 (95% CI 0.14 to 0.68).\nIn four studies, l'utilisation des nejéros strru00f7u a réduit l'injurage autoremprésenté aux nejéros d'une rRR de 0,31, en moyenne de 0,14 à 0,68.\nIn four studies, the use of nejéros strru00f7u (?) to reduce the autoremprésenté (?) injury to the nejéros (?) of a risk ration of .31, on average from .14 to .68." 
}, { "figure_ref": [], "heading": "English", "publication_ref": [], "table_ref": [], "text": "Flan-T5-ft (r=0.5) Therefore, other gases have been introduced as alternatives to carbon dioxide for establishing pneumoperitoneum.\nOtros such as carbon dioxide are used as alternative gases for establishing pneumoperitoneum.\n-" }, { "figure_ref": [], "heading": "Language System", "publication_ref": [], "table_ref": [], "text": "Input Output Gloss Spanish Flan-T5 (0-shot) Remote ischaemic conditioning (RIC) has been developed as a neuroprotective strategy to prevent and treat ischaemic stroke.\nEl afecto izquierdo de condicionamiento remoto (RIC) ha sido desarrollado como una estrategia neuropromotiva para prevenir y tratar strokes izquierdo.\nThe remote conditioning left effect (RIC) has been developed as a neutropromotive to help prevent and treat left strokes. " }, { "figure_ref": [], "heading": "I.3 Factuality", "publication_ref": [], "table_ref": [], "text": "Factuality errors displayed by the systems could be broadly categorized into 3 types -minor hallucinations, extreme hallucinations, and contradictions.\nMinor hallucinations. This category includes examples where the output contained minor details that were not present in the input (hallucination). The systems would hallucinate details like numbers, dates, medical terms, incorrect expansions of acronyms, etc. GPT3 systems would also misinterpret the context of words present in the input. In one of the examples, the word 'rates' in the input is referring to 'pregnancy rates' but GPT3 misinterpreted the word to mean 'cost'.\nLanguage System Input Output Gloss" }, { "figure_ref": [], "heading": "French mT5 (r=0)", "publication_ref": [], "table_ref": [], "text": "To assess interventions used in the management of all types of melasma: epidermal, dermal, and mixed.\nCette revue a examiné différents type de mélanome, de la peau, d'oeil et des organes, et s'est améliorée dans des années futures.\nThis review has examined different types of melanoma, of the skin, of the eye and of the organs, and improved in the future years.\nSpanish GPT3 (0-shot) It usually involves restricting blood flow to limbs and then releasing the ischaemic blood to promote a neuroprotective effect.\nEsta oración está hablando de un procedimiento médico llamado Preacondicionamiento Isquémico (IPC). El procedimiento consiste en cortar temporalmente el flujo de sangre a ciertas partes del cuerpo y luego devolver la sangre al cuerpo. El objetivo de este procedimiento es proteger el cerebro y el sistema nervioso del daño.\nThis sentence is talking about a medical procedure called Ischemic Preconditioning (IPC). The procedure consists of temporarily cutting off the flow of blood to certain parts of the body and then returning the blood to the body. The objective of this procedure is to protect the brain and the nervous system from damage. Spanish mT5 (r=0.5) We included five RCTs reporting 444 arterial cannulations in paediatric participants.\nSe incluyeron cinco ensayos controlados aleatorios que informaron 556 cannulaciones arteriales en los participantes niños.\nFive randomized controlled trials that reported 556 arterial cannulations in pediatric participants were included." 
}, { "figure_ref": [], "heading": "French mT5 (r=0)", "publication_ref": [], "table_ref": [], "text": "The rates were 24.5 for the sponge and 10.9 for the diaphragm in the UK trial.\nLes taux étaient de 18,3 pour le diaphragme et de 10,9 pour le diaphragme avec un diaphragme UK dans l'essai.\nThe rates were 18.3 for the diaphragm and 10.9 for the diaphragm with a UK diaphragm in the trial." }, { "figure_ref": [], "heading": "English mT5 (r=0)", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "We included three trials (involving 371 participants) in the analysis of the effects of RIC on ischaemic stroke prevention.\nWe included three trials with 373 participants and provided one trial with seven trials involving 373 participants.\n- -English GPT3 (0-shot) The rates were 24.5 for the sponge and 10.9 for the diaphragm in the UK trial.\nLanguage\nIn the UK trial, it cost 24.5 pounds to buy a sponge and 10.9 pounds to buy a diaphragm.\n-Table 17 Extreme hallucinations. This category includes examples of hallucinations that were extremely irrelevant. Information hallucinated in the output sentences has no connection to the information present in the input. This degree of hallucinations was much more prevalent and extreme in the Flan systems as compared to other systems. For the victims of acute stroke, the women administered (?) the transplant had an increased probability of being victims or of the disease thanks to a remmanant (?) treatment by an injection of a recurrent coagulation." }, { "figure_ref": [], "heading": "Language System", "publication_ref": [], "table_ref": [], "text": "Input Output Gloss Spanish mT5 (r=0.5) In people with carotid stenosis undergoing carotid stenting, there was no significant difference in the incidence of ischaemic stroke between participants treated with RIC and non-RIC (RR 0.22, 95% CI 0.01 to 4.03; 1 trial, 189 participants, low-quality evidence); however the stroke severity (assessed by infarct volume) was significantly lower in participants treated with RIC (mean difference (MD) -0.17 mL, 95% CI -0.23 to -0.11; 1 trial, 189 participants, low-quality evidence).\nEn los pacientes con enfermedad isquémica cuando se acortan los brazos en los pacientes tratados con ENIC y ningún riesgo de caídas.\nIn the patients with ischemic disease, when arms are shorted in the patients treated with ENIC and no risk of falls." }, { "figure_ref": [], "heading": "Table 18", "publication_ref": [], "table_ref": [], "text": "Contradictions. This category includes examples where the output directly contradicts ideas stated in the input. This error was infrequent in the GPT3 systems but was equally prevalent in other systems." }, { "figure_ref": [], "heading": "Language System", "publication_ref": [], "table_ref": [], "text": "Input Output Gloss\nFrench Flan-T5 (0-shot) There was considerable heterogeneity between the studies included in this review and studies were small and at some risk of bias Il y a lieu d'échantillonner des études incluant dans cette étude et les études étaient grandes et en hausse une hausse de la biase.\nThere is a reason to sample the studies included in this study and the studies were big and increasing an increase of bias." 
}, { "figure_ref": [], "heading": "English mT5 (r=0)", "publication_ref": [], "table_ref": [], "text": "There is qualitative evidence that oral morphine has much the same efficacy as other available opioids.\nIt " }, { "figure_ref": [], "heading": "I.4 Simplification", "publication_ref": [], "table_ref": [], "text": "Simplification errors included instances of the system being overly extractive, oversimplification of terms and ideas, and excessive deletion of entire parts of the original text.\nExtractive. GPT3 (0-shot) was highly extractive and failed to explain complex medical language in everyday terms that a general audience could understand. The system also left in statistics from the input sentence that are not interpretable to the general public. mT5 (r=0) exhibited similar patterns with regards to extractiveness. Esta frase describe un estudio que se hizo para comparar diferentes tratamientos para ayudar a mejorar la circulación sanguínea. Estos tratamientos incluyeron: terapia con trombolíticos en comparación con un placebo (1 estudio); dosis baja en comparación con dosis alta de trombolíticos (1); alteplasa en comparación con uroquinasa (1); tiempo corto de trombolíticos en comparación con tiempo largo (1); terapia con trombolíticos en comparación con despeje de fibrina con una aguja (1); despeje de fibrina en comparación con cambio de catéter con alambre (1); y cambio de catéter con alambre en comparación con cambio sin despeje de fibrina con angioplastia (1).\nThis sentence describes a study that was conducted to compare different treatments to help improve the blood circulation. These treatments included: therapy with thrombolytics in comparison with a placebo (1 study); low dose in comparison with high dose of thrombolytics (1); alteplase compared to urokinase (1); short time of thrombolytics compared with long time (1); therapy with thrombolytics compared to clearance of fibrin with a needle (1); clearance of fibrin in comparison with wire catheter change (1); and wire catheter change in comparison with change without clearance of fibrin with angioplasty (1)." }, { "figure_ref": [], "heading": "Spanish mT5 (r=0)", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "On current data there is no evidence to support physical intervention over the use of pharmaceutical agents in the acute setting.\nEsta evidencia no apoya la intervención física sobre la administración de agentes farmacológicos en la práctica aguda. Table 20 Oversimplification Flan-T5-ft (r=0.5) had issues with excessive deletions, resulting in fragmented sentences that failed to communicate the meaning of the input sentence. Excessive deletion in mT5 (r=0) and mT5 (r=0.5) manifested as a loss of information that was necessary to preserve the meaning of the input sentence. The resulting outputs were missing critical information, causing them to be generic and vague, such as in the Spanish mT5 (r=0.5) example that discusses the use of RIC for ischemic stroke prevention." }, { "figure_ref": [], "heading": "Language System Input Output Gloss", "publication_ref": [], "table_ref": [], "text": "French Flan-T5-ft (r=0.5) We rated the quality of evidence for all outcomes as moderate owing to imprecision due to wide confidence intervals, modest sample sizes and limited numbers of events." 
}, { "figure_ref": [], "heading": "La qualité des preuves", "publication_ref": [], "table_ref": [], "text": "The quality of the evidence French mT5 (r=0) Four RCTs compared ultrasound with palpation, and one compared ultrasound with Doppler auditory assistance.\nCinq ECR ont comparé l'échographie avec une palpation.\nFive RCTs have compared ultrasounds with palpation.\nFrench mT5 (r=0.5) We included three trials (involving 371 participants) in the analysis of the effects of RIC on ischaemic stroke prevention.\nLes auteurs de la revue ont inclus un total de 371 participants.\nThe authors of the review included a total of 371 participants. Spanish mT5 (r=0.5) The quality of the evidence was rated as high.\nCalidad de la evidencia Quality of the evidence Spanish mT5 (r=0.5) We included three trials (involving 371 participants) in the analysis of the effects of RIC on ischaemic stroke prevention.\nSe incluyeron tres ensayos que incluyeron a 372 participantes.\nThree trials with 372 participants were included." }, { "figure_ref": [], "heading": "English mT5 (r=0)", "publication_ref": [], "table_ref": [], "text": "There were no serious adverse events related to either nitrous oxide or carbon dioxide pneumoperitoneum (three studies; 196 participants; very low quality of evidence).\nThere were no serious adverse events.\n-" }, { "figure_ref": [], "heading": "Language System", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Input Output Gloss English mT5 (r=0.5) We reran the searches for all databases in March 2016, and when we update the review, we will deal with the two studies of interest found through this search that are awaiting classification.\nThe search for most scientific databases in March 2016.\n-Table 21 I.5 Other Errors Repetition of output (GPT3 (0-shot)). The GPT3 (0-shot) system had a unique error that exclusively appeared in the French outputs where a separate paraphrase was generated in addition to the main output sentence. The system sometimes gave a generated output, then repeated the same information in different words using the phrase 'in other words' or simply generating two different versions of a simplified output. We estimated that the quality of the evidence for all the results was moderate on account of an imprecision due to large confidence intervals, the size of the modest samples and to a limited number of events. We estimated that the evidence wasn't very precise because the confidence intervals were big, the not very big samples and there were few events." }, { "figure_ref": [], "heading": "Table 22", "publication_ref": [], "table_ref": [], "text": "Acronyms. Across all systems, a frequent issue that arose when simplifying medical text was acronyms. These issues can be further broken down into failure to translate into the target output language, hallucinated acronym generations, and factually incorrect generations.\nAcronym translation. Systems failed to translate the input sentence's acronym from English into Spanish or French. GPT3 (0-shot) would explicitly state that the acronym was in English, rather than translating, in some cases. We have included randomly controlled trials (RCT) to compare the RIC with a fake RIC or medical management in people that suffered a stroke or at risk of a stroke." 
}, { "figure_ref": [], "heading": "French", "publication_ref": [], "table_ref": [ "tab_5", "tab_6", "tab_2" ], "text": "Flan-T5 (0-shot) To assess the benefits and harms of RIC for preventing ischaemic stroke and for treating people with ischaemic stroke and those at risk for ischaemic stroke.\névaluer les avantages et les éléments de la RIC en vue de supprimer le stroke insécutif et en vue de réduire les risques liées à l'insécutif.\nTo evaluate the advantages and the elements of RIC in order to remove the insécutif (?) stroke and in order to reduce the risks associated with the insécutif (?).\nTable 23 Acronym hallucination. Flan-T5-ft (r=0.5) hallucinated acronyms in the output sentence that do not exist in the target language. This occurred across all languages and in both Flan and mT5 systems. La SEP réduisait la gravité du AVC ischémique, mais les essais confondus avaient des preuves de très faible qualité.\nThe SEP reduced the severity of ischemic stroke, but the included trials have evidence of very weak quality.\nTable 24 Acronym factuality. Flan-T5 (0-shot) and mT5 (r=0) produced factuality issues in the output sentences. In the Spanish output example, Flan generated the acronym for AIDS (SIDA), though the input was referring to randomized controlled trials (RCTs). In the French example, mT5 (r=0) generated the acronym for a stroke (AVC), but the input sentence refers to cardiovascular disease (CVD).\nLanguage System Input Output Gloss Spanish Flan-T5 (0-shot) Eight RCTs with 733 women in total that compared brief co-incubation and the standard insemination protocol were included.\nOcho estudios compararon la co-incubación breve con un protocolo de inserción estándar en pacientes con SIDA en total.\nEight studies compared the brief co-incubation with a protocol of standard insertion in patients with AIDS in total.\nFrench mT5 (r=0) Stress and a sedentary lifestyle are major determinants of cardiovascular disease (CVD).\nLe stress et le règlement inactif sont des causes importantes de maladies cardiovasculaires (AVC) les plus courants.\nStress and inactive regulation are the most common important causes of cardiovascular disease (AVC).\nTable 25 Input hallucination (GPT3). The systems GPT3-simp-trans and GPT3 (0-shot) show some minor hallucination by referencing the input sentence in the output. Some output generations would start with 'this phrase' or 'this sentence' in talking about the input. This error was not found in any of the other systems. This phrase talks about a study that was done on people suffering from carotid stenosis and who were treated by carotid stenting. The study shows that there isn't a significant difference between people treated by RIC (Revascularization Intracarotid) and those who weren't, at the level of risk of stroke. However, the people treated by RIC presented a lessened severity of strokes (measured by the volume of infarction)." }, { "figure_ref": [], "heading": "Spanish", "publication_ref": [ "b18", "b14" ], "table_ref": [ "tab_2" ], "text": "GPT3-simp-trans Three trials randomised participants to nitrous oxide pneumoperitoneum (100 participants) or carbon dioxide pneumoperitoneum (96 participants).\nEsta oración describe un experimento científico que probó dos tipos diferentes de tratamientos médicos en dos grupos de personas. El primer grupo tenía 100 personas y se les dio un tratamiento con óxido nitroso. El segundo grupo tenía 96 personas y se les dio un tratamiento con dióxido de carbono. 
This sentence describes a scientific experiment that tested two different types of medical treatment in two groups of people. The first group had 100 people and they were given a treatment with nitrous oxide. The second group had 96 people and they were given a treatment with carbon dioxide.
Table 26 J ANNO-VIEWER ANNO-VIEWER is inspired by a similar annotation tool that was used in Jiang et al. (2020) for correcting crowdsourced alignment labels used for training and evaluating a neural CRF model for aligning sentences.
The primary function of this tool is to enable annotators to make alignments efficiently. ANNO-VIEWER also enables annotators to make factuality annotations for alignments. These annotations are largely based on the factuality annotations used in Devaraj et al. (2022), and the tool also includes an additional field for annotating elaborations in a similar manner.
This annotation tool additionally allows annotators to use existing similarity measures to help find sentence alignments faster. When a sentence is selected for alignment, the tool can sort the other document by which sentences are most similar (as determined by the automatic similarity measure) to the selected sentence." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We acknowledge Pouya Nekouei for his help with the Farsi language. This research was partially supported by National Science Foundation (NSF) grants IIS-2145479, IIS-2144493 and IIS-2112633, and by the National Institutes of Health (NIH) under the National Library of Medicine (NLM) grant 2R01LM012086." } ]
Automated text simplification aims to produce simple versions of complex texts. This task is especially useful in the medical domain, where the latest medical findings are typically communicated via complex, technical articles. This creates barriers for laypeople seeking access to up-to-date medical findings, consequently impeding progress on health literacy. Most existing work on medical text simplification has focused on monolingual settings, with the result that such evidence is available in just one language (most often, English). This work addresses this limitation via multilingual simplification, i.e., directly simplifying complex texts into simplified texts in multiple languages. We introduce MULTICOCHRANE, the first sentence-aligned multilingual text simplification dataset for the medical domain in four languages: English, Spanish, French, and Farsi. We evaluate fine-tuned and zero-shot models across these languages with extensive human assessments and analyses. Although models can generate viable simplified texts, we identify several outstanding challenges that this dataset might be used to address.
Example of a complex source sentence and its simplifications in the four languages: Complex: Preclinical studies have suggested that RIC may have beneficial effects in ischaemic stroke patients and those at risk of ischaemic stroke. English: Studies have suggested that RIC may have beneficial effects for preventing and treating ischaemic stroke. Spanish: Los estudios han indicado que el CIR puede tener efectos beneficiosos en la prevención y el tratamiento del accidente cerebrovascular isquémico. French: Des études ont suggéré que le CID pourrait avoir des effets bénéfiques sur la prévention et le traitement de l'AVC ischémique. Farsi: مطالعات پیشنهاد کرده‌اند که RIC ممکن است اثرات مفیدی برای پیشگیری و درمان سکته مغزی ایسکمیک داشته باشد.
Multilingual Simplification of Medical Texts
[ { "figure_caption": "Figure 3 :3Figure3: These graphs show the distribution of human evaluation scores for each evaluated system in English, Spanish, French, and Farsi.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Screenshots of ANNO-VIEWER. The sentence sorting tool is featured in the bottom image.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Statistics on unaligned and aligned sentences for the English portion of MC-CLEAN.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Train/test/validation splits used for evaluation.Both the number of articles and the total number ofalignments are displayed (for training: unfiltered (r =0)/filtered (r = 0.5)).SystemBLEU BSSARILCSGPT3zero2.380.8774 42.111 0.569EnglishFlan-T5zero mT5r=0 mT5r=0.58.12 7.66 8.820.8810 39.057 0.340 0.8785 40.219 0.466 0.8843 39.579 0.379Flan-T5r=0.5 8.700.8875 39.526 0.319GPT3zero5.700.7412 37.972 0.361SpanishGPT3trans Flan-T5zero mT5r=03.11 3.04 4.970.7188 41.123 0.561 0.7100 37.890 0.367 0.7337 41.224 0.564mT5r=0.55.180.7381 39.522 0.508Flan-T5r=0.5 5.210.7376 40.064 0.486GPT3zero4.420.7325 38.096 0.380FrenchGPT3trans Flan-T5zero mT5r=02.82 1.63 2.350.7134 40.941 0.579 0.6989 39.452 0.449 0.7146 40.412 0.602mT5r=0.52.960.7157 39.096 0.553Flan-T5r=0.5 3.890.7284 40.989 0.528GPT3zero1.360.7103 41.080 0.497FarsiGPT3trans mT5r=01.16 2.210.7094 43.738 0.577 0.7111 43.280 0.622mT5r=0.52.670.7139 43.154 0.631", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Automatic metrics for all generations.", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Randolph's kappa on English outputs evaluated for agreement.", "figure_data": "Factuality Fluency Simplicity Combined0.3620.5490.3720.464Alignment Type Method A Method B1 -136.4%50.3%N -120.3%36.6%1 -N9.4%10.1%N -M33.9%2.9%", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Distribution of alignment types in MC-CLEAN for each method used.", "figure_data": "", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Table 10 displays the Pearson correlation between automatic metrics and the human evaluation scores. Overall the correlation between them is mostly", "figure_data": "Auto MetricFact. Fluency Simpl.Dataseten→en en→es en→fr en→faEnglishBLEU BERTScore SARI LCS-0.124 -0.174 -0.078 0.319-0.032 -0.174 0.033 0.1440.037 -0.030 0.014 -0.254ComplexMC-NOISY + filtering MC-CLEAN39.7 44.8 41.639.5 44.5 41.639.4 44.4 41.741.1 46.6 43.2SpanishBLEU BERTScore SARI-0.190 -0.105 -0.037-0.100 -0.083 0.0910.017 0.010 -0.367SimpleMC-NOISY + filtering MC-CLEAN36.0 32.3 36.648.0 43.6 47.254.1 49.3 55.052.7 48.0 49.5LCS0.1960.095-0.395BLEU-0.197-0.1080.042Table 12: Statistics for average number of tokens perFrenchBERTScore SARI LCS-0.219 0.029 0.392-0.086 0.035 0.1880.047 -0.186 -0.253sentence for complex and simple sentences in MC-NOISY and MC-CLEAN.BLEU-0.120-0.0590.055FarsiBERTScore SARI-0.281 -0.243-0.074 0.0310.227 0.286LCS0.3190.257-0.347Language BLEU BSSARILCSEnglish0.250.8442 43.677 0.695Spanish1.400.7055 41.009 0.651French0.900.6806 41.987 0.695Farsi1.000.6862 44.204 0.698Table 11: Automatic metrics for mT5 fine-tuned onlyon MC-Clean data.weak. 
BLEU and BERTScore, not surprisingly,does have a weak inverse correlation with factu-ality and fluency measures. The metric with thestrongest correlation to human scores is LCS, show-ing a moderate positive correlation with factuality(more abstractive sentences tend to contain morefactual errors), as well as a moderate inverse corre-lation with the simplicity score (more abstractivesentences tend to be simpler). The other simplifica-tion metric, SARI, had varied results depending onthe language.G Fine-tuning with MC-CLEAN OnlyWe attempted fine-tuning mT5 with just data fromMC-CLEAN. The same training methodology wasused, with the only difference being the number ofepochs being increased to 5. Overall, due to thesignificantly lower amount of alignments in MC-", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_15", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_17", "figure_label": "16", "figure_type": "table" } ]
Sebastian Joseph; Kathryn Kazanas; Keziah Reina; Vishnesh J Ramanathan; Wei Xu; Byron C Wallace; Junyi Jessy Li
[ { "authors": "Emil Abrahamsson; Timothy Forni; Maria Skeppstedt; Maria Kvist", "journal": "", "ref_id": "b0", "title": "Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language", "year": "2014" }, { "authors": "Sweta Agrawal; Marine Carpuat", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Controlling text complexity in neural machine translation", "year": "2019" }, { "authors": "Marco Alfano; Biagio Lenzitti; Giosuè Lo Bosco; Cinzia Muriana; Tommaso Piazza; Giovanni Vizzini", "journal": "International journal of medical informatics", "ref_id": "b2", "title": "Design, development and validation of a system for automatic help to medical text understanding", "year": "2020" }, { "authors": "Fernando Alva-Manchego; Carolina Scarton; Lucia Specia", "journal": "Computational Linguistics", "ref_id": "b3", "title": "Data-driven sentence simplification: Survey and benchmark", "year": "2020" }, { "authors": "Mikel Artetxe; Holger Schwenk", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b4", "title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond", "year": "2019" }, { "authors": "Tal August; Lucy Lu Wang; Jonathan Bragg; Marti A Hearst; Andrew Head; Kyle Lo", "journal": "", "ref_id": "b5", "title": "Paper plain: Making medical research papers approachable to healthcare consumers with natural language processing", "year": "2022" }, { "authors": "Daniel Bakkelund", "journal": "", "ref_id": "b6", "title": "An lcs-based string metric", "year": "2009" }, { "authors": "Chandrayee Basu; Rosni Vasu; Michihiro Yasunaga; Qian Yang", "journal": "", "ref_id": "b7", "title": "Med-easi: Finely annotated dataset and models for controllable simplification of medical texts", "year": "2023" }, { "authors": " Nancy D Berkman; Katrina E Stacey L Sheridan; David J Donahue; Karen Halpern; Crotty", "journal": "Annals of internal medicine", "ref_id": "b8", "title": "Low health literacy and health outcomes: an updated systematic review", "year": "2011" }, { "authors": "Yixin Cao; Ruihao Shui; Liangming Pan; Min-Yen Kan; Zhiyuan Liu; Tat-Seng Chua", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Expertise style transfer: A new task towards better communication between experts and laymen", "year": "2020" }, { "authors": "Rémi Cardon; Natalia Grabar", "journal": "", "ref_id": "b10", "title": "French biomedical text simplification: When small and precise helps", "year": "2020" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b11", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "William Coster; David Kauchak", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Simple English Wikipedia: A new text simplification task", "year": "2011" }, { "authors": "Ashwin Devaraj; Iain Marshall; Byron Wallace; Junyi Jessy Li", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Paragraph-level simplification of medical texts", "year": "2021" }, { "authors": "Ashwin Devaraj; William Sheffield; Byron Wallace; Junyi Jessy Li", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Evaluating factuality in text simplification", "year": "2022" }, { "authors": "Natalia Grabar; 
Rémi Cardon", "journal": "", "ref_id": "b15", "title": "Clear-simple corpus for medical french", "year": "2018" }, { "authors": "Yue Guo; Wei Qiu; Gondy Leroy; Sheng Wang; Trevor Cohen", "journal": "", "ref_id": "b16", "title": "Cells: A parallel corpus for biomedical lay language generation", "year": "2022" }, { "authors": "Yue Guo; Weijian Qiu; Yizhong Wang; Trevor A Cohen", "journal": "", "ref_id": "b17", "title": "Automated lay language summarization of biomedical scientific reviews", "year": "2020" }, { "authors": "Chao Jiang; Mounica Maddela; Wuwei Lan; Yang Zhong; Wei Xu", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Neural CRF model for sentence alignment in text simplification", "year": "2020" }, { "authors": "Ilona Kickbusch; Franklin Jürgen M Pelikan; Agis D Apfel; Tsouros", "journal": "Regional Office for Europe", "ref_id": "b19", "title": "Health literacy: the solid facts", "year": "2013" }, { "authors": "Joongwon Kim; Mounica Maddela; Reno Kriz; Wei Xu; Chris Callison-Burch", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "BiSECT: Learning to split and rephrase sentences with bitexts", "year": "2021" }, { "authors": "Nicholas Kloehn; Gondy Leroy; David Kauchak; Yang Gu; Sonia Colina; Nicole P Yuan; Debra Revere", "journal": "Journal of medical Internet research", "ref_id": "b21", "title": "Improving consumer understanding of medical text: Development and validation of a new subsimplify algorithm to automatically generate term explanations in english and spanish", "year": "2018" }, { "authors": "Wuwei Lan; Chao Jiang; Wei Xu", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Neural semi-Markov CRF for monolingual word alignment", "year": "2021" }, { "authors": "Chenxi Liu; Dan Wang; Chaojie Liu; Junnan Jiang; Xuemei Wang; Haihong Chen; Xin Ju; Xinping Zhang", "journal": "Family medicine and community health", "ref_id": "b23", "title": "What is the meaning of health literacy? 
a systematic review and qualitative synthesis", "year": "2020" }, { "authors": "Louis Martin; Angela Fan; Éric De La Clergerie; Antoine Bordes; Benoît Sagot", "journal": "European Language Resources Association", "ref_id": "b24", "title": "MUSS: Multilingual unsupervised sentence simplification by mining paraphrases", "year": "2022" }, { "authors": "Gustavo Paetzold; Fernando Alva-Manchego; Lucia Specia", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "MASSAlign: Alignment and annotation of comparable documents", "year": "2017" }, { "authors": "Atharva Phatak; David W Savage; Robert Ohle; Jonathan Smith; Vijay Mago", "journal": "JMIR Medical Informatics", "ref_id": "b26", "title": "Medical text simplification using reinforcement learning (teslea): Deep learning-based text simplification approach", "year": "2022" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Randolph Justus", "journal": "", "ref_id": "b28", "title": "Free-marginal multirater kappa (multirater k [free]): An alternative to fleiss' fixed-marginal multirater kappa", "year": "2005" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "year": "2019" }, { "authors": "Juliane Ried", "journal": "", "ref_id": "b30", "title": "About translation at cochrane", "year": "2023" }, { "authors": "Sebastian Ruder; Noah Constant; Jan Botha; Aditya Siddhant; Orhan Firat; Jinlan Fu; Pengfei Liu; Junjie Hu; Dan Garrette; Graham Neubig; Melvin Johnson", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "XTREME-R: Towards more challenging and nuanced multilingual evaluation", "year": "2021" }, { "authors": "Michael Ryan; Tarek Naous; Wei Xu", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Revisiting non-English text simplification: A unified multilingual benchmark", "year": "2023" }, { "authors": "Thomas Scialom; Paul-Alexis Dray; Sylvain Lamprier; Benjamin Piwowarski; Jacopo Staiano", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "MLSUM: The multilingual summarization corpus", "year": "2020" }, { "authors": "Chantal Shaib; Millicent Li; Sebastian Joseph; Iain Marshall; Junyi ; Jessy Li; Byron Wallace", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Summarizing, simplifying, and synthesizing medical evidence using GPT-3 (with varying success)", "year": "2023" }, { "authors": "Advaith Siddharthan", "journal": "ITL-International Journal of Applied Linguistics", "ref_id": "b35", "title": "A survey of research on text simplification", "year": "2014" }, { "authors": "Neha Srikanth; Jessy Junyi; Li", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Elaborative simplification: Content addition and explanation generation in text simplification", "year": "2021" }, { "authors": "Teerapaun Tanprasert; David Kauchak", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Flesch-kincaid is not a text simplification evaluation metric", "year": "2021" }, { "authors": "Brian Thompson; Philipp Koehn", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Vecalign: Improved sentence alignment in linear time and space", 
"year": "2019" }, { "authors": "Jan Trienes; Jörg Schlötterer; Hans-Ulrich Schildhaus; Christin Seifert", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Patient-friendly clinical notes: Towards a new text simplification dataset", "year": "2022" }, { "authors": "Hoang Van; David Kauchak; Gondy Leroy", "journal": "International Committee on Computational Linguistics", "ref_id": "b40", "title": "AutoMeTS: The autocomplete for medical text simplification", "year": "2020" }, { "authors": "Laurens Van Den Bercken; Robert-Jan Sips; Christoph Lofi", "journal": "", "ref_id": "b41", "title": "Evaluating neural text simplification in the medical domain", "year": "2019" }, { "authors": "Tu Vu; Aditya Barua; Brian Lester; Daniel Cer; Mohit Iyyer; Noah Constant", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Overcoming catastrophic forgetting in zero-shot cross-lingual generation", "year": "2022" }, { "authors": "Kristian Woodsend; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Learning to simplify sentences with quasi-synchronous grammar and integer programming", "year": "2011" }, { "authors": "Wei Xu; Chris Callison-Burch; Courtney Napoles", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b44", "title": "Problems in current text simplification research: New data can help", "year": "2015" }, { "authors": "Wei Xu; Courtney Napoles; Ellie Pavlick; Quanze Chen; Chris Callison-Burch", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b45", "title": "Optimizing statistical machine translation for text simplification", "year": "2016" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b47", "title": "BERTScore: Evaluating text generation with BERT", "year": "2020" }, { "authors": "Xingxing Zhang; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Sentence simplification with deep reinforcement learning", "year": "2017" }, { "authors": "Zhemin Zhu; Delphine Bernhard; Iryna Gurevych", "journal": "", "ref_id": "b49", "title": "A monolingual tree-based translation model for sentence simplification", "year": "2010" } ]
[ { "formula_coordinates": [ 5, 403.76, 543.44, 87.76, 10.63 ], "formula_id": "formula_0", "formula_text": "r = |L c ∩ L s |/|L s |." }, { "formula_coordinates": [ 23, 82.07, 75.72, 37.87, 8.06 ], "formula_id": "formula_1", "formula_text": "Language" } ]
2023-05-21
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b9", "b22", "b8", "b0", "b26", "b12", "b36", "b18", "b0", "b34", "b11", "b30", "b32", "b32", "b32", "b32" ], "table_ref": [], "text": "Knowledge distillation (KD) [10] is a widely used framework to transfer knowledge between models. KD benefits real-world applications by distilling knowledge from large teacher models to small student models, such as deploying language models on edge devices [23]. However, KD usually requires a large amount of data (see Figure 1 for an example), which cannot be met in many real-world scenarios such as biomedical domain [9,1,27,13].\nAlthough these popular data augmentation methods can be used directly in knowledge distillation, they are initially designed for vanilla fine-tuning or few-shot settings. For example, SR and EDA tend to only change a small proportion of tokens in the text to avoid severe semantic shifts and preserve task-specific labels, [36] uses T5 [19] to change a few tokens of the original input to generate a new input that flips the label. In knowledge distillation, we are not required to know the labels of augmented data since the teacher model can provide the label distribution. Therefore, these data augmentation methods may be sub-optimal for knowledge distillation. In this paper, we hope to shed light on the following question:\nWhat data augmentation is the best fit for knowledge distillation?\nThis question contains two sub-questions: (1) which data augmentation paradigm does KD prefer? More specifically, should we use higher-quality but lower-quantity synthesized data [34] generated by autoregressive models or lower-quality but higher-quantity augmented data generated by classic data augmentation methods [12,30,32]? We call the first type \"synthesized data\" and the second type \"augmented data\" to simplify the notion. ( 2) how should we make the data augmentation paradigm perform well in KD? The first sub-question is about selecting a paradigm; the second is about adapting a paradigm initially designed for non-KD settings to KD. For the first sub-question, our experiments (Section 3) show that quantity is more important than quality from the perspective of KD. This observation encourages us to use augmented data that is cheap and can be deployed in an online manner (so that we can obtain a large amount of augmented data without the concern of storage). This property is different from fine-tuning, which has been shown that more data will not result in more performance gain [32]. We answer the second sub-question (Section 4) from the observation that knowledge distillation is more tolerant to semantic shifts and can provide the label probability distribution for arbitrary inputs. Previous data augmentation methods find that changing a small proportion of tokens gives the best empirical results (e.g., 10% tokens for fine-tuning [32]), which may not be optimal for KD. Intuitively, changing more tokens can cause more semantic shifts, produce more diverse data, thus fully utilize teacher models. Since the proportion of changed tokens reflects the degree of semantic shift, we use \"semantic shift degree\" or \"degree\" to denote the proportion of changed tokens to simplify the notion. We theoretically (Section 4) and empirically (Section 5) find that a smaller data size prefers a more significant semantic shift degree and show that 30% degree (i.e., 30% tokens are changed) produces the best KD results in most cases. 
At last, we show how our findings benefit the real-world biomedical application (Section 6).\nOur findings can be summarized as follows:\n• In KD, augmented data is preferable to synthesized data due to the low cost. The quantity of augmented data is important to ensure well-distilled student models. Therefore, KD encourages cheap augmented data and online augmentation. • Augmented data should have a larger semantic shift degree to fully utilize teacher models and achieve a better KD performance. Generally speaking, the 30% degree is a \"sweet spot\" in KD (our findings), and 10% degree is a \"sweet spot\" in fine-tuning [32]. • Our experiments and theoretical analysis show that smaller datasets prefer a larger degree.\n• Though KD prefers a larger semantic shift degree, an extremely large degree is not a good option since it will cause an out-of-distribution (OOD) problem and thus hurt the KD performance.\nOur work sheds light on the difference between KD and fine-tuning from the perspective of data augmentation and gives guidance on how to use data augmentation for KD. By adapting current data augmentation methods to the KD setting, we show the potential power of data augmentation in knowledge distillation. Therefore, we encourage the community to explore more KD-specific data augmentation methods." }, { "figure_ref": [ "fig_4", "fig_2" ], "heading": "Background", "publication_ref": [ "b33", "b4", "b19", "b36", "b5", "b3", "b32", "b32", "b31" ], "table_ref": [ "tab_0" ], "text": "Knowledge distillation aims to distill knowledge between models. Formally speaking, given a dataset D = {(x, y)}, a fine-tuned teacher model f t : X → Y, and student model to be trained f g : X → Y, knowledge distillation train the student model with a loss\nL KD = D α L f t (f g (x), y) + (1 -α) Distance(f g (x), f t (x))\nwhere L f t denotes the fine-tuning loss (e.g., cross-entropy) and Distance measures the distance between f g (x) and f t (x) (e.g., KL divergence and mean-square-error). α ∈ [0, 1] is the weight between two losses. Empirically, smaller α leads to better KD performance. Augmented data (or synthesized data) D = {(x , y )} could also be used in KD\nL KD = D β L f t (f g (x ), y ) + (1 -β) Distance(f g (x ), f t (x )) L = γ L KD + (1 -γ) L KD β, γ ∈ [0, 1]\ncontrol weights among losses. Classic data augmentation methods, such as synonym replacement, only generate x and assume y = y. Autoregressive models such as GPT-3 can generate x as well as y .\n3 Which Data Augmentation Paradigm Does KD Prefer?\nData augmentation methods could be divided into two paradigms based on their quality and efficiency.\n(1) Augmented data (i.e., lower-quality but higher-quantity): SR, kNN, and EDA are all in this paradigm. The main feature of these types of data is that they are very cheap to be produced. In practice, we can easily obtain a large amount of such data. (2) synthesized data (i.e., higher-quality but lower-quantity): Large language models could help generate high-quality synthesized data but with a high computational expense.\nWe conduct an experiment to compare their performance in KD. To get stable results and diminish the effect of variance, we choose MNLI (392.7k) [33] as our benchmark. We fine-tune a BERT (Large) (336M) [5] on the full MNLI dataset as the teacher model. DistilBERT (Base) (66M) [20] is selected as the student model. 
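The combined objective defined above (an α-weighted loss on the original KD data, a β-weighted loss on augmented data, mixed by γ) can be written as a single training step. The sketch below is a minimal PyTorch-style illustration, not the authors' released code: the HuggingFace-like `.logits` interface, the batch layout, and the MSE choice for Distance are assumptions, and the default weights simply mirror the settings reported later (α = 0.1, β = 0.0, γ = 0.5).

```python
# Minimal sketch of KD with augmented data; interfaces and defaults are illustrative assumptions.
import torch
import torch.nn.functional as F

def kd_term(student_logits, teacher_logits, labels, weight):
    """weight * L_ft(student, labels) + (1 - weight) * Distance(student, teacher)."""
    ce = F.cross_entropy(student_logits, labels)        # fine-tuning loss L_ft
    dist = F.mse_loss(student_logits, teacher_logits)   # Distance(f_g(x), f_t(x))
    return weight * ce + (1.0 - weight) * dist

def kd_step(student, teacher, batch, aug_batch, alpha=0.1, beta=0.0, gamma=0.5):
    """L = gamma * L_KD(original data) + (1 - gamma) * L'_KD(augmented data)."""
    with torch.no_grad():                               # the teacher only provides soft targets
        t_orig = teacher(**batch["inputs"]).logits      # batch["inputs"]: input_ids etc. (assumed layout)
        t_aug = teacher(**aug_batch["inputs"]).logits
    s_orig = student(**batch["inputs"]).logits
    s_aug = student(**aug_batch["inputs"]).logits
    l_orig = kd_term(s_orig, t_orig, batch["labels"], alpha)
    # kNN-style augmentation copies y' = y, which may be wrong; beta = 0 drops that label term
    l_aug = kd_term(s_aug, t_aug, aug_batch.get("labels", batch["labels"]), beta)
    return gamma * l_orig + (1.0 - gamma) * l_aug
```

With β = 0 and MSE as Distance, the augmented term reduces to pure logit matching against the teacher, which is what allows the augmented inputs to drift further from the original text without corrupting the training signal.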
To highlight the effect of data augmentation, we only use 1% of the MNLI training data (i.e., ∼ 3900 input-output pairs) and evaluate the KD on the MNLI-M validation set. All experiments in the paper are running on 1x V100 16GB unless stated otherwise.\nWe choose kNN as a representative of augmented data. kNN generates augmented data by randomly replacing r% tokens to one of its top-k neighbors in the embedding space. We use the default hyperparameters used in previous fine-tuning work [36], i.e., we set k = 15 and r = 10. Since kNN is cheap, there are two ways of augmentation. Offline augmentation will first augment a fixed amount of data (4x and 8x KD data in our experiments) and mix augmented data with KD data. Online augmentation will augment the same amount of data as the batch size whenever we sample a batch from KD data. Therefore, online augmentation could augment more data with more distillation steps. Besides, we do not need to store augmented data with online augmentation. In our experiments, we set the batch size to 64 and distillation steps to 50,000, therefore online augmenting total 64 * 50, 000/3900 = 820.5x KD data.\nWe use back translation [6] to generate synthesized data. To ensure the data is of high quality, we choose NLLB 1.3B [4], one of the largest machine translation models that support translation between 200 languages. To ensure the diversity of data, we use eight target languages1 in total. Each target language generates the same amount of synthesized data as KD data. Therefore, the synthesized data is 8x times the KD data. We do not choose ChatGPT or GPT-3 for due to the slow response of APIs, which will take an unbearable time to generate enough data. However, it is possible to estimate the upper-bound performance of ChatGPT by assuming the synthesized data is the ground-truth data in the original dataset. The time cost of ChatGPT can be estimated by the API call time cost.\nFigure 3 and Table 1 show the KD performance with different data augmentation methods and the time costs. We can conclude that (1) More data improves KD performance more. The kNN and back translation help KD perform better with more data, which is different from fine-tuning that too much data is not helpful [32]. Figure 2 shows the fine-tuning results under the same setting as KD. We observe that more data will not lead to performance gain (BT 4x vs. BT 8x, kNN 4x vs. kNN 8x vs. kNN online) in fine-tuning, but high-quality data helps performance improvement, which differs from KD. Moreover, we can see that kNN and BT do not help improve performance and even have a performance drop with increased iterations in fine-tuning. This may be because the assumption that y = y may not hold and introduces noises. In the smaller dataset (1% MNLI training data only has ∼ 3900 inputs), the noise could affect the performance more significantly, causing a performance drop. ( 2) KD prefers augmented data more than synthesized data. Synthesized data performs better than augmented data when they are of the same amount (4x, 8x); augmented data outperforms synthesized data when their computation costs are the same. The online kNN augmentation takes only 103 mins (and less GPU usage) but outperforms back translation 8x (112 mins) and ChatGPT 1x (260 mins). Besides, we could also find that online augmentation has not converged yet (unlike offline augmentation, which converges quickly and stays steady), meaning the online augmentation's performance could still be improved with increased iterations. 
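The kNN augmentation used above (replace a fraction r of the tokens with one of their top-k neighbors in an embedding space, generated fresh for every sampled batch) is cheap enough to sketch in a few lines. The defaults (k = 15, r = 10%) follow the description above, but the neighbor table, tokenizer-level representation, and the code itself are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of online kNN token-replacement augmentation; implementation details are assumptions.
import random

def knn_augment(token_ids, neighbor_table, replace_ratio=0.1, k=15):
    """Replace `replace_ratio` of the tokens with a random one of their k nearest neighbors."""
    out = list(token_ids)
    n_replace = int(round(len(out) * replace_ratio))
    for pos in random.sample(range(len(out)), min(n_replace, len(out))):
        candidates = neighbor_table.get(out[pos], [])[:k]   # precomputed from token embeddings
        if candidates:
            out[pos] = random.choice(candidates)
    return out

def online_augmented_batches(kd_batches, neighbor_table, replace_ratio=0.1):
    """Yield (original, augmented) pairs on the fly: nothing is stored on disk, and the total
    amount of augmented data grows with the number of distillation steps."""
    for batch in kd_batches:
        yield batch, [knn_augment(seq, neighbor_table, replace_ratio) for seq in batch]
```

The only difference between the offline 4x/8x settings and the online setting is whether this generator is run once ahead of time or at every distillation step.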
Based on the above observations, we suggest using augmented data in an online manner for KD. Although augmented data fits KD better than synthesized data, they may still be sub-optimal. Finetuning requires manual labels. Therefore, augmented data cannot be too far away from the original data (e.g., 10% semantic shift degree is generally the best option in fine-tuning [32]) to preserve taskspecific labels. In contrast, label distributions can be provided by teachers in knowledge distillation. Intuitively, a higher semantic shift degree will produce more diverse data and fully utilize teachers' abilities. It is also worth noting that an extremely large semantic shift degree is not an ideal solution because of out-of-distribution (OOD) problems. An extreme case is that we could randomly sample several tokens from a vocabulary and formulate them into natural language sentences to do KD, leading to lousy distillation performance [31].\nTo sum up, we could expect that an appropriate large semantic shift degree in KD will lead to better distillation performance, whereas a smaller or extremely large degree will lead to worse performance (sub-optimal and OOD). This intuition could be understood theoretically." }, { "figure_ref": [], "heading": "Theory Understanding", "publication_ref": [], "table_ref": [], "text": "Setting In this work, we study a formal framework for measuring the effect of data augmentation in KD. We assume a ground-truth data distribution D = {(x, y)} with x ∈ R d ∼ N (0, I) where d ≥ 2, and y ∼ B(f (x)). We also assume a Lipschitz bound of B on P (x). Training data D train and testing data D test are n and m i.i.d. sample from D, respectively. We also assume access to a teacher model g(x), so that ∀x, |g(x) -f (x)| < t . The teacher model does not have information on input density p(x). Assume for each training sample (x, y), we augment its information to span a local Gaussian distribution N (x, τ I). We then conduct an augmentation to the dataset: we i.i.d. sample m datum points from the following distribution: we have with 1 -δ probability, for any h ∈ H, T (h) -S (h) < + λ where λ = T (h ) + T (h ).\nP aug (x ) = 1 n 1 (2π) d τ x∈Dtrain e -x -x i 2 2 2τ , y = g(x ),\nThe proof can be found in Appendix A. Some takeaways for this theorem:\n• τ can be regarded as the semantic shift degree. Higher τ means a larger degree (i.e., more tokens to be changed).\n• The best choice of τ is given by n (data sizes). Smaller dataset prefers higher τ .\n• Lower τ loses the error bound provided by the theorem; higher τ introduces the OOD problem and breaks the theorem by failing to meet the condition ∀x, |g(x) -f (x)| < t , leading to higher errors.\nIn practice, the theorem encourages us to augment data with a higher semantic shift degree in KD (but not extremely large), and the less data we have, the higher degree we should use." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b0", "b1" ], "table_ref": [], "text": "We design experiments to (1) show the performance of KD with different semantic shift degrees, (2) show how data sizes affect the choice of semantic shift degrees, (3) and give an empirical suggestion for choosing the degree. Similar to Section 3, we use kNN as our data augmentation baseline (k = 15, online augmentation), BERT (Large) fine-tuned on the whole dataset as the teacher, and DistilBERT (Base) as the student.\nWe use the same random seed for all experiments. 
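Written out from the surrounding definitions, the augmented sampling distribution and the resulting guarantee in the Theory Understanding passage above read as follows. This is a best-effort LaTeX reconstruction: the Gaussian-mixture form follows directly from "span a local Gaussian distribution N(x, τI)", while the error subscripts and the exact form of λ are inferred rather than copied.

```latex
% Augmented inputs are drawn from a mixture of local Gaussians centered at the training points
% and labeled by the teacher g:
P_{\mathrm{aug}}(x') = \frac{1}{n} \sum_{x \in D_{\mathrm{train}}} \mathcal{N}\!\left(x';\, x,\ \tau I\right),
\qquad y' = g(x').
% Guarantee: with probability at least 1 - \delta, for every hypothesis h \in H,
\epsilon_{T}(h) - \epsilon_{S'}(h) < \epsilon + \lambda,
% where \lambda is the combined error of the best hypothesis on the target and augmented distributions.
```

Here τ plays the role of the semantic shift degree; as the takeaways above note, the best τ depends on n, with smaller datasets preferring a larger τ, while an overly large τ breaks the |g(x) - f(x)| < ε_t condition and the bound along with it.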
Mean-square-error is used as the Distance function.\nWe set α = 0.1, β = 0.0, γ = 0.5 in our experiments. We do not use y (β = 0.0) since kNN assumes y = y, which may be false and introduce noises. We run experiments with 0.01 and 0.001 learning rates and report the best result. The Batch size is 64 and the total distillation steps is 50,000." }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "KD prefers a larger semantic shift degree", "publication_ref": [], "table_ref": [ "tab_2", "tab_1", "tab_1" ], "text": "To show how the semantic shift degrees of kNN affect KD performance, we set the degree from 10% to 80% and data size from 1% from 100%. Results are shown in Table . 2. We can observe that KD prefers larger degrees. 30% ∼ 50% degrees are generally the best fit for KD, whereas the 10% degree is always the best fit for fine-tuning when using data augmentation (Table 3). Moreover, we can see that larger degrees (30% ∼ 50% ) significantly improve the KD performance compared with the smaller degrees (10%). On the MNLI dataset, the performance gain differs by 2% when the data size is small (1% ∼ 2%). On the QNLI dataset, the performance gain differs from 3% to 4%, showing the consistent benefit of larger degrees. Another observation is that the model performance drops dramatically when degrees become extremely large, which is expected since the existence of the OOD problem. We can also conclude that KD prefers larger degrees when the dataset is smaller, which aligns with our theory. In the MNLI dataset, 40% degree is the best fit for the 1% dataset, 30% degree is the best fit for the 2% to 10% dataset, and 1% degree achieves the best performance on the full dataset. QNLI also has similar observations. Figure 4 visualizes the KD performance on MNLI with 1% KD data (i.e., the first row of Table 2). The performance curves form an inverted \"U\" pattern. Smaller degrees (blue) and extremely large degrees (red) are not the best options for KD. Another conclusion is that our findings in Table 2 are not because of the coincident variance since the performance gain is consistent and stable in Figure 4.\nTo show our claim is general, we also apply our methods to other datasets. Results are shown in Table . \n4. Similar to previous findings, we can conclude that KD prefers larger degrees with smaller datasets. Smaller datasets, such as CoLA (8.5k) and MRPC (3.7k), prefer 50% to 70% degrees. Large datasets, such as QQP (363.8k), prefer the 10% degree, which is the same as MNLI (392.7k). Interestingly, the RTE dataset is also small-scale (2.5k) but prefers the 30% degree, which is smaller than the CoLA (8.5k). This may be because of the task difference. Summarizing all findings, we empirically suggest the 30% degree for datasets with a moderate size (10k to 100k), 50% degree for datasets with a small size (less than 10k) and 10% degree for datasets with a large size (more than 100k). Triangle markers denote the peak of each curve." }, { "figure_ref": [ "fig_8" ], "heading": "Analysis", "publication_ref": [ "b15", "b18", "b32" ], "table_ref": [ "tab_4" ], "text": "Different teacher-student models To show our findings also hold for other teacher-student model pairs. We conduct experiments with three more different teacher-student model pairs: We change the model architecture from BERT to the T5 family. EncT5 [16] is a variant of T5 [19] that removes all decoder layers of T5 and only uses T5 encoders and a pooling layer to do text classification tasks. 
EncT5 has almost the same performance as T5 but is more computationally friendly (half of the parameters); therefore, we use EncT5 in our experiments. EncT5 (Large) has 354M parameters, and EncT5 (Small) only has 37M parameters2 . Following previous experiment settings, we use kNN (k = 15) to augment data and the CoLA dataset for the experiment. Figure 5 shows their KD performance gain (compared with vanilla KD) with different semantic shift degrees. We conclude from the figure that all teacher-student models prefer larger semantic shift degrees (more than 10%, which is the best option for fine-tuning). However, different models may prefer different degrees. For example, the 30% degree will reach the highest performance on BERT (Large) -BERT (Small), and the 50% degree performs best on EncT5 (Large) -EncT5 (Small). Besides, extremely large degrees such as 80% will cause a performance drop in all models.\nThe findings are the same as Section 5.1.\nDifferent data augmentation methods. Our conclusions are not limited to the kNN. We use two more data augmentation methods to support this claim: (1) Easy Data Augmentation (EDA) [32]: EDA is a combination of four lexical-level augmentation methods, i.e., synonym replacement, random insertion, random swap, and random deletion. (2) Random Replacement: Replace tokens with other random tokens in the vocabulary. We use 1% MNLI training data for distillation, the MNLI-M validation set for evaluation, accuracy as the metric, and report results in Table 5. We can see that EDA prefers 30% ∼ 40% degrees, which is similar to the kNN. Random replacement, however, prefers the 20% degree. Random replacement replaces tokens with random tokens, whereas kNN replaces tokens with similar tokens. Therefore, the random replacement has more semantic shifts than the kNN, even if the proportion of changed tokens is the same, making the best degree shifts to a smaller one. Besides, extremely large degrees harm performances, as previously observed. Table 6: Showcase of the augmented data (kNN) and synthesized data (back translation). Examples are selected from the MNLI dataset. Highlight tokens are changed tokens compared with the original input. kNN with a larger semantic shift degree changes more tokens and semantics.\nOriginal Input premise: How do you know? All this is their information again. hypothesis: This information belongs to them.\nkNN (k=15). 10% Semantic Shift Degree. kNN (k=15). 30% Semantic Shift Degree.\npremise: where do you know? all this is our information again. premise: where did you know? and -is their knowledge again of hypothesis: all information belongs to it. hypothesis: this information belongs to her. kNN (k=15). 70% Semantic Shift Degree.\nSynthesized Data (Back Translation)\npremise: 590 go themango? all these is a 406 still -premise: You got it?\nhypothesis: which 296 belongs with him. hypothesis: This is their information.\nShowcase of augmented data. Table 6 shows the augmented data and synthesized data generated by kNN and back translation. We can see that the 30% semantic shift degree destroys the semantics to some extent but still retains the main semantics. The 70% degree, however, will make the input meaningless, causing the OOD problem and performance drop. Synthesized data has a much better quality. However, they are too expensive to be produced." }, { "figure_ref": [], "heading": "Applications", "publication_ref": [], "table_ref": [], "text": "Table 7: Statistical data of some widely used biomedical datasets." 
}, { "figure_ref": [], "heading": "Datasets Training Test", "publication_ref": [ "b2", "b17", "b7", "b8", "b12", "b0", "b0", "b13", "b20" ], "table_ref": [ "tab_5" ], "text": "AIMed [3] 4938 549 BioInfer [18] 8544 950 LLL [8] 300 34 DDI [9] 2937 979 ChemProt [13] 4154 3458 GAD [1] 4796 534\nOne application of our findings is to better distill biomedical knowledge from large language models. Biomedical datasets usually require human experts to annotate, making them much smaller than common sense datasets. Table 7 shows some widely used biomedical datasets that aim to extract protein-protein interactions, drug-drug interactions, disease-gene relationships, or chemical-protein relations. Most of them only contain thousands or hundreds of data. Therefore, distilling biomedical knowledge from large-scale models to smallscale models is challenging. To show that our findings help distill biomedical knowledge, we choose GAD [1] to conduct experiments. GAD requires models to identify whether one disease has a relation to a specific gene from a given text. To avoid models inferring the answer from the name of the disease and gene rather than the given text, target entities have been masked with \"@GENE$\" and \"@DISEASE$\". For example, the given text \"this study proposes that A/A genotype at position -607 in @GENE$ gene can be used as a new genetic maker in Thai population for predicting @DISEASE$ development.\" has a label 1 (indicating there is a relation between the disease and gene). We use the same training/test splits as previous works [14,21]. We select BERT (Large) as the teacher model and BERT (Small) as the student model. kNN (k = 15) is used to augment data. Table 8 shows the KD results. We can conclude that augmented data benefits KD, and KD with larger degrees (20% ∼ 40%) enables the student model to perform similarly or better than the teacher model (80.67% accuracy), showing the effectiveness of our findings. Moreover, extremely larger degrees (70% ∼ 80%) will harm the KD and make the student model perform even worse than vanilla fine-tuning (78.55% accuracy). " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b9", "b9", "b14", "b24", "b4", "b27", "b21", "b23", "b10", "b11", "b30", "b32", "b35", "b1", "b16", "b36", "b18", "b34", "b1", "b34", "b36", "b29" ], "table_ref": [], "text": "Knowledge Distillation Knowledge distillation is first proposed by [10]. KD is often used to distill knowledge from large-scale to small-scale models to boost the small-scale models' performance. The main idea is to let student models learn from teacher models' output label probability distribution by minimizing KL divergence [10] or mean square errors [15]. In principle, KD is model-agnostic and could be applied between arbitrary models. For example, [25] distills knowledge from BERT [5] to BiLSTM. Model-specific KD attracts more attention with the rise of transformers [28]. [22] conducts KD not only on the output layer but also on the hidden layers. [24] introduces attention distillation in transformers. [11] combines attention distillation and hidden layer distillation. In our work, we use model-agnostic KD to make our conclusions more general.\nData Augmentation and Synthesized Data Classic data augmentation methods are usually cheap and focus on the token level. Synonym replacement [12] replaces tokens with synonyms. K-nearestneighbors [30] finds similar tokens from embeddings and then conducts replacement. 
Easy data augmentation [32] combines synonym replacement, random swap, random insertion, and random deletion. TreeMix [35] uses a constituency parser to mix two instances and generate a new one.\nOn the contrary, synthesized data is usually generated by autoregressive models [2,17], making it expensive but more diversified. [36] uses T5 [19] to modify the original input and generate data that flips the label. [34] uses in-context learning to guide GPT-3 [2] to generate more task-specific data. Synthesized data is usually preferred in the few-shot learning setting due to the high cost [34,36].\nData Augmentation in Knolwegde Distillation Although most augmented data and synthesized data are initially designed for fine-tuning settings, they can directly be applied in KD. However, they may be sub-optimal for KD. [29] finds that good KD augmented data should have a low entropy of the teacher model's output. They find dropping augmented data with high entropy could benefit KD.\nOur main difference is that we focus on adapting a data augmentation paradigm initially designed for fine-tuning settings to KD, whereas they focus on selecting a subset of augmented data generated by a specific data augmentation paradigm that can benefit KD most." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "We investigate what data augmentation methods KD prefers: (1) Cheap augmented data with online augmentation to ensure sufficient quantity, and (2) larger semantic shift degrees. Moreover, we show smaller datasets prefer larger semantic shift degrees in KD. These findings give guidance on how to use data augmentation for KD. Our work shows that KD has different preferences for data augmentation compared with fine-tuning, and a proper selection could improve KD results consistently. Therefore, we encourage the community to explore more KD-specific data augmentation methods. In the future, we will extend our findings from text classification to more tasks such as vision tasks, graph tasks, and natural language generation tasks." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We use the proportion of changed tokens to reflect the semantic shift degree. Though intuitive, it is not comprehensive enough. The discussion of Random Replacement in Section 5.2 could reveal this problem. Therefore, a more comprehensive way to represent semantic shifts is needed." }, { "figure_ref": [], "heading": "A Proof", "publication_ref": [], "table_ref": [], "text": "Theorem 1. Assume a data distribution D = {(x, y)} with x ∈ R d ∼ N (0, I) where d ≥ 2, and y ∼ B(f (x)). We also assume a Lipschitz bound of B on P (x). Training data D train and testing data D test are n and m i.i.d. sample from D, respectively. We also assume access to a teacher model g(x), so that ∀x, |g(x) -f (x)| < t . The teacher model is difference from training distribution in that it does not have information of input density p(x).\nWe assume for each training sample (x, y) individually, we augment its information to span a local gaussian distribution N (x, τ I). We then i.i.d. sample m datum points from the following distribution: we have with 1 -δ probability, for any h ∈ H:\nT (h) -S (h) < + λ,(1)\nwhere λ = T (h ) + T (h )." }, { "figure_ref": [], "heading": "Proof of Theorem 1", "publication_ref": [], "table_ref": [], "text": "Proof. 
We start from bounding d H∆H (D train , D aug ) ≤ 2 P train (x) -P aug (x) 1 .\nWe set a ball occupying 1 -4 of P train mass, which has radius For each grid center x, by applying Hoeffding's inequality, with 1 -δ 2ng probability:\n1 n\nx ∈Dtrain e -x-x 2 \nWe now prove that the second term approaches ground truth P (x) when τ → 0. " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "We thank Yangsibo Huang for the help with the experiments. This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004 and U.S. DARPA AIDA Program No. FA8750-18-2-0014. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." } ]
Knowledge distillation (KD) requires sufficient data to transfer knowledge from large-scale teacher models to small-scale student models. Therefore, data augmentation has been widely used to mitigate the shortage of data under specific scenarios. Classic data augmentation techniques, such as synonym replacement and k-nearest-neighbors, are initially designed for fine-tuning. To avoid severe semantic shifts and preserve task-specific labels, those methods prefer to change only a small proportion of tokens (e.g., changing 10% tokens is generally the best option for fine-tuning). However, such data augmentation methods are sub-optimal for knowledge distillation since the teacher model could provide label distributions and is more tolerant to semantic shifts. We first observe that KD prefers as much data as possible, which is different from fine-tuning that too much data will not gain more performance. Since changing more tokens leads to more semantic shifts, we use the proportion of changed tokens to reflect semantic shift degrees. Then we find that KD prefers augmented data with a larger semantic shift degree (e.g., changing 30% tokens is generally the best option for KD) than fine-tuning (changing 10% tokens). Besides, our findings show that smaller datasets prefer larger degrees until the out-of-distribution problem occurs (e.g., datasets with less than 10k inputs may prefer the 50% degree, and datasets with more than 100k inputs may prefer the 10% degree). Our work sheds light on the preference difference in data augmentation between fine-tuning and knowledge distillation and encourages the community to explore KD-specific data augmentation methods. Recently various data augmentation methods have been proposed, including synonym replacement (SR) [12], k-Nearest-Neighbors (kNN) [30], and easy data augmentation (EDA for short), which combines four lexical level augmentation methods: synonym replacement, random swap, random insertion, and random deletion [32]. These approaches have been proven useful to enlarge the data and help knowledge distillation [7,31]. These methods introduce little computational overhead, thus, are suitable for augmenting a large amount of data in a short time. Autoregressive models (e.g., T5 [19], , ChatGPT [17]) can generate high-quality synthesized data [34,36] but are much more computationally expensive than classic data augmentation methods (e.g., synonym replacement), which limits the size of synthesized data. Consequently, synthesized data generated by autoregressive models are preferred in the few-shot setting [34,36] rather than the full-data setting.
Understanding the Effect of Data Augmentation on Knowledge Distillation
[ { "figure_caption": "Figure 1 :1Figure 1: KD from a teacher model BERT (Large) to a student model DistilBERT (Base) on MNLI dataset. The teacher model is finetuned on the whole dataset. Results show that more distillation data will benefit distillation performance.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "4x KD + BT 8x KD + ChatGPT 1x (estimate) Vanilla KD KD + KNN 4x KD + KNN 8x KD + KNN online", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Fine-tuning performance with different data augmentation methods. BT denotes back translation, and Nx indicates that the augmented data is N times the number of original data.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4x KD + BT 8x KD + ChatGPT 1x (estimate) Vanilla KD KD + KNN 4x KD + KNN 8x KD + KNN online", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: KD performance with different data augmentation methods. BT denotes back translation, and Nx indicates that the augmented data is N times the number of original data.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "4Adapt Augmented Data to KD 4.1 Intuition", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Theorem 1 . 2 (d 2122and get the augmented dataset D aug . Then we formalize the above intuition and propose the following theorem: Let x u = 2d ln(2πd) + 12 ln(2) -4 ln( ), as long as + 5d + 6) ln 2 + (d 2 + d) ln x u + d ln d -d ln -d ln δ -d 2 ln τ", "figure_data": "", "figure_id": "fig_6", "figure_label": "122", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: KD performances with different proportions of changed tokens. 1% MNLI training set serves as KD data, and the MNLI-M validation set is used for evaluation.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: KD performance with different models and semantic shift degrees on the CoLA dataset. Triangle markers denote the peak of each curve.", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "(1) BERT (Large) -BERT (Small): We keep the teacher model unchanged (336M) but use a smaller student model (29.1M) [26]. (2) DistilBERT (Base) -DistilBERT (Base): We keep the student model unchanged but use a smaller teacher model (itself), which is also called self-distillation. (3) EncT5 (Large) -EncT5 (Small):", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "y 2 (d 222= g(x ), and get the augmented dataset D aug . Let x u = 2d ln(2πd) + 12 ln(2) -4 ln( ), as long as + 5d + 6) ln 2 + (d 2 + d) ln x u + d ln d -d ln -d ln δ -d 2 ln τ", "figure_data": "", "figure_id": "fig_10", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "≤2d ln(2πd) + 12 ln(2) -4 ln( ) := x u Outside the ball, with 1 -δ 2 probability, no more than 4 + 2 n ln 2 δ proportaion of all training data appears. Each cube of size r has cumulative error from its center of at most 1 2 (B + B )V r, we require a grid size of at most size r = 2(B + B )(2x u ) d to control the overall cumulative error below 4 . 
This grid size results number of grids n g ≥ 2 x u r d = 4(B + B )(2x u ) d+1 d", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "2 2τ-2E x ∼D e -x-x", "figure_data": "", "figure_id": "fig_12", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "E x ∼D e -x-x 2 2 (d 2222d τ E x ∼D e -x-x + 5d + 6) ln 2 + (d 2 + d) ln x u + d ln d -d ln -d ln δ -d 2 ln τ we have P -P 1 < and ( Ts )(h) < + λ", "figure_data": "", "figure_id": "fig_13", "figure_label": "222", "figure_type": "figure" }, { "figure_caption": "Time cost of different data augmentation methods. ChatGPT time cost is an estimation. In our experiments, we found the speed of API calls is 1300 tokens/40s. We assume each instance has 128 tokens.", "figure_data": "MethodskNN (k=15)Back Translation ChatGPTHardware1x V100 16GB1x V100 16GBAPI# Augmented Data 4x/8x/820.5x(online)4x/8x1xTime Cost (min)0.5/1/10356/112260 *", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of KD performance (Accuracy %) with different KD data sizes and semantic shift degrees of kNN.", "figure_data": "KD Data Sizes0% (Vanilla KD)10%Semantic Shift Degree 20% 30% 40%50%60% 70% 80%MNLI (392.7k) (Evaluated on the MNLI-M validation set)1%68.8976.39 77.31 77.17 78.06 77.47 77.02 76.23 75.312%72.5877.81 78.87 79.24 79.0078.84 78.22 77.57 76.8810%78.8880.69 81.35 81.53 81.4681.280.96 80.55 80.21100%83.6584.14 84.05 84.0883.9683.85 83.63 83.8 83.45QNLI (104.7k)3%77.8681.47 83.44 84.3284.67 85.58 85.01 84.24 83.86%81.284.33 85.87 86.0486.67 87.19 86.19 85.67 84.4920%82.3185.51 87.45 87.69 88.12 87.68 87.79 86.87 86.31100%88.0789.23 89.82 90.22 90.39 90.07 90.33 90.28 89.84", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The student model's fine-tuning performance with different data sizes and semantic shift degrees of kNN.", "figure_data": "FT Data SizesSemantic Shift Degree 0% (Vanilla FT) 10% 30% 50% 70%MNLI (392.7k) (Evaluated on the MNLI-M validation set)1%66.6965.86 65.78 65.84 65.792%69.9168.91 68.61 68.47 68.6710%75.2275.20 75.19 75.19 75.12100%81.9282.27 82.08 82.09 81.74", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "More dataset results of KD with different semantic shift degrees.", "figure_data": "DatasetSemantic Shift Degree 0% (Vanilla KD) 10% 30%50%70%CoLA (8.5k) (Matthew)53.9557.97 59.05 59.78 59.13MRPC (3.7k) (Acc)85.1385.52 86.6186.44 87.03RTE (2.5k) (Acc)67.267.16 67.54 67.2665.73QQP (363.8k) (Acc)90.5590.8 90.7590.5990.69", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of KD performance with data augmentation methods and semantic shift degrees.", "figure_data": "Methods0% (Vanilla KD) 10%Semantic Shift Degree 20% 30% 40%50% 60% 70% 80%MNLI (392.7k) (Evaluated on the MNLI-M validation set)kNN76.39 77.3177.17 78.06 77.47 77.02 76.23 75.31EDA68.8975.81 76.83 77.48 77.22 76.88 76.88 76.15 75.8Random Replacement75.92 76.94 76.8675.98 74.91 73.43 72.27 71.6", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "KD performance (accuracy) on the GAD dataset. The teacher model has a 80.67% accuracy after fine-tuning. Fine-tuning the student model can only have 78.55% accuracy. 
Vanilla KD improves student model's performance, and KD with larger degrees could even make the student model perform similarly (20% and 40%) to the teacher model or even better (30%) than the teacher model.", "figure_data": "MethodSemantic Shift Degree 0% (Vanilla KD) 10% 20% 30% 40% 50% 60% 70% 80%kNN79.5979.45 80.08 81.11 80.78 79.67 79.74 79.24 78.08", "figure_id": "tab_5", "figure_label": "8", "figure_type": "table" } ]
Ziqi Wang; Chi Han; Wenxuan Bao; Heng Ji
[ { "authors": "Àlex Bravo; Janet Piñero; Núria Queralt-Rosinach; Michael Rautschka; Laura I Furlong", "journal": "BMC bioinformatics", "ref_id": "b0", "title": "Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research", "year": "2015" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Razvan Bunescu; Ruifang Ge; J Rohit; Edward M Kate; Raymond J Marcotte; Arun K Mooney; Yuk Wah Ramani; Wong", "journal": "Artificial intelligence in medicine", "ref_id": "b2", "title": "Comparative experiments on learning information extractors for proteins and their interactions", "year": "2005" }, { "authors": "James Marta R Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Maillard", "journal": "", "ref_id": "b3", "title": "No language left behind: Scaling human-centered machine translation", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019-06" }, { "authors": "Sergey Edunov; Myle Ott; Michael Auli; David Grangier", "journal": "", "ref_id": "b5", "title": "Understanding back-translation at scale", "year": "2018" }, { "authors": "Lingyun Feng; Minghui Qiu; Yaliang Li; Hai-Tao Zheng; Ying Shen", "journal": "", "ref_id": "b6", "title": "Learning to augment for data-scarce domain bert knowledge distillation", "year": "2021" }, { "authors": "Jörg Hakenberg; Conrad Plake; Ulf Leser; Harald Kirsch; Dietrich Rebholz-Schuhmann", "journal": "", "ref_id": "b7", "title": "Lll'05 challenge: Genic interaction extraction-identification of language patterns based on alignment and finite state automata", "year": "2005" }, { "authors": "María Herrero-Zazo; Isabel Segura-Bedmar; Paloma Martínez; Thierry Declerck", "journal": "Journal of biomedical informatics", "ref_id": "b8", "title": "The ddi corpus: An annotated corpus with pharmacological substances and drug-drug interactions", "year": "2013" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b9", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Xiaoqi Jiao; Yichun Yin; Lifeng Shang; Xin Jiang; Xiao Chen; Linlin Li; Fang Wang; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "TinyBERT: Distilling BERT for natural language understanding", "year": "2020-11" }, { "authors": "Oleksandr Kolomiyets; Steven Bethard; Marie-Francine Moens", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Model-portability experiments for textual temporal analysis", "year": "2011-06" }, { "authors": "Martin Krallinger; Obdulia Rabal; A Saber; Martın Akhondi; Jesús Pérez Pérez; Gael Santamaría; Georgios Pérez Rodríguez; Ander Tsatsaronis; José Intxaurrondo; Umesh Antonio López; Nandal", "journal": "", "ref_id": "b12", "title": "Overview of the biocreative vi chemical-protein interaction track", "year": "2017" }, { "authors": "Jinhyuk Lee; Wonjin Yoon; Sungdong Kim; Donghyeon Kim; Sunkyu Kim; Chan 
Ho; So ; Jaewoo Kang", "journal": "Bioinformatics", "ref_id": "b13", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "year": "2020" }, { "authors": "Kevin J Liang; Weituo Hao; Dinghan Shen; Yufan Zhou; Weizhu Chen; Changyou Chen; Lawrence Carin", "journal": "", "ref_id": "b14", "title": "Mixkd: Towards efficient distillation of large-scale language models", "year": "" }, { "authors": "Frederick Liu; Siamak Shakeri; Hongkun Yu; Jing Li", "journal": "", "ref_id": "b15", "title": "Enct5: Fine-tuning t5 encoder for non-autoregressive tasks", "year": "2021" }, { "authors": " Tb Openai", "journal": "OpenAI", "ref_id": "b16", "title": "Chatgpt: Optimizing language models for dialogue", "year": "2022" }, { "authors": "Sampo Pyysalo; Filip Ginter; Juho Heimonen; Jari Björne; Jorma Boberg; Jouni Järvinen; Tapio Salakoski", "journal": "BMC bioinformatics", "ref_id": "b17", "title": "Bioinfer: a corpus for information extraction in the biomedical domain", "year": "2007" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b18", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b19", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Mourad Sarrouti; Carson Tao; Yoann Mamy; Randriamihaja ", "journal": "", "ref_id": "b20", "title": "Comparing encoder-only and encoder-decoder transformers for relation extraction from biomedical texts: An empirical study on ten benchmark datasets", "year": "2022" }, { "authors": "Siqi Sun; Yu Cheng; Zhe Gan; Jingjing Liu", "journal": "", "ref_id": "b21", "title": "Patient knowledge distillation for bert model compression", "year": "2019" }, { "authors": "Zhiqing Sun; Hongkun Yu; Xiaodan Song; Renjie Liu; Yiming Yang; Denny Zhou", "journal": "", "ref_id": "b22", "title": "Mobilebert: a compact task-agnostic bert for resource-limited devices", "year": "2020" }, { "authors": "Zhiqing Sun; Hongkun Yu; Xiaodan Song; Renjie Liu; Yiming Yang; Denny Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Mobile-BERT: a compact task-agnostic BERT for resource-limited devices", "year": "2020-07" }, { "authors": "Raphael Tang; Yao Lu; Linqing Liu; Lili Mou; Olga Vechtomova; Jimmy Lin", "journal": "", "ref_id": "b24", "title": "Distilling taskspecific knowledge from bert into simple neural networks", "year": "2019" }, { "authors": "Iulia Turc; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b25", "title": "Well-read students learn better: On the importance of pre-training compact models", "year": "2019" }, { "authors": "Erik M Van Mulligen; Annie Fourrier-Reglat; David Gurwitz; Mariam Molokhia; Ainhoa Nieto; Gianluca Trifiro; Jan A Kors; Laura I Furlong", "journal": "Journal of biomedical informatics", "ref_id": "b26", "title": "The eu-adr corpus: annotated drugs, diseases, targets, and their relationships", "year": "2012" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b27", "title": "Attention is all you need", "year": "" }, { "authors": " Curran Associates; Inc", "journal": 
"", "ref_id": "b28", "title": "", "year": "2017" }, { "authors": "Huan Wang; Suhas Lohit; Michael N Jones; Yun Fu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "What makes a\" good\" data augmentation in knowledge distillation-a statistical perspective", "year": "2022" }, { "authors": "William Yang; Wang ; Diyi Yang", "journal": "", "ref_id": "b30", "title": "That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using# petpeeve tweets", "year": "2015" }, { "authors": "Ziqi Wang; Yuexin Wu; Frederick Liu; Daogao Liu; Le Hou; Hongkun Yu; Jing Li; Heng Ji", "journal": "", "ref_id": "b31", "title": "Augmentation with projection: Towards an effective and efficient data augmentation paradigm for distillation", "year": "2023" }, { "authors": "Jason Wei; Kai Zou", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks", "year": "2019-11" }, { "authors": "Adina Williams; Nikita Nangia; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "year": "2018" }, { "authors": "Min Kang; Dongju Yoo; Jaewook Park; Sang-Woo Kang; Woomyoung Lee; Park", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "GPT3Mix: Leveraging large-scale language models for text augmentation", "year": "2021-11" }, { "authors": "Le Zhang; Zichao Yang; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "TreeMix: Compositional constituency-based data augmentation for natural language understanding", "year": "2022-07" }, { "authors": "Jing Zhou; Yanan Zheng; Jie Tang; Li Jian; Zhilin Yang", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "FlipDA: Effective and robust data augmentation for few-shot learning", "year": "2022-05" } ]
[ { "formula_coordinates": [ 3, 179.63, 197.56, 252.74, 20.06 ], "formula_id": "formula_0", "formula_text": "L KD = D α L f t (f g (x), y) + (1 -α) Distance(f g (x), f t (x))" }, { "formula_coordinates": [ 3, 108, 294.57, 331.11, 56.87 ], "formula_id": "formula_1", "formula_text": "L KD = D β L f t (f g (x ), y ) + (1 -β) Distance(f g (x ), f t (x )) L = γ L KD + (1 -γ) L KD β, γ ∈ [0, 1]" }, { "formula_coordinates": [ 5, 185.44, 342.57, 241.12, 27.19 ], "formula_id": "formula_2", "formula_text": "P aug (x ) = 1 n 1 (2π) d τ x∈Dtrain e -x -x i 2 2 2τ , y = g(x )," }, { "formula_coordinates": [ 13, 261.36, 397.18, 242.64, 9.65 ], "formula_id": "formula_3", "formula_text": "T (h) -S (h) < + λ,(1)" } ]
10.48550/ARXIV.2210.11416
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b2", "b25", "b28", "b5", "b18", "b1", "b25", "b25", "b25", "b25" ], "table_ref": [], "text": "Recent work in NLP has shown that pretrained language models have made noteworthy progress toward generalization to unseen tasks. Despite being pretrained on only language modeling objectives, large language models can perform reasonable zeroshot generalization given natural language instructions, i.e. prompts (Radford et al., 2019;Brown et al., 2020). Further research shows that finetuning language models on a mixture of tasks with prompt templates enhances their performance on held-out new tasks (Sanh et al., 2022;Wei et al., 2021).\nIn recent years, two significant research paths have emerged in the field of pretrained language models: one seeks to improve generalization either by scaling up the model, increasing parameters, data, and compute, or by refining prompts.\nAnother divergent yet complementary approach focuses on augmenting the efficiency of pretraining, particularly in the context of BERT-style models. This approach has been proven to significantly improve pretraining efficiency through the use of model-generated pretraining signals, as evidenced by ELECTRA (Clark et al., 2020), COCO-LM (Meng et al., 2021), and METRO-LM (Bajaj et al., 2022). However, this improvement has primarily been witnessed in single-task supervised finetuning settings. Our work seeks to bridge these two areas of research. We present a novel method that enhances the pretraining efficiency of T5, a widely used encoder-decoder Transformer in prompt-based learning, by utilizing ELECTRA-Style model-generated signals.\nOur preliminary studies, however, encountered many challenges in pretraining T5 with modelgenerated signals, particularly in designing an effective objective to train the decoder and ensuring training stability. To address these challenges, we study the impact of key components in this pretraining scheme, such as the decoding target, the location of the Replace Token Detection (RTD) task, and the masking pattern. Then we redesign the pretraining algorithm to solve training stability issues, thus bringing in the benefits of ELECTRAstyle pretraining to T5-style Transformer encoderdecoder models. The pretrained model is then finetuned on a family of multi-task training mixtures of NL-prompted dataset, which has previously been used to train the T0 models (Sanh et al., 2022). Our model, METRO-T0, is a T0 model pretrained with Model generated dEnoising TRaining Objective.\nExperimental results show that METRO-T0 is highly parameter efficient. It consistently outperforms similar-sized baselines on all NL-prompted benchmark we evaluated upon. As shown in Fig- (Sanh et al., 2022) T0 base++ (256M) Metro-T0 base++ (256M) Metro-T0 large++ (775M)\nFigure 1: Prompt learning results of METRO-T0 versus our T0 baseline and T0 3B by Sanh et al. (2022) on 4 tasks in the T0 Eval benchmark. Each point denotes the accuracy using one prompt template, except that the median accuracy over all templates of T0 3B is indicated by the blue point. The plots of other tasks are in Appendix A.7.\nure 1, METRO-T0 BASE++ outperforms T0 3B (Sanh et al., 2022) with only 7% of its parameters on the T0 Eval benchmark. Moreover, METRO-T0++ LARGE++ rivals 14x larger T0++ 11B , the stateof-the-art in prompt-based learning. 
Our method is also compute efficient: METRO-T0 pretrained for 500k steps has similar performance as its T0 counterpart pretrained for 2M steps.\nTo further understand the benefit of METRO pretraining, we conduct two studies on the pretrained METRO-T0 model, analyzing its neural activation and parameter sensitivity. The studies show that model-generated signals balance the contribution of each NN parameter and reduce the number of under-activated neurons by 55%, indicating that a key source of the improved pretraining efficiency is better utilization of network parameters." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b23", "b2", "b24", "b25", "b28", "b29" ], "table_ref": [], "text": "Prompt-based learning with language models. Prompt-based learning allow language models to handle a wide range of tasks with no training data (zero-shot) or a few training data (few-shot), by leveraging natural language instructions and task demonstrations as context (Radford et al., 2019;Brown et al., 2020). Raffel et al. (2019) proves the effectiveness of prompt-based learning as a framework of multi-task learning for text-to-text Transformers such as T5. LMs are usually finetuned with NL instructions to improve their performance and usability. Such a procedure is called promptfinetuning. The finetuning data comes from aggregated mixtures of NLP tasks (Sanh et al., 2022;Wei et al., 2021), dialogs (Chung et al., 2022), or even chain-of-thoughts (Wei et al., 2022). Our work aims to improve the zero-shot generalization of T5like text-to-text LMs in prompt-based learning by efficient and effective pretraining strategies." }, { "figure_ref": [], "heading": "Efficient pretraining using model-generated signals.", "publication_ref": [ "b1", "b5", "b18", "b19", "b3", "b8", "b27" ], "table_ref": [], "text": "Training big language models require sub-stantial computational resources. This paper is part of a line of research that improves the pretraining efficiency of LMs using model-generated signals, i.e., METRO (Bajaj et al., 2022), pioneered by ELECTRA (Clark et al., 2020), a Transformer encoder pretrained using signals generated by an auxiliary BERT. Various studies (Meng et al., 2021(Meng et al., , 2022;;Chi et al., 2021;Fang et al., 2022) show that an auxiliary model can generate informative training signals that greatly improve the efficiency and effectiveness of BERT-like Transformer encoder models, as evaluated on supervised single-task benchmarks like GLUE (Wang et al., 2018). Compared with these works, we use model-generated signals to pretrain T5-like Transformer encoderdecoder models and evaluate this model on largescale NL-prompted benchmarks." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "This section provides an overview of T5 and METRO-style pretraining." }, { "figure_ref": [], "heading": "Text-to-Text Transformers", "publication_ref": [ "b24" ], "table_ref": [], "text": "Our models are based on the T5 framework (Raffel et al., 2019). T5 is a text-to-text Transformer pretrained on natural language corpus. T5 Pretraining. T5 is a Transformer encoderdecoder language model pretrained by modeling corrupted spans of subword tokens. The noisy input is constructed by replacing consecutive spans of tokens in the input by distinct \"sentinel\" tokens, e.g.,\nX noise = [x orig 1 , ..., [M] i:j , ...x orig n ],\nwhere the sentinel token is denoted by [M] i:j . 
Then the pretraining task is to generate the deleted tokens using the Transformer decoder, conditioned on X noise as input to the Transformer encoder:\n[x orig 1 , . . . [M] i:j , . . . x orig n ] Encoder ----→ H enc H enc Decoder ----→ [[M] i:j , x orig i , ..., x orig j ].\n(1)\nText-to-Text Formulation of Downstream Tasks. T5 supports multitask learning on a diverse set of downstream tasks-including classification, question answering, and summarization-by casting all these tasks into a text-to-text format, where the encoder is fed with the text input and the decoder is then asked to generate the target prediction.\nText-to-Text Prompt-Finetuning. A pretrained text-to-text Transformer can then be finetuned to enhances its performance on held-out new tasks. The finetuning corpus is usually a multi-task mixture of NLP datasets, where each input-output pair is an example formatted with an NL prompt template. The finetuning procedure is standard seq2seq learning: the input sequence is fed to the encoder, and the target sequence serves as the ground truth to compute the cross-entropy loss of the decoder output." }, { "figure_ref": [], "heading": "Model-Generated Pretraining Signals", "publication_ref": [ "b5", "b5", "b18" ], "table_ref": [], "text": "In this subsection, we discuss techniques involving model-generated pretraining signals in prior work.\nReplace token detection (RTD) is the training objective used to train ELECTRA (Clark et al., 2020). The RTD input is a noisy text sequence X noise , generated by an auxiliary masked language model (MLM) like BERT. The token x noise i in each masked position of the text sequence is sampled from the predicted probability of the auxiliary model p MLM (x noise i |h aux i ), while the token in each unmasked position x noise j is copied from the original text x orig j . The main model, a Transformer encoder, is pretrained to denoise the noisy input by classifying whether each token is replaced by the auxiliary model or from the original text.\nX orig Random Mask -------→ [x orig 1 , . . . [M], . . . x orig n ];\n(2)\n[x orig 1 , . . . [M], . . . x orig n ] Auxiliary -----→ X noise ;\n(3)\nX noise Model ---→ H RTD Head -----→ 1(x orig i = x noise i ). (4\n)\nPrior work show that the RTD objective is more efficient than the MLM objective, resulting in significant performance improvement for pretrained Transformer encoders (Clark et al., 2020). However, replacing MLM with RTD turns the generative model into a discriminative model, hindering the model's ability to perform generation.\nCorrective language modeling (CLM) restores the generation capability of a Transformer encoder model pretrained with RTD (Meng et al., 2021). The CLM objective is trained alongside the RTD objective in a multi-task manner, so the CLM input is the same as the RTD input X noise . The model is pretrained to recover the original text X orig .\nX noise Model ---→ H CLM Head ------→ X orig .\n(5)" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we present the algorithm to train our model, METRO-T0." }, { "figure_ref": [], "heading": "Pretraining Objective Design", "publication_ref": [ "b30" ], "table_ref": [], "text": "METRO-T0 is jointly pretrained with two objectives: the RTD objective, enhancing performance through model-generated signals, and the CLM objective, enabling text-to-text generation akin to T5. The pretraining algorithm is illustrated in Figure 2. 
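The model-generated signal construction in Eqs. (2) to (5), where an auxiliary MLM fills the masked positions, the filled-in sequence becomes the main model's input, and RTD labels mark which positions actually changed, can be sketched as below. The PyTorch and HuggingFace-style interfaces and the `MASK_TOKEN_ID` constant are assumptions for illustration.

```python
# Sketch of ELECTRA/METRO-style signal generation (Eqs. 2-5); interfaces are assumptions.
import torch

MASK_TOKEN_ID = 103  # placeholder [MASK] id; the real value depends on the tokenizer used

def build_noisy_input(original_ids, mask_positions, auxiliary_mlm):
    """Return (X_noise, rtd_labels): the auxiliary model's filled-in sequence and, per position,
    whether the token now differs from the original (1 = replaced, 0 = kept)."""
    masked = original_ids.clone()
    masked[mask_positions] = MASK_TOKEN_ID
    with torch.no_grad():
        logits = auxiliary_mlm(masked.unsqueeze(0)).logits.squeeze(0)
    noisy = original_ids.clone()
    probs = logits[mask_positions].softmax(dim=-1)       # p_MLM(x | h_aux) at masked positions
    noisy[mask_positions] = torch.multinomial(probs, num_samples=1).squeeze(-1)
    rtd_labels = (noisy != original_ids).long()
    return noisy, rtd_labels
```

In the METRO-T5 design described next, the RTD head reads the encoder outputs while the decoder handles the CLM reconstruction of the original sequence.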
METRO-T0 uses a BERT-style MLM encoder as the auxiliary model and a T5-style encoder-decoder as the main model. The overall pretraining procedure is:\nX orig i.i.d. Random Mask ----------→ [x orig 1 , . . . [M], . . . x orig n ]; (6) [x orig 1 , . . . [M], . . . x orig n ]\nAuxiliary -----→ X noise ;\n(7)\nX noise Encoder ----→ H enc RTD Head -----→ 1(x orig i = x noise i ); (8) H enc Decoder ----→ H dec CLM Head ------→ X orig . (9\n)\nThe auxiliary model receives inputs constructed by randomly masking tokens in the original text X orig , and makes MLM predictions, which are used to create noisy inputs X noise for the main model. The main model is pretrained using two objectives: (a) the RTD objective on the encoder outputs H enc , which aims to identify whether each token was replaced by the auxiliary model or not, and (b) the CLM objective, which aims to recover the original text X orig through the decoder. During pretraining, the weighted average of three losses is optimized:\nL MLM = -E i∈M log p MLM (x orig i |h aux i ),(10)\nL RTD = -E log p RTD (1(x orig i = x noise i )|h enc i ), (11\n) L CLM = -E i∈M log p LM (x orig i |h dec i ), (12\n) L = L MLM + λ RTD L RTD + λ CLM L CLM .\n(13) In crafting METRO-T0's pretraining algorithm, we explored various alternatives before finalizing our design. For example, an alternative method could train RTD objectives on decoder outputs or use a masking pattern other than i.i.d. random sampling. In the rest of this section, we will explain our design choices and the reasons behind them.\nDecoding Target. Table 1 shows three variants of decoding targets: \"masked tokens only\", \"all tokens\", and \"all tokens masked loss\".\nPretraining with the T5-style \"masked tokens only\" target proves unfeasible due to its ill-formed nature. The decoder cannot distinguish between unmasked tokens (e.g., \"you\") and those correctly predicted by the auxiliary model in masked positions (e.g., \"for\"). Consequently, a single source sequence may correspond to multiple correct target sequences, introducing ambiguity and impeding effective pretraining. A detailed example is provided in Appendix A.9.\nThe \"all tokens\" target is inefficient, as the cross entropy loss is averaged on all tokens, including unmasked tokens where the tasks are trivial copyand-pastes. Therefore, METRO-T0 uses \"all tokens masked loss\", where the loss is averaged on masked tokens only.\nLocation of the RTD Head. We consider two choices to place the RTD head: on the outputs of the Transformer encoder or decoder. Decoder RTD at position i requires the information of the i-th token of the encoder input, but this information is absent from the input of the decoder. Consequently, the decoder needs a long attention path to connect position i of the encoder. This complexity defeats the purpose of RTD in providing a simpler task to stabilize optimization, making pretraining unstable in practice (Xu et al., 2020). Therefore, METRO-T0 uses encoder RTD.\nMasking Pattern on Auxiliary. When can use either T5-style contiguous span masking or BERTstyle i.i.d. random masking to generate the MLM input for the auxiliary model. However, using contiguous span masking in METRO-T0 pretraining leads to label leakage. At position i during teacher-forced training, the decoder has access to the ground truth X orig before position i. It can compare x orig i-1 with x noise i-1 . If the two disagree, it is likely the following position i is also masked out. 
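The three losses in Eqs. (10) to (13) above combine as sketched below, with the CLM term computed in the "all tokens masked loss" style: the decoder predicts every position, but only the masked positions are scored. Tensor shapes and the λ values are illustrative assumptions.

```python
# Sketch of the joint METRO-T5 pretraining loss (Eqs. 10-13); weights and shapes are assumptions.
import torch.nn.functional as F

def metro_t5_loss(mlm_logits, rtd_logits, clm_logits, original_ids, rtd_labels, mask,
                  lambda_rtd=1.0, lambda_clm=1.0):
    """`mask` is a boolean tensor marking the positions masked for the auxiliary model."""
    l_mlm = F.cross_entropy(mlm_logits[mask], original_ids[mask])               # Eq. (10)
    l_rtd = F.binary_cross_entropy_with_logits(rtd_logits, rtd_labels.float())  # Eq. (11)
    l_clm = F.cross_entropy(clm_logits[mask], original_ids[mask])               # Eq. (12)
    return l_mlm + lambda_rtd * l_rtd + lambda_clm * l_clm                      # Eq. (13)
```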
As a result, the model can exploit this shortcut to achieve high RTD accuracy without learning meaningful representations of natural languages. Therefore, METRO-T0 uses i.i.d. random masking." }, { "figure_ref": [], "heading": "Architectural Upgrades over T5", "publication_ref": [ "b7", "b17", "b18", "b1" ], "table_ref": [], "text": "We incorporate model architecture changes that have been proved to be beneficial in earlier works.\nThe vanilla T5 exclusively uses relative positional embeddings, while the vanilla BERT (Devlin et al., 2019) model relies solely on absolute positional embeddings. However, recent research by Luo et al. (2022) suggests that using only rela-tive positional embeddings may not yield optimal results. Consequently, in line with the practices in COCO-LM (Meng et al., 2021) and METRO-LM (Bajaj et al., 2022), we use absolute positional embeddings in addition to relative position embeddings in our model.\nWe also introduce a change in how layer normalization is combined with residual connections. Rather than using T5's Pre-LayerNorm approach (defined as x → x + f (LN(x)) where f is either multi-head attention or MLP), our model adopts a Post-LayerNorm design (x → LN(x + f (x))). The Post-LayerNorm vs. Pre-LayerNorm debate is ongoing in the field, but we use Post-LayerNorm, which typically resulted in better performance on downstream tasks in our studies." }, { "figure_ref": [], "heading": "Prompt-Finetuning", "publication_ref": [ "b25" ], "table_ref": [], "text": "The model pretrained using the method described above is called METRO-T5. After pretraining METRO-T5 on an NL corpus, we discard the auxiliary model and retain the main model, which is a standard text-to-text Transformer. We finetune this model on multi-task training mixtures of NL-prompted datasets, T0/T0+/T0++ Train (Sanh et al., 2022), to obtain METRO-T0/T0+/T0++, a text-to-text Transformer that supports zero-shot generalization to held-out tasks." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b24", "b5", "b16", "b25", "b25", "b11", "b2", "b25", "b6", "b9", "b21", "b25", "b2", "b24", "b25", "b28", "b4", "b24" ], "table_ref": [], "text": "Model Architecture. Each of our models has an architecture similar to T5 (Raffel et al., 2019). We train models in three standard setups: base, base++, and large++. Our base/base++ model has an architecture similar to T5 BASE . Our large++ model has an architecture similar to T5 LARGE except for some differences mentioned in Section 4. The auxiliary model for generating training signals is a Transformer encoder of the same hidden size as the main model but is shallower: it consists of 4 layers in base/base++ and 6 layers in large++. We follow Clark et al. (2020) and share token embeddings between the main and the auxiliary model.\nPretraining. Our base model is pretrained on English Wikipedia and BookCorpus (16GB of texts) for 131 billion tokens (512 tokens per sequence, 2,048 sequences per batch, and 125k steps). Base++/Large++ is the training configuration first used in RoBERTa (Liu et al., 2019): pretraining on a mixed corpus of 160GB texts for a maximum 2.1 trillion tokens (512 tokens per sequence, 2,048 sequences per batch, and at most 2M steps).\nPrompt-Finetuning. We finetune each of our pretrained METRO-T5 models on three multi-task mixtures: T0/T0+/T0++ Train, using the same prompt templates and shuffling strategy as Sanh et al. (2022) does. 
Each model is finetuned for 125k steps, using the same hyperparameters as pretraining, except the peak learning rate is reduced to 0.1x. We do not perform any checkpoint selection and simply use the last checkpoint at 125k steps for evaluation.\nEvaluation. We evaluate zero-shot generalization on the T0 Eval benchmark (Sanh et al., 2022) and the Massive Multi-task Language Understanding (MMLU) benchmark (Hendrycks et al., 2020). T0 Eval consists of 11 datasets in natural language inference, coreference, word sense disambiguation, and sentence completion. MMLU includes exam questions from 57 tasks such as maths, history, law, and medicine. For each dataset, we report accuracy on the validation split. Following GPT-3 (Brown et al., 2020) and T0 (Sanh et al., 2022), we use rank classification for inference.\nFor T0 Eval, we use the same prompt templates as T0. For MMLU, we use prompt templates from the AI2 Reasoning Challenge (AI2-ARC) (Clark et al., 2018), concatenated with 5 passages retrieved using T5-ANCE (Ge et al., 2023;Ni et al., 2021) (See Appendix A.8 for details). When there are multiple prompts for a dataset, we do not perform prompt selection based on the validation split, because such prompt selection will break the \"zeroshot\" evaluation. Instead, we report the average accuracy across all prompts for this dataset, following the standard practices of Sanh et al. (2022).\nBaselines. For a fair comparison, the main baseline is our own T0 runs. Except for METRO-style pretraining, our T0 baselines use the same Transformer architecture, pretraining data, and promptfinetuning data, pretrained in the same computational environment.\nWe also compare with the reported numbers of other language models that supports zero-shot prompting, including pretraining-only models such as GPT-3 (Brown et al., 2020) and T5 (Raffel et al., 2019), as well as prompt-finetuned models such as T0 (Sanh et al., 2022) and Flan-T5 (Wei et al., 2021;Chung et al., 2022). T0/T0+/T0++ is pretrained on the the C4 (Raffel et al., 2019) texts for 1 trillion tokens and then prompt-finetuned on the T0/T0+/T0++ Train multitask mixture after LM adaptation for 100 billion tokens. Flan-T5 is also pretrained on the C4 corpus, but finetuned on a much larger dataset of prompted multi-task mixtures, dialog, and chain-of-thoughts." }, { "figure_ref": [], "heading": "Evaluation Results", "publication_ref": [], "table_ref": [], "text": "This section compares the performance of METRO-T0 and baseline models on T0 Eval and MMLU to demonstrate the effectiveness and efficiency of our method. We also explore the reason behind METRO-T0's effectiveness through detailed model analysis." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b25" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Table 2 presents the experimental results on T0 Eval, and Table 3 presents the experimental results on MMLU. These results show that:\nMETRO-T0 is highly parameter efficient, as it rivals or even outperforms much larger models in zero-shot generalization. METRO-T0 BASE++ , having only 256M parameters, outperforms T0 3B (Sanh et al., 2022) with only 7% of its parameters. Also, METRO-T0 LARGE++ , having only 775M parameters, outperforms T0 3B by 7pts and is only 2.8pts behind T0 11B , a 14x larger model. 
The gain stems from METRO-style pretraining.\nOn both benchmarks, METRO-T0 models in all setups consistently outperform our fair-comparison T0 baselines of the same model size, which were pretrained using the same corpus and configurations. This fact demonstrates that the performance improvement is not due to better hyperparameters or data engineering, but a result of using METROstyle pretraining. Further confirmation of this argument will be provided through model analysis in Section 6.4 and Section 6.5." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Ablation Studies", "publication_ref": [ "b18" ], "table_ref": [], "text": "In Section 4, we discuss the choices we made to redesign the pretraining method for METRO-T0.\nIn this subsection, we compare the empirical results of different variants of METRO-T0. Table 4 shows the performance of each variant prompt-finetuned on T0/T0+/T0++ Train and evaluated on T0 Eval.\n\"All tokens, masked loss\" is the best decoding target. Table 1 presents three possible choices for the decoding target, in which \"masked tokens only\" is ill-formed and thus not suitable, as discussed in Section 4. Table 4 compares the remaining two options and shows that computing CLM/LM loss on all positions negatively affects the downstream performance of METRO-T5/T5 by overwhelming the model with too many trivial copy-and-paste tasks.\nThe same reasoning also applies to our decision not to use the copy mechanism (Meng et al., 2021) in CLM heads. Encoder RTD makes pretraining more stable. Figure 3a demonstrates this by comparing the loss on the CLM task during pretraining with RTD applied to the encoder (red line) versus the decoder (blue line). Decoder RTD caused pretraining to diverge. While techniques such as strong gradient clipping and an additional projection layer can mitigate this issue (orange and green lines), the model still has higher training loss and poorer generalization on downstream tasks as shown in Table 4.\nLabel leakage is prevented by i.i.d. masking. Figure 3b illustrates the RTD recall (true positive rate) of METRO-T5 when using i.i.d. random masking on the auxiliary model compared to T5's continuous span masking. As discussed in Section 4, continuous span masking leads to label leakage, resulting in easy solutions for many masked positions, as demonstrated by the more than 2x pretraining RTD recall on masked positions with Span Mask. As expected, this label leakage hurts the model's generalization ability as shown in Table 4." }, { "figure_ref": [ "fig_2", "fig_3", "fig_2", "fig_3" ], "heading": "Pretraining Efficiency", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In this experiment, we study the pretraining efficiency of METRO-T5 by comparing the intermediate checkpoints pretrained for 500k/1M/2M steps of T5 BASE++ and METRO-T5 BASE++ . We assess each checkpoint's prompt-based learning performance by finetuning on the T0++ Train dataset and recording the average performance on T0 Eval.\nFigure 4 shows that METRO-T5 is more compute efficient than vanilla T5. METRO-T0++ achieves better downstream performance at every point. In particular, METRO-T0++ pretrained for 500k steps has a similar performance to T0++ pretrained for 2M steps, showing a 165% efficiency increase. 
An interesting research question is: does modelgenerated signals simply make pretraining faster or do METRO-T5 and T5 learn different representations?\nTo answer this question, we compare the following two models by showing their performance on each task in the T0 Eval benchmark in Figure 5: (a) T0++ finetuned from the T5 checkpoint pretrained for 2M steps, indicated by the last blue datapoint in Figure 4; (b) METRO-T0++ finetuned from the METRO-T5 checkpoint pretrained for 500k steps, indicated by the first orange datapoint. Although these two models have similar average accuracies (58.57 vs. 58.68), they have different strengths, as shown in Figure 5. T0++ (2M steps) outperforms METRO-T0++ (500k steps) on wordlevel tasks (WiC) and conventional natural language inference (ANLI and RTE), while METRO-T0++ (500k steps) has much better performance on commonsense reasoning (HellaSwag and COPA). This phenomenon implies that model-generated signals let the model learn different representations of texts, which finally result in a significant performance gap between the fully pretrained T0++ and METRO-T0++, as shown in Table 2." }, { "figure_ref": [ "fig_4" ], "heading": "Neural Activation", "publication_ref": [ "b22", "b14" ], "table_ref": [], "text": "In this subsection, and the following one, explore the extent to which the internal statistics of the neural networks quantify the differences between METRO-T5 and T5.\nThe first aspect we explore is neural activation. Specifically, we examine the feedforward module in each Transformer layer of METRO-T5 BASE++ and T5 BASE++ , counting neurons that are underactivated. A neuron is considered under-activated if it is inactive (exhibits zero ReLU activations) for 99.5% of tokens within the T0++ Train dataset.\nFigure 6 shows that T5 has ∼2x as many underactivated neurons as METRO-T5 at every checkpoint. Studies suggest that such neurons can typically be pruned without substantially affecting neural network performance (Polyak and Wolf, 2015;Li et al., 2016). So the presense of many underactivated neurons is a sign of underutilization of model capacity and computing cost. Therefore, our findings suggest that METRO-style modelgenerated training signals enhance neuron utilization in METRO-T5." }, { "figure_ref": [ "fig_5" ], "heading": "Parameter Sensitivity", "publication_ref": [ "b12" ], "table_ref": [], "text": "In addition to analyzing the neural activation of T5 and METRO-T5, we also examine their parameter sensitivity, which serves as another means to quantify the underlying differences between T5 and METRO-T5.\nThe sensitivity of a parameter, defined in Equation ( 14), approximates the change in the loss magnitude when this parameter is completely zeroedout. θ denotes the parameter vector and L denotes the loss function. θ -j denotes the parameter vector θ with its j-th entry set to zero. The approximation is derived from the first-order Taylor expansion of L at θ. Therefore, the sensitivity of the j-th parameter, denoted by I j , approximates the change in the loss magnitude when this parameter is completely zeroed-out (LeCun et al., 1989).\nI j = |θ T -j ∇ θ L(θ)| ≈ |L(θ) -L(θ -θ -j )|(14)\nLiang et al. (2022) shows that parameter sensitivity is a reliable indicator of redundancy in pretrained language models. 
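Read element-wise, the sensitivity in Equation (14) is the magnitude of each parameter multiplied by its gradient, which can be obtained from a single backward pass. The following is a minimal sketch under that reading (the model, batch, and loss are placeholders), not the authors' analysis code.

```python
import torch

def parameter_sensitivity(model: torch.nn.Module, loss: torch.Tensor) -> dict:
    """Approximate the loss change from zeroing each parameter: |theta_j * dL/dtheta_j|."""
    model.zero_grad()
    loss.backward()  # one backward pass populates .grad for every parameter
    sensitivities = {}
    for name, param in model.named_parameters():
        if param.grad is not None:
            # First-order estimate of |L(theta) - L(theta with this entry zeroed)|.
            sensitivities[name] = (param.detach() * param.grad).abs()
    return sensitivities
```

Aggregating these values over batches of the T0++ Train set yields the sensitivity distributions compared in Figure 7.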
Specifically, parameters with low sensitivity can be safely pruned with only marginal impact on the LM's downstream performance, and an LM with lower, more concentrated sensitivity is more sufficiently trained and generalizes better.\nWe compare parameter sensitivity distributions of each checkpoint of METRO-T5 and T5, using gradients calculated on the T0++ Train dataset. The result is shown in Figure 7, from which we observe that the sensitivity distribution exhibits a lower variance in METRO-T5 (the orange hill in each row) than in T5 (the blue hill in each row). The difference in parameter sensitivity becomes more conspicuous when the models are trained for more steps. These observations suggest that pretraining with model-generated signals makes the sensitivity of parameters more concentrated. In other words, the amount of each parameter's contribution becomes more balanced with METROstyle pretraining, which leads to a more sufficiently trained model." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents a new method for improving the zero-shot generalization of T5-like text-to-text Transformers by incorporating model-generated signals in the pretraining process. METRO-T0, the model sufficiently trained using our redesigned pretraining method, is highly parameter efficient and compute efficient. We hope that the success of our approach could inspire further work on efficient big LM pretraining and prompt-based learning." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This work focuses on pretraining large language models for zero-shot generalization. Although our proposed method is more efficient than baselines, it still requires significant computational resources, specifically GPU resources. The GPU resources used and training time are detailed in Appendix A.6. Our study is also limited by the computational budget, preventing us from training models as large as GPT-3 or T0 11B . However, our large++ model (775M parameters) already rivals or outperforms previous state-of-the-art models." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "This work proposes and releases language models that are pretrained on web-crawled data and finetuned on a large collection of NLP datasets. These models may perpetuate social stereotypes and disparities reflected in the training data, or accidentally reveal private information. Mitigating these risks presents a significant open research challenge that calls for collective efforts within the NLP community. Therefore, it is recommended to take appropriate measures to assess risks and potential harms in the application context before deployment." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Pretraining Corpus", "publication_ref": [ "b7", "b10", "b16", "b26", "b16", "b18", "b1", "b24", "b24" ], "table_ref": [], "text": "Our base model is pretrained on English Wikipedia and BookCorpus (16GB of texts). We encode the pretraining corpus with an uncased vocabulary of 32,768 BPE tokens. This setup is similar to vanilla BERT (Devlin et al., 2019).\nOur base++/large++ model is pretrained on a mixed corpus of 160GB texts, which consists of English Wikipedia, BookCorpus, OpenWebText (Gokaslan and Cohen, 2019), CC-News (Liu et al., 2019), andSTORIES (Trinh andLe, 2018). 
We encode the corpus with a cased vocabulary of 64,000 BPE tokens. This setup is similar to RoBERTa (Liu et al., 2019), COCO-LM (Meng et al., 2021), and METRO-LM (Bajaj et al., 2022).\nAs a reference, T0 (Sanh et al., 2022) models andFlan-T5 (Chung et al., 2022) are all based on the original T5 model by Raffel et al. (2019). The pretraining corpus is the C4 corpus (Raffel et al., 2019) of 800GB of texts based on CommonCrawl. They encode the corpus with a cased vocabulary of 32k BPE tokens." }, { "figure_ref": [], "heading": "A.2 Pretraining Hyperparameters", "publication_ref": [], "table_ref": [], "text": "The hyperparameters we used to pretrain METRO-T0 and our T0 baseline are listed in Table 5. 5: Pretraining hyperparameters for METRO-T0 and our T0 baselines. Rows with an \" * \" are specific to METRO-style pretraining and not applicable to our T0 baselines. We only train our large++ model for 1.3M steps (instead of 2M steps) due to limitations of computational budgets but it still yields impressive performance." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [ "b18" ], "table_ref": [], "text": "In pretraining, we use 15% masking ratio for the auxiliary MLM pretraining task. We create a [MASK] symbol for each masked token. Each token in X noise is sampled from the softmax distribution predicted by the auxiliary model for each [MASK] symbol. The weight of each pretraining objective is λ MLM = 1, λ RTD = 50, and λ CLM = 1, following Meng et al. (2021). In both the auxiliary transformer and the main transformer, we use shared token embeddings in the embedding layer and the language modeling head.\nWe have three projection heads in our model: MLM head on the auxiliary transformer, RTD head on the main transformer's encoder, and CLM head on the main transformer's decoder. Both the MLM and CLM head are a single linear transformation. We use RoBERTa-style projection head for the RTD head, which contains a linear projection, a ReLU activation, a layer norm and another linear projection. For the RTD on decoder (complex CLM head) ablation, we use a RoBERTa-style head as the architecture of the CLM head." }, { "figure_ref": [], "heading": "A.3 Data for Prompt-Finetuning", "publication_ref": [ "b25", "b25" ], "table_ref": [], "text": "Following Sanh et al. (2022), we finetune our models on three training mixtures, T0 Train (39 datasets), T0+ Train (49 datasets), and T0++ Train (55 datasets), respectively. Each dataset is associated with multiple (8.03 on average) prompt templates that are used to format example instances to input and target pairs. Please refer to Sanh et al. (2022) for more details about our finetuning datasets." }, { "figure_ref": [], "heading": "A.4 Prompt-Finetuning Hyperparameters", "publication_ref": [ "b25" ], "table_ref": [ "tab_6" ], "text": "Once we have METRO-T5 pretrained on a natural language corpus, we discard the auxiliary model and keep the main model, which is a standard text-to-text Transformer. We finetune this model on multi-task training mixtures of NL-prompted datasets proposed by Sanh et al. (2022). Once the model parameters are initialized with pretrained METRO-T5, the finetuning procedure is standard sequence-to-sequence learning: the input sequence is fed to the encoder, and the target sequence serves as the ground truth to compute the cross-entropy loss of the decoder output. Each model is finetuned using hyperparameters listed in Table 6. 
Basically, we use the same hyperparameters as pretraining, except the peak learning rate is reduced to 0.1x and each target sequence is truncated to a max length of 256. We do not perform any checkpoint selection or hyperparameter selection, and simply use the last checkpoint at 125k steps of this single run for evaluation." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [], "table_ref": [], "text": "Base Base++ Large++ " }, { "figure_ref": [], "heading": "A.5 Evaluation", "publication_ref": [ "b25", "b11" ], "table_ref": [], "text": "We evaluate zero-shot generalization on the T0 Eval benchmark (Sanh et al., 2022) and the Massive Multi-task Language Understanding (MMLU) benchmark (Hendrycks et al., 2020). T0 Eval consists of 11 held-out datasets in natural language inference, coreference, word sense disambiguation, and sentence completion, and details are shown in Each task in T0 Eval or MMLU is formulated as multiple-choice questions. We compute the log probability of each choice under the finetuned model and select the choice with the highest log probability as the prediction." }, { "figure_ref": [], "heading": "A.6 Implementation Details", "publication_ref": [], "table_ref": [], "text": "Implementation We implement our T0 baseline and METRO-T0 based on fairseq1 . The prompt templates to format the finetuing data are from the promptsource2 library (Bach et al., 2022). We evaluate pretrained models on the T0 Eval benchmark using transformers3 and t-zero4 .\nPretraining and Finetuning Costs. Pretraining METRO-T5 in the base setting takes 20.8 hours on 64x NVIDIA A100 (40GB) GPUs. The pretraining cost of METRO-T5 is T5 (our implementation) plus the auxiliary transformer, whose number of layers is 1/3 of the main transformer's encoder. Pretraining METRO-T5 in the base++ setting takes 159 hours on 128x NVIDIA A100 (40GB) GPUs. Pretraining METRO-T5 in the large++ setting takes 289 hours on 256x NVIDIA A100 (40GB) GPUs. In finetuning, we remove the auxiliary transformer and the RTD and CLM heads, so the finetuning cost of METRO-T5 and T5 are the same. Prompt-finetuning each base/base++ model takes about 22 hours on 64x NVIDIA V100 (16GB) GPUs. Prompt-finetuning each large++ model takes about 70 hours on 64x NVIDIA V100 (16GB) GPUs. 2022) on all 9 tasks in the T0 Eval benchmark. The results shows that METRO-T0 LARGE++ , having only 775M parameters, consistently outperforms T0 3B over all tasks on the T0 Eval benchmark." }, { "figure_ref": [], "heading": "A.7 Full Results on T0 Eval", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.8 Evaluation on MMLU", "publication_ref": [ "b20", "b9", "b21" ], "table_ref": [ "tab_2", "tab_2", "tab_2", "tab_9", "tab_9" ], "text": "The prompt template used to evaluate our models MMLU is the prompt template from the AI2 Reasoning Challenge (AI2-ARC) concatenated with 5 passages in MS MARCO (Nguyen et al., 2016). These 5 passages are selected via dense retrival using T5-ANCE (Ge et al., 2023;Ni et al., 2021), which maps a query to a single vector to retrieve similar passage from the corpus. Adding densely-retrieved passages to prompts is a standard approach to enhance LM's performance on zero-shot prompting. This approach is named retrieval augmentation. All T0 and METRO-T0 results reported in Table 3 are evaluated using this prompt template with retrieval augmentation.\nOn the other hand, all Flan-T5 results reported in Table 3 are numbers reported in their paper. 
For each model, we take the maximum score of the reported \"direct\" prompting performance and the \"chain-ofthought (CoT)\" prompting performance. Both prompt templates are not publicly available as of the time this paper is written.\nAs a result, Table 3 involves comparisons across multiple prompt templates. So in Table 8, we present the performance of each model using the plain AI2-ARC prompt template without retrieval augmentation or CoT. The result in Table 8 shows that METRO-T0++ still consistently outperforms the T0 baseline and similar-sized Flan-T5 models when they are evaluated using the same prompt template.\nA.9 Example of the Challenge of Ill-Formed Target\nIn our discussion about \"decoding target\" inSection 4, we claim that \"masked tokens only\" is an ill-formed target for the CLM objective in METRO-style pretraining of T5. This section shows a concrete example where such ill-formed target leads to ambiguities.\nIn Table 9, the original sentence is \"1 2 3 4 5\". Using different random samples of masked positions, we can derive two masked sequences as the input of the auxiliary model: \"1 M M M 5\" and \"1 2 M M 5\". Table 9: An example where ill-formed target leads to ambiguities. Each number denotes a distinct subword token. M denotes the special token [MASK]. In \"Auxiliary Model Prediction\", a token shown in green denotes a correct prediction, where a token shown in red denotes a wrong prediction. The difference is whether \"2\" is masked or not. So the target for the decoder corrective LM objective will be \"2 3 4\" and \"3 4\" respectively. After we have the masked input, the auxiliary model, which is a masked language model (MLM), tries to fill masked positions with predicted tokens \"2 6 4\" and \"6 4\" respectively. The resulting main model input is \"1 2 6 4 5\" for both cases, but the target is \"2 3 4\" for case 1 and \"3 4\" for case 2. This is an ambiguity where the main model is unsure where it should begin to generate predictions: \"2\" or \"3\"." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "Linyuan Gong and Alvin Cheung are partially supported by the National Science Foundation through grants IIS-1955488, IIS-2027575, CCF-1723352, ARO W911NF2110339. We thank Mingrui Shen for his support providing computing infrastructure support on the finetuning work flow, Guolin Ke for his support in establishing the data processing pipelines for our pretraining corpora, and anonymous reviewers for their constructive feedback." } ]
This paper explores the effectiveness of model-generated signals in improving zero-shot generalization of text-to-text Transformers such as T5. We study various designs to pretrain T5 using an auxiliary model to construct more challenging token replacements for the main model to denoise. Key aspects under study include the decoding target, the location of the RTD head, and the masking pattern. Based on these studies, we develop a new model, METRO-T0, which is pretrained using the redesigned ELECTRA-style pretraining strategies and then prompt-finetuned on a mixture of NLP tasks. METRO-T0 outperforms all similar-sized baselines on prompted NLP benchmarks, such as T0 Eval and MMLU, and rivals the state-of-the-art T0 11B model with only 8% of its parameters. Our analysis of the model's neural activation and parameter sensitivity reveals that the effectiveness of METRO-T0 stems from a more balanced contribution of parameters and better utilization of their capacity. The code and model checkpoints are available at https://github.com/gonglinyuan/metro_t0.
Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers
[ { "figure_caption": "Figure 3 :3Figure 3: Pretraining behaviors of different designs.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Comparison of the pretraining efficiency of T5 and METRO-T5. Each point shows the performance of a T0++/METRO-T0++ model finetuned from a checkpoint at 500k/1M/2M pretraining steps. The xaxis displays the pretraining wall time, reflecting computational cost, as all models were pretrained in the identical environment.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Per-task performance of T0++ (pretrained for 2M steps) and METRO-T0++ (pretrained for only 500k steps) on T0 Eval. The error bars are calculated using the model's performance across prompt templates.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparison of the percentage of underactivated neurons in T5 and METRO-T5 on T0++ train dataset. The first point of both models (0 steps) overlap because they are the same initial model.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Comparison of the parameter sensitivity distributions of T5 and METRO-T5. Each row shows the parameter sensitivity distributions of T5 and METRO-T5 at the same pretraining step, indicated by the corresponding label on the y-axis.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Prompt-based learning results of METRO-T0 versus our T0 baseline and T0 3B by Sanh et al. (2022) on all 9 tasks in the T0 Eval benchmark. Each point denotes the accuracy using one prompt template, except that the median accuracy over all templates of T0 3B is indicated by the blue point.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 88Figure 8 results of METRO-T0 versus our T0 baseline and T0 3B by Sanh et al. (2022) on all 9 tasks in the T0 Eval benchmark. The results shows that METRO-T0 LARGE++ , having only 775M parameters,", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "corpus of 800GB of Prompt learning results on the T0 Eval dataset. \"Wino.\", \"SC.\", and \"HS\" refer to Winogrande, StoryCloze, and HellaSwag, respectively. All reported datasets use accuracy as their metric. Italic results are produced under the supervised setting. Others are under the zero-shot setting. Each row without a citation contains experimental results from models trained by us (our T0 baseline and METRO-T0), while each row with a citation contains experimental results from the cited paper (GPT-3, Google T5, and the original T0).", "figure_data": "ModelParamsNLICoref.Compl.WSDRTECBANLI r1/r2/r3 WSC Wino. 
COPASC.HS.WiCAVGPretraining onlyGPT-313B (Brown et al., 2020)13B 62.80 19.60 33.20/33.50/34.40 64.4067.9084.00 79.50 70.900.00 50.02GPT-3175B (Brown et al., 2020)175B 63.50 46.40 34.60/35.40/34.50 65.4070.2091.00 83.20 78.900.00 54.83T5+LM (Lester et al., 2021)11B 53.03 34.34 32.89/33.76/33.82 54.0950.6554.88 27.00 48.16 50.30 42.99Prompt Finetune on T0 TrainT0BASE226M 62.85 45.30 30.82/32.37/32.14 62.1650.7770.63 81.03 24.86 50.78 49.43METRO-T0BASE226M 65.18 45.60 31.64/32.98/33.81 55.7751.0770.81 80.97 25.28 50.69 49.44T0BASE++256M 62.24 53.45 31.68/32.94/34.88 61.7351.6570.63 87.62 25.88 51.21 51.26METRO-T0BASE++256M 68.16 63.21 34.92/33.81/36.82 60.4852.0378.50 89.23 27.68 50.88 54.15T03B (Sanh et al., 2022)3B 64.55 45.36 33.84/33.11/33.33 65.1050.9772.40 84.03 27.29 50.69 50.97METRO-T0LARGE++775M 76.75 65.48 41.49/36.29/40.18 60.5854.5188.00 94.07 29.31 50.97 57.97T011B (Sanh et al., 2022)11B 80.83 70.12 43.56/38.68/41.26 61.4559.9490.02 92.40 33.58 56.58 60.77Prompt Finetune on T0+ TrainT0+BASE226M 63.57 48.93 31.76/32.92/33.02 60.9651.9372.38 81.71 40.11 51.32 51.69METRO-T0+BASE226M 70.56 47.08 33.05/34.53/34.37 57.9851.7569.13 83.08 49.00 50.78 52.85T0+BASE++256M 68.30 60.24 33.77/34.31/35.00 60.9651.5970.00 89.29 56.10 51.39 55.54METRO-T0+BASE++256M 71.44 60.71 36.91/35.24/36.46 62.2154.0878.88 90.29 67.57 51.60 58.67METRO-T0+LARGE++775M 81.26 70.00 45.06/38.59/42.35 60.6757.5290.50 95.41 83.82 52.32 65.23T0+11B (Sanh et al., 2022)11B 67.47 59.20 43.45/39.77/40.76 62.2459.9492.24 96.43 86.13 55.02 63.88Prompt Finetune on T0++ TrainT0++BASE226M 69.06 48.39 31.90/33.61/33.94 55.7251.1576.06 82.55 39.62 63.18 53.20METRO-T0++BASE226M 72.04 58.63 33.85/35.29/36.57 56.1152.1574.06 83.65 48.66 64.29 55.94T0++BASE++256M 77.87 63.10 36.15/34.61/38.18 56.4451.7875.38 89.33 55.95 65.53 58.57METRO-T0++BASE++256M 77.80 69.52 39.69/36.61/40.08 61.4454.5583.88 90.88 68.54 67.59 62.78METRO-T0++LARGE++775M 83.68 74.88 46.84/40.37/44.95 71.8362.7592.63 95.65 83.74 70.49 69.80T0++11B (Sanh et al., 2022)11B 85.31 75.69 47.07/42.18/44.09 70.2966.4293.71 96.49 86.11 70.02 70.67ModelParams MMLUT0++BASE226M37.5METRO-T0++BASE226M38.3Flan-T5BASE (Wei et al., 2022)223M35.9T0++BASE++256M41.7METRO-T0++BASE++256M42.7GPT-3175B (Brown et al., 2020)175B43.9Flan-T5LARGE (Wei et al., 2022)750M45.1T0++11B (Sanh et al., 2022)11B35.6METRO-T0++LARGE++775M48.0", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Prompt learning results on the MMLU dataset. All reported results use accuracy averaged over 57 subtasks as their metric.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Hyperparameters for prompt-finetuning METRO-T5 and our pretrained T5 baseline. All hyperparameters not mentioned in this table is the same as in the pretraining procedure.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Table7MMLU includes example questions from 57 tasks such as maths, history, law, and medicine. Please refer toHendrycks et al. 
(2020) for more details about MMLU.", "figure_data": "Size TaskMetricRTE277 Natural language inferenceAccuracyCB56 Natural language inferenceAccuracyANLI3,200 Natural language inferenceAccuracyWSC104 Coreference resolutionAccuracyWinogrande XL1,267 Coreference resolutionAccuracyCOPA100 Sentence completionAccuracyStoryCloze 20161,871 Sentence completionAccuracyHellaSwag10,042 Sentence completionAccuracyWiC638 Word Sense Disambiguation Accuracy", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The overview of the T0 Eval benchmark for prompt learning.", "figure_data": "", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Full prompt learning results on the MMLU dataset in three setups. All reported results use accuracy averaged over 57 subtasks as their metric.", "figure_data": "ModelParamsMMLUAI2-ARC Prompt TemplateT0++BASE226M31.5METRO-T0++BASE226M31.9Flan-T5BASE (Wei et al., 2022)223M33.8T0++BASE++256M37.8METRO-T0++BASE++256M38.9Flan-T5LARGE (Wei et al., 2022)750M39.0T0++11B (Sanh et al., 2022)11B30.9METRO-T0++LARGE++775M43.4AI2-ARC Prompt Template + Retrieval AugmentationT0++BASE226M37.5METRO-T0++BASE226M38.3Flan-T5BASE (Wei et al., 2022)223M40.4T0++BASE++256M41.7METRO-T0++BASE++256M42.7Flan-T5LARGE (Wei et al., 2022)750M41.4T0++11B (Sanh et al., 2022)11B35.6METRO-T0++LARGE++775M48.0Reported numbers by Chung et al. (2022)Flan-T5BASE (Wei et al., 2022)223M35.9GPT-3175B (Brown et al., 2020)175B43.9Flan-T5LARGE (Wei et al., 2022)750M45.1", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" } ]
Linyuan Gong; Chenyan Xiong; Xiaodong Liu; Payal Bajaj; Yiqing Xie; Alvin Cheung; Jianfeng Gao; Xia Song
[ { "authors": "H Stephen; Victor Bach; Zheng-Xin Sanh; Albert Yong; Colin Webson; Raffel; V Nihal; Abheesht Nayak; Taewoon Sharma; M Kim; Thibault Saiful Bari; Zaid Fevry; Manan Alyafeai; Andrea Dey; Zhiqing Santilli; Srulik Sun; Canwen Ben-David; Gunjan Xu; Han Chhablani; Jason Wang; Alan Fries; Maged S Al-Shaibani; Shanya Sharma; Urmish Thakker; Khalid Almubarak; Xiangru Tang; Xiangru Tang; Mike Tian-Jian; Alexander M Jiang; Rush", "journal": "", "ref_id": "b0", "title": "Promptsource: An integrated development environment and repository for natural language prompts", "year": "2022" }, { "authors": "Payal Bajaj; Chenyan Xiong; Guolin Ke; Xiaodong Liu; Di He; Saurabh Tiwary; Tie-Yan Liu; Paul Bennett; Xia Song; Jianfeng Gao", "journal": "", "ref_id": "b1", "title": "Metro: Efficient denoising pretraining of large scale autoencoding language models with model generated signals", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Zewen Chi; Shaohan Huang; Li Dong; Shuming Ma; Saksham Singhal; Payal Bajaj; Xia Song; Furu Wei", "journal": "", "ref_id": "b3", "title": "Xlm-e: Cross-lingual language model pre-training via electra", "year": "2021" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Alex Chowdhery; Marie Castro-Ros; Kevin Pellat; Dasha Robinson; Sharan Valter; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b4", "title": "Scaling instructionfinetuned language models", "year": "2022" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b5", "title": "ELECTRA: Pretraining text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Peter Clark; Isaac Cowhey; Oren Etzioni; Tushar Khot; Ashish Sabharwal; Carissa Schoenick; Oyvind Tafjord", "journal": "", "ref_id": "b6", "title": "Think you have solved question answering? 
try arc, the AI2 reasoning challenge", "year": "2018" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Yuxin Fang; Li Dong; Hangbo Bao; Xinggang Wang; Furu Wei", "journal": "", "ref_id": "b8", "title": "Corrupted image modeling for self-supervised visual pre-training", "year": "2022" }, { "authors": "Suyu Ge; Chenyan Xiong; Corby Rosset; Arnold Overwijk; Jiawei Han; Paul Bennett", "journal": "", "ref_id": "b9", "title": "Augmenting zero-shot dense retrievers with plug-in mixtureof-memories", "year": "2023" }, { "authors": "Aaron Gokaslan; Vanya Cohen", "journal": "", "ref_id": "b10", "title": "Openwebtext corpus", "year": "2019" }, { "authors": "Dan Hendrycks; Collin Burns; Steven Basart; Andy Zou; Mantas Mazeika; Dawn Song; Jacob Steinhardt", "journal": "", "ref_id": "b11", "title": "Measuring massive multitask language understanding", "year": "2020" }, { "authors": "Yann Lecun; John Denker; Sara Solla", "journal": "Morgan-Kaufmann", "ref_id": "b12", "title": "Optimal brain damage", "year": "1989" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Hao Li; Asim Kadav; Igor Durdanovic; Hanan Samet; Hans Peter Graf", "journal": "", "ref_id": "b14", "title": "Pruning filters for efficient convnets", "year": "2016" }, { "authors": "Chen Liang; Haoming Jiang; Simiao Zuo; Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen; Tuo Zhao", "journal": "", "ref_id": "b15", "title": "No parameters left behind: Sensitivity guided adaptive learning rate for training large transformer models", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b16", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "year": "2019" }, { "authors": "Shengjie Luo; Shanda Li; Shuxin Zheng; Tie-Yan Liu; Liwei Wang; Di He", "journal": "", "ref_id": "b17", "title": "Your transformer may not be as powerful as you expect", "year": "2022" }, { "authors": "Yu Meng; Chenyan Xiong; Payal Bajaj; Saurabh Tiwary; Paul Bennett; Jiawei Han; Xia Song", "journal": "", "ref_id": "b18", "title": "COCO-LM: Correcting and contrasting text sequences for language model pretraining", "year": "2021" }, { "authors": "Yu Meng; Chenyan Xiong; Payal Bajaj; Saurabh Tiwary; Paul Bennett; Jiawei Han; Xia Song", "journal": "", "ref_id": "b19", "title": "Pretraining text encoders with adversarial mixture of training signal generators", "year": "2022" }, { "authors": "Tri Nguyen; Mir Rosenberg; Xia Song; Jianfeng Gao; Saurabh Tiwary; Rangan Majumder; Li Deng", "journal": "", "ref_id": "b20", "title": "MS MARCO: A human generated machine reading comprehension dataset", "year": "2016" }, { "authors": "Jianmo Ni; Gustavo Hernández Ábrego; Noah Constant; Ji Ma; Keith B Hall; Daniel Cer; Yinfei Yang", "journal": "", "ref_id": "b21", "title": "Sentence-t5: Scalable sentence encoders from pre-trained text-to-text models", "year": "2021" }, { "authors": "Adam Polyak; Lior Wolf", "journal": "IEEE Access", "ref_id": "b22", "title": "Channel-level acceleration of deep face representations", "year": "2015" }, { "authors": "Alec Radford; Jeffrey Wu; 
Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b23", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b24", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2019" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; Nihal Nayak; Debajyoti Datta; Jonathan Chang; Mike Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Fevry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b25", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022" }, { "authors": "H Trieu; Quoc V Trinh; Le", "journal": "", "ref_id": "b26", "title": "A simple method for commonsense reasoning", "year": "2018" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b27", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b28", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b29", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Zhenhui Xu; Linyuan Gong; Guolin Ke; Di He; Shuxin Zheng; Liwei Wang; Jiang Bian; Tie-Yan Liu", "journal": "", "ref_id": "b30", "title": "MC-BERT: efficient language pre-training via a meta controller", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 328.07, 650.4, 149.48, 15.34 ], "formula_id": "formula_0", "formula_text": "X noise = [x orig 1 , ..., [M] i:j , ...x orig n ]," }, { "formula_coordinates": [ 2, 322.74, 739.6, 172.35, 36.9 ], "formula_id": "formula_1", "formula_text": "[x orig 1 , . . . [M] i:j , . . . x orig n ] Encoder ----→ H enc H enc Decoder ----→ [[M] i:j , x orig i , ..., x orig j ]." }, { "formula_coordinates": [ 3, 85.7, 601.89, 168.84, 14.36 ], "formula_id": "formula_2", "formula_text": "X orig Random Mask -------→ [x orig 1 , . . . [M], . . . x orig n ];" }, { "formula_coordinates": [ 3, 85.7, 620.74, 157.97, 15 ], "formula_id": "formula_3", "formula_text": "[x orig 1 , . . . [M], . . . x orig n ] Auxiliary -----→ X noise ;" }, { "formula_coordinates": [ 3, 85.7, 640.24, 199.48, 14.48 ], "formula_id": "formula_4", "formula_text": "X noise Model ---→ H RTD Head -----→ 1(x orig i = x noise i ). (4" }, { "formula_coordinates": [ 3, 285.18, 643.94, 3.95, 8.81 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 3, 341.64, 188.33, 147.28, 14.1 ], "formula_id": "formula_6", "formula_text": "X noise Model ---→ H CLM Head ------→ X orig ." }, { "formula_coordinates": [ 3, 313.23, 428.2, 211.18, 33.86 ], "formula_id": "formula_7", "formula_text": "X orig i.i.d. Random Mask ----------→ [x orig 1 , . . . [M], . . . x orig n ]; (6) [x orig 1 , . . . [M], . . . x orig n ]" }, { "formula_coordinates": [ 3, 313.23, 466.55, 211.18, 31.67 ], "formula_id": "formula_8", "formula_text": "X noise Encoder ----→ H enc RTD Head -----→ 1(x orig i = x noise i ); (8) H enc Decoder ----→ H dec CLM Head ------→ X orig . (9" }, { "formula_coordinates": [ 3, 520.46, 489.11, 3.95, 8.81 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 3, 315.02, 689.99, 209.39, 13.76 ], "formula_id": "formula_10", "formula_text": "L MLM = -E i∈M log p MLM (x orig i |h aux i ),(10)" }, { "formula_coordinates": [ 3, 318.21, 707.12, 201.97, 13.76 ], "formula_id": "formula_11", "formula_text": "L RTD = -E log p RTD (1(x orig i = x noise i )|h enc i ), (11" }, { "formula_coordinates": [ 3, 316.6, 710.11, 207.81, 27.9 ], "formula_id": "formula_12", "formula_text": ") L CLM = -E i∈M log p LM (x orig i |h dec i ), (12" }, { "formula_coordinates": [ 3, 332.51, 727.23, 191.9, 23.79 ], "formula_id": "formula_13", "formula_text": ") L = L MLM + λ RTD L RTD + λ CLM L CLM ." }, { "formula_coordinates": [ 9, 77.26, 367.12, 211.88, 14.19 ], "formula_id": "formula_14", "formula_text": "I j = |θ T -j ∇ θ L(θ)| ≈ |L(θ) -L(θ -θ -j )|(14)" } ]
10.18653/v1/2022.acl-demo.9
2023-10-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b20", "b38", "b9", "b24", "b26", "b27", "b55", "b49", "b26", "b26", "b24", "b44", "b0" ], "table_ref": [], "text": "Collecting annotated data is time-consuming and expensive. The goal of few-shot learning is to address this limitation by developing models that generalize from a small number of training examples.\nA now dominant paradigm in few-shot learning involves pre-training a large language model (PLM) on unsupervised language modelling objectives, combined with supervised fine-tuning (Kaplan et al., 2020;Wei et al., 2022b). Fine-tuning on a variety of classification tasks improves generalization to new unseen tasks even further (Sanh et al., 2022;Wei et al., 2022b;Chung et al., 2022).\nPrompts, instructions that describe the tasks in natural language, are crucial to successful finetuning on many tasks. Typically, prompts consist of two components: task templates and answer choices. Task templates are textual instructions about the task. Answer choices are semantic descriptions of the categorical labels. Supervised training on prompted samples, as shown in Figure 1, helps PLMs generalize when instructed via prompts on a new problem (here natural language inference). Following Lin et al. (2022), we use the term upstream model for these instructionfinetuned PLMs. These prompted upstream models provide state-of-the-art few-shot learning ( (Liu et al., 2022), yet they still rely on strenuous manual intervention from manually crafted prompts, designed by experts with domain knowledge about the underlying tasks. Figure 1: Instruction-tuning uses prompts to specify the task via templates (blue) and label descriptions via answer choices (magenta). Fine-tuning on multiple instructed tasks improves generalization to new ones.\nIn this paper, we are concerned with an automated few-shot classification regime, where the algorithm can only access the training samples Figure 2: A schematic view of our prompt automation method, AuT-Few, consisting of: the retrieval of templates from the instruction tuning collection ( §4.1), and the generation of template-tailored and topic-specific answer choices and the configuration amongst them (and optionally the default dataset label text) ( §4.2).\nand their categorical labels. While efforts have been made to automate prompting, these methods are not directly transferable to upstream models. Most techniques target prompted masked language models (i.e. encoder-only models, that make predictions over continuous embeddings via its mask token (Gao et al., 2021, inter alia). Automation methods for models with a discrete output space (i.e. a decoder over the vocabulary) are costly and limited to the automation of the task template, still relying on handcrafted descriptions of labels (Liu et al., 2021;Zhou et al., 2023).\nTo automate few-shot learning with upstream models, we analyse the role of prompts across various classification tasks and we observe that upstream models exhibit low variability towards task-unspecific templates. In contrast, the selection of suitable answer choices can be important, yet answer choices do not need to be tailored to the specific instruction (e.g. Yes/No for a polar question). 
These insights confirm observations by Webson and Pavlick (2022) in a broader context and they motivate a simple few-shot learning automation method for upstream models, named AuT-Few.\nAuT-Few builds on the state-of-the-art learning method T-Few (Liu et al., 2022), but crucially does not use any task-specific handcrafted prompts. AuT-Few automatically finds the most relevant templates to our target task from the collection prompts used to instruction-tune the upstream model. As illustrated in Figure 2, given an NLI task, AuT-Few might retrieve templates written for paraphrase identification. To automate answer choices, AuT-Few generates label descriptions tailored to the retrieved templates (e.g., Yes/No for a polar question, as for the illustrated paraphrase identification template) and descriptions that capture a class' overall topic (e.g. Enron/purchase for Enron spam classification). AuT-Few selects the most appropriate configuration via cross-validation.\nAuT-Few outperforms strong baselines, including T-Few (Liu et al., 2022), by 2.1 points over a total of 12 datasets, spanning 8 tasks, without any task-specific handcrafted prompts. All but one task are unseen to the upstream models, indicating AuT-Few's strong generalization capabilities. Moreover, by applying AuT-Few to a small upstream model (BART0 (Lin et al., 2022)), we achieve competitive performance and efficiency to the current state-ofthe-art prompt-free method, SetFit (Tunstall et al., 2022). Furthermore, AuT-Few achieves the best average rank across datasets on the few-shot RAFT benchmark (Alex et al., 2021). An ablation justifies the components of our automation method.1 2 Background and Related Work" }, { "figure_ref": [], "heading": "Instruction-Finetuned Language Models", "publication_ref": [ "b38", "b24", "b3", "b9", "b21", "b53", "b8", "b26" ], "table_ref": [], "text": "A language model is instruction-finetuned on prompted samples D src from various tasks, such as summarization or question answering, by autoregressively generating the target answer choice through standard maximum likelihood training. In-struction tuning not only improves generalization for large decoder-only models (Wei et al., 2022a), but also for comparably smaller encoder-decoder models, like T0 (Sanh et al., 2022) or BART0 (Lin et al., 2022). Prompt knowledge bases (KB), like PromptSource (Bach et al., 2022), contain prompt instructions for hundreds of tasks. Flan-T5 (Chung et al., 2022) is an improved upstream model scaled to thousands of tasks (Wang et al., 2022b).\nInference. We are interested in using upstream models for an unseen few-shot binary or multiclass classification task D tgt test . A prediction ŷ with an upstream model θ is made by computing the length-normalized log probabilities for each class y ∈ Y, conditioned on the sample x, a handcrafted template ϕ j ∈ Φ (i.e. task description and sample input formatting), and on the associated answer choices ψ j ∈ Ψ (textual descriptions of labels):\nargmax y ( 1 T t log p θ (ψ j (y) | x, ϕ j , ψ j (y) <t ),\nwith T being the length of the answer choice of y.\nSince the use of a single prompt might model the expectation over all possible prompts poorly, most systems handcraft multiple prompts for a target task. The expectation is then modelled by randomly drawing a template and its answer choices.\nParameter-Efficient Finetuning. 
Adapting upstream models to a new task or domain on a few available samples D tgt train via full model finetuning is often infeasible as these models consist of billions of parameters. Parameter-efficient finetuning adds or updates only a small subset of parameters θ P EF T ≪ θ, and largely retains the fine-tuning performance (Karimi Mahabadi et al., 2021;Zhang et al., 2021;Chen et al., 2023). Liu et al. (2022) proposed T-Few and showed that parameter-efficient finetuning an upstream model with T-Few performs better than in-context learning with GPT-3 in the few-shot learning setting. T-Few learns attention and activation re-scaling vectors by optimizing the maximum likelihood estimation and complements it with an unlikelihood loss." }, { "figure_ref": [], "heading": "Prompt Automation", "publication_ref": [ "b27", "b17", "b42", "b15", "b55", "b26", "b52", "b46", "b34", "b1", "b14", "b42", "b15", "b17", "b11", "b22", "b28", "b44", "b35" ], "table_ref": [], "text": "Template Automation. To automate the instructions as input to the model, previous work uses soft representation in the input via prompt tuning (Liu et al., 2021;Hambardzumyan et al., 2021), generates discrete instructions (Shin et al., 2020;Gao et al., 2021;Zhou et al., 2023), or combines both via semi-parametric prompt tuning (Bari et al., 2022). However, prompt tuning is brittle to optimize (Hu et al., 2022a;Liu et al., 2022), and the generation of discrete instructions requires substantial computational resources, a particular concern with upstream models as they typically have billions of parameters. The retrieval of instructions is limited to the retrieval of trained soft prompts and samples (Ye et al., 2022), prompt initialization (Vu et al., 2022), or the retrieval of multiple prompt mixtures (Qin and Eisner, 2021;Asai et al., 2022).\nAnswer Choice Automation. Methods to automate label representations are targeting BERTlike masked language models (Devlin et al., 2019), which enables optimization of the output descriptions on continuous vector representation. Shin et al. (2020) train a logistic classifier on embeddings to score tokens in the vocabulary by how well they predict the task labels. Gao et al. (2021) compute the probability for a token to be the masked classification token, by computing the dot product between both embeddings. Wang et al. (2022a) additionally ensure that label tokens belong only to a single class. Alternatively to such discrete search is learning soft output representations of labels via gradient descent (Hambardzumyan et al., 2021;Cui et al., 2022;Hu et al., 2022b;Karimi Mahabadi et al., 2022), or combining both (Ma et al., 2022). Tunstall et al. (2022) propose a fully prompt-free method using Sentence Transformers (Reimers and Gurevych, 2019).\nNovelty. Prior works on prompt automation are computationally intensive, brittle to optimize, or assume a continuous output representation for each token. By contrast, our proposed approach automates prompts for upstream models, which operate over a discrete output space. We do not insert any additional trainable parameters for automating templates. Instead, our work is the first to use retrieved instruction-finetuning templates for an unseen task directly and to use them to optimize the answer choices via the generation of distinct, semantically meaningful, answer choice configurations." 
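Before analysing prompt design, it helps to make the inference procedure from Section 2.1 concrete: each verbalised answer choice is scored by its length-normalised log probability under the upstream model, and the highest-scoring choice is predicted. The sketch below is illustrative only; `token_log_probs` is an assumed helper wrapping the upstream model, and the commented usage is hypothetical rather than the authors' code.

```python
from typing import Callable, Dict, List

def rank_classify(
    prompted_input: str,
    answer_choices: Dict[str, str],
    token_log_probs: Callable[[str, str], List[float]],
) -> str:
    """Pick the class whose answer choice has the highest length-normalised log probability.

    token_log_probs(prompted_input, choice_text) is assumed to return the per-token
    log probabilities of choice_text decoded conditioned on prompted_input.
    """
    scores = {}
    for label, choice_text in answer_choices.items():
        lps = token_log_probs(prompted_input, choice_text)
        scores[label] = sum(lps) / len(lps)  # normalise by target length T
    return max(scores, key=scores.get)


# Hypothetical usage for a polar-question template:
# pred = rank_classify(
#     "Given the premise, does the hypothesis follow? Yes or No?",
#     {"entailment": "Yes", "not_entailment": "No"},
#     token_log_probs=my_seq2seq_scorer,  # assumed wrapper around the upstream model
# )
```

With this scoring in place, the next section asks how much the wording of the templates and answer choices actually matters.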
}, { "figure_ref": [ "fig_0" ], "heading": "How Much Does the Design of Prompts", "publication_ref": [ "b49", "b40", "b41" ], "table_ref": [], "text": "Matter for Upstream Models?\nTo automate prompts, we need to understand their role in few-shot classification. While previous research suggests that the wording of instructions for masked language models is crucial, Webson and Pavlick (2022) observe that the semantic relevance of a prompt is not a strong performance indicator for upstream models. However, their analysis is restricted to natural language inference whilst using the PET (Schick and Schütze, 2021) algorithm to train the model. Yet, results in Schick and Schütze (2022) suggest that templates do matter in principle, but PET is robust when correctly configured. These results raise questions regarding the role of prompts for upstream models in the context of automated few-shot learning on unseen tasks. We conduct a systematic ablation study for both templates Φ and answer choices Ψ. We use T-Few with the T0 upstream model and 32 samples per class. We evaluate 12 datasets, spanning 8 tasks. For details on the datasets, see Appendix A.\nTemplates. We design four experiments to understand the importance of accurate task descriptions (i.e. semantics) in increasing order: concatenation of a sample's content without any additional text (null), uniform sampling of words from the training vocabulary (random), general purpose instructions (e.g. Given ..., the answer is ...) that are not tailored to the task (general), handcrafted instructions (handcrafted). We use the same handcrafted answer choices and templates across all settings (and vice versa the same templates across experiments for answer choice experiments).\nAs seen in Figure 3 (top), with a mean score of 62.8, 62.9, 64.0, 64.2, for each setting, respectively, we observe that simple task-unspecific templates perform surprisingly well, only performing slightly worse than more complex handcrafted ones. Templates that are not well-formed or lack an instruction entirely perform substantially worse than handcrafted ones. Note that results differ heavily between datasets. While some datasets (Enron and CR) are virtually unaffected by the design of the template, performance is strongly affected by the template for some other (e.g. RTE, WSC, Amazon)." }, { "figure_ref": [], "heading": "Answer Choices.", "publication_ref": [], "table_ref": [], "text": "Similarly, for answer choices we run four experiments: reversed handcrafted answer choices (reversed), uniform sampling of a random word from the training vocabulary (random), label text as presented in a dataset itself, such as Entailment in Figure 2 (dataset), and handcrafted choices. Different handcrafted templates for the same task might have different answer choices, depending on the instruction. In contrast, there exists only a single answer choice configuration for dataset answer choices (i.e. mapping from categorical label to text), which we use across all templates. We observe that unlike templates, the selection of answer choices makes a large difference in performance. However, datasets that were particularly robust regarding template design appear to be also robust here. Moreover, despite dataset choices (e.g. entailment, not_entailment) not matching a template's instruction (e.g. \"Given ... does ... fol-low? Yes or No?\"), and only having one configuration of choices, we observe comparable performance to handcrafted ones. 
Thus neither templatetailored answer choices nor multiple distinct answer choice configurations are needed. By manually selecting a single configuration of answer choices from both dataset and handcrafted choices (best-single), we easily achieve the highest average score with 66.2. An automated selection mechanism of a single configuration can subsequently perform favourably over multiple distinctly handcrafted prompts.\n4 AuT-Few: Automated Few-shot Classification with Upstream Models\nAuT-Few is a simple, yet efficient, algorithm to automate prompts for upstream models, drawing from the insights gained from Section 3. Figure 2 shows an illustration of AuT-Few's template and answer choice automation. AuT-Few deploys a lightweight template automation approach since accurate task templates are not essential to performance. It selects suitable templates from the collection of prompts the upstream model was instructionfinetuned on (Section 4.1).\nOn the other hand, the selection of answer choices has a substantial impact on performance. Searching over all possible answer choices is intractable for large upstream models and also imprecise due to the small training size. Thus, AuT-Few only considers two distinct types of answer choices (Section 4.2). One is tailored to the retrieved templates by measuring the log-likelihood on the training data (template-tailored). The other is based on capturing the topic of samples belonging to the same class (topic-specific).\nWe select the most appropriate template and answer choice configurations via cross-validation. The automated prompts are then used for training and inference of our upstream model, where we largely follow T-Few (c.f. Section 5.1 for details)." }, { "figure_ref": [], "heading": "Automated Templates via Retrieval", "publication_ref": [ "b35" ], "table_ref": [], "text": "We retrieve templates that are used in instruction tuning the upstream models. This enables us to (i) adhere closely to instructions the model is familiar with and has already learned (ii) exploit the associated inductive bias on answer choices for candidate generation in the next step. Specifically, we consider the collection of all prompts used for instruction tuning, Φ IT , such as the ones shown in Figure 1 for sentiment classification and paraphrase identification. We then aim to find templates Φ A ⊂ Φ IT from the collection that are related to our downstream task. For instance, given the NLI sample from Figure 2, we rather want to retrieve templates about paraphrase identification than sentiment classification. The former is both semantically and structurally more similar to NLI, as both have two arguments in their input. For NLI they are hypothesis and premise while for paraphrase identification these are the two compared sentences.\nTo find suitable templates, we first filter the collection Φ IT to templates that match the target task format the most. We achieve this by matching the number of underlying arguments of our target task, against the number of arguments of individual templates in Φ IT . We then do a semantic search via an efficient retrieval system: we query a concatenation of a sample's argument descriptions (e.g. the strings hypothesis and premise) against all suitable templates in Φ IT by encoding both query and every template in the collection with a lightweight bi-encoder (Reimers and Gurevych, 2019). If the field descriptions are uninformative (e.g. numbers), we instead use the averaged representations of all samples in D tgt train as the query. 
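The retrieval step just described, together with the cosine-similarity top-R selection mentioned in the next sentence, can be sketched with a Sentence-Transformers bi-encoder as follows. The encoder name matches the one reported in Appendix B, but the template collection, field names, and R are illustrative placeholders, not the exact implementation.

```python
from sentence_transformers import SentenceTransformer, util

# Lightweight bi-encoder used for the semantic search.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical instruction-tuning templates that match the task format (two arguments).
instruction_templates = [
    'Does "{sentence1}" paraphrase "{sentence2}"? Yes or No?',
    'Premise: {premise} Hypothesis: {hypothesis} Does the premise entail the hypothesis?',
    'Question 1: {question1} Question 2: {question2} Are these questions asking the same thing?',
]

def retrieve_templates(field_names, templates, top_r=5):
    """Query = concatenated argument descriptions; return the top-R most similar templates."""
    query = " ".join(field_names)                       # e.g. "hypothesis premise"
    query_emb = encoder.encode(query, convert_to_tensor=True)
    template_embs = encoder.encode(templates, convert_to_tensor=True)
    sims = util.cos_sim(query_emb, template_embs)[0]    # cosine similarity to every candidate template
    top = sims.topk(min(top_r, len(templates)))
    return [templates[i] for i in top.indices.tolist()]

print(retrieve_templates(["hypothesis", "premise"], instruction_templates, top_r=2))
```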
Using cosine similarity, we then select the top R templates. Finally, we adjust the retrieved templates to the downstream task via regular expressions to obtain Φ A ." }, { "figure_ref": [], "heading": "Automated Selection of Answer Choices", "publication_ref": [], "table_ref": [], "text": "Generation of Answer Choice Candidates. Apart from the label descriptions that appear in the dataset, which may not be meaningful, we consider the generation of two distinct types of answer choices given the retrieved templates: template-tailored and topic-specific answer choices.\nTemplate-tailored answer choices are generated by finding individual tokens for each class c that maximize the conditional likelihood over the training data of that class D c train , given the retrieved templates ϕ ∈ Φ A , computed via the upstream model:\nL c = x∈D c train ϕ∈Φ A log p θ (v | x, ϕ),\nwith v ∈ V being a token of the subword vocabulary of the upstream model. Tokens unspecific to an individual class might be ranked high across multiple classes. Thus, we further compute for every token how far its likelihood deviates from the mean\n1 |C| c∈C L c .\nWe finally select the top-ranked dis-tinct tokens across all classes that maximize the sum of these scores.\nRelying exclusively on the likelihood signal (and the retrieved templates) to find answer choices might amplify the inductive bias of the model and it restricts other potentially viable answer choices2 . Since our analysis indicates that answer choices not tailored to the templates can still perform strongly, we additionally consider topic-specific answer choices not generated via our upstream model. We use the high quality contextual representations of Sentence Transformers to find single-word (not token) representations that semantically express the underlying content for each class. For each sentence S c for a particular class, we obtain a contextual representation of the sentence and each word. For every class and over the training vocabulary we then compute the cosine similarity between each sentence and word. We remove words that occur across different classes and finally use the top word for each class as the topic-specific choices.\nSelection of Best Answer Choice Configuration. We are now tasked to find the best representation for the given task. For each choice option, we consider a joint signal derived from a supervised evaluation, i.e. F 1 score, on a subset of the training data D train , and from a measure of the overall log probabilities on the test data D test . The assumption for the latter is that representative answer choices better estimate the task's distribution, resulting in overall higher log probabilities on unseen data of the target task:\ny ϕ A ∈Φ A x∈Dtest ( 1 T log p θ (ψ p (y) |\nx, ϕ, ψ p (y) <t ), with ψ p being the current answer choices configuration. We compute the final score for each candidate by summing the normalized scores of each metric over 3-fold cross-validation." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b0", "b26", "b44", "b26", "b44", "b44" ], "table_ref": [], "text": "This section provides an overview of our experimental setup. We are sampling K training samples for each class y i ∈ Y, for a total of K × |Y| training samples3 . We do not consider a validation set to exist for hyperparameter-tuning, following Alex et al. (2021). 
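Returning to the candidate generation of Section 4.2, a minimal sketch of the template-tailored answer choices is given below: per-class token likelihoods L_c are accumulated over the training samples and retrieved templates, tokens are scored by their deviation from the class mean, and distinct top tokens are picked per class. This is a greedy simplification of the selection described above; the model name, templates, and training data in the example are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")    # placeholder upstream model
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B").eval()

@torch.no_grad()
def first_token_logprobs(prompt: str) -> torch.Tensor:
    """log p_theta(v | x, phi) over the whole vocabulary, read off the first decoder step."""
    enc = tokenizer(prompt, return_tensors="pt")
    start = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**enc, decoder_input_ids=start).logits[0, 0]  # (vocab_size,)
    return torch.log_softmax(logits, dim=-1)

def template_tailored_choices(train_by_class: dict, templates: list) -> dict:
    L = {c: torch.zeros(model.config.vocab_size) for c in train_by_class}
    for c, samples in train_by_class.items():                    # L_c: sum over D^c_train and Phi_A
        for x in samples:
            for phi in templates:
                L[c] += first_token_logprobs(phi.format(**x))
    mean = torch.stack(list(L.values())).mean(dim=0)             # (1/|C|) sum_c L_c
    scores = {c: L[c] - mean for c in L}                         # deviation from the class mean
    chosen, used = {}, set()
    for c in scores:                                             # greedily pick distinct top tokens per class
        for tok in scores[c].argsort(descending=True).tolist():
            if tok not in used:
                used.add(tok)
                chosen[c] = tokenizer.decode([tok]).strip()
                break
    return chosen

# Hypothetical usage
templates = ["Premise: {premise} Hypothesis: {hypothesis} Does the premise entail the hypothesis?"]
train_by_class = {0: [{"premise": "A dog runs.", "hypothesis": "An animal moves."}],
                  1: [{"premise": "A dog runs.", "hypothesis": "A cat sleeps."}]}
print(template_tailored_choices(train_by_class, templates))
```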
For baselines, and implementation specifics, including hyperparameters, see Appendix B. For used datasets, see Appendix A.\nDatasets. We conduct experiments on a total of 12 text classification datasets, spanning a total of 8 tasks. This collection is in essence a combination of evaluation datasets used in Liu et al. (2022) and Tunstall et al. (2022), minus datasets that we consider not traditional classification tasks, e.g. sentence completion, where the meaning of the class changes per instance.\nImplementation Details. AuT-Few largely follows T-Few (Liu et al., 2022) for finetuning, with some modifications to training and inference to increase robustness for our automated few-shot method. Instead of only learning rescaling vectors of the upstream model's weights ((IA) 3 ), we additionally learn and re-scale decomposition matrices (LoRA), as proposed by Hu et al. (2022a). (IA) 3 and LoRA are complementary and the gradient updates from both methods can be made persistent to the model's weights after training without inquiring additional inference costs over the upstream model itself. Another limitation of T-Few is its inference algorithm. T-Few selects a single template at random (c.f. Section 2) and it can be a poor approximation of the overall expectation, especially with noisy templates as used with AuT-Few. We instead run a Monte-Carlo approximation over all retrieved templates, computing a weighted average over the probabilities computed via each template.\nBaselines. In addition to the current state-of-theart few-shot learning method T-Few, we consider SetFit (Tunstall et al., 2022) (with a RoBERTA backbone), which is of particular relevance in our context, since it is the state-of-the-art efficient prompt-free few-shot method. We also compare against a fully-finetuned RoBERTa LARGE model, based on the baseline in Tunstall et al. (2022). The majority baseline is based on the class distribution in the test data." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b44" ], "table_ref": [ "tab_1" ], "text": "Results for K = 32 samples per class are shown in Table 1. Both T-Few and AuT-Few use T0-3B as the upstream model. We report accuracy on all datasets with the exception of Amazon-CF, where we report Matthew's correlation coefficient due to the skewed distribution, following Tunstall et al. (2022). AuT-Few outperforms T-Few (64.2 ± 2.4) and SetFit (59.0 ± 2.6), with an average score of 66.3 ± 2.5. A trivial T-Few automation strategy that randomly draws answer choices from the training data (c.f Section 3) performs substantially worse than AuT-Few with much higher variability (57.7 ± 4.4). While AuT-Few has a higher average score than T-Few, the latter wins against AuT-Few on 8 out of 12 datasets. However, we observe a statistically significant difference 4 on only 4 datasets. Out of these four datasets where we observe statistical significance, AuT-Few outperforms T-Few in three of them (WiC, Emotion, Amazon-CF). 
5 Moreover, we would like to emphasise that performing even comparable against T-Few is already a win since the latter uses multiple diverse handcrafted prompts for each target task while AuT-Few does not require any manual involvement by the user to optimize the prompt while maintaining comparable standard deviation.\nOn the blind test set with the best variant of T0 (T0++, 11B parameters) AuT-Few achieves an average score of 71.3 versus 70.5 for T-Few (with the same backbone), excluding WiC and WSC, as 4 We ran the two-sided Monte-Carlo permutation test with 10000 repetitions (p-value < 0.01). Significance for a dataset holds iff results are significant across all seeds.\n5 Notably, the performance difference between AuT-Few and T-Few on WSC, the only dataset where AuT-Few performs substantially worse, is not statistically significant given our test: this can be explained by the very small sample size of the dataset's evaluation data of only 104 samples. Liu et al. ( 2022) also observed \"unstable results\" on WSC, see the discussion on this Github issue.\nthese datasets have been used to train T0 (see App. C.1 for detailed scores).\nWe note that the automated prompts are not always semantically coherent. As shown in Appendix D, automation choices for some datasets, such as mp3player and ipod for CR, appear odd, yet the model still achieves a very high score on them. This observation can be explained by our findings in section 3, identifying that some datasets such as CR and EnronSpam are particularly robust towards the task description and the answer choices. For CR, AuT-Few's cross-validation strategy for selecting the best answer choice subsequently measures almost identical scores for all three choice configurations (90.1, 89.8, 90.4 for the dataset, template-tailored, and topic-specific choices, respectively), resulting in the seemingly erroneously answer-choice selection." }, { "figure_ref": [ "fig_1" ], "heading": "Results across Upstream Models & Efficiency.", "publication_ref": [ "b26", "b44", "b20", "b0" ], "table_ref": [ "tab_2", "tab_2", "tab_3", "tab_8" ], "text": "Results of AuT-Few with different upstream models, namely BART0, T0, and Flan-T5 are seen in Table 2. The results in the table are computed without Monte-Carlo approximation, resulting in a minor performance decline, yet simplifying the efficiency comparison. Datasets that are part of the instruction-tuning corpus of Flan-T5 or BART0 have been excluded (greyed out). BART0 being about 8 times smaller than T0 performs substantially worse, but it still substantially outperforms T-Few with BART0 and maintains a higher average score than SetFit. Flan-T5 performs on average the best on its unseen datasets, indicating the improved capabilities of the model's much larger and diverse instruction-tuning. These results highlight the effectiveness of AuT-Few across upstream models of varying sizes. The computational costs for training and inference are listed in Table 2. We follow the approach adopted by Liu et al. (2022) and Tunstall et al. (2022) to measure computational costs, namely FLOPs-per-token (Kaplan et al., 2020). AuT-Few requires about 7x the training cost of T-Few, yet remains computationally accessible, taking only a few hours to train on a single A10G GPU, since the number of training steps for few-shot PEFT is overall small. Similarly, while AuT-Few with BART0 takes 4.2x longer than SetFit, it still takes less than an hour of total training time. 
Importantly, during inference, AuT-Few is as efficient as T-Few (excluding Monte-Carlo approximation, otherwise scaling linearly with the number of retrieved templates). AuT-Few with BART0 is even more efficient than SetFit during inference, requiring only 60% of its computation while maintaining a competitive score.\nWe emphasize that while T-Few takes somewhat less computation than AuT-Few, T-Few requires significantly more human intervention, and human time is much more valuable than computer time. The difference of a couple hours of computer time is negligible when it can save orders of magnitude more human time and associated costs.\nVarying sample sizes. Figure 4 shows the performance of our baselines as well as Aut-Few over 16, Real-world evaluation: RAFT. RAFT (Alex et al., 2021) is a benchmark targeted towards evaluating few-shot classification methods. It consists of 11 datasets, from various domains, such as the legal or medical domain. In RAFT 50 randomly sampled training samples are provided, with a potentially imbalanced label distribution. We submitted predictions of AuT-Few with the 11B Flan-T5 backbone, with handcrafted prompts as provided by RAFT (AuT-Few (H)), as well as with our automated prompts (AuT-Few). We do not make any manual dataset adjustments, with the exception of Banking_77 as only a subset of the classes appears in its training data, c.f. App. C.2.\nResults are shown in Table 3. Our method with handcrafted prompts and the Flan-T5 upstream model achieves rank-1 with the overall highest average score. Our automated version achieves scores slightly below T-Few (the previously 2nd ranked system). This is largely due to AuT-Few's poor performance on a single dataset, Tweet-Eval-Hate, as a result of improper selection of answer choices. However, AuT-Few has the best average rank across all five models with 2.45. It wins against T-Few on 7 out of 11 datasets. Furthermore, it has the highest overall win rate, winning against all other models we considered (including our approach with handcrafted prompts) on 4 out of 11 datasets, see Table 7. These results highlight AuT-Few's robustness and generalizability to real-world classification tasks. Ablation. Results of our ablation study for AuT-Few with 32 samples per class are shown in Table 4. We ablate our template retrieval method by considering randomly selected templates from the instruction tuning KB, as well as template retrieval from the entire PromptSource collection of prompts. As seen both settings perform worse than AuT-Few, with higher standard deviation across seeds. While retrieving from the entire collection performs slightly better for tasks that appear in it (e.g. NLI, emotion classification), it strongly underperforms on unseen ones (e.g. WiC, Amazon-CF).\nFurther, the ablation of the choice options shows that each definition of answer choices by itself performs worse than AuT-Few (including the label descriptions that appear in the dataset). Finally, we see that our modifications to T-Few's inference and training are effective, with both LoRA and (IA) 3 PEFT performing worse individually. Note that AuT-Few still outperforms T-Few even when using only (IA) 3 , indicating AuT-Few's superiority without any architectural adjustments." 
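As a concrete reference for the Monte-Carlo inference modification ablated above, the sketch below averages per-template class probabilities with softmax weights derived from each template's log-likelihood, as described in Appendix B. The scoring function and inputs are placeholder stand-ins for calls to the upstream model.

```python
import torch

def template_softmax_weights(template_logliks: torch.Tensor) -> torch.Tensor:
    """Weights w_r (summing to 1) from each retrieved template's log-likelihood on unlabeled test data."""
    return torch.softmax(template_logliks, dim=0)

def mc_predict(sample, templates, answer_choices, class_probs, template_weights):
    """y_hat = argmax_y sum_r w_r * p_theta(y | x, phi_r, psi): weighted average over retrieved templates."""
    probs = torch.stack([class_probs(sample, phi, answer_choices) for phi in templates])  # (R, |Y|)
    weighted = (template_weights[:, None] * probs).sum(dim=0)                             # (|Y|,)
    return answer_choices[int(weighted.argmax())]

# Hypothetical usage with a stand-in for the upstream-model scoring call
def dummy_class_probs(sample, template, choices):
    return torch.softmax(torch.randn(len(choices)), dim=0)

weights = template_softmax_weights(torch.tensor([-1.2, -0.8, -1.0]))
pred = mc_predict({"text": "..."}, ["t1", "t2", "t3"], ["positive", "negative"],
                  dummy_class_probs, weights)
```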
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "AuT-Few replaces hand-designed task-specific prompts with automated templates, and achieves state-of-the-art results on a wide range of datasets " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This work and the automation pipeline is constrained to classification tasks in English. The role of templates and answer choices is necessarily different for tasks such as natural language generation (e.g. summarization or question answering) where a single textual class representation does not exist. The proposed automated few-shot approach is not expected to work well under extremely low data regime or when training samples are highly imbalanced (i.e. < 8 samples per class) as some data signal is required for optimizing the choice space. While our evaluation aims to cover a diverse range of classification tasks, the list of evaluation tasks is not exhaustive. Subsequently, there is no guarantee that AuT-Few performs equally well on every unseen tasks, particular ones that divert strongly from tasks the model has seen during instruction tuning." }, { "figure_ref": [], "heading": "A Datasets", "publication_ref": [ "b12", "b13", "b30", "b23", "b31", "b43", "b10", "b39", "b29" ], "table_ref": [], "text": "We conduct experiments on a total of 12 text classification datasets. The tasks we consider are 1) natural language inference: RTE (Dagan et al., 2005) , CB (de Marneffe et al., 2019), ANLI (Nie et al., 2020); 2) coreference resolution: WSC (Levesque et al., 2012); 3) word sense disambiguation: WiC (Pilehvar and Camacho-Collados, 2019); 4) counterfactual detection: Amazon-CF (O'Neill et al., 2021); 5) sentiment classification: SST-5 (Socher et al., 2013), Customer reviews (CR) (Conneau and Kiela, 2018); 6) emotion classification: emotion (Saravia et al., 2018); and 7) spam detection: Enron (Metsis et al., 2006). All datasets are in English. Enron contains personal identifiable information, yet substantial efforts have been made to remove any integrity problems and samples of affected employees, see here for reference. Enron is an long-established dataset to use for classification problems and our use is in line with previous usages of it." }, { "figure_ref": [], "heading": "B Implementation Details", "publication_ref": [ "b26", "b3", "b6", "b26", "b44", "b44", "b2", "b45", "b54", "b36", "b25", "b33" ], "table_ref": [], "text": "Parameter-efficient fine-tuning via low-rank adaptation and rescaling While exclusively rescaling weights of an upstream model via IA 3 has shown to perform remarkably well, the expressiveness of the fine-tuning process is restricted, due to ∆h (the accumulated gradient update) being always of the form | W 0 -λW 0 |, with W 0 being the weights of the upstream model and λ being the rescaling vector. For tasks that require major adaptation capabilities, this might pose a hindrance. In contrast, LoRA explicitly models via decomposition matrices the gradient update ∆h = BA, resulting in higher expressiveness (about 10x as many parameters as IA 3 ), but has repeatably shown in our experiments to have substantially higher variability. We hence combine both PEFT strategies, by rescaling both the weights of the upstream model and the accumulated gradient updates jointly: h = λ(W 0 x + BAx). 
After training, both λ and BA can be applied to W 0 , making the weight updates persistent without inquiring any additional computation during inference. Following Liu et al. (2022), we pre-train the weights of the rescaling vectors in a similar fashion to the upstream model. While the authors only train the vectors for 100K steps, we observed further improvements when training them for longer (500K steps).\nInference via Monte-Carlo Approximation over Templates As outlined in section 3, in its current version the expectation over template and choice space is approximated during inference by randomly drawing a template from a collection of handcrafted ones. Besides being non-deterministic, the selected template might be a poor approximation of the overall expectation. Instead, we run a Monte-Carlo Approximation over the template space Φ A , by computing a weighted average over all retrieved templates:\nŷ = argmax y E Φ,Ψ [p θ (y i | x, Φ, Ψ)] = argmax y R r=1 w r p θ (y i | x, ϕ r , ψ A ),\nwith R r=1 w r = 1. We determine the weights for each template by computing the log-likelihood of each template on D test and applying a softmax function on them, following the previously mentioned motivation.\nHyperparameters Since our joint PEFT method converges substantially faster than IA 3 by itself, we set the number of total steps to 600 (contrary to 1000 used by T-Few). Further, for both T-Few and AuT-Few we use the following hyperparameters across all experiments: we use Adam, a learning rate of 1 -3 , cosine decay with a warmup ratio of 0.06, a learning rate decay of 1.0, and a batch size of 8. The contextual embeddings for template retrieval as well the topic-specific choices are generated using sentence-transformers' all-MiniLM-L6-v2 encoder model. For all main experiments, we set the number of retrieved templates to R = 5. The underlying prompt knowledge base used is PromptSource (Bach et al., 2022). For selecting the best answer choices, we split the training data using 3-fold cross-validation and train the upstream model with identical hyperparameters as our final model for every choice option.\nSystem & Code All models (550M, 3B, 11B parameters) are trained and run on a single A10G GPU with 23GB of memory by using gradient checkpointing, bfloat16 floating-point format, and in the case of the 11B model by offloading parameters using DeepSpeed6 . We produce results for the SetFit and finetune baseline using the associated repository7 . We filter stopwords and punctuation from the vocabulary of topic-specific answer choices using NLTK (Bird and Loper, 2004). Our code and models will be made openly accessible under Apache License 2.0.\nBaselines In addition to the current state-of-theart (Liu et al., 2022), we consider SetFit (Tunstall et al., 2022), as well as a standard finetuned LM. SetFit is of particular relevance to us since it is the state-of-the-art prompt-free fewshot method, shown to perform competitively to T-Few in their experiments while being computationally substantially more efficient. In their comparison to T-Few, they use a very small variation of the sentence-transformer MPNET, consisting of only 110M, however, we observed substantially better performance with the larger ROBERTa sentencetransformer model (355M parameters). Hence, we report results on the latter model8 . The traditionally finetuned model is a RoBERTa LARGE model, fullyfinetuned with an additional linear head, based on the baseline in (Tunstall et al., 2022). 
et al., 2020), NeurIPS impact statement risks (Ashurst et al., 2022), Onestop English (Vajjala and Lučić, 2018), Overruling (legal domain) (Zheng et al., 2021), Systematic Review Inclusion (Saeri et al., 2022), Tai safety research (Riedel and Deibel, 2020), Terms of Service (Lippi et al., 2019), Tweet Eval Hate (Basile et al., 2019), and Twitter Complaints (Preotiuc-Pietro et al., 2019). All datasets are in English." }, { "figure_ref": [], "heading": "C Detailed Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Detailed results of the experiment in", "publication_ref": [], "table_ref": [], "text": "Since only a small subset of the 77 classes appear in the training data of the Banking_77 dataset, we directly use the dataset's class representations for the answer choices. Banking_77 is strictly speaking a zero-shot and few-shot evaluation dataset and previous work such as SetFit that does not use a verbalizer at all also had to make use of the given class representations for that dataset 9 ." }, { "figure_ref": [], "heading": "D Automated Choices", "publication_ref": [], "table_ref": [], "text": "The generated and selected answer choices as used in AuT-Few with K = 32 and T0 as the upstream model on seed 0 are shown in Table 9." }, { "figure_ref": [], "heading": "E Automated Templates", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "The retrieved templates as used in AuT-Few with K = 32 and T0 as the upstream model on seed 0 are shown in Table 10. 9 https://towardsdatascience.com/ sentence-transformer-fine-tuning-setfit-\\ outperforms-gpt-3-on-few-shot-text-class\\ ification-while-d9a3788f0b4e " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors would like to thank Lewis Tunstall for his help to submit AuT-Few's predictions to the RAFT leaderboard and Aditya Rawal for pointing us to relevant related work. We would also like to thank the anonymous reviewers for their time and effort giving us valuable feedback on our paper." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Our paper makes state-of-the-art few-shot classification methods more accessible to non-experts for real-world problems. The goal of this work is not to replace human involvement in the deployment of AI systems but instead to shift human resources to other essential aspects of model deployment such as the analysis of data, biases, or system errors. We discussed the computational costs of our automation approach and show that they are comparable at similar model size with the most efficient few-shot systems, which themselves again are computationally much more efficient than full-data and full model fine-tuning, or in-context learning. The following movie review expresses what sentiment? {{text}} 5 {{text}} Did the reviewer enjoy the movie?\nTable 10: Retrieved templates, when using T0 and 32 samples for seed 0." } ]
A particularly successful class of approaches for few-shot learning combines language models with prompts: hand-crafted task descriptions that complement data samples. However, designing prompts by hand for each task commonly requires domain knowledge and substantial guesswork. We observe, in the context of classification tasks, that instruction finetuned language models are remarkably robust towards some dimensions of a prompt's design. We subsequently propose a simple method to eliminate the need for handcrafted prompts, named AuT-Few. This approach consists of (i) a prompt retrieval module that selects suitable task instructions from the instruction-tuning knowledge base, and (ii) the generation of two distinct, semantically meaningful, class descriptions and a selection mechanism via cross-validation. Over 12 datasets, spanning 8 classification tasks, we show that AuT-Few outperforms current state-of-the-art few-shot learning methods. Moreover, AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark. Notably, these results are achieved without task-specific handcrafted prompts on unseen tasks.
Automated Few-shot Classification with Instruction-Finetuned Language Models
[ { "figure_caption": "Figure 3 :3Figure 3: An analysis of prompts used in PEFT of upstream models (here T0), broken down into templates (top) and answer choices (bottom). Experiments span 12 datasets and 8 tasks. Error bars indicate one standard deviation across 5 runs. General task-unspecific templates perform surprisingly well and instruction-independent single answer choice configurations (i.e. dataset and best-single) outperform handcrafted prompts.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Average scores when finetuned on 16, 32, and 64 samples per class. AuT-Few performs better relative to the baselines with more training samples.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Main results with 32 samples per class, averaged over five runs. AuT-Few adopts T0 as the upstream model. Rand. T-Few uses randomly selected answer choices. Statistically significant differences between AuT-Few and T-Few are marked with * , using a two-sided Monte-Carlo permutation test with 10000 repetitions (p < 0.01).AuT-Few has the highest average score across datasets without the use of handcrafted task prompts while maintaining comparable standard deviation to T-Few and SetFit.", "figure_data": "Majority Zero-shot Finetune SetFit Rand. T-Few T-Few AuT-FewRTE52.765.6 1.256.4 5.651.4 1.865.2 5.682.5 2.481.4 2.4WSC WiC ANLI-R163.5 50.0 33.462.1 3.9 51.3 0.6 35.6 0.849.2 7.1 53.9 5.1 32.1 1.950.3 4.4 55.0 5.1 32.9 1.649.6 6.6 55.3 5.2 45.2 4.970.2 3.1 55.9 4.4 52.9 2.059.2 1.5 * 58.4 5.1 49.1 3.7 *ANLI-R233.433.6 0.733.4 1.634.0 1.740.6 2.042.5 1.442.0 1.5ANLI-R333.534.2 0.831.5 1.632.7 1.036.9 3.444.2 1.243.5 3.0CB Emotion50.0 35.257.5 0.8 42.1 0.886.1 6.6 57.6 3.584.3 5.0 71.9 3.277.5 6.1 48.7 3.591.4 3.2 65.4 2.393.9 1.6 72.6 2.5 *Enron Amazon-CF50.9 0.0053.3 0.4 0.04 0.792.2 2.4 40.5 9.995.1 1.2 60.1 3.096.9 0.6 35.7 10.696.5 0.4 24.0 7.595.5 0.5 59.0 8.2 *CR64.288.9 0.484.8 4.390.7 1.793.6 3.593.7 0.292.5 1.1SST-526.338.9 1.042.1 3.449.2 0.947.2 3.951.5 1.148.6 2.5Average ↑41.147.3 1.055.0 4.459.0 2.657.7 4.464.2 2.466.3 2.5", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "1.8 80.4 1.5 82.5 2.4 71.3 7.0 79.3 3.5 90.1 1.8 WSC 50.3 4.4 61.2 3.3 70.2 3.1 52.9 3.1 58.3 4.6 73.1 5.6 WiC 55.0 5.1 59.4 1.5 55.9 4.4 55.1 2.9 59.7 4.9 * Amazon-CF 60.1 3.0 0.02 3.0 24.0 7.5 55.0 11.0 59.4 6.8 * Results and computational costs using different upstream models, 32 samples per class. All results are computed without Monte-Carlo approx. Datasets that appear in an upstream model's training are greyed out. WiC and WSC were excluded from all averages.", "figure_data": "T-FewAuT-FewSetFit BART0T0BART0T0Flan-T5# Param.330M400M3B400M3B3B# Tr. Param. 330M0.1M0.3M1.9M10.5M10.5MInf. FLOPs2.5e10 1.9e10 1.8e11 1.9e101.8e111.8e11Tr. 
FLOPs8.5e14 4.1e14 3.9e15 3.6e152.7e162.7e16RTE51.4 67.6 2.5ANLI-R132.9 1.6 34.7 0.7 52.9 2.0 33.4 3.1 47.8 3.5*67.1 3.5ANLI-R234.0 1.7 34.7 1.0 42.5 1.4 36.1 1.742.1 1.153.3 2.6ANLI-R332.7 1.6 36.9 1.3 44.2 1.2 36.2 1.142.1 2.952.1 2.8CB81.3 5.0 78.6 7.3 91.4 3.2 85.7 4.693.6 1.691.0 1.3Emotion71.9 3.2 42.0 3.3 65.4 2.3 63.9 6.5 72.1 2.6*74.3 1.8Enron95.1 1.2 54.3 1.6 96.5 0.4 92.8 1.895.6 1.896.1 0.762.7 7.5CR90.7 1.7 91.7 0.8 93.7 0.2 90.6 0.892.0 1.593.2 0.3SST-549.2 0.9 42.4 0.3 51.5 1.1 47.4 3.947.7 1.348.6 7.2Average ↑59.9 2.2 49.8 2.0 64.5 2.2 61.2 4.167.1 3.0-", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on the RAFT benchmark as of October 19 2023. Avg. Rank is reported across the shown models. Our method with handcrafted prompts achieves rank-1 with the overall highest average score while AuT-Few has the best average rank and highest win rate.", "figure_data": "RankMethodAvg. Score ↑ Avg. Rank ↓-AuT-Few (H)77.32.82-AuT-Few74.72.451yiwise76.82.552T-Few75.82.8212SetFit71.34.275Human baseline73.5-", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation for AuT-Few with 32 samples per class: randomized indicates randomly selected templates, entire Coll. considers all PromptSource prompts.", "figure_data": "SetupAvg. ScoreAuT-Few66.3 2.5Templatew/o retrieved template (randomized) w/ entire Collection65.7 2.9 65.6 2.9only dataset65.5 2.7Choicesonly template-tailored63.3 3.4only topic-specific62.2 4.3w/o Monte-Carlo approximation65.8 3.0Improv.only LoRA63.4 3.4only (IA) 365.2 2.5and tasks, and the best average rank across datasetson the RAFT benchmark. Machine learning, es-pecially few-shot learning, is about automation.Although T-Few takes less computation, it requireshand-designed prompts which involves significanthuman intervention and expertise. Human-time isprofoundly more valuable than computer time, andAuT-Few saves this valuable human time while stillretaining computational tractability. Future workincludes the identification of causes for the obser-vations made in section 3, particularly for datasetsthat are completely unaffected by the prompt's de-sign (e.g Enronspam and CR).", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Table 1 for different sample sizes are shown in Table 8. Results on the held out test set. Excluding WiC and WSC as these wereeen in T0pp's pre-training.", "figure_data": "C.1 Results Blind Test SetDatasetT-Few AuT-FewRTE87.282.1ANLI-R160.754.6ANLI-R252.149.1ANLI-R351.951.8CB93.696.0Emotion62.171.7Enron97.097.6Amazon-CF50.262.6CR93.792.8SST-556.655.1Avg70.571.3", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Generated answer choices, when using T0 and 32 samples for seed 0.", "figure_data": "SystemAdeBanking Neurips One Stop Overruling Org Types Review Tai Safety ToSEval Hate ComplaintsAuT-Few (H) 0.8370.6470.780.8470.9420.9170.6870.703 0.7280.5170.892AuT-Few0.8460.5870.8980.770.9630.8010.620.742 0.7380.3500.901yiwise0.8560.6950.8390.6980.9440.9060.4930.737 0.7490.6470.883T-Few0.8040.6950.8330.6760.950.9150.5080.7360.750.5860.879SetFit0.7990.6320.8590.760.930.7690.5030.664 0.6040.4870.831", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Results on RAFT.", "figure_data": "A)", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Results with T0 upstream model. 
(H): Handcrafted, (A w/o D): Automated Prompts without dataset label candidates, (A): Automated Prompts.", "figure_data": "", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" } ]
Rami Aly; Xingjian Shi; Kaixiang Lin; Aston Zhang; Andrew Gordon Wilson
[ { "authors": "Neel Alex; Eli Lifland; Lewis Tunstall; Abhishek Thakur; Pegah Maham; C Jess Riedel; Emmie Hine; Carolyn Ashurst; Paul Sedille; Alexis Carlier; Michael Noetel; Andreas Stuhlmüller", "journal": "", "ref_id": "b0", "title": "RAFT: A real-world few-shot text classification benchmark", "year": "2021" }, { "authors": "Akari Asai; Mohammadreza Salehi; Matthew E Peters; Hannaneh Hajishirzi", "journal": "", "ref_id": "b1", "title": "Parameter-efficient multi-task tuning via attentional mixtures of soft prompts", "year": "2022" }, { "authors": "Carolyn Ashurst; Emmie Hine; Paul Sedille; Alexis Carlier", "journal": "", "ref_id": "b2", "title": "Ai ethics statements: analysis and lessons learnt from neurips broader impact statements", "year": "2022" }, { "authors": "Stephen Bach; Victor Sanh; Zheng Xin Yong; Albert Webson; Colin Raffel; V Nihal; Abheesht Nayak; Taewoon Sharma; M Kim; Thibault Saiful Bari; Zaid Fevry; Manan Alyafeai; Andrea Dey; Zhiqing Santilli; Srulik Sun; Canwen Ben-David; Gunjan Xu; Han Chhablani; Jason Wang; Maged Fries; Shanya Alshaibani; Urmish Sharma; Khalid Thakker; Xiangru Almubarak; Dragomir Tang; Mike Radev; Tian-Jian; Alexander Jiang; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Prompt-Source: An integrated development environment and repository for natural language prompts", "year": "2022" }, { "authors": "Bari Saiful; Aston Zhang; Shuai Zheng; Xingjian Shi; Yi Zhu; Shafiq Joty; Mu Li", "journal": "", "ref_id": "b4", "title": "Spt: Semiparametric prompt tuning for multitask prompted learning", "year": "2022" }, { "authors": "Cristina Valerio Basile; Elisabetta Bosco; Debora Fersini; Viviana Nozza; Francisco Patti; Manuel Rangel; Paolo Pardo; Manuela Rosso; Sanguinetti", "journal": "", "ref_id": "b5", "title": "Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter", "year": "2019" }, { "authors": "Steven Bird; Edward Loper", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "NLTK: The natural language toolkit", "year": "2004" }, { "authors": "Iñigo Casanueva; Tadas Temčinas; Daniela Gerz; Matthew Henderson; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Efficient intent detection with dual sentence encoders", "year": "2020" }, { "authors": "Jiaao Chen; Aston Zhang; Xingjian Shi; Mu Li; Alex Smola; Diyi Yang", "journal": "", "ref_id": "b8", "title": "Parameter-efficient fine-tuning design spaces", "year": "2023" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b9", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Alexis Conneau; Douwe Kiela", "journal": "European Language Resources Association (ELRA", "ref_id": "b10", "title": "SentEval: An evaluation toolkit for universal sentence representations", "year": "2018" }, { "authors": "Ganqu Cui; Shengding Hu; Ning Ding; Longtao Huang; Zhiyuan Liu", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Prototypical verbalizer for prompt-based few-shot tuning", "year": "2022" }, { "authors": "Ido Dagan; Oren Glickman; Bernardo Magnini", "journal": "", "ref_id": "b12", "title": "The pascal recognising textual entailment challenge", "year": "2005" }, { "authors": "Marie-Catherine De Marneffe; Mandy Simons; Judith Tonhauser", "journal": "Proceedings of 
Sinn und Bedeutung", "ref_id": "b13", "title": "The commitmentbank: Investigating projection in naturally occurring discourse", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Tianyu Gao; Adam Fisch; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Making pre-trained language models better few-shot learners", "year": "2021" }, { "authors": "H Gurulingappa; A M Rajput; A Roberts; J Fluck; M Hofmann-Apitius; L Toldo", "journal": "Journal of Biomedical Informatics", "ref_id": "b16", "title": "Development of a Benchmark Corpus to Support the Automatic Extraction of Drug-related Adverse Effects from Medical Case Reports", "year": "2012" }, { "authors": "Karen Hambardzumyan; Hrant Khachatrian; Jonathan May", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "WARP: Word-level Adversarial ReProgramming", "year": "2021" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; ; Chen", "journal": "", "ref_id": "b18", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Shengding Hu; Ning Ding; Huadong Wang; Zhiyuan Liu; Jingang Wang; Juanzi Li; Wei Wu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification", "year": "2022" }, { "authors": "Jared Kaplan; Sam Mccandlish; Tom Henighan; Tom B Brown; Benjamin Chess; Rewon Child; Scott Gray; Alec Radford; Jeffrey Wu; Dario Amodei", "journal": "", "ref_id": "b20", "title": "Scaling laws for neural language models", "year": "2020" }, { "authors": "Rabeeh Karimi Mahabadi; James Henderson; Sebastian Ruder", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Compacter: Efficient low-rank hypercomplex adapter layers", "year": "2021" }, { "authors": "Rabeeh Karimi Mahabadi; Luke Zettlemoyer; James Henderson; Lambert Mathias; Marzieh Saeidi; Veselin Stoyanov; Majid Yazdani", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Promptfree and efficient few-shot learning with language models", "year": "2022" }, { "authors": "Hector J Levesque; Ernest Davis; Leora Morgenstern", "journal": "AAAI Press", "ref_id": "b23", "title": "The Winograd Schema Challenge", "year": "2012" }, { "authors": "Kangmin Bill Yuchen Lin; Chris Tan; Beiwen Miller; Xiang Tian; Ren", "journal": "", "ref_id": "b24", "title": "Unsupervised crosstask generalization via retrieval augmentation", "year": "2022" }, { "authors": "Marco Lippi; Przemysław Pałka; Giuseppe Contissa; Francesca Lagioia; Hans-Wolfgang Micklitz; Giovanni Sartor; Paolo Torroni", "journal": "Artificial Intelligence and Law", "ref_id": "b25", "title": "Claudette: an automated detector of potentially unfair clauses in online terms of service", "year": "2019" }, { "authors": "Haokun Liu; Derek Tam; Mohammed Muqeeth; Jay Mohta; Tenghao Huang; Mohit Bansal; Colin Raffel", "journal": "", "ref_id": "b26", "title": "Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning", "year": "2022" }, { "authors": "Xiao Liu; Yanan Zheng; Zhengxiao Du; Ming Ding; Yujie Qian; Zhilin Yang; Jie Tang", 
"journal": "", "ref_id": "b27", "title": "Gpt understands", "year": "2021" }, { "authors": "Ruotian Ma; Xin Zhou; Tao Gui; Yiding Tan; Linyang Li; Qi Zhang; Xuanjing Huang", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Templatefree prompt tuning for few-shot NER", "year": "2022" }, { "authors": "Vangelis Metsis; Ion Androutsopoulos; Georgios Paliouras", "journal": "", "ref_id": "b29", "title": "Spam filtering with naive bayes -which naive bayes", "year": "2006" }, { "authors": "Yixin Nie; Adina Williams; Emily Dinan; Mohit Bansal; Jason Weston; Douwe Kiela", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Adversarial NLI: A new benchmark for natural language understanding", "year": "2020" }, { "authors": "O' James; Polina Neill; Ryuichi Rozenshtein; Motoko Kiryo; Danushka Kubota; Bollegala", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "I wish I would have loved this one, but I didn't -a multilingual dataset for counterfactual detection in product review", "year": "2021" }, { "authors": "Mohammad Taher; Pilehvar ; Jose Camacho-Collados", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "WiC: the word-in-context dataset for evaluating context-sensitive meaning representations", "year": "2019" }, { "authors": "Daniel Preotiuc-Pietro; Mihaela Gaman; Nikolaos Aletras", "journal": "", "ref_id": "b33", "title": "Automatically identifying complaints in social media", "year": "2019" }, { "authors": "Guanghui Qin; Jason Eisner", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Learning how to ask: Querying LMs with mixtures of soft prompts", "year": "2021" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "year": "2019" }, { "authors": "Jess Riedel; Angelica Deibel", "journal": "Online", "ref_id": "b36", "title": "Tai safety bibliographic database", "year": "2020" }, { "authors": "Peter Alexander K Saeri; Joannie Slattery; Thomas Lee; Neil Houlden; Romy L Farr; Jake Gelber; Lee Stone; Shane Huuskes; Kai Timmons; Windle", "journal": "VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations", "ref_id": "b37", "title": "What works to increase charitable donations? 
a metareview with meta-meta-analysis", "year": "2022" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; Nihal Nayak; Debajyoti Datta; Jonathan Chang; Mike Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Fevry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b38", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022" }, { "authors": "Elvis Saravia; Hsien-Chi Toby Liu; Yen-Hao Huang; Junlin Wu; Yi-Shin Chen", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "CARER: Contextualized affect representations for emotion recognition", "year": "2018" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "It's not just size that matters: Small language models are also fewshot learners", "year": "2021" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b41", "title": "True Few-Shot Learning with Prompts-A Real-World Perspective", "year": "2022" }, { "authors": "Taylor Shin; Yasaman Razeghi; Robert L Logan; I V ; Eric Wallace; Sameer Singh", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts", "year": "2020" }, { "authors": "Richard Socher; Alex Perelygin; Jean Wu; Jason Chuang; Christopher D Manning; Andrew Ng; Christopher Potts", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "year": "2013" }, { "authors": "Lewis Tunstall; Nils Reimers; Unso Eun; Seo Jo; Luke Bates; Daniel Korat; Moshe Wasserblat; Oren Pereg", "journal": "", "ref_id": "b44", "title": "Efficient few-shot learning without prompts", "year": "2022" }, { "authors": "Sowmya Vajjala; Ivana Lučić", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "On-eStopEnglish corpus: A new corpus for automatic readability assessment and text simplification", "year": "2018" }, { "authors": "Tu Vu; Brian Lester; Noah Constant; Rami Al-Rfou; ' ; Daniel Cer", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "SPoT: Better frozen model adaptation through soft prompt transfer", "year": "2022" }, { "authors": "Han Wang; Canwen Xu; Julian Mcauley; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Automatic multi-label prompting: Simple and interpretable few-shot classification", "year": "2022" }, { "authors": "Yizhong Wang; Swaroop Mishra; Pegah Alipoormolabashi; Yeganeh Kordi; Amirreza Mirzaei; Anjana Arunkumar; Arjun Ashok; Arut Selvan Dhanasekaran; Atharva Naik; David Stap", "journal": "", "ref_id": "b48", "title": "Benchmarking generalization via in-context instructions on 1,600+ language tasks", "year": "2022" }, { "authors": "Albert Webson; Ellie Pavlick", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "Do promptbased 
models really understand the meaning of their prompts", "year": "2022" }, { "authors": "Jason Wei; Maarten Bosma; Vincent Zhao; Kelvin Guu; Adams Wei Yu; Brian Lester; Nan Du; Andrew M Dai; Quoc V Le", "journal": "", "ref_id": "b50", "title": "a. Finetuned language models are zero-shot learners", "year": "2022" }, { "authors": "Jason Wei; Yi Tay; Rishi Bommasani; Colin Raffel; Barret Zoph; Sebastian Borgeaud; Dani Yogatama; Maarten Bosma; Denny Zhou; Donald Metzler; Ed H Chi; Tatsunori Hashimoto; Oriol Vinyals; Percy Liang; Jeff Dean; William Fedus", "journal": "Transactions on Machine Learning Research", "ref_id": "b51", "title": "Emergent abilities of large language models", "year": "2022" }, { "authors": "Seonghyeon Ye; Joel Jang; Doyoung Kim; Yongrae Jo; Minjoon Seo", "journal": "", "ref_id": "b52", "title": "Retrieval of soft prompt enhances zero-shot generalization", "year": "2022" }, { "authors": "Aston Zhang; Yi Tay; Shuai Zhang; Alvin Chan; Anh Tuan Luu; Siu Hui; Jie Fu", "journal": "", "ref_id": "b53", "title": "Beyond fully-connected layers with quaternions: Parameterization of hypercomplex multiplications with 1/n parameters", "year": "2021" }, { "authors": "Lucia Zheng; Neel Guha; Brandon R Anderson; Peter Henderson; Daniel E Ho", "journal": "Association for Computing Machinery", "ref_id": "b54", "title": "When does pretraining help? assessing self-supervised learning for law and the casehold dataset of 53,000+ legal holdings", "year": "2021" }, { "authors": "Yongchao Zhou; Andrei Ioan Muresanu; Ziwen Han; Keiran Paster; Silviu Pitis; Harris Chan; Jimmy Ba", "journal": "", "ref_id": "b55", "title": "Large language models are human-level prompt engineers", "year": "2023" }, { "authors": "", "journal": "A) RTE", "ref_id": "b56", "title": "Majority Zero-shot Finetune SetFit T-Few AuT-Few (H) AuT-Few (w/o D) AuT-Few", "year": "0480" } ]
[ { "formula_coordinates": [ 3, 76.38, 335.22, 207.24, 29.1 ], "formula_id": "formula_0", "formula_text": "argmax y ( 1 T t log p θ (ψ j (y) | x, ϕ j , ψ j (y) <t )," }, { "formula_coordinates": [ 5, 335.74, 658, 159.06, 24.52 ], "formula_id": "formula_1", "formula_text": "L c = x∈D c train ϕ∈Φ A log p θ (v | x, ϕ)," }, { "formula_coordinates": [ 5, 307.34, 760.87, 58.47, 15.19 ], "formula_id": "formula_2", "formula_text": "1 |C| c∈C L c ." }, { "formula_coordinates": [ 6, 105.96, 512.6, 183.17, 15.46 ], "formula_id": "formula_3", "formula_text": "y ϕ A ∈Φ A x∈Dtest ( 1 T log p θ (ψ p (y) |" }, { "formula_coordinates": [ 14, 95.76, 229.11, 169.23, 50.52 ], "formula_id": "formula_4", "formula_text": "ŷ = argmax y E Φ,Ψ [p θ (y i | x, Φ, Ψ)] = argmax y R r=1 w r p θ (y i | x, ϕ r , ψ A )," } ]
2023-10-29
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b7", "b7", "b54", "b45", "b52" ], "table_ref": [], "text": "Recently, denoising diffusion models have emerged as a promising approach for human motion generation [12,65,67] outperforming other alternatives such as GAN or VAE in terms of both quality and diversity [8,54,61]. Several studies have focused on generating motion based on expressive text prompts [8,54], or music [55,67]. The stateof-the-art motion generation methods, such as MDM [54], utilize classifier-free guidance to generate motion conditioned on text prompts. However, incorporating spatial constraints into diffusion models remains underexplored. Human motions consist of both semantic and spatial information, where the semantic aspect can be described using natural languages or action labels and the spatial aspect governs physical interaction with surroundings. To generate realistic human motion in a 3D environment, both aspects must be incorporated. Our experiments show that simply adding spatial constraint guidance, such as global trajectories, into the state-of-the-art models or using imputation and in-painting approaches do not yield satisfactory results.\nWe identify two main issues that make the motion diffusion models likely to ignore the guidance when conditioned on spatial objectives: the sparseness of global orientation in the motion representation and sparse frame-wise guidance signals. By design, the diffusion models are a denoising model that consecutively denoises the target output over multiple steps. With sparse guidance, a small portion of the output that receives guidance will be inconsistent with all other parts that do not, therefore, are more likely to be treated as noise and discarded in subsequent steps. First, the sparseness within a frame is a result of common motion representations that separate local pose information, like joint rotations, from global orientations, such as pelvis translations and rotations [46], usually with more focus on local poses. For instance, the common motion representation [15] uses 4 values to represent global orientation and 259 values for local pose in each frame. Such imbalance can cause the model to focus excessively on local pose information, and consequently, perceive guided global orientation as noise, resulting in a discrepancy such as foot skating.\nSecond, in many applications such as character animation, gaming, and virtual reality, the spatial control signals are defined on only a few keyframes such as target locations on the ground. We show that the current diffusion-based motion generation models struggle to follow such sparse guidance as doing so is equivalent to guiding an image diffusion model with only a few pixels. As a result, either the guidance at the provided keyframes will be ignored during the denoising process or the output motion will contain an artifact where the character warps to satisfy the guidance only in those specific keyframes.\nTo effectively incorporate sparse spatial constraints into the motion generation process, we propose GMD, a novel and principled Guided Motion Diffusion model. To alleviate the discrepancy between local pose and global orientation in the guided denoising steps, we introduce emphasis projection, a general representation manipulation method that we use to increase the importance of spatial information during training. 
Additionally, we derive a new imputation and inpainting formulation that enables the existing inpainting techniques to operate in the projected space, which we leverage to generate significantly more coherent motion under guidance by spatial conditions. Then, to address the highly sparse guidance, we draw inspiration from the credit assignment problem in Reinforcement Learning [53,57], where sparse rewards can be distributed along a trajectory to allow for efficient learning [3]. Our key insight is that motion denoisers, including the diffusion model itself, can be used to expand the spatial guidance signal at a specific location to its neighboring locations without any additional model. By turning a sparse signal into a dense one by backpropagating through a denoiser, it enables us to achieve high-quality controllable motion synthesis, even with extremely sparse guidance signals.\nIn summary, our contributions are: (1) Emphasis projection, a method to adjust relative importance between different parts of the representation vector, which we use to encourage coherency between spatial information and local poses to allow spatial guidance. (2) Dense signal propagation, a conditioning method to tackle the sparse guidance problem. (3) GMD, an effective spatially controllable motion generation method that enables the unexplored synthesizing of motions based on free-text and spatial conditioning by integrating the above contributions into our proposed Unet-based architecture. We provide extensive analysis to support our design decisions and show the versatility of GMD on three tasks: trajectory conditioning, keyframe conditioning, and obstacle avoidance. Additionally, GMD's model also significantly outperforms the state-of-the-art in traditional text-to-motion tasks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b20", "b49", "b51", "b12", "b48", "b41", "b19", "b22", "b40", "b57", "b23", "b44", "b46", "b47", "b3", "b34", "b27", "b42", "b7", "b46", "b10", "b34", "b10", "b12", "b21", "b44", "b46", "b47", "b13", "b17", "b24", "b55", "b62", "b8", "b45", "b28", "b29", "b31", "b0", "b15", "b58", "b38", "b36", "b59", "b65", "b11", "b2", "b7", "b61", "b1", "b54" ], "table_ref": [], "text": "Diffusion-based probabilistic generative models (DPM). DPMs [21,50,51,52] have gained significant attention in recent years due to their impressive performance across multiple fields of research. They have been used for tasks such as image generation [13], image super-resolution [31,49], speech synthesis [27,27,42], video generation [20,23], 3D shape generation [41,58], and reinforcement learning [24].\nThe surge in interest in DPMs may be attributed to their impressive controllable generation capabilities, including text-conditioned generation [45,47,48] and image editing [4,6,10,19,35]. Latent diffusion models (LDM) are an-other area of interest, which includes representation learning [28,43] and more efficient modeling techniques [8,47].\nMoreover, DPMs exhibit a high degree of versatility in terms of conditioning. There are various methods for conditional generation, such as imputation and inpainting [10,11,35], classifier guidance [11,13], and classifier-free guidance [22,45,47,48]. Inpainting and classifier guidance can be applied to any pretrained DPM, which extends the model's capabilities further without the need for retraining.\nHuman motion generation. The goal of the human motion generation task is to generate motions based on the conditioning signals. 
Various conditions have been explored such as partial poses [14,18,54], trajectories [25,56,63], images [9,46], music [29,30,32], text [1,15,16,26,40], objects [59], action labels [17,39], or unconditioned [37,60,64,66]. Recently, many diffusion-based motion generation models have been proposed [12,26,33,65, 67] and demonstrate better quality compared to alternative models such as GAN or VAE. Employing the CLIP model [44], these models showed great improvements in the challenging text-tomotion generation task [8,61,62] as well as allowing conditioning on partial motions [54] or music [2,55]. However, they do not support conditioning signals that are not specifically trained, for example, following keyframe locations or avoiding obstacles. Maintaining the capabilities of the diffusion models, we propose methods to enable spatial guidance without retraining the model for each new objective." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Diffusion-based generative models", "publication_ref": [ "b20" ], "table_ref": [], "text": "Diffusion-based probabilistic generative models (DPMs) are a family of generative models that learn a sequential denoising process of an input x t with varying noise levels t. The noising process of DPM is defined cumulatively as q(x t |x 0 ) = N ( √ α t x 0 , (1 -α t )I), where x 0 is the clean input, α t = t s=1 (1 -β s ), and β t is a noise scheduler. The denoising model p θ (x t-1 |x t ) with parameters θ learns to reverse the noising process by modeling the Gaussian posterior distribution q(x t-1 |x t , x 0 ). DPMs can map a prior distribution N (0, I) to any distribution p(x) after T successive denoising steps.\nTo draw samples from a DPM, we start from a sample x T from the prior distribution N (0, I). Then, for each t, we sample x t-1 ∼ N (µ t , Σ t ) until t = 0, where\nµ t = √ α t-1 β t 1 -α t x 0 + √ 1 -β t (1 -α t-1 ) 1 -α t x t(1)\nand Σ t is a variance scheduler of choice, usually Σ t = 1-αt-1 1-αt β t [21]. x 0 in Eq. 1 is the prediction from a denoising model. For an ϵ θ model,\nx 0 = 1 √ αt x t + √ 1-αt √ αt ϵ θ (x t ).\nThere are multiple choices for the denoising model to predict including the clean input x 0 , the noise ϵ, and the one-step denoised target µ t . An x 0,θ model is trained using the squared loss to the clean input ∥x 0,θ (x t ) -x 0 ∥\n2 , an ϵ θ model is trained using the squared loss ∥ϵ θ (x t ) -ϵ∥ 2 , and µ t,θ model is trained using the squared loss ∥µ t,θ (x t ) -µ t ∥ 2 ." }, { "figure_ref": [], "heading": "Controllable generation with diffusion models.", "publication_ref": [ "b10", "b20" ], "table_ref": [], "text": "Classifier-free guidance. \nµ t = µ ′ t + sΣ t ∇ xt log p(d|x t )(2)\nwhere µ ′ t is the original mean, s controls the conditioning strength, and Σ t is a variance scheduler which can be the same as in Eq. 1. Since Σ t is a decreasing sequence, the guidance signal diminishes as t → 0 which corresponds to the characteristic of DPMs that tend to modify x t less and less as time goes. Classifier guidance is a post-hoc method, i.e., there is no change to the DPM model, one only needs to come up with p(d|x t ) which is extremely flexible.\nImputation and inpainting. To generate human motion sequences from partial observations, such as global motion trajectories or keyframe locations, inpainting is used. These partial observations, called imputing signals, are used to adjust the generative process towards the observations. 
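Before detailing how imputation interacts with the denoising loop below, the guided sampling step in Eqs. 1 and 2 can be summarized in a short sketch. This is an illustrative NumPy implementation under standard DDPM assumptions; the function and argument names (guided_ddpm_step, grad_log_p) are ours and not taken from any released code.

    import numpy as np

    def guided_ddpm_step(x_t, x0_pred, grad_log_p, t, alphas, alpha_bars, betas, s=1.0, rng=None):
        # One reverse step x_t -> x_{t-1} with classifier guidance (Eqs. 1-2).
        # x_t: current noisy sample; x0_pred: denoiser estimate of x_0 at step t;
        # grad_log_p: gradient of log p(d | x_t) w.r.t. x_t; s: guidance strength.
        rng = np.random.default_rng() if rng is None else rng
        a_bar_t = alpha_bars[t]
        a_bar_prev = alpha_bars[t - 1] if t > 0 else 1.0
        beta_t = betas[t]

        # Posterior mean of q(x_{t-1} | x_t, x_0) (Eq. 1); note sqrt(1 - beta_t) = sqrt(alpha_t).
        coef_x0 = np.sqrt(a_bar_prev) * beta_t / (1.0 - a_bar_t)
        coef_xt = np.sqrt(alphas[t]) * (1.0 - a_bar_prev) / (1.0 - a_bar_t)
        mu = coef_x0 * x0_pred + coef_xt * x_t

        # Common variance choice Sigma_t = (1 - a_bar_{t-1}) / (1 - a_bar_t) * beta_t.
        sigma_t = (1.0 - a_bar_prev) / (1.0 - a_bar_t) * beta_t

        # Classifier guidance: shift the mean along the score of the condition (Eq. 2).
        mu = mu + s * sigma_t * grad_log_p

        if t == 0:
            return mu
        return mu + np.sqrt(sigma_t) * rng.standard_normal(x_t.shape)

The same Σ t that scales the sampling noise also scales the guidance term, which is why the guidance naturally fades as t approaches 0.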
Imputation and inpainting are two sides of the same coin.\nLet y be a partial target value in an input x that we want to impute. The imputation region of y on x is denoted by M x y , and a projection P x y that resizes y to that of x by filling in zeros. In DPMs, imputation can be done on the sample x t-1 after every denoising step [11]. We have the new imputed sample xt-1 as\nxt-1 = (1 -M x y ) ⊙ x t-1 + M x y ⊙ P x y y t-1(3)\nwhere ⊙ is a Hadamard product and y t-1 is a noised target value. y t-1 ∼ N ( √ α t-1 y, (1 -α t-1 )I) following Ho et al. [21] is one of the simplest choices of y t-1 . Note that all three modes of conditioning presented here are not mutually exclusive. One could apply one or more tricks in a single pipeline." }, { "figure_ref": [ "fig_2" ], "heading": "Guided Motion Diffusion", "publication_ref": [ "b7" ], "table_ref": [], "text": "Algorithm 1 GMD's two-stage guided motion diffusion Require: A trajectory DPM z 0,ϕ , a motion DPM x 0,θ , a goal function G z (•), and keyframe locations y (if any). \nz 0 ← z 0,ϕ (z t ) 5: µ, Σ ← µ(z 0 , z t ), Σ t 6:\n# Classifier guidance (Eq. 2)\n7:\n# Dense signal propagation 8:\nz t-1 ∼ N (µ -sΣ∇ zt G z (z 0 ), Σ) 9:\n# Impute y on z (Eq. 3) (if any) 10: x proj 0 ← x proj 0,θ (x proj t ) # Emphasis projection 17:\nz t-1 ← (1 -M z y ) ⊙ z t-1 + M z y y t-\n# Impute y on x proj (Eq. 6)\n18:\nxproj 0 ← A (1 -M ) ⊙ A -1 x proj 0 + M P x z y 19: µ, Σ ← µ(x proj 0 , x proj t ), Σ t 20:\n# Masked classifier guidance (Eq. 9)\n21:\n# Dense signal propagation\n22: ∆ ← -sΣA -1 ∇ x proj t G z P z x A -1 x proj 0 23: µ ← µ + A(1 -M ) ⊙ ∆ 24:\nx t-1 ∼ N (µ, Σ) 25: end for 26: return z 0\nWe aim to generate realistic human motions that can be guided by spatial constraints, enabling the generated human motion to achieve specific goals, such as following a global trajectory, reaching certain locations, or avoiding obstacles. Although diffusion-based models have significantly improved text-to-motion modeling [8,54], generating motions that achieve specific goals is still beyond the reach of the current models. Our work addresses this limitation and advances the state-of-the-art in human motion modeling.\nWe are interested in modeling a full-body human motion that satisfies a certain scalar goal function G x (•) that takes a motion representation x and measures how far the motion x is from the goal (lower is better). More specifically, x ∈ R N ×M represents a sequence of human poses for M motion steps, where N is the dimension of human pose representations, e.g., N = 263 in the HumanML3D [15] dataset. Let X be the random variable associated with x.\nOur goal is to model the following conditional probability using a motion DPM\np x|G x (X) = 0(4)\nThis can be extended to p x|G x (X) = 0, d , where d is any additional signal, such as text prompts. From now on, we omit d to reduce clutter. Many challenging tasks in motion modeling can be encapsulated within a goal function G z that only depends on the trajectory z of the human motion, not the whole motion x. Let us define z ∈ R L×M to be the trajectory part of x with length M and L = 2 describing the ground location of the pelvis of a human body. A particular location z (i) at motion step i describes the pelvis location of the human body on the ground plane. We define a projection P z\nx that resizes x to match z by taking only the z part, and its reverse P x z that resizes z to match x by filling in zeros. With this, our conditional probability becomes p x|G z (P z\nx X) = 0 . 
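To make Eq. 3 and the imputation lines of Algorithm 1 concrete, the sketch below overwrites the observed region of the sample after every denoising step. It is an illustrative NumPy snippet; the name impute_step is ours, and the target is noised with the simple choice y t-1 ∼ N (√α t-1 y, (1 -α t-1 )I) mentioned above.

    import numpy as np

    def impute_step(x_tm1, y_full, mask, a_bar_prev, rng=None):
        # Eq. 3: keep the model sample outside the observed region and a noised copy
        # of the target inside it.
        # x_tm1: sample after one denoising step, shape (N, M)
        # y_full: partial target already zero-padded to the shape of x (P_y^x y)
        # mask: binary region M_y^x (1 where y is observed, 0 elsewhere)
        # a_bar_prev: cumulative noise level alpha_bar_{t-1}
        rng = np.random.default_rng() if rng is None else rng
        y_tm1 = np.sqrt(a_bar_prev) * y_full + np.sqrt(1.0 - a_bar_prev) * rng.standard_normal(y_full.shape)
        return (1.0 - mask) * x_tm1 + mask * y_tm1

The same masking pattern is reused later on the prediction x 0 (Eq. 5), except that there the target is imputed without added noise.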
In this work, we will show how text-to-motion DPMs can be extended to solve several challenging tasks, including trajectory-conditioned motion generation, locationconditioned trajectory planning, and obstacle avoidance trajectory planning. Using our proposed Emphasis projection and dense signal propagation, we alleviate the sparse guidance problem and enable motion generation based on spatial conditions. The overview of our methods is shown in Fig. 3." }, { "figure_ref": [], "heading": "Emphasis projection", "publication_ref": [], "table_ref": [], "text": "One of the most straightforward approaches for minimizing the goal function G z (•) is by analyzing what trajectories that minimize z * = arg min z G z (z) look like. For a trajectory conditioning task, a whole trajectory z * is directly given. Our task is to generate the rest of the motion x. With such knowledge, we can employ imputation & inpainting technique by supplying the motion DPM with the x-shaped P x z z * to guide the generation process. Problem 1: Motion incoherence Since the imputing trajectory z * is only a small part of the whole motion x (L ≪ N ), we often observe that the DPM ignores the change from imputation and fails to make appropriate changes on the rest of x. This results in an incoherent local motion that is not aligned or well coordinated with the imputing trajectory." }, { "figure_ref": [], "heading": "Solution 1: Emphasis projection", "publication_ref": [], "table_ref": [], "text": "We tackle this problem by giving more emphasis on the trajectory part of motion x. More specifically, we propose an Emphasis projection method that increases the trajectory's relative importance within motion x. We achieve this by utilizing a random matrix A = A ′ B, where A ′ ∈ R N ×N is a matrix with elements randomly sampled from N (0, 1) and B ∈ R N ×N is a diagonal matrix whose trajectory-related diagonal indexes are c and the rest are 1 for emphasizing those trajectory elements. In our case, we emphasize the rotation and ground location of the pelvis, (rot, x, z), in x by c times. We now have a projected motion x proj = 1 N -3+3c 2 Ax. Note that the fractional term is to maintain the unit variance on x proj . The noising process of the projected motion becomes q(x proj t |x proj 0 ) = N ( √ α t x proj 0 , (1 -α t )I). There is no change on how a DPM that works on the projected motion p θ (x proj t-1 |x proj t ) operates and treats x proj t . In Section 6.3, we show that emphasis projection is an effective way of solving the motion incoherence problem, and is shown to be substantially better than a straightforward approach of retraining a DPM with an increased loss weight on the trajectory.\nImputation on the projected motion x proj . We have discussed imputing on the sample x t-1 in Eq. 3. Here, we introduce an imputation on x 0 which modifies the DPM's belief on the final outcome x 0,θ by imputing it with z. We have found this technique useful in many tasks we are interested in.\nLet us define the imputation region of z on x as M x z . We obtain the imputed x0 from\nx0 = (1 -M x z ) ⊙ x 0,θ + M x z ⊙ P x z z * x shaped(5)\nNow operating on the projected motion x proj , before we can do imputation, we need to unproject it back to the original motion using x 0 = A -1 x proj 0 , and then project the imputed x0 back using xproj 0 = Ax 0 . We obtain the imputed motion under emphasis projection xproj 0 from\nxproj 0 = A (1 -M x z ) ⊙ (A -1 x proj 0,θ ) + M x z ⊙ P x z z * (6)\nSubstituting xproj 0 into Eq. 
1, we obtain the new mean μproj t for sampling\nx proj t-1 ∼ N (μ proj t , Σ t )." }, { "figure_ref": [], "heading": "Dense guidance signal with a learned denoiser", "publication_ref": [ "b12" ], "table_ref": [], "text": "Another way to minimize the goal function G z (•) is by adjusting the sample of each diffusion step x t-1 toward a region with lower G z . This trick is called classifier guidance [13]. The direction of change corresponds to a score function ∇ xt log p G x (X t ) = 0|x t which can be approximated as a direction ∆ x0 = -∇ x0 G z (P z x x 0,θ ) that reduces the goal function. We can guide the generative process by nudging the DPM's prediction as x 0 = x 0,θ + ∆ x0 . While imputation requires the minimizer z * of G z , which might not be easy to obtain or may not be unique, this trick only requires the easier-to-obtain direction of change.\nProblem 2: Sparse guidance signal In the motion domain, conditioning signals can often be sparse. There are two types of sparsity that can occur: sparsity in feature and sparsity in time. Sparsity in feature is when the conditioning signal is a small part of the feature dimension of x. For example, in trajectory-conditioned generation, z may only consist of a sequence of ground locations over time. This type of sparsity can be addressed by emphasis projection, as explained in Section 4.1. Sparsity in time refers to cases where the conditioning signal consists of small segments of a trajectory spread out over time. For instance, in keyframe location conditioning task, only a sparse set of keyframe locations are given. When the conditioning signal-to-noise ratio becomes too small, the conditioning signal may be mistaken as noise and ignored during the denoising process." }, { "figure_ref": [ "fig_5" ], "heading": "Solution 2: Dense signal propagation", "publication_ref": [], "table_ref": [], "text": "To turn a sparse signal into a dense signal, we need domain knowledge. One way to achieve this is by using a denoising function f (x t ) = x 0 , which is trained on a motion dataset to denoise by gathering information from the nearby motion frames. With the ability to relate a single frame to many other frames, the denoising function is capable of expanding a sparse signal into a denser one.\nWe can use backward propagation through the denoising function f to take advantage of this. Therefore, a dense classifier guidance can be obtained as follows:\n∇ xt log p G x (X t ) = 0|x t ≈ -∇ xt G z P z x f (x t ) z shaped(7)\nWhile an external function can be used as f , we observe that the existing DPM model x 0,θ (x t ) itself is a motion denoiser, and thus can be used to turn a sparse signal into a dense signal without the need for an additional model. In practice, this process amounts to computing the gradient of G with respect to x t through x 0,θ (x t ) using autodiff.\nApplying classifier guidance together with imputation.\nWhenever available, we want to utilize signals from both imputation and classifier guidance techniques to help guide the generative process. Imputation is explicit but may encounter sparsity in time, while classifier guidance is indirect but dense. We want to use the direct signal from imputation wherever available (with mask M x z ), and the rest from classifier guidance (with mask 1 -M x z ). Based on Eq. 2, imputation-aware classifier guidance can be written as\nµ t = μt -(1 -M x z ) ⊙ sΣ t ∇ xt G z P z x f (x t )(8)\nwhere μ is an imputed sampling mean. 
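Before extending this to the projected space in the next step, the dense, imputation-aware guidance of Eqs. 7 and 8 can be sketched as follows. This is an illustrative PyTorch snippet; x0_model stands in for the denoiser x 0,θ (or any external denoiser f), goal_fn for a differentiable G z , and all names are ours.

    import torch

    def dense_guided_mean(mu_imputed, x_t, x0_model, goal_fn, traj_rows, mask_x, sigma_t, s=100.0):
        # Eq. 8: shift the (already imputed) sampling mean by gradients that are
        # propagated through the denoiser, so a few constrained frames influence all frames.
        # mu_imputed: sampling mean after imputation, shape (N, M)
        # x_t: current noisy motion, shape (N, M)
        # traj_rows: row indices selecting the trajectory part (the projection P_x^z)
        # mask_x: imputation region M_z^x; guidance applies only where we do not impute
        x_t = x_t.detach().requires_grad_(True)
        x0_hat = x0_model(x_t)                   # Eq. 7: densify the signal via the denoiser
        loss = goal_fn(x0_hat[traj_rows])        # G_z(P_x^z f(x_t)), a scalar (lower is better)
        grad = torch.autograd.grad(loss, x_t)[0]
        return mu_imputed - (1.0 - mask_x) * s * sigma_t * grad

Because the gradient flows through the denoiser, even a single constrained keyframe produces a nonzero update on every motion step of x t .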
By replacing μ with μproj , we get classifier guidance together with imputation that works with emphasis projection as\n∆ µ = -sΣ t A -1 ∇ x proj t G z P z x A -1 f (x t proj )(9)\nµ proj t = μproj t + A(1 -M x z ) ⊙ ∆ µ (10\n)\nProblem 3: DPM's bias hinders the guidance signal A DPM removes noise from an input based on the distribution of the training data it has seen. This could be problematic when it comes to conditional generation because the conditioning signal may be outside of the training distribution. As a result, any changes made to the classifier guidance may be reversed by the DPM in the next time step, due to its inherent bias towards the data, shown in Figure 4." }, { "figure_ref": [ "fig_5" ], "heading": "Solution 3: Epsilon modeling", "publication_ref": [], "table_ref": [], "text": "While it is unlikely to train an unbiased DPM model, there are ways to minimize the influence of model's bias under the guidance signal. Conceptually, the DPM model usually makes less and less change near the final outcome. This is in tandem with the guidance signal that gradually decreases over time due to Σ t (Eq. 2). We investigate the coefficient\n√ αt-1βt 1-αt\nof x 0 in the sampling mean µ t (Eq. 1). This coefficient reaches its maximum value at t = 0, meaning that an x 0,θ model could have a significant impact on the sampling mean even at t = 0, which contradicts the weak guidance signal at that time.\nOn the other hand, an ϵ θ model will have the most influence on the sampling mean at t = T , which aligns with our intuition. In Section 6.4 and Figure 4, we demonstrate that modeling ϵ θ instead of x 0,θ is a successful approach for managing the bias effect of the DPM model in classifier guidance. We further discuss this point in Supplementary." }, { "figure_ref": [], "heading": "Applications", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Trajectory-conditioned generation", "publication_ref": [], "table_ref": [], "text": "This task aims at generating a realistic motion x that matches a given trajectory z. Our objective is to minimize the distance between the generated motion and the given trajectory, which we define as\nG x (x) := z -P z x x z part of x p(11)\nDespite the apparent simplicity of this task, a traditional DPM faces the challenge of ensuring coherence in the generated motion. However, our emphasis projection method can effectively address this problem." }, { "figure_ref": [ "fig_0" ], "heading": "Keyframe-conditioned generation", "publication_ref": [], "table_ref": [], "text": "The locations of ground positions at specific times can be used to define locations that we wish the generated motion to reach. This task is a generalized version of the trajectoryconditioned generation where only a partial and potentially sparse trajectory is given. Let y ∈ R 2×M be a trajectory describing keyframe locations and a mask M z y describe the key motion steps. Our goal function of a motion x is\nG x (x) := i M z y (P z x x -y) p(12)\nConsequently, G z (z) = i M z y (z -y) p . Due to the partial trajectory y, the imputation region of y on x becomes M x y = P x z M z y . Two-stage guided motion generation. Generating both the trajectory and motion simultaneously under a conditioning signal can be challenging and may result in lower quality motion. To address this issue, we propose a two-step approach. First, we generate a trajectory z that satisfies the keyframe locations and then generate the motion x given the trajectory (following Section 5.1). 
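Both goals above reduce to masked distances on the trajectory part of the motion. The sketch below is an illustrative PyTorch version of Eqs. 11 and 12 with the p = 1 norm we use in practice; the function names and the traj_rows argument are ours.

    import torch

    def trajectory_goal(x, z_target, traj_rows, p=1):
        # Eq. 11: distance between the generated trajectory P_x^z x and a full target trajectory.
        z_gen = x[traj_rows]                       # shape (L, M)
        return torch.linalg.vector_norm(z_gen - z_target, ord=p)

    def keyframe_goal(x, y, keyframe_mask, traj_rows, p=1):
        # Eq. 12: the same distance, evaluated only at the key motion steps given by M_y^z.
        z_gen = x[traj_rows]
        return torch.linalg.vector_norm(keyframe_mask * (z_gen - y), ord=p)

Either goal can be plugged into the guidance step above; the keyframe variant is the one that benefits most from dense signal propagation, since its direct gradient is nonzero only at a few motion steps.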
Our overall pipeline is depicted in Figure 2 (a). We offer two options for generating the trajectory from keyframe locations y: a point-topoint trajectory and a trajectory DPM.\nThe point-to-point trajectory connects consecutive keyframe locations with a straight line. These unrealistic trajectories can be used as imputation signals for the motion DPM during the early phase (t ≥ τ ). If τ is large enough, the DPM will adjust the given trajectory to a higher quality one. However, if τ is too large, the DPM may generate a motion that does not perform well on G z .\nThe trajectory DPM p ϕ (z t-1 |z t ), which is trained using the same dataset but with a smaller network, can be used to generate the trajectory under the guidance signal from G z . We summarize our two-stage approach in Algorithm 1.\nIt is also possible to combine the two methods, as the point-to-point trajectory can serve as a useful guidance signal for the trajectory DPM during t ≥ τ . After that, the trajectory DPM is subject to the usual imputation and classifier guidance from G z . By tuning τ , we can balance between trajectory diversity and lower scores on G z ." }, { "figure_ref": [], "heading": "Obstacle avoidance motion generation", "publication_ref": [], "table_ref": [], "text": "Humans have the ability to navigate around obstacles while traveling from point A to B. Under our framework, this problem can be defined using two goal functions: one that navigates from A to B, called G loc\nx (defined as in Eq. 12), and another that pushes back when the human model crosses the obstacle's boundary, called G obs\nx , as follows G obs\nx (x) :\n= i -clipmax(SDF((P z x x) (i) ), c)(13)\nwhere c is the safe distance from the obstacle. These two goal functions are combined additively to obtain the final goal function, G x (x) = G loc x (x) + G obs x (x), for this task. We utilize the same pipeline as in Section 5.2, with the exception that imputation is not possible for obstacle avoidance. Therefore, minimizing the obstacle avoidance goal relies solely on classifier guidance." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "To evaluate our methods, we perform experiments on the standard human motion generation task conditioned on text descriptors and spatial objectives. In particular, we evaluate (1) the performance of our model in standard text-condition motion generation tasks, (2) the effect of emphasis projection to alleviate incoherence between spatial locations and local poses, (3) the ability to conditionally generate motion based on spatial information by conditioning with given trajectories, keyframe locations, and obstacles." }, { "figure_ref": [], "heading": "Settings", "publication_ref": [ "b12", "b20", "b21" ], "table_ref": [], "text": "Evaluation metrics. We evaluate generative text-to-motion models using standard metrics introduced by Guo et al. [15]. These include Fréchet Inception Distance (FID), R-Precision, and Diversity. FID measures the distance between the distributions of ground truth and generated motion using a pretrained motion encoder. R-Precision evaluates the relevance of the generated motion and its text prompt, while Diversity measures the variability within the generated motion. 
We also report Foot skating ratio, which measures the proportion of frames in which either foot skids more than a certain distance (2.5 cm) while maintaining contact with the ground (foot height < 5 cm), as a proxy for the incoherence between trajectory and human motion.\nIn addition, for conditional generation with keyframe locations, we use Trajectory diversity, Trajectory error, Location error, and Average error of keyframe locations. Trajectory diversity measures the root mean square distance of each location of each motion step from the average location of that motion step across multiple samples with the same settings. Trajectory error is the ratio of unsuccessful trajectories, defined as those with any keyframe location error exceeding a threshold. Location error is the ratio of keyframe locations that are not reached within a threshold distance. Average error measures the mean distance between the generated motion locations and the keyframe locations measured at the keyframe motion steps.\nDatasets. We evaluate the text-to-motion generation using the HumanML3D [15] dataset, which is a collection of textannotate motion sequences from AMASS [34] and Human-Act12 [17] datasets. It contains 14,646 motions and 44,970 motion annotations.\nImplementation details. Both our motion DPM and trajectory DPM are based on UNET with AdaGN [13] depicted in details in the Supplementary. The motion DPM is an x 0 model, while the trajectory DPM is an ϵ model, as explained in Section 4.2, to enhance controllability. We utilized DDPM [21] with T =1,000 denoising steps for training and inference of both models. Additionally, we condition the generation process on text prompts in a classifier-free [22] manner, similar to MDM [54], and use the CLIP [44] Table 2. Trajectory-conditioned motions evaluation. The ground truth trajectory is used for imputing after each diffusion step. Comparing the effect of an original x with emphasis loss functions to the emphasis projection x proj after imputing whole trajectories after each diffusion step. " }, { "figure_ref": [], "heading": "Text-to-motion generation", "publication_ref": [ "b61", "b7" ], "table_ref": [], "text": "This section evaluates our model's performance in the standard text-to-motion generation task and compares it with other motion DPM baselines: MotionDiffuse [62], MDM [54], MLD [8], and PhysDiff [61]. Tab. 1 shows the results where our model architecture outperforms the baselines significantly in terms of motion quality measured by FID, while maintaining similar R-Precision and Diversity." }, { "figure_ref": [], "heading": "Trajectory-conditioned generation", "publication_ref": [], "table_ref": [], "text": "This section demonstrates how our emphasis projection method can address the issue of incoherent motion caused by spatial conditioning, specifically in the trajectory conditioning task, where the model is provided with ground-truth trajectories for imputation at each denoising step and is required to generate corresponding local poses. Both quantitative and qualitative results support that our emphasis projection leads to a reduction in Foot skating ratio, as evidenced in Tab. 2 and a more coherent motion in Fig. 5 We also compare our emphasis projection method with an alternative approach of increasing the trajectory loss strength during training. We include loss k 2 × baselines, where k ∈ {1, 2, 5, 10}, for comparison. The results in Tab. 
2 indicate that, while increasing the loss strength marginally improves both FID and Foot skating ratio, increasing it beyond a certain point leads to a decline in both FID and Foot skating ratio. By contrast, our emphasis projection method consistently leads to improvements in both metrics. We discuss this topic further in the Supplementary." }, { "figure_ref": [ "fig_7" ], "heading": "Keyframe-conditioned generation", "publication_ref": [], "table_ref": [], "text": "This section evaluates the quality and adherence of the generated motion to the desired goal. A viable solution must meet both criteria to an acceptable degree.\nTo achieve high-quality motion, both FID and Foot skating ratio are essential since FID alone cannot adequately measure the trajectory-motion coherence. Our Emphasis projection technique significantly improves motion coherence, reducing foot skating as shown in Tab. 3 while MDM [54] is unsuitable for this task due to the high motion incoherence. Furthermore, our improved architecture significantly improves motion quality in all cases. Note that without dense signal propagation, the model ignores the keyframe conditioning as shown in Fig 6.\nWhile a single-stage model performs reasonably well due to emphasis projection, it is too restrictive at τ = 0 (forced trajectory), resulting in relatively high Foot skating. This issue can be addressed by allowing more modification (increasing to τ to 100) but at the cost of higher Loc. error.\nLastly, the trajectory model's better controllability reduces Location error by more than half compared to the single-stage model at τ = 100. As expected, increasing τ leads to more freedom in the model, resulting in increased Trajectory diversity, lower FID, and higher Location error." }, { "figure_ref": [], "heading": "Obstacle avoidance motion generation", "publication_ref": [], "table_ref": [], "text": "Finally, we demonstrate our model's ability to generate motion on additional guidance on the obstacle avoidance task. In this task, we randomly sample the target point that the human needs to reach at a specific motion step along with a set of obstacles it needs to avoid, represented as a 2D SDF (Sec. 5.3). We show the qualitative results in Fig 7." }, { "figure_ref": [], "heading": "Discussion and Limitations", "publication_ref": [], "table_ref": [], "text": "In this work, we propose GMD, a controllable human motion generation method based on goal functions. GMD produces high-quality and diverse motions and supports diverse possibilities for goal functions. Since obtaining necessary data and designing a classifier-free learning method for non-explicit goals, such as obstacle avoidance, can be challenging, our GMD utilizes a classifier-based method which allows for more conditioning flexibility without retraining the model. Thus, our studies on effective classifier guidance will be useful for further including more guiding signals." }, { "figure_ref": [], "heading": "Guided Motion Diffusion for Controllable Human Motion Synthesis", "publication_ref": [], "table_ref": [], "text": "**Appendix** A. Analysis on x 0,θ vs. ϵ θ DPMs\nIn this section, we discuss the differences in behavior between the x 0,θ and ϵ θ models used to train DPMs. While both models are capable of generating high-quality samples, their denoising processes differ significantly. In Section 4.2, we previously claimed that the x 0 predicting model maximizes its influence on the outcome x t-1 when t → 0, whereas the ϵ predicting model maximizes its influence when t → T . 
Based on this observation, we argue that the ϵ predicting model is more favorable than the x 0 predicting model in circumstances where the outcome of the diffusion process will be altered by an external factor from the classifier.\nTo further understand the behavior of the two models, we examine Equation 1, which indicates that x t-1 is sampled from a Normal distribution with mean\nµ t = √ α t-1 β t 1 -α t a x 0 + √ 1 -β t (1 -α t-1 ) 1 -α t b x t(14)\nThe coefficients a and b in µ t modulate the contribution of the x 0 model and the previous output x t . The larger the coefficient a is relative to b, the larger the contribution of the x 0 model on the outcome of the denoising process.\nIn the case of an ϵ model, we substitute x 0 based on the relationship\nx 0 = xt- √ 1-αtϵ √ αt\nand get a different expression for µ t as\nµ t = a α t + b c x t - a √ 1 -α t √ α t d ϵ(15)\nWe can see that the contribution of the x 0 model and the ϵ model are starkly different, with the ϵ model having a stronger contribution on µ t , and hence x t-1 , where t is large, while the opposite is true for the x 0 model. In other words, an ϵ model is restricted to make a smaller change over time while an x 0 model can still make a large change even at the very end of the diffusion process. From the analysis above, we conclude that the choice of modeling ϵ or x 0 is no longer arbitrary. Given the fact that the classifier guidance strength is modulated by Σ t , which is smaller as t → 0, and the fact that all DPM models are biased toward their training datasets, an x 0 model capable of ever larger change as the guidance signal diminishes is not an ideal choice because it could easily overpower the guidance signal, especially at the end of the diffusion process, undoing all the guidance signal. Therefore, our GMD's tra- " }, { "figure_ref": [], "heading": "A.1. Challenges of modeling ϵ in practice", "publication_ref": [ "b12" ], "table_ref": [], "text": "In Section A, we discussed the benefits of modeling ϵ over x 0 from the perspective of classifier guidance. However, there are fundamental differences and requirements for architectures that excel in predicting x 0 versus ϵ. Specifically, ϵ ∼ N (0, I) is independent and full-rank, meaning there is no smaller latent manifold that it resides in. On the other hand, x 0 usually has a smaller latent manifold, which is the case for many real-world data including motions as most of the possible values in x ∈ R 263×M are not valid human motions, only a small subset of that is. Due to these differences, it requires special considerations for architectural design in models that successfully predict ϵ.\nAlthough there is no sufficient reason to believe that modeling ϵ is fundamentally harder than modeling x 0 , in practice, modeling ϵ is restricted to cases where its shape is relatively small compared to the latent dimension of the denoising model. For example, when modeling ϵ ∈ R 263×M for a motion DPM, the original MDM architecture for ϵ prediction generated low-quality jagged motions compared to the same architecture for x 0 prediction, which produced high-quality motions flawlessly. Increasing the latent dimension of MDM from 512 to 1,536 did not solve the problem entirely, indicating that predicting x 0 and ϵ requires different architectural designs that may not be satisfied by a single architecture. We argue that there require further studies on how to effectively design an ϵ predicting model. 
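Stepping back to Eqs. 14 and 15 for a moment, the claim about where each parameterization exerts its influence can be checked numerically. The sketch below assumes the cosine beta scheduler referenced in Fig. A.1 with T = 1000; it is illustrative only and prints the coefficients of x 0 , x t , and ϵ in the sampling mean at a few timesteps.

    import numpy as np

    T, s = 1000, 0.008
    steps = np.arange(T + 1)
    f = np.cos(((steps / T) + s) / (1 + s) * np.pi / 2) ** 2   # cosine schedule
    alpha_bar = f / f[0]
    beta = np.clip(1.0 - alpha_bar[1:] / alpha_bar[:-1], 0.0, 0.999)
    alpha = 1.0 - beta
    alpha_bar = np.concatenate([[1.0], np.cumprod(alpha)])     # recompute after clipping

    for t in [1, 10, 500, 990, 1000]:                          # diffusion step t (1-based)
        a_bar_t, a_bar_prev = alpha_bar[t], alpha_bar[t - 1]
        a = np.sqrt(a_bar_prev) * beta[t - 1] / (1.0 - a_bar_t)             # x_0 coefficient (Eq. 14)
        b = np.sqrt(alpha[t - 1]) * (1.0 - a_bar_prev) / (1.0 - a_bar_t)    # x_t coefficient (Eq. 14)
        d = beta[t - 1] / (np.sqrt(alpha[t - 1]) * np.sqrt(1.0 - a_bar_t))  # epsilon coefficient (Eq. 15)
        print(t, round(a, 4), round(b, 4), round(d, 4))

Under this schedule the x 0 coefficient approaches 1 as t approaches 0, while the ϵ coefficient is largest near t = T, in line with the argument above.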
However, when the space of ϵ is relatively small, such as in a trajectory DPM, the choice of architecture seems to matter less. The MDM transformer architecture was applied to trajectory modeling with relatively no problem. Ultimately, we used a convolution-based UNET with AdaGN [13] as the final architecture for our proposed method, as it demonstrated superior performance for both modeling trajectories and motions." }, { "figure_ref": [], "heading": "B. Relative vs. Absolute root representation", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss different ways of representing the root locations z of the motion. Generally, the root locations can be represented as absolute rotations and translations (abs) or relative rotations and translations compared to the previous frame (rel). In MDM [54], the root locations are represented with relative representation following the HumanML3D [15] dataset. In this case, the global locations at an exact frame i can be obtained by a cumulative summation of rotations and translations before i.\nHowever, we observe that representing the root with absolute coordinates (abs) is more favorable than the relative one (rel) in two aspects: being more straightforward for imputation and easier to optimize. Therefore, we adopt the absolute root representation for our models.\nIn rel, a trajectory is described as velocity ∆z (i) /∆i in the local coordinate frame of the current pelvis rotation. This representation makes each z (i) dependent on all previous motion steps in a non-linear relationship. Optimization becomes less stable as a small change in early motion steps may compound and become a larger change later on. Also, imputing specific values becomes ill-posed since there are many possible sets of values that are satisfiable.\nOn the other hand, for abs, the imputation and optimization of z become straightforward as they only involve replacing or updating z (i) without dependency on other motion steps. We ablated the root representation by retraining MDM [54] and our model with both relative and absolute root representation, then show the results in Tab B.1. MDM shows a significant drop in performance when converted to the absolute representation, likely because the architecture is highly optimized for the relative representation, while for our models, the representation change results in a trade-off between the FID and R-precision.\nLastly, we note that the use of absolute root representation is necessary for our final model as the spatial guidance is done via a combination of imputation and optimization. " }, { "figure_ref": [], "heading": "C. Analysis on Emphasis projection", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss in greater detail our proposed Emphasis projection. Conceptually, we wish to increase the relative importance of the trajectory representation z within the motion representation x. This could be done most simply by increasing the magnitude of those values of z by multiplying it with a constant c > 1. More precisely, let us assume the shape of x is 263 × M . A single motion frame x = x (i) is a column vector of 263 scalars in which 3 elements (rot, x, z) are a column vector of a trajectory frame z = z (i) that comprises root rotation and a ground location. The new trajectory elements become z × c." 
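A minimal sketch of the full emphasis projection from Section 4.1, which the next subsections motivate piece by piece: scale the trajectory rows by c, mix with a random matrix, and renormalize to keep unit variance. The NumPy code is illustrative; the choice of which three rows hold (rot, x, z) is a placeholder and depends on the concrete motion representation.

    import numpy as np

    def make_emphasis_projection(n_feats=263, traj_rows=(0, 1, 2), c=10.0, seed=0):
        # A = A'B (Sec. 4.1): A' has i.i.d. N(0, 1) entries, B scales the trajectory rows by c,
        # and the result is divided by sqrt(N - 3 + 3 c^2) so x_proj keeps unit variance.
        rng = np.random.default_rng(seed)
        a_prime = rng.standard_normal((n_feats, n_feats))
        b = np.eye(n_feats)
        for i in traj_rows:
            b[i, i] = c
        k = len(traj_rows)
        a = (a_prime @ b) / np.sqrt(n_feats - k + k * c ** 2)
        return a, np.linalg.inv(a)

    a, a_inv = make_emphasis_projection()
    # Per-frame use: x_proj = a @ x and x = a_inv @ x_proj, where x has shape (263, M).
    # The inverse is what Eq. 6 needs to impute raw trajectory values in the projected space.

With c = 10 the three trajectory rows account for roughly half of the total variance, which is the balance derived in the next subsection.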
}, { "figure_ref": [], "heading": "How to calculate a suitable scalar c?", "publication_ref": [ "b15" ], "table_ref": [], "text": "By introducing a scalar c > 1, the trajectory elements z are given a higher relative importance than the remaining 260 elements in x. This relative importance is determined by the cumulative variance of the z elements compared to that of the remaining 260 elements. Assuming that all elements in x are independently and identically distributed according to a standard Normal distribution N (0, 1), we can represent the cumulative variance of the trajectory elements as Var[x (rot) + x (x) + x (z) ] = j∈Traj.\nVar[x (j) ] = 3 (16) where j ∈ Traj. refers to the indexes in x that are related to trajectory.\nSimilarly, we can represent the cumulative variance of the remaining 260 elements as Var[ j / ∈Traj. x (j) ] = 260, where j / ∈ Traj. refers to the indexes in x that are not related to trajectory.\nWhen we multiply trajectory by c, the new cumulative variance becomes Var[c × (x (rot) + x (x) + x (z) )] = c 2 j∈Traj. Var[x (j) ] = 3c 2 . Therefore, the relative importance of the scaled trajectory elements compared to the remaining 260 elements in x is given by the expression\n3c 2 260 + 3c 2(17)\nSetting c = 260 3 ≈ 9.3 results in a relative importance of 50%, which strikes a reasonable balance between the trajectory and the rest of human motion. We have selected c = 10 as a rounded number of this fact, and it has been found to work well in practice." }, { "figure_ref": [], "heading": "Maintaining the uniform unit variance after scaling", "publication_ref": [], "table_ref": [], "text": "After scaling up the trajectory elements by a factor of c, the variance of the new motion representation is no longer uniform. This presents a problem when trying to model it using the original DPM's β t scheduler. In order to maintain uniform variance, we can redistribute the increased values from the trajectory part c × z to the rest in x via a random matrix projection.\nThere are two reasons why a random matrix projection is a good choice. First, it maintains the distance measure of the original space with high probability, meaning that the properties of the motion representation remain relatively unchanged. Second, a random matrix projection is easy to obtain and linear. It has an exact inverse projection, which ensures that there is no loss of information after the projection.\nFinally, to maintain unit variance, we scale down the entire vector uniformly by a factor of " }, { "figure_ref": [], "heading": "C.1. Trajectory loss scaling", "publication_ref": [], "table_ref": [], "text": "One approach to increase the emphasis on the trajectory part z (i) of the motion x (i) is to scale the reconstruction loss of only the trajectory part during the training of the motion DPM. This method does not change the representation but can potentially increase the model's emphasis on the trajectory part of the motion compared to the rest of the motion.\nTo compare the loss scaling method with the proposed Emphasis projection, we formulate a new loss function for a specific motion frame i, which increases the trajectory importance by a factor of k. This is given by the equation:\nL (i) k = j∈Traj.\nkx (j) -kx (j) 2 + j / ∈Traj.\nx(j) -x (j) 2 (18) Here, x = x 0,θ (x t ) (i) represents the i-th motion frame of the DPM's prediction and x = x 0 (i) represents the i-th motion frame of the ground truth motion. 
The value of k multiplies inside the squared loss, resulting in k^2 times more importance on the trajectory part of the motion. For example, setting k = 10 would increase the importance of the trajectory part by 100-fold, which has the same scaling effect as setting c = 10 in Emphasis projection. Hence, the reasonable range of k is the same as that of c.
In the main text, we experimented with k ∈ {1, 2, 5, 10} and found that Emphasis projection consistently outperformed loss scaling regarding motion coherence." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement. This work was supported by the SNSF project grant 200021 204840." }, { "figure_ref": [], "heading": "D. GMD's Model Architecture", "publication_ref": [ "b12", "b23", "b46", "b47" ], "table_ref": [], "text": "The trajectory and motion architectures of GMD are both based on UNET with Adaptive Group Normalization (AdaGN), which was originally proposed by [13] for class-conditional image generation tasks. However, we have adapted this model for sequential prediction tasks by using 1D convolutions. It should be noted that our architectures share some similarities with [24], with the addition of AdaGN. The architecture overview is depicted in the accompanying architecture figure. Convolution-based architectures are commonly used in state-of-the-art image-domain DPMs, such as those proposed by [47] and [48]. On the other hand, transformer-based architectures, which were used in the original MDM proposed by [54], are not well-studied architectures for DPMs [7,38].
Our proposed architecture alone has led to a significant improvement in motion generation tasks, reducing the Fréchet Inception Distance (FID) by more than half compared to the original MDM (0.556 vs 0.212), as shown in Table 1 in the main paper. (Sampling variance: fixed small, Σt = (1-αt-1)/(1-αt) βt; model moving average β = 0.9999.)" }, { "figure_ref": [], "heading": "E. Training details", "publication_ref": [ "b12", "b23", "b20" ], "table_ref": [], "text": "GMD's models. We used a batch size of 64 for motion models and a batch size of 512 for trajectory models. No dropout was used in either of GMD's models (trajectory or motion). We used AdamW with a learning rate of 0.0001 and weight decay of 0.01. We clipped the gradient norm to 1, which was found to increase training stability. The ResBlock design follows [13]: the conditioning signal from the MLP, shared across all ResBlocks, is projected by first applying a Mish activation and then a resizing linear projection specific to each ResBlock. All kernel sizes are 5. We use the Mish activation function, following [24].
We used mixed precision during training and inference. We trained all motion models for 32,000,000 samples (equivalent to 500,000 iterations at batch size 64, and 62,500 iterations at batch size 512). We also employed the moving average of models during training (β = 0.9999) [21] and used the averaged model for better generation quality. Note that our model architecture still improves over the baseline MDM even without the moving average.
GMD's trajectory model. While the crucial trajectory elements are only the ground x-z locations, we have found it useful to train the trajectory model with all four components (rot, x, y, z). The additional (rot, y) components seem to provide useful information that helps the model learn and reduces overfitting in the trajectory model. Note that the trajectory DPM is sensitive to the overfitting problem. 
Overtraining the model will give it a strong trajectory bias, making the model more resistant to classifier guidance and imputation. We chose to train the trajectory model for 32,000,000 samples based on this observation.
Retraining of MDM models. We retrained the original MDM using our absolute root representation and proposed Emphasis projection as the two main baselines. In order to maintain consistency, we kept the original optimization settings for the MDM models. Specifically, we used the AdamW optimizer with a learning rate of 0.0001 and without weight decay. We found that gradient clipping of 1 provided more stability, so we also applied it here. We did not utilize mixed precision training for these models. To match the settings of the original MDM, we trained these models for 400,000 iterations at a batch size of 64." }, { "figure_ref": [], "heading": "F. Inferencing details", "publication_ref": [], "table_ref": [], "text": "We set the classifier guidance strength to s = 100. Our experiments show that s performs well anywhere within the range of 100 to 200. For all our goal functions G x , we always used the p = 1 norm. Whenever feasible, we applied both imputation and classifier guidance concurrently. However, we stopped the guidance signals, i.e., classifier guidance and imputation, at t = 20, as this led to a slight improvement in motion coherence.
Obstacle avoidance task. In this case, it was not feasible to create a point-to-point trajectory because doing so could potentially lead to a collision with an obstacle. As a result, we decided against using any point-to-point trajectory imputation for this task." } ]
Figure 1. Our proposed Guided Motion Diffusion (GMD) can generate high-quality and diverse motions given a text prompt and a goal function. We demonstrate the controllability of GMD on four different tasks, guided by the following conditions: (a) text only, (b) text and trajectory, (c) text and keyframe locations (double circles), and (d) with obstacle avoidance (red-cross areas represent obstacles). The darker the colors, the later in time.
Guided Motion Diffusion for Controllable Human Motion Synthesis
[ { "figure_caption": "Figure 2 .2Figure 2. We tackle the problem of spatially conditioned motion generation with GMD, depicted in a). Our main contributions are b) Emphasis projection, for better trajectory-motion coherence, and c) Dense signal propagation, for a more controllable generation even under sparse guidance signal.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "1 11 :11end for 12: # Stage 2: Trajectory-conditioned motion generation 13: x proj T ← sample from N (0, I) 14: for all t from T to 1 do 15: M ← P x z M z y # Imputation region of y on x 16:", "figure_data": "", "figure_id": "fig_1", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. (a) Under standard motion representation and guiding method, only a few values in the motion representation are updated according to the guidance. (b) With Emphasis projection, all values in each frame describing the motion receives gradients w.r.t. the guidance, leading to better coherence between global orientation and local pose in each frame. (c) With dense gradient propagation, all frames are updated according to the guidance at the keyframes, making the guidance less likely to be ignored.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "model as the text encoder across all tasks. Computational resources Our GMD architecture is capable of running both the motion and trajectory models on a single commercial GPU, such as the Nvidia RTX 2080 Ti, 3080, or 3090. The trajectory model achieved a throughput of 2,048 samples per second when run on an RTX 3090, with a training time of approximately 4.34 GPU hours. Meanwhile, the motion model achieved a throughput of 256 samples per second on an RTX 3090, with a training time of around 34.7 GPU hours. The total inference time for one sample is approximately 110 seconds.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "compared to the MDM [54] model.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Comparing evolution of the clean subject to classifier guidance from x0 and ϵ DPMs. The x0 DPM shows significant resistance on the guidance signal as exhibited by the trajectory \"contraction\" behavior at t → 0.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "FigureFigure Generated motion, conditioned a given trajectory and text \"walking forward\". MDM [54] exhibits motion incoherence where the model disregards the trajectory and generates an inconsistent motion. Our method, improved by emphasis projection, deals effectively with the conditioning.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Generated motion trajectories, conditioned on target loat given keyframes. Without dense signal propagation, the model ignores the target conditions.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Qualitative results from the obstacle avoidance task given keyframe locations and obstacles. The red crossed areas represent obstacles to avoid. 
More results are in the supplementary.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure A. 1 .1Figure A.1. Comparing x0 and ϵ contributions in the prediction of xt-1 based on the Cosine βi scheduler.", "figure_data": "", "figure_id": "fig_10", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Text-to-motion evaluation on the HumanML3D [15] dataset. The right arrow → means closer to real data is better.", "figure_data": "FID ↓R-precision ↑Diversity →(Top-3)Real0.0020.7979.503JL2P [1]11.020.4867.676Text2Gesture [5]7.6640.3456.409T2M [15]1.0670.7409.188MotionDiffuse [62]0.6300.7829.410MDM [54]0.5560.6089.446MLD [8]0.4730.7729.724PhysDiff [61]0.4330.631-Ours0.2120.6709.440Ours x proj0.2350.6529.726", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The effect of different conditioning strategies tested on keyframe-conditioning task. The keyframes (N = 5) are sampled from the ground truth motion trajectories with the same text prompts in the HumanML3D [15] test set.", "figure_data": "ModelConditioningFID ↓Foot ↓Traj. ↑Traj. err. ↓Loc. err. ↓Avg. err. ↓R-precision ↑skating ratiodiversity (m.)(50 cm)(50 cm)(Top-3)x + τ =01.2560.2020.1340.0000.0000.0000.631MDM [54]Single stagex proj + τ =0 x proj + τ =1002.994 2.2130.151 0.0950.134 0.2140.000 0.3260.000 0.1270.000 0.2360.554 0.555x proj + no p2p1.6790.0920.3940.5190.3260.5430.548Singleτ =00.9020.1270.1170.0000.0000.0000.594stageτ =1000.5230.0860.1570.1760.0490.1390.599τ =1000.9370.0980.1200.0760.0200.1090.574Ours (x proj )Two stageτ =300 τ =500 τ =7000.938 0.908 0.8980.098 0.098 0.0980.127 0.140 0.1620.118 0.157 0.1960.031 0.043 0.0580.128 0.140 0.1530.573 0.577 0.580τ =9000.8740.0980.1920.2380.0800.1800.581no p2p0.8620.1040.2220.2870.1180.2820.577", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "1. Text-to-motion evaluation on the HumanML3D [15] dataset. Comparision between relative and absolute root representation. The right arrow → means closer to real data is better.", "figure_data": "FID ↓R-precision ↑Diversity →(Top-3)Real0.0020.7979.503MDM [54] (rel)0.5560.6089.446MDM [54] (abs)0.8940.6388.819Ours (rel)0.3050.6669.861Ours (abs)0.2120.6709.440Ours x proj0.2350.6529.726", "figure_id": "tab_4", "figure_label": "B", "figure_type": "table" } ]
Korrawe Karunratanakul; Konpat Preechakul; Supasorn Suwajanakorn; Siyu Tang
[ { "authors": "Chaitanya Ahuja; Louis-Philippe Morency", "journal": "IEEE", "ref_id": "b0", "title": "Language2pose: Natural language grounded pose forecasting", "year": "2019" }, { "authors": "Rajmund Simon Alexanderson; Jonas Nagy; Gustav Eje Beskow; Henter", "journal": "", "ref_id": "b1", "title": "Listen, denoise, action! audio-driven motion synthesis with diffusion models", "year": "2022" }, { "authors": "Michael Jose A Arjona-Medina; Michael Gillhofer; Thomas Widrich; Johannes Unterthiner; Sepp Brandstetter; Hochreiter", "journal": "", "ref_id": "b2", "title": "RUDDER: Return decomposition for delayed rewards", "year": "2019" }, { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro; Tero Karras; Ming-Yu Liu", "journal": "", "ref_id": "b3", "title": "eDiff-I: Text-to-Image diffusion models with an ensemble of expert denoisers", "year": "2022-11" }, { "authors": "Uttaran Bhattacharya; Nicholas Rewkowski; Abhishek Banerjee; Pooja Guhan; Aniket Bera; Dinesh Manocha", "journal": "IEEE", "ref_id": "b4", "title": "Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents", "year": "2021" }, { "authors": " Brooks; A A Holynski; Efros", "journal": "", "ref_id": "b5", "title": "InstructPix2Pix: Learning to follow image editing instructions", "year": "2022" }, { "authors": "He Cao; Jianan Wang; Tianhe Ren; Xianbiao Qi; Yihao Chen; Yuan Yao; Lei Zhang", "journal": "", "ref_id": "b6", "title": "Exploring vision transformers as diffusion learners", "year": "2022-12" }, { "authors": "Xin Chen; Biao Jiang; Wen Liu; Zilong Huang; Bin Fu; Tao Chen; Jingyi Yu; Gang Yu", "journal": "", "ref_id": "b7", "title": "Executing your commands via motion diffusion in latent space", "year": "2022" }, { "authors": "Xin Chen; Zhuo Su; Lingbo Yang; Pei Cheng; Lan Xu; Bin Fu; Gang Yu", "journal": "", "ref_id": "b8", "title": "Learning variational motion prior for video-based motion capture", "year": "2022" }, { "authors": "Jooyoung Choi; Sungwon Kim; Yonghyun Jeong; Youngjune Gwon; Sungroh Yoon", "journal": "", "ref_id": "b9", "title": "ILVR: Conditioning method for denoising diffusion probabilistic models", "year": "2021-08" }, { "authors": "Hyungjin Chung; Byeongsu Sim; Dohoon Ryu; Jong Chul; Ye ", "journal": "", "ref_id": "b10", "title": "Improving diffusion models for inverse problems using manifold constraints", "year": "2022-06" }, { "authors": "Rishabh Dabral; Muhammad Hamza Mughal; Vladislav Golyanik; Christian Theobalt", "journal": "", "ref_id": "b11", "title": "Mofusion: A framework for denoising-diffusion-based motion synthesis", "year": "2022" }, { "authors": "Prafulla Dhariwal; Alex Nichol", "journal": "", "ref_id": "b12", "title": "Diffusion models beat GANs on image synthesis", "year": "2021-05" }, { "authors": "Yinglin Duan; Tianyang Shi; Zhengxia Zou; Yenan Lin; Zhehui Qian; Bohan Zhang; Yi Yuan", "journal": "", "ref_id": "b13", "title": "Singleshot motion completion with transformer", "year": "2021" }, { "authors": "Chuan Guo; Shihao Zou; Xinxin Zuo; Sen Wang; Wei Ji; Xingyu Li; Li Cheng", "journal": "", "ref_id": "b14", "title": "Generating diverse and natural 3d human motions from text", "year": "2022" }, { "authors": "Chuan Guo; Xinxin Zuo; Sen Wang; Li Cheng", "journal": "Springer", "ref_id": "b15", "title": "Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts", "year": "2022" }, { "authors": "Chuan Guo; 
[ { "formula_coordinates": [ 3, 79.77, 633.4, 206.59, 30.75 ], "formula_id": "formula_0", "formula_text": "µ t = √ α t-1 β t 1 -α t x 0 + √ 1 -β t (1 -α t-1 ) 1 -α t x t(1)" }, { "formula_coordinates": [ 3, 164.82, 695.96, 118.92, 19.27 ], "formula_id": "formula_1", "formula_text": "x 0 = 1 √ αt x t + √ 1-αt √ αt ϵ θ (x t )." }, { "formula_coordinates": [ 3, 365.4, 423.58, 179.72, 12.69 ], "formula_id": "formula_2", "formula_text": "µ t = µ ′ t + sΣ t ∇ xt log p(d|x t )(2)" }, { "formula_coordinates": [ 3, 338.37, 702.12, 206.75, 12.69 ], "formula_id": "formula_3", "formula_text": "xt-1 = (1 -M x y ) ⊙ x t-1 + M x y ⊙ P x y y t-1(3)" }, { "formula_coordinates": [ 4, 55.87, 264.67, 110.22, 32.47 ], "formula_id": "formula_4", "formula_text": "z 0 ← z 0,ϕ (z t ) 5: µ, Σ ← µ(z 0 , z t ), Σ t 6:" }, { "formula_coordinates": [ 4, 55.87, 312.49, 164.78, 20.51 ], "formula_id": "formula_5", "formula_text": "z t-1 ∼ N (µ -sΣ∇ zt G z (z 0 ), Σ) 9:" }, { "formula_coordinates": [ 4, 81.99, 334.85, 147.89, 12.19 ], "formula_id": "formula_6", "formula_text": "z t-1 ← (1 -M z y ) ⊙ z t-1 + M z y y t-" }, { "formula_coordinates": [ 4, 51.88, 435.68, 204.02, 40.33 ], "formula_id": "formula_7", "formula_text": "xproj 0 ← A (1 -M ) ⊙ A -1 x proj 0 + M P x z y 19: µ, Σ ← µ(x proj 0 , x proj t ), Σ t 20:" }, { "formula_coordinates": [ 4, 51.88, 488.71, 178.59, 37.09 ], "formula_id": "formula_8", "formula_text": "22: ∆ ← -sΣA -1 ∇ x proj t G z P z x A -1 x proj 0 23: µ ← µ + A(1 -M ) ⊙ ∆ 24:" }, { "formula_coordinates": [ 4, 391.56, 168.11, 153.55, 9.68 ], "formula_id": "formula_9", "formula_text": "p x|G x (X) = 0(4)" }, { "formula_coordinates": [ 5, 89, 623.77, 197.36, 25.61 ], "formula_id": "formula_10", "formula_text": "x0 = (1 -M x z ) ⊙ x 0,θ + M x z ⊙ P x z z * x shaped(5)" }, { "formula_coordinates": [ 5, 316.93, 311.24, 228.18, 13.84 ], "formula_id": "formula_11", "formula_text": "xproj 0 = A (1 -M x z ) ⊙ (A -1 x proj 0,θ ) + M x z ⊙ P x z z * (6)" }, { "formula_coordinates": [ 5, 361.99, 348.95, 85.78, 13.49 ], "formula_id": "formula_12", "formula_text": "x proj t-1 ∼ N (μ proj t , Σ t )." 
}, { "formula_coordinates": [ 6, 58.16, 213.79, 228.21, 25.63 ], "formula_id": "formula_13", "formula_text": "∇ xt log p G x (X t ) = 0|x t ≈ -∇ xt G z P z x f (x t ) z shaped(7)" }, { "formula_coordinates": [ 6, 74.86, 442.17, 211.5, 12.69 ], "formula_id": "formula_14", "formula_text": "µ t = μt -(1 -M x z ) ⊙ sΣ t ∇ xt G z P z x f (x t )(8)" }, { "formula_coordinates": [ 6, 81.59, 508.4, 204.77, 15.31 ], "formula_id": "formula_15", "formula_text": "∆ µ = -sΣ t A -1 ∇ x proj t G z P z x A -1 f (x t proj )(9)" }, { "formula_coordinates": [ 6, 77.5, 526.94, 204.71, 13.3 ], "formula_id": "formula_16", "formula_text": "µ proj t = μproj t + A(1 -M x z ) ⊙ ∆ µ (10" }, { "formula_coordinates": [ 6, 282.21, 529.93, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 6, 441.63, 93.8, 31.14, 19.27 ], "formula_id": "formula_18", "formula_text": "√ αt-1βt 1-αt" }, { "formula_coordinates": [ 6, 373.01, 346.32, 172.11, 25.61 ], "formula_id": "formula_19", "formula_text": "G x (x) := z -P z x x z part of x p(11)" }, { "formula_coordinates": [ 6, 358.17, 557.97, 186.94, 17.2 ], "formula_id": "formula_20", "formula_text": "G x (x) := i M z y (P z x x -y) p(12)" }, { "formula_coordinates": [ 7, 107.13, 415.13, 179.23, 21.98 ], "formula_id": "formula_21", "formula_text": "= i -clipmax(SDF((P z x x) (i) ), c)(13)" }, { "formula_coordinates": [ 13, 69.81, 325.33, 216.55, 43.59 ], "formula_id": "formula_22", "formula_text": "µ t = √ α t-1 β t 1 -α t a x 0 + √ 1 -β t (1 -α t-1 ) 1 -α t b x t(14)" }, { "formula_coordinates": [ 13, 99.27, 432.72, 67.84, 19.27 ], "formula_id": "formula_23", "formula_text": "x 0 = xt- √ 1-αtϵ √ αt" }, { "formula_coordinates": [ 13, 99.1, 468.08, 187.26, 45.21 ], "formula_id": "formula_24", "formula_text": "µ t = a α t + b c x t - a √ 1 -α t √ α t d ϵ(15)" }, { "formula_coordinates": [ 14, 406.55, 693.2, 138.57, 23.89 ], "formula_id": "formula_25", "formula_text": "3c 2 260 + 3c 2(17)" }, { "formula_coordinates": [ 15, 58, 533.38, 51.88, 23.19 ], "formula_id": "formula_26", "formula_text": "L (i) k = j∈Traj." } ]
2023-05-21
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b28", "b6", "b24", "b1", "b27", "b24", "b1", "b14", "b18", "b23", "b0", "b16", "b26", "b40", "b16", "b40", "b21", "b20", "b30", "b37", "b38", "b39", "b5" ], "table_ref": [], "text": "Graphs are pervasive in a wide spectrum of applications such as social networks [29], recommendation system [7] and knowledge graphs [25]. As real-world graphs are often partially observed, link prediction, which aims to predict missing links, has been recognized as a fundamental task. For example, link prediction can be used to recommend new friends on social media [2], predict protein interactions [28] or reconstruct knowledge graphs [25]. Existing link prediction methods can be generally split into two categories, i.e., heuristic-based approaches [2,15,19,24] and representation learning based approaches [1,17,27,41]. Recently, due to the great ability of representation learning on graphs, Graph Neural Networks (GNNs) have achieved state-of-the-art performance for link prediction [17,41]. Generally, GNNs iteratively update a node's representation by aggregating its neighbors' information. The learned representations capture both node attributes and local topology information, which facilitate link prediction.\nDespite their success in link prediction, GNNs are not interpretable, which hinders their adoption in various domains. For instance, in financial transaction networks, providing explainable transaction links among customers can help transaction platforms learn the reasons for transaction behaviors to improve customers' experience, and gain the credibility of platforms [22]. Though various methods have been proposed for the explainability of GNNs [21,31,[38][39][40], they mainly focus on post-hoc GNN explainers for node classification. Directly adopting existing post-hoc GNN explainers to explain link prediction is sub-optimal because: (i) post-hoc explainers usually adopt another strategy or model to explain a target model, which could misinterpret the target model [6]; and (ii) GNN explainers for node classification usually identify crucial subgraphs of each node for the explanation; while for link prediction, one needs to explain the prediction for each pair of nodes based on their features and local structures. Thus, in this paper, we aim to develop a self-explainable GNN for link prediction, which can simultaneously give predictions and explanations.\nIn real-world graphs, nodes are linked due to various factors, and capturing the common factor between a pair of nodes paves us a way for explainable link prediction. For example, as shown in Figure 1, user 𝑣 𝑖 has broad interests in football, arts, and music. 𝑣 𝑖 links to diverse neighbors based on one or two common interests 𝒗 𝒊 𝒗 𝒋" }, { "figure_ref": [], "heading": "Interest:", "publication_ref": [ "b17" ], "table_ref": [], "text": "Missing Link with them. For users 𝑣 𝑖 and 𝑣 𝑗 in the figure, they share common interests in music and are likely to be linked. However, it is difficult to predict the link simply based on their node feature similarity as most of their features/interests are different. Also, as 𝑣 𝑖 and 𝑣 𝑗 have diverse neighbors, directly applying a GNN by aggregating all neighbors' information to learn representations of 𝑣 𝑖 and 𝑣 𝑗 followed by the similarity of node representation will result in a low predicted link probability. Thus, how to effectively capture the common factor/preferences for a pair of nodes remains a question. 
Generally, the preferences of users are reflected in their neighbors. For example, both 𝑣 𝑖 and 𝑣 𝑗 have many neighbors interested in music. By identifying and aggregating these neighbors, we can better learn pair-specific node representations that reflect common interests to predict the link between them. Pair-specific means that the representation of 𝑣 𝑖 for (𝑣 𝑖 , 𝑣 𝑗 ) should reflect the common interest with 𝑣 𝑗 , and for (𝑣 𝑖 , 𝑣 𝑘 ) should reflect the comment interest with 𝑣 𝑘 . Based on this intuitive principle, for each node pair (𝑣 𝑖 , 𝑣 𝑗 ), one way for self-explainable link prediction is to find dominating 𝐾 neighbors of 𝑣 𝑗 that share the highest 𝐾 similarities with 𝑣 𝑖 , which reflects the common interest of (𝑣 𝑖 , 𝑣 𝑗 ) and vice versa. Then, the explanation for whether there is a link for (𝑣 𝑖 , 𝑣 𝑗 ) can be: (i) \"We suggest 𝑣 𝑗 to 𝑣 𝑖 because these 𝐾 neighbors of 𝑣 𝑖 have common features with 𝑣 𝑗 . 𝑣 𝑖 will most likely establish a new link with 𝑣 𝑗 for 𝑣 𝑗 is similar to current neighbors of 𝑣 𝑖 ; and (ii) \"We suggest no link predicted for (𝑣 𝑖 , 𝑣 𝑗 ) because even top 𝐾 similar neighbors of 𝑣 𝑖 w.r.t 𝑣 𝑗 don't share common feature with 𝑣 𝑗 , which represent 𝑣 𝑗 is different from neighbors of 𝑣 𝑖 . Therefore, it's hard for 𝑣 𝑖 to add 𝑣 𝑗 (different from 𝑣 𝑖 's current neighbors) as a new neighbor of it. \" Though promising, the work on exploring 𝐾 relevant neighbors for self-explainable GNNs on link prediction is rather limited.\nTherefore, in this paper, we investigate a novel problem of selfexplainable GNN for link prediction. Specifically, for a pair of nodes (𝑣 𝑖 , 𝑣 𝑗 ), we want to identify 𝐾 most important neighbors of 𝑣 𝑗 and 𝑣 𝑖 , respectively, for link prediction. In essence, there are two main challenges: (i) How to take both graph structure and node attributes into consideration when measuring node similarity for identifying important neighbors for link prediction? and (ii) how to give both accurate predictions and correct corresponding explanations given that we lack the supervision of groundtruth explanations? In an attempt to solve the challenges, we propose a novel framework named Interpretable Link Prediction based on Graph Neural Networks (ILP-GNN). For each node pair (𝑣 𝑖 , 𝑣 𝑗 ), ILP-GNN adopts a novel mechanism that can explicitly evaluate the node similarity and high-order structure similarity to find 𝐾 interpretable neighbors of 𝑣 𝑖 similar to 𝑣 𝑗 . These neighbors can represent common interests or features between 𝑣 𝑖 and 𝑣 𝑗 . For high-order structure similarity, graph diffusion is utilized to calculate the closeness of nodes by modeling their local and high-order neighbors' information [18]. Then, these 𝐾 neighbors are aggregated to learn pair-specific representation, which represents common features of node pairs and various factors of links for different node pairs. Furthermore, since explanation based on selecting 𝐾 important neighbors should preserve factors resulting in the existence of links, we propose a novel loss function to encourage explanation of our model and improve the performance of link prediction. The main contributions are:\n• We study a novel problem of self-explainable GNN for link prediction by finding 𝐾 neighbors which are relevant to the links. 
• We develop a novel framework ILP-GNN, which adopts an interpretable neighbor aggregation method, and a novel loss function to identify 𝐾 relevant neighbors for explainable link prediction; • We conduct extensive experiments to demonstrate the effectiveness of our model on both predictions and explanations. We also construct a synthetic dataset that can quantitatively evaluate the link prediction explanation." }, { "figure_ref": [], "heading": "RELATED WORKS", "publication_ref": [ "b3", "b10", "b15", "b32", "b7", "b9", "b33", "b36", "b43", "b3", "b15", "b9", "b33", "b33", "b4", "b36", "b43", "b43", "b1", "b24", "b1", "b14", "b18", "b23", "b0", "b16", "b26", "b40", "b19", "b23", "b1", "b2", "b13", "b16", "b26", "b40", "b16", "b40", "b12", "b20", "b37", "b37", "b20", "b5", "b42", "b5", "b35", "b29", "b41" ], "table_ref": [], "text": "Graph Neural Networks. Graph Neural Networks (GNNs) have shown great ability in representation learning on graphs. Generally, GNNs can be split into two categories, i.e., spectral-based [4,11,16,33] and spatial-based [8,10,34,37,44]. Spectral-based approaches are defined according to graph signal processing. Bruna et al. [4] firstly proposes convolution operation to graph data from the spectral domain. Then, a first-order approximation is utilized to simplify the graph convolution via GCN [16]. Spatial-based GNN models aggregate information of the neighbor nodes [10,34]. For example, the attention mechanism is utilized in Graph Attention Network (GAT) to update the node representation from the neighbors with different weights [34]. Moreover, various spatial methods are proposed for further improvements [5,37,44]. For instance, DisGNN models latent factors of edges to facilitate node classification [44]. Link Prediction. Link prediction has been widely applied in social networks [2] and knowledge graph [25]. Existing methods for link prediction can be generally split into two categories, i.e., heuristicbased approaches [2,15,19,24] and representation learning based approaches [1,17,27,41]. Heuristics-based approaches mainly compute the pairwise similarity scores based on graph structure or node properties [20]. For example, the common-neighbor index (CN) scores a pair of nodes by the number of shared neighbors [24]; CN and some methods, i.e. Adamic Adar (AA) [2], are calculated from up to two-hop neighbors of the target nodes. Other approaches also explore high-order neighbors, including Katz, rooted PageRank (PR) [3] and SimRank (SR) [14]. But these heuristic-based methods make strong assumptions and can't be generalized to different graph data. Representation learning based approaches firstly learn the representation of nodes and then apply dot product between two node representations to predict the likelihood of the link between two nodes. GNNs are applied to learn node-level representations that capture both the topology structure together with node feature information and achieve state-of-the-art performance on link prediction [17,27,41] in recent years. For example, VGAE [17] adopts GNNs to encode graph structure with features into node representations followed by a simple inner product decoder to get the link prediction result. SEAL [41] extracts subgraphs to predict links between nodes. However, these approaches are also not interpretable and will limit their ability in applications which may require why the model predicts links between nodes. Explainability of Graph Neural Networks. 
To address the problem of lacking interpretability in GNNs, extensive works have been proposed [13,21,38]. For example, GNNExplainer [38] learns soft masks for edges and node features to find the crucial subgraphs and features to explain the predictions. PGExplainer [21] generates the edge masks with parameterized explainer to find the significant subgraphs. However, previous methods are post-hoc explanations that learn an explainer to explain the outputs of a trained GNN with fixed parameters. Post-hoc explanations might result in unstable interpretation as generated explanations are not directly from the model. To fill this gap, self-explainable GNNs are proposed to make predictions and explanations simultaneously [6,43]. For example, SE-GNN finds interpretable labeled neighbors which have the same labels as target nodes [6]. But self-explainable GNN models on the link prediction task are rather limited. CONPI [36] models similarity between neighbors set of node pairs to determine the probability of the existence of links and provide explanations based on similar neighbors. Their explanations are based on local topology similarity which ignores high-order graph structure information. Other relevant papers are about explainable link prediction for knowledge graphs [30,42]. They are designed for knowledge graphs and their explanations are based on reasoning paths or (head, relation, tail) data format. Therefore, it's hard for them to be generalized to all link prediction tasks.\nOur work is inherently different from the aforementioned explainable GNN methods: (i) we focus on learning a self-explainable GNN on link prediction which can simultaneously give predictions and explanations while most of the previous methods are designed for node classification; (ii) we study a novel self-explainable method to find factors which determine links between nodes by considering both node and high-order structure information." }, { "figure_ref": [], "heading": "PROBLEM DEFINITION", "publication_ref": [ "b44", "b16", "b40", "b5", "b20", "b37", "b42", "b35" ], "table_ref": [], "text": "We use G = (V, E, X) to denote an attributed graph, where V = {𝑣 1 , . . . , 𝑣 𝑁 } is the set of 𝑁 nodes, E is the set of edges and X is the attribute matrix for nodes in G. The 𝑖-th row of X, i.e., x 𝑖 ∈ R 1×𝑑 0 , is the 𝑑 0 dimensional features of node 𝑣 𝑖 . A ∈ R 𝑁 ×𝑁 is the adjacency matrix. 𝐴 𝑖 𝑗 = 1 if node 𝑣 𝑖 and node 𝑣 𝑗 are connected; otherwise 𝐴 𝑖 𝑗 = 0. N 𝑖 represents the neighborhood set of 𝑣 𝑖 . The goal of link prediction is to determine whether there exists an edge 𝑒 𝑖 𝑗 between two given nodes {𝑣 𝑖 , 𝑣 𝑗 }. It can be formulated as a classification problem on a set of node pairs E 𝑈 given observed edges E 𝐿 and node attributes, where 𝑒 𝑖 𝑗 = 1 represents a link between node 𝑣 𝑖 and 𝑣 𝑗 , and 𝑒 𝑖 𝑗 = 0 means no link between 𝑣 𝑖 and 𝑣 𝑗 . Due to the great node representation learning ability, GNNs are usually adopted as an encoder: 𝑓 : V → R 𝑑 to map a node 𝑣 𝑖 to a 𝑑dimensional vector h 𝑖 for link prediction. 𝑓 should preserve the similarity between nodes based on the observed edge set E 𝐿 [45], and give large probability (large (𝑓 (𝑣 𝑖 ) 𝑇 𝑓 (𝑣 𝑗 ))) for 𝑒 𝑖 𝑗 = 1 but small probability (small (𝑓 (𝑣 𝑖 ) 𝑇 𝑓 (𝑣 𝑗 ))) for 𝑒 𝑖 𝑗 = 0. However, GNN usually lacks interpretability on why they give such predictions [17,41]. There are few attempts of explainers on the node classification task [6,21,38,43], while the work on interpretable link prediction based on GNNs is rather limited [36]. 
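As a point of reference, the standard (non-interpretable) GNN link prediction pipeline described above, i.e., a GNN encoder followed by an inner-product decoder as in GAE/VGAE-style models, can be sketched as follows. This is a minimal illustration; the class and variable names are assumptions made for readability rather than taken from any particular implementation.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNEncoder(torch.nn.Module):
    # Two-layer GCN encoder f: V -> R^d that maps node v_i to h_i = f(v_i).
    def __init__(self, in_dim, hid_dim=128, out_dim=128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

def link_probability(h, src, dst):
    # Inner-product decoder: p_ij = sigmoid(f(v_i)^T f(v_j)).
    return torch.sigmoid((h[src] * h[dst]).sum(dim=-1))

Because the encoder aggregates every neighbor of v_i into a single vector h_i, nothing in this pipeline indicates which neighbors were responsible for a particular predicted link.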
Thus, it's crucial to develop interpretable GNNs for link prediction.\nAs mentioned in the introduction, generally, the preferences of a node are reflected in its neighbors. For node pairs, only some of the neighbors with common features are important for their link prediction. Therefore, to learn pair-specific representation for each node pair (𝑣 𝑖 , 𝑣 𝑗 ), we propose to find the top 𝐾 neighbors of node 𝑣 𝑖 which are similar to 𝑣 𝑗 . Specifically, for a node pair (𝑣 𝑖 , 𝑣 𝑗 ), we can learn pair-specific representations h 𝑖 and h 𝑗 by aggregating selected top 𝐾 neighbors. Thus, h 𝑇 𝑖 h 𝑗 will be large when neighbors of 𝑣 𝑖 are similar to 𝑣 𝑗 and dissimilar neighbors will result in lower h 𝑇 𝑖 h 𝑗 . Due to the undirected property of graphs, it also holds true for node 𝑣 𝑗 . Explanations of link prediction can be: (i) \"for a node pair 𝑣 𝑖 and 𝑣 𝑗 with a link, take node 𝑣 𝑖 as an example, 𝑣 𝑖 's neighbors 𝑣 𝑐 ∈ N 𝑖 have higher similarity score with regard to 𝑣 𝑗 and 𝑣 𝑗 in the graph. This explanation also holds for node 𝑣 𝑗 \"; (ii) \"for a node pair 𝑣 𝑖 and 𝑣 𝑗 without a link, neighbors of 𝑣 𝑖 are dissimilar to 𝑣 𝑗 and also neighbors of 𝑣 𝑗 are dissimilar to 𝑣 𝑖 \".\nWith the notations above, the problem is formally defined as: Given an attributed graph G = (V, E 𝐿 , X) with observed edge set E 𝐿 and unobserved edge set E 𝑈 , learn an interpretable link predictor 𝑔 𝜃 : V × V → {𝑇𝑟𝑢𝑒, 𝐹𝑎𝑙𝑠𝑒} which can accurately predict links in E 𝑈 and simultaneously generate explanation by identifying two sets of important 𝐾 neighbors for link between each node pair (𝑣 𝑖 , 𝑣 𝑗 )." }, { "figure_ref": [ "fig_1" ], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the details of the proposed framework ILP-GNN. The basic idea of ILP-GNN is: for each node pair (𝑣 𝑖 , 𝑣 𝑗 ), it identifies 𝐾 neighbors of node 𝑣 𝑖 and 𝑣 𝑗 , respectively, aiming to capture the common interests of the two nodes. Then, it aggregates these 𝐾 neighbors' information to obtain their pair-specific representation vectors and calculate the similarity based on their representations. Meanwhile, the 𝐾 most relevant neighbors of 𝑣 𝑖 and 𝑣 𝑗 provide the explanation on why there is or isn't a link between this node pair. There are mainly two challenges: (i) how to obtain the 𝐾 most relevant neighbors of 𝑣 𝑖 and 𝑣 𝑗 for the prediction of links between them; (ii) how to simultaneously give accurate predictions and correct corresponding explanations. To address these challenges, for each node pair (𝑣 𝑖 , 𝑣 𝑗 ), ILP-GNN explicitly models both node and high-order structure similarity between 𝑣 𝑖 and neighbors of 𝑣 𝑗 to identify the 𝐾 important nodes of 𝑣 𝑗 for explainable link prediction. Similarly, it explicitly models both node and high-order structure similarity between 𝑣 𝑗 and neighbors of 𝑣 𝑖 to identify 𝐾 important nodes of 𝑣 𝑖 .\nAn illustration of the proposed framework is shown in Figure 2. It is mainly composed of an Interpretable Neighbors Aggregation module and an Explanation Enhancement module. With the Interpretable Neighbors Aggregation, for each node pair (𝑣 𝑖 , 𝑣 𝑗 ), the 𝐾 most important neighbors, which represent factors for the existence of links, are found based on both node and high-order structure information. Then, the prediction of links between nodes can be given based on the identified 𝐾 neighbors. 
Finally, the Explanation Enhancement module is designed to further benefit the accurate explanation generation, and also encourage the model to improve link prediction via aggregating 𝐾 important neighbors. " }, { "figure_ref": [], "heading": "Interpretable Neighbors Aggregation", "publication_ref": [ "b17", "b25", "b15", "b33", "b4", "b18" ], "table_ref": [], "text": "For GNN-based link prediction on a pair of nodes (𝑣 𝑖 , 𝑣 𝑗 ), we firstly aggregate their neighbors' information to obtain their representation vectors (h 𝑖 , h 𝑗 ). Then, the similarity will be calculated between (h 𝑖 , h 𝑗 ) via h 𝑇 𝑖 h 𝑗 to indicate whether there is a link between them and pair-specific representation is required to predict links.\nHowever, links are generated due to multiple factors. For different links from 𝑣 𝑖 , it's necessary for us to learn pair-specific representations for 𝑣 𝑖 to predict links between them. In graph data, the neighborhood of 𝑣 𝑖 can represent important characteristics but not all neighbors have relevant factors w.r.t different connected nodes of 𝑣 𝑖 . Based on this observation, ILP-GNN selects 𝐾 neighbors of 𝑣 𝑖 which are similar to 𝑣 𝑗 to learn pair-specific representation of 𝑣 𝑖 . Similarly, through the same way, we learn a pair-specific representation of 𝑣 𝑗 with 𝐾 interpretable neighbors. ILP-GNN relies on interpretable 𝐾 neighbors that reveal the common interest of 𝑣 𝑖 and 𝑣 𝑗 for link prediction and explanation. We need to design a similarity measurement to measure the similarity between neighbors of node 𝑣 𝑖 and another connected or unconnected node 𝑣 𝑗 . Unlike i.i.d data, which only needs to measure the similarity from the feature perspective, for graph-structure data, both node attributes and graph structures of nodes contain crucial information for link prediction. In the following part, for a node pair (𝑣 𝑖 , 𝑣 𝑗 ), we use 𝑣 𝑖 as an example to demonstrate the process of finding 𝐾 interpretable neighbors. And we will do the same operations for node 𝑣 𝑗 .\n4.1.1 High-order Structure Similarity. In the graph, for a pair of nodes (𝑣 𝑖 , 𝑣 𝑗 ), the distance between neighbors of 𝑣 𝑖 and 𝑣 𝑗 can be used to measure their similarity. For instance, one-hop neighbors of the node 𝑣 𝑖 may have higher similarity scores with 𝑣 𝑖 , while high-order neighbors will have lower similarity scores. And the one-hop relation is represented as the adjacency matrix A and highorder relations can be represented as A 2 , A 3 , etc. Since original edge relations are often sparse and missed in real-world graphs, only using one-hop neighbors to measure the distance between nodes may result in unreliable similarity, i.e., the similarity between 𝑣 𝑖 and 𝑣 𝑗 equals 𝐴 𝑖 𝑗 . Therefore, it's necessary to model both one-hop and high-order relations to measure the similarity between nodes based on graph structure. To measure this high-order similarity, we propose to use a Graph Diffusion matrix which calculates the closeness of nodes in the graph structure by repeatedly passing the weighting coefficients to the neighboring nodes and represents the high-order similarity between nodes based on graph structure [18]:\nS = ∞ ∑︁ 𝑘=0 𝜃 𝑘 T 𝑘 ,(1)\nwhere T represents the random walk transition matrix as T = AD -1 , and the degree matrix D is the diagonal matrix of node degrees, i.e. 𝐷 𝑖𝑖 = 𝑁 𝑗=1 𝐴 𝑖 𝑗 . In this paper, we utilize a popular example of graph diffusion, Personalized PageRank (PPR) [26]. PPR chooses 𝜃 PPR 𝑘 = 𝛾 (1 -𝛾) 𝑘 with teleport probability 𝛾 ∈ (0, 1). 𝛾 is set as 0.05 in the experiment. 
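To make Eq.(1) concrete, the sketch below computes a truncated PPR diffusion matrix from a dense adjacency matrix. The truncation depth, the dense NumPy representation, and the assumption of an undirected graph are simplifications for illustration, and the result is the unnormalized S of Eq.(1).

import numpy as np

def ppr_diffusion(A, gamma=0.05, num_iter=10):
    # Truncated version of Eq.(1): S = sum_k theta_k T^k with theta_k = gamma * (1 - gamma)^k
    # and T = A D^{-1}; A is assumed to be a dense, symmetric (undirected) adjacency matrix.
    deg = np.clip(A.sum(axis=0), 1e-12, None)    # D_ii = sum_j A_ij (row sums equal column sums here)
    T = A / deg                                  # random-walk transition matrix T = A D^{-1}
    S = np.zeros_like(A, dtype=float)
    T_k = np.eye(A.shape[0])                     # T^0
    for k in range(num_iter):
        S += gamma * (1 - gamma) ** k * T_k
        T_k = T_k @ T
    return S

# toy usage on a 4-node path graph
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
S = ppr_diffusion(A)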
Then, we normalize this diffusion matrix via\nS = D -1/2 𝑆 SD -1/2 𝑆\nto convert the similarity score to (0, 1). After getting this normalized diffusion matrix, for each 𝑣 𝑐 ∈ N 𝑖 , the structure importance weight of 𝑣 𝑐 to 𝑣 𝑖 for the link of (𝑣 𝑖 , 𝑣 𝑗 ) is:\n𝑠 ST (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) = S𝑐 𝑗 ,(2)\nwhere S𝑐 𝑗 represents high-order structure similarity between 𝑣 𝑖 's neighbor 𝑣 𝑐 and 𝑣 𝑗 based on graph structure.\n4.1.2 Node Similarity. Generally, node similarity on the feature level can be used to measure how similar neighbors of one node are to another connected or unconnected node, which can be used to find 𝐾 interpretable nodes relevant to the existence of links. Since node features are often noisy and sparse, directly utilizing the raw feature may result in noisy similarity. A straightforward way is to model the node's local relationships via a GNN such as GCN [16] and GAT [34] to learn the node embedding. However, GNN models will implicitly model graph structure information and reduce the interpretability of node features. Therefore, we first use a Multilayer Perceptron (MLP) to encode node features and then update node embedding via graph structure information. This can be written as:\nH 𝑚 = MLP(X), H 𝑟 = 𝜎 ( Ã[H 𝑚 ∥X]W) + H 𝑚 , (3\n)\nwhere ∥ is the concatenation operation of two vectors. The concatenation records local information with attribute information and facilitates training by providing skip connections. H 𝑟 represents the learned embedding matrix for all nodes. Then, the similarity between neighborhoods of node 𝑣 𝑖 and node 𝑣 𝑗 based on node attributes and their local topology can be calculated as:\n𝑠 NO (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) = sigmoid (h 𝑟 𝑗 ) 𝑇 h 𝑟 𝑐 , ∀ 𝑣 𝑐 ∈ N 𝑖(4)\nwhere sigmoid(•) is the sigmoid function to convert the similarity between two vectors into (0, 1), and 𝑠 NO (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) represents the weight 𝑣 𝑐 contributes to the prediction of links between 𝑣 𝑖 and 𝑣 𝑗 based on node similarity.\nFinally, with node and high-order structure similarity, the importance score of 𝑣 𝑖 with 𝑣 𝑐 ∈ N 𝑖 on (𝑣 𝑖 , 𝑣 𝑗 ) is: (5) where 𝛼 is the hyperparameter to control the contributions of node and high-order structure similarity.\n𝑠 (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) = 𝛼 • 𝑠 ST (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) + (1 -𝛼) • 𝑠 NO (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ),\nAfter getting weights for neighbors of node 𝑣 𝑖 , we will find 𝐾 neighbors based on these weights. We will do the same thing for neighbors of 𝑣 𝑗 . In the link prediction task, common neighbors between 𝑣 𝑖 and 𝑣 𝑗 may be highly relevant to links between pairs of nodes [19]. Thus, we first select common neighbors to a new neighborhood set N 𝑟 𝑖 . If the number of common neighbors is larger than 𝐾, we can utilize top 𝐾 common neighbors based on weight scores. Then, we can select the top relevant neighborhoods based on the weight scores of original neighbors of node 𝑣 𝑖 to the neighborhood set N 𝑟 𝑖 , which makes the size of N 𝑟 𝑖 equal 𝐾. Note that nodes in the N 𝑟 𝑖 only appear one time. If the size of N 𝑟 𝑖 is smaller than 𝐾, we will use all of its neighborhoods with different weights. We normalize the weight scores 𝑏 𝑖𝑐 on this set for comparable scores:\n𝑏 𝑖𝑐 = exp 𝑠 𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 𝑣 𝑐 ∈N 𝑟 𝑖 exp 𝑠 𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 .(6)\nFinally, node 𝑣 𝑖 's representation vectors can be obtained as:\nh 𝑖 = h 𝑟 𝑖 + 𝛽 ∑︁ 𝑣 𝑐 ∈N 𝑟 𝑖 𝑏 𝑖𝑐 h 𝑟 𝑐 ,(7)\nwhere 𝛽 is used to control the influence of neighborhoods on the final representation of nodes. Also, we can use the same way to obtain the representation vector h 𝑗 of node 𝑣 𝑗 . 
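Putting Eq.(2) through Eq.(7) together, a minimal sketch of the per-pair scoring, top-K selection, and aggregation for one endpoint v_i is shown below. The tensor names (H_r for the embeddings of Eq.(3), S_norm for the normalized diffusion matrix) and the simplified handling of common neighbors are assumptions made for readability, not the exact implementation.

import torch

def aggregate_for_pair(i, j, H_r, S_norm, neighbors, alpha=0.3, beta=1.0, K=4):
    # Pair-specific representation h_i for the pair (v_i, v_j), following Eqs.(2)-(7).
    nbrs_i = neighbors[i]                                    # neighbor indices of v_i
    s_st = S_norm[nbrs_i, j]                                 # Eq.(2): high-order structure similarity to v_j
    s_no = torch.sigmoid(H_r[nbrs_i] @ H_r[j])               # Eq.(4): node similarity to v_j
    s = alpha * s_st + (1 - alpha) * s_no                    # Eq.(5): combined importance score

    # Common neighbors of v_i and v_j are kept first, the remaining neighbors are
    # ranked by score, and the first K form the interpretable set N_i^r (simplified).
    common = [c for c in nbrs_i if c in set(neighbors[j])]
    rest = sorted((c for c in nbrs_i if c not in set(common)),
                  key=lambda c: -s[nbrs_i.index(c)].item())
    selected = (common + rest)[:K]

    idx = torch.tensor([nbrs_i.index(c) for c in selected])
    b = torch.softmax(s[idx], dim=0)                         # Eq.(6): normalized weights on N_i^r
    h_i = H_r[i] + beta * (b.unsqueeze(1) * H_r[selected]).sum(dim=0)   # Eq.(7)
    return h_i, selected                                     # selected neighbors double as the explanation

The selected neighbor sets returned for the two endpoints together form the explanation for the pair.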
As h 𝑖 and h 𝑗 captures important neighbors for link prediction, then the link probability 𝑝 𝑖 𝑗 for (𝑣 𝑖 , 𝑣 𝑗 ) can be calculated as:\n𝑝 𝑖 𝑗 = sigmoid(h 𝑇 𝑖 h 𝑗 ).(8)" }, { "figure_ref": [], "heading": "Explanation Enhancement", "publication_ref": [], "table_ref": [], "text": "To guarantee the selected neighbors of two nodes (𝑣 𝑖 , 𝑣 𝑗 ) are highly relevant to whether there is a link between them, we propose selfsupervision to enhance the explanation. First, for each linked node pair (𝑣 𝑖 , 𝑣 𝑗 ) with 𝑒 𝑖 𝑗 = 1, to make sure that the selected neighbors by ILP-GNN have a positive effect on the final prediction, we randomly select 𝐾 neighbors of node 𝑣 𝑖 from N 𝑖 \\N 𝑟 𝑖 , where N 𝑖 \\N 𝑟 𝑖 represents the set of 𝑣 𝑖 's neighbors excluding those selected by ILP-GNN.\nWe do the same operation for node 𝑣 𝑗 . Let the obtained two random neighbor sets be {N \nh rand 𝑖 = h 𝑟 𝑖 + 𝛽 ∑︁ 𝑣 𝑐 ∈N rand 𝑖 𝑏 rand 𝑖𝑐 h 𝑟 𝑐 ,(9)\nwhere 𝑏 rand 𝑖𝑐 is the weighted scores normalized on N rand 𝑖 using Eq.( 6). h rand 𝑖 is 𝑣 𝑖 's representation by aggregating randomly selected neighbors of 𝑣 𝑖 . Then we can calculate the predicted link existence probability 𝑝 rand 𝑖 𝑗 for 𝑣 𝑖 and 𝑣 𝑗 using h rand 𝑖 and h rand 𝑗 as:\n𝑝 rand 𝑖 𝑗 = sigmoid((h rand 𝑖 ) 𝑇 h rand 𝑗 ).(10)\nIntuitively, we would expect h 𝑖 and h 𝑗 using {N 𝑟 𝑖 , N 𝑟 𝑗 } to be more effective for link prediction than h rand 𝑖 and h rand 𝑗 using randomly selected neighbors. In other word, 𝑝 𝑖 𝑗 should be larger than that 𝑝 rand 𝑖 𝑗 by a margin 𝛿 with 0 < 𝛿 < 1; otherwise, we penalize our model. This can be mathematically written as:\nL 𝑝 dis = ∑︁ 𝑒 𝑖 𝑗 ∈E 𝐿 ,𝑒 𝑖 𝑗 =1 max(0, 𝑝 rand 𝑖 𝑗 + 𝛿 -𝑝 𝑖 𝑗 ),(11)\nSecond, for each node pair (𝑣 𝑖 , 𝑣 𝑗 ) without a link, we hope that our model gives a lower predicted probability for h 𝑖 and h 𝑗 learned from {N 𝑟 𝑖 , N 𝑟 𝑗 }. This predicted probability is also determined by the similarity between h 𝑖 and h 𝑗 in Eq.( 8). In other words, if nodes 𝑣 𝑖 and 𝑣 𝑗 have lower similarity scores with nodes in N 𝑟 𝑗 and N 𝑟 𝑗 respectively, the similarity of h 𝑖 and h 𝑗 will be small and the model will give a lower probability for the link of (𝑣 𝑖 , 𝑣 𝑗 ). Therefore, the selected neighbors set N 𝑟 𝑖 of node 𝑣 𝑖 should be assigned lower similarity scores w.r.t node 𝑣 𝑗 . It also holds true for node 𝑣 𝑗 . To achieve this purpose, we randomly sample unlinked pairs 𝑒 𝑖 𝑗 = 0 which have the same number as the number of linked pairs 𝑒 𝑖 𝑗 = 1 in E 𝐿 . The set of randomly selected unlinked pairs can be denoted as E 𝑁 . The similarity scores can be minimized through the following loss function: \nL 𝑛 dis = ∑︁ 𝑒 𝑖" }, { "figure_ref": [], "heading": "Overall Objective Function", "publication_ref": [ "b40", "b11" ], "table_ref": [], "text": "Link prediction can be treated as a binary classification problem. Since the majority of node pairs are unconnected, most of the elements in adjacency matrix are 0. To avoid the the missing links dominating the loss function, following [41], we adopt negative sampling to alleviate this issue. We treat each linked pair in E 𝐿 as positive samples. For each positive sample, we randomly sample one unlinked pair as the negative sample. 
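For the positive-pair term of Eq.(11), a compact sketch is given below. It assumes the pair-specific embeddings have already been computed twice for each positive pair, once from the selected neighbors (Eq.(7)) and once from K randomly drawn neighbors (Eq.(9)); the function name and batched tensor shapes are illustrative.

import torch

def explanation_margin_loss(h_i, h_j, h_i_rand, h_j_rand, delta=0.5):
    # Eqs.(8)-(11): the prediction obtained from the K selected neighbors should exceed
    # the prediction obtained from K randomly drawn neighbors by a margin delta.
    p_sel = torch.sigmoid((h_i * h_j).sum(dim=-1))             # Eq.(8) on selected neighbors
    p_rand = torch.sigmoid((h_i_rand * h_j_rand).sum(dim=-1))  # Eq.(10) on random neighbors
    return torch.clamp(p_rand + delta - p_sel, min=0).sum()    # Eq.(11), summed over positive pairs

The counterpart for sampled unlinked pairs (Eq.(12)) instead penalizes the learned similarity scores of the selected neighbors and is omitted from this sketch.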
Then, we treat link prediction as a binary classification problem to predict positive and negative samples.\nA cross-entropy loss is adopted for this binary classification problem and the loss can be written as:\nL cls = ∑︁ 𝑒 𝑖 𝑗 ∈E 𝐿 -log 𝑝 𝑖 𝑗 + ∑︁ 𝑒 𝑖 𝑗 ∈E 𝑁 -log 1 -𝑝 𝑖 𝑗 ,(13)\nwhere 𝑝 𝑖 𝑗 is the predicted probability for the node pair (𝑣 𝑖 , 𝑣 𝑗 ) in the Eq.( 8) and E 𝑁 is the set of negative samples in Eq. (12). The final loss function of ILP-GNN is given as:\nmin Θ L = L cls + 𝜆(L 𝑝 dis + L 𝑛 dis ),(14)\nwhere 𝜆 controls the balance between classification loss and the loss to enhance the explanation for links between node pairs, and Θ is the set of learnable parameters for our proposed ILP-GNN." }, { "figure_ref": [], "heading": "Algorithm 1 Training Algorithm of ILP-GNN.", "publication_ref": [], "table_ref": [], "text": "Input: G = (V, E 𝐿 , X), 𝐾, 𝜆, 𝛼, 𝛿 Output: GNN model 𝑔 𝜃 with explanation for link prediction. For each node pair (𝑣 𝑖 , 𝑣 𝑗 ), assign weights 𝑠 ST (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) to neighbors of 𝑣 𝑖 by high-order structure similarity in Eq.( 2)." }, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "Learn node feature representation by Eq.( 3) and assign weights to neighbors of 𝑣 𝑖 by node similarity in Eq.( 4)." }, { "figure_ref": [], "heading": "6:", "publication_ref": [ "b8" ], "table_ref": [], "text": "Do the same operation on 𝑣 𝑗 and aggregate top 𝐾 neighbors of 𝑣 𝑖 and 𝑣 𝑗 with two kinds of weights in Eq.( 7).\n7:\nCalculate the probability 𝑝 𝑖 𝑗 of a link between two nodes. The training algorithm of ILP-GNN is given in Algorithm 1. We utilize 𝑒 𝑖 𝑗 = 1 as positive samples and 𝑒 𝑖 𝑗 = 0 as negative samples. ILP-GNN assigns weights to neighbors of nodes based on both node and high-order structure similarity. The top 𝐾 neighbors with high weights are aggregated to learn representation for self-explainable link predictions. Specifically, in line one, we initialize the parameters of the model with Xavier initialization [9]. Then, we calculate high-order similarity by Eq.( 1). In lines 4 to line 6, for each node pair (𝑣 𝑖 , 𝑣 𝑗 ) with or without links, we assign weights to neighbors of 𝑣 𝑖 and 𝑣 𝑗 by learned node and high-order structure similarity. Then, ILP-GNN aggregates top 𝐾 neighbors with high weights to obtain pair-specific representation vectors h 𝑖 , h 𝑗 . Also, these top 𝐾 neighbors, which represent common interests between node pairs, can be treated as the explanation for the existence of links. In line 7, the probability of a link is calculated based on node representation vectors in Eq.( 8). To guarantee the quality of explanation, in lines 8 and 9, for positive samples, L 𝑝 dis is proposed to make the predicted probability from ILP-GNN larger than the probability predicted from representation vectors with randomly selected neighbors. For negative samples, L 𝑛 dis is applied to make weights of neighbors small which represents neighbors of one node are dissimilar to another node. In this case, the model will give low predicted probabilities for the existence of links. Finally, the model is optimized on the total loss function Eq.( 14)." }, { "figure_ref": [], "heading": "Time Complexity Analysis.", "publication_ref": [ "b16", "b16" ], "table_ref": [], "text": "The main time complexity of our model comes from calculating the node similarity and high-order structure similarity together with our proposed loss function. 
For high-order structure similarity, the time complexity of Personalized PageRank (PPR) is denoted as O (𝑘 |E 𝐿 |), where 𝑘 is the number of iterations in Eq.( 1). Also, high-order structure similarity can be pre-computed which will not influence the training process of the model. For node similarity, the time complexity for links the edges according to time, where the collaboration edges until 2017 are used as training edges, those in 2018 are used as validation edges, and those in 2019 are test edges. For Synthetic datasets, we randomly select 40% edges from edges set with groundtruth explanation as testing set and 10% as validation set. The remaining edges are used as training samples. We maintained consistency across datasets by following VAGE's split proportion and randomly selecting node pairs not in the training set as negative samples following [17]. Note that this approach differs from WP and SEAL, which selected pairs not present in any of the training, validation, or test sets. The number of negative samples is equal to the number of positive samples. Positive and negative samples are then combined as our training, validation and testing sets [17].\n𝑒 𝑖 𝑗 ∈ E is O ( 𝑒 𝑖 𝑗 ∈ E |N 𝑖 ||N 𝑗 |𝑑)," }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b23", "b1", "b16", "b15", "b33", "b40", "b35", "b26", "b37" ], "table_ref": [], "text": "Baselines. We compare the proposed framework with representative and state-of-the-art methods for link prediction, which include:\n• CN [24]: Common-neighbor index counts the number of common neighbors to predict the link for a pair of nodes. • AA [2]: Adamic-Adar index is a second-order traditional heuristic method. It assumes that a shared neighbor with the large degree is less significant to the measure of a link. • VGAE [17]: VGAE is a generative model for graph representation. We use a GCN as the encoder where the second layer has two channels for mean and deviations to sample the latent embeddings and a link reconstruction module as the decoder.\n• GCN [16]: GCN is one of the most popular spectral GNN models based on graph Laplacian, which has shown great performance for node classification. To adopt it for link prediction, we treat it as the encoders in the Graph Autoencoder manner. • GAT [34]: Instead of treating each neighbor node equally, GAT utilizes an attention mechanism to assign different weights to nodes in the neighborhood during the aggregation step.\n• SEAL [41]: SEAL is a link prediction method that extracts local subgraphs of node pairs and learns link prediction heuristics from them automatically. • CONPI [36]: CONPI is an interpretable model to compare the similarity between neighbors sets of two nodes for link prediction. It has two variants and we report the best results of them.\n• WP [27]: WalkPooling (WP) jointly encodes node representations and graph topology into learned topological features. Then, these features are used to enhance representation of extracted subgraphs which are relevant to links of node pairs .\nAs the work on explainable GNN for link prediction is rather limited, we also adapt a popular post-hoc explanation model GN-NExplainer for post-hoc link prediction.\n• GNNExplainer [38]: GNNExplainer takes a trained GNN and the predictions as input to obtain post-hoc explanations. A soft edge mask is learned for each instance to identify the crucial subgraph. We adopt GNNExplainer to find crucial neighbors of node pairs with links for link prediction.\nConfigurations. 
All experiments are conducted on a 64-bit machine with Nvidia GPU (NVIDIA RTX A6000, 1410MHz , 48 GB memory). For a fair comparison, we utilize a two-layer graph neural network for all methods, and the hidden dimension is set as 128. The learning rate is initialized to 0.001. Besides, all models are trained until converging, with the maximum training epoch as 1000. The implementations of all baselines are based on Pytorch Geometric or their original code. The hyperparameters of all methods are tuned on the validation set. In particular, for the proposed framework, we select 𝐾 from 1 to 6 and vary 𝜆 as {0.1, 0.3, 0.5, 0.7}. The 𝛼 which balances the node similarity and high-order structure similarity is fixed as 0.3 for all datasets. The margin 𝛿 in Eq.( 11) is set as 0.5." }, { "figure_ref": [], "heading": "Performance on Link Prediction", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "In this subsection, we compare the performance of the proposed ILP-GNN with baselines for link prediction on real-world graphs introduced in Sec. 5.1, which aims to answer RQ1. Each experiment is conducted 5 times for all datasets and the average link prediction AUC scores with standard deviations are reported in Table 1. From the table, we make the following observations:\n• Our method outperforms VGAE, GCN and GAT on various realworld datasets. This is because for each node pair (𝑣 𝑖 , 𝑣 𝑗 ), our model can select 𝐾 neighbors of 𝑣 𝑖 and 𝑣 𝑗 separately, which contain common characteristics of (𝑣 𝑖 , 𝑣 𝑗 ) and are highly relevant to the factors for the existence of the link. By aggregation these 𝐾 significant neighbors, our model can learn pair-specific representations of 𝑣 𝑖 and 𝑣 𝑗 for the link and predict it accurately. • ILP-GNN can outperform SEAL and its variant WP which extracts subgraphs to learn link prediction heuristics. The reason is that our model can implicitly find local neighbors which can also explore link prediction heuristics like common neighbors and high-order structure information. • Though CONPI also compares neighbors of node pairs and finds relevant neighbors for link prediction, our model can outperform CONPI, which shows the effectiveness of our method in selecting neighbors for link prediction and our loss function can guide the model to obtain link prediction relevant neighbors." }, { "figure_ref": [ "fig_4" ], "heading": "Explanation Quality", "publication_ref": [], "table_ref": [ "tab_6", "tab_6", "tab_7" ], "text": "In this subsection, we conduct quantitatively experimental comparisons and visualization to answer RQ2. are AUC value with regard to different predicted results, and p𝑖 𝑗 is the original predicted result. We compare our model with GAT and CONPI which can also assign weights to neighbors for link prediction. We vary the number of deleted neighbor 𝑀 as {1, 2, 3, 4}. All experiments are conducted five times with random splits and the results are reported in Figure 3. From the figure, we make the following observations: (i) ILP-GNN consistently outperforms two baselines with different deleted neighbors. Compared with GAT, for different links (𝑣 𝑖 , 𝑣 𝑗 ) and (𝑣 𝑖 , 𝑣 𝑘 ), ILP-GNN learns pair specific representation for 𝑣 𝑖 by selecting different neighbors of 𝑣 𝑖 relevant to the existence of various links. However, GAT assigns higher weights to the same neighbors and learns one representation of 𝑣 𝑖 for different links. In real-word graphs, different links may be from different factors, which are relevant to different neighbors of nodes. 
Therefore, our proposed ILP-GNN can explore the relevant neighbors for links with different factors, which will greatly improve link prediction and lead to higher fidelity scores. (ii) Also, our proposed loss function can help the model select neighbors to give high probabilities to links and low probabilities to no-links. It can help the model find neighbors relevant to links. Thus, our model can achieve the best fidelity score by deleting neighbors. We also do the same operations on 𝑣 𝑗 . We then calculate the precision@2 and precision@1 for the ranked neighbors list of 𝑣 𝑖 and 𝑣 𝑗 w.r.t groundtruth explanation neighbors set of 𝑣 𝑖 and 𝑣 𝑗 . Also, we includes the baseline (named Random in Table 2) which randomly select 𝐾 neighbors of 𝑣 𝑖 and 𝑣 𝑗 and evaluate the explanation performance based on randomly selected 𝐾 neighbors list. To further demonstrate the effectiveness of our model on link prediction explanation, we adopt one classical post-hoc explanation method, GNNExplainer, which is originally designed for node classification. For the link between 𝑣 𝑖 and 𝑣 𝑗 , we adopt GNNExplainer to find the top 𝐾 crucial neighbors of them for the link between them. We treat these 𝐾 neighbors of 𝑣 𝑖 and 𝑣 𝑗 as explanation neighbors of them. The higher the precision@1 or precision@2 is, the closer the explainable neighbors found by the model with the groundtruth. We evaluate explanation performance by precision@1 or precision@2 and link prediction performance by AUC. All experiments are conducted five times and the average results and standard deviations on the three synthetic datasets are reported in Table 2 and Table 3 for explanation and link prediction performance. From the results, we make the following observations: (i) GNNExplainer only has a little improvement compared with the Random method. It demonstrates that current explanation methods designed for node classification can't be easily adopted to link prediction. (ii) ILP-GNN consistently outperforms all baselines on both explanation and link prediction metrics, which indicates that it can retrieve reliable 𝐾 neighbor nodes for prediction and explanation simultaneously." }, { "figure_ref": [ "fig_8" ], "heading": "Visualization of Weights.", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Finally, we visualize the distribution of learned weight scores for our model and CONPI on both existent links and no-links. The experiment is conducted on Cora and the results are shown in Figure 6. We can observe that CONPI can't recognize different neighbors for links and no-links, which shows the reason for their bad performance in Table 1. For each linked node pair (𝑣 𝑖 , 𝑣 𝑗 ), our model assigns higher weights to neighbors of 𝑣 𝑖 that indicate neighbors of it are similar to 𝑣 𝑗 . It will result in higher predicted probabilities by aggregating these neighbors to learn representation h 𝑖 and then predict probabilities for this link in Eq.( 8). Also, for dissimilar neighbors, ILP-GNN will give lower " }, { "figure_ref": [ "fig_9" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "To answer RQ3, we conduct an ablation study to explore the effects of both node and structure similarity to find 𝐾 interpretable neighbors for the link prediction task. ILP-GNN\\E denotes the variant that we don't use structure similarity measurement to find neighbors, i.e., setting 𝛼 = 0 in Eq.( 5). ILP-GNN\\F denotes the variant without node similarity measurement in Eq.( 5), i.e., setting 𝛼 = 1. 
ILP-GNN\\F, E represents the GCN model that aggregates all neighbors without weights. The experimental results on Cora and Photo are reported in Figure 7. We can observe that: (a) two variants of ILP-GNN and ILP-GNN by predicting links with selecting neighbors can greatly outperform ILP-GNN\\F, E. It means that the proposed interpretable neighbor aggregation module can greatly improve the performance of link prediction by finding 𝐾 neighbors of one node, which are similar to other linked nodes. (b) ILP-GNN can outperform ILP-GNN\\F and ILP-GNN\\E, which implies that combining both node and structure similarity is helpful for finding relevant neighbors to improve the link prediction." }, { "figure_ref": [ "fig_7" ], "heading": "Hyperparameter Sensitivity Analysis", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "In this section, we explore the sensitivity of the most important hyperparameters 𝐾 and 𝜆, which control the number of neighbors to select and the contribution of the Explanation Enhancement module, respectively. Specifically, we vary 𝐾 as {1, 2, 4, 6} and 𝜆 as {0.1, 0.3, 0.5, 0.7}. The other settings are the same as the experiment for Table 1. We report the AUC value for link prediction on Cora and Citeseer. The experiment is conducted five times with random splits and the average results are shown in Figure 5. From the figure, we observe that: (i) with the increase of 𝐾, the performance firstly increases and then decreases. When 𝐾 is small, a small number of relevant neighbors are selected. A small set of neighbors may not reflect the characteristics of nodes to build links with other nodes, which results in poor performance. When 𝐾 is large, the large number of selected neighbors may contain bias and will cover important characteristics of nodes building links. Therefore, when 𝐾 is in the range of 2 to 4, the performance is generally good. (ii) with the increasing of 𝜆, the performance of ILP-GNN tends to firstly increase and then decrease. When 𝜆 is small, little supervision is received to select neighbors which may result in a high predicted probability for positive samples and low probabilities for negative samples. Also, large 𝜆 will be dominated by enhancing selected neighbors for link prediction, which can also lead to poor performance. For 𝜆, a value between 0.3 to 0.5 generally gives a good performance." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "We conduct a case study to show the importance of selected neighbors for the decision process of link prediction. Specifically, we apply t-SNE to the learned representation of nodes, i.e. h 𝑖 for node 𝑣 𝑖 in Eq.( 7), with aggregating selected neighbors from our model. Then, we learn another representation h 𝑠 𝑖 by aggregating neighbors without these selected neighbors. The relative positions of nodes for these two representations are visualized. Also, we visualize their local 1-hop neighbors to obtain the positions of nodes based on node similarity as shown in Figure 4. We show cases of links between two nodes with and without common neighbors and node pairs without links. As shown in Figure 4 (a), the learned representation with selected neighbors from our model will have higher similarity in the left figure (𝑣 1 is near to 𝑣 2 ). However, the learned embedding without these neighbors will result in a lower similarity in the right figure (two nodes are distant). 
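The visualization step of the case study can be reproduced with a few lines. The sketch below assumes the two embedding variants (with and without the selected neighbors) are already available as NumPy arrays and uses scikit-learn's t-SNE and matplotlib, which is an assumption about tooling rather than the authors' exact setup.

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_case(H_with, H_without, pair, neighbor_idx):
    # Project both embedding variants to 2-D and compare the relative position of the
    # node pair (pair) against its 1-hop neighbors (neighbor_idx).
    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    for ax, H, title in zip(axes, (H_with, H_without),
                            ("with selected neighbors", "without selected neighbors")):
        Z = TSNE(n_components=2, init="pca", perplexity=5).fit_transform(H)
        ax.scatter(Z[neighbor_idx, 0], Z[neighbor_idx, 1], c="lightgray", s=15)
        ax.scatter(Z[list(pair), 0], Z[list(pair), 1], c="red", s=40)
        ax.set_title(title)
    plt.show()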
For no-links, our model will make the learned representation of both 𝑣 1 and 𝑣 2 with their neighbors distant. And the model will give lower probabilities for no-links. Therefore, our model can provide both predictions and explanations accurately by finding significant neighbors." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we study a novel problem of self-explainable GNNs for link prediction by exploring 𝐾 important neighbors for links. We propose a novel framework, which designs an interpretable aggregation module for finding 𝐾 neighbors relevant to factors of links and simultaneously uses these neighbors to give predictions and explanations. Also, a novel loss function is proposed to enhance the generation of explanations and also the performance of the link prediction task. Extensive experiments on real-world and synthetic datasets verify the effectiveness of the proposed ILP-GNN for explainable link prediction. An ablation study and parameter sensitivity analysis are also conducted to understand the contribution of the proposed modules and sensitivity to the hyperparameters. There are several interesting directions that need further investigation. One direction is to extend ILP-GNN for dynamic network link prediction. Another direction is to design more efficient and learnable approaches to explore high-order similarity in graphs." }, { "figure_ref": [], "heading": "EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct experiments on real-world and synthetic datasets to verify the effectiveness of ILP-GNN. In particular, we aim to answer the following research questions: (RQ1) Can our proposed method provide accurate predictions for link prediction? (RQ2) Can ILP-GNN learn reasonable explanation for the existence of links? (RQ3) How do two similarity measurement methods of our ILP-GNN contribute to the link prediction performance?" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b15", "b31", "b22", "b11", "b34", "b11", "b34", "b16", "b11" ], "table_ref": [], "text": "We conduct experiments on four publicly available real-world datasets and their details are shown below:\n• Cora and Citeseer [16]: These two datasets are citation networks where nodes are papers and edges are their citation relations. Each node has a sparse bag-of-words feature vector. • Photo [32]: This dataset is a subgraph of the Amazon co-purchase graph [23], where nodes are products and two frequently purchased products are connected via an edge. Each node has a bag-of-word feature vector of reviews. • Ogbn-arxiv [12]: It is a citation network between all Computer Science arXiv papers indexed by MAG [35]. Nodes in this dataset represent papers and edges indicate one paper citing another one. Each paper has a 128-dimensional feature vector obtained by averaging the embeddings of words in its title and abstract. • Ogbl-collab [12]: This dataset is a subset of the collaboration network between authors indexed by MAG [35], where nodes are authors and edges indicate the collaboration between authors. All nodes have 128-dimensional feature vectors by averaging the embeddings of words in papers published by authors.\nSynthetic-Data: Since the publicly available datasets don't have groundtruth explanations, to quantitatively evaluate the explanation of the proposed method, we also construct synthetic datasets that have groundtruth explanation for links. 
We first generate a set of 𝑁 nodes, {𝑣_1, 𝑣_2, ..., 𝑣_𝑁}, with their feature information. Then, for the generation of a link between the node pair (𝑣_𝑖, 𝑣_𝑗), the link is determined by 𝐵 neighbors of 𝑣_𝑖, and these 𝐵 neighbors share both high structure and node similarity with 𝑣_𝑗. Similarly, since the graph is undirected, we also consider 𝐵 neighbors of 𝑣_𝑗 that are similar to 𝑣_𝑖 for the generation of links. These 𝐵 neighbors of 𝑣_𝑖 and 𝐵 neighbors of 𝑣_𝑗 can be treated as the groundtruth explanation for this link. We fix 𝑁 as 1000 and generate three datasets with different numbers of edges for our experiments. The detailed construction process of the synthetic datasets is given in Appendix A.
The statistics of these datasets are summarized in Table 4 in Appendix A. Following [17], for Cora, Citeseer and Photo, we randomly split the edges of each dataset into 85%/5%/10% as train/val/test. The random split is conducted 5 times and average performance is reported. For Ogbn-arxiv, we randomly split the edges into 60%/10%/30% as train/val/test. For Ogbl-collab, following [12], we split " }, { "figure_ref": [], "heading": "A DATASETS", "publication_ref": [], "table_ref": [], "text": "We report the dataset statistics in this section; the table includes the number of features, the number of edges, and the number of nodes for each dataset. For the synthetic data, we generate three datasets with the same number of nodes but different numbers of edges; they are shown in Table 4. A detailed description of the synthetic datasets is given below: Synthetic Datasets: For the synthetic data, we assume that the feature of each node is sampled from a Gaussian Mixture Model (GMM) 𝑝(𝑥_𝑖) = ∑_{𝑗=1}^{𝑀} 𝜙_{𝑖𝑗} N(𝜇_𝑗, 𝐼) with 𝑀 components and different weights for each component. Specifically, the 𝑁 nodes are divided into 𝑀 groups, one per GMM component, with an equal number of nodes in each, and each node is assigned a label 𝑦_𝑖 ∈ R^𝑀, a one-hot vector indicating which group it belongs to. Then, we add random noise 𝛿_𝑖 ∈ R^𝑀, where the value of each dimension is sampled from a uniform distribution 𝑈(0, 0.1), and obtain the weight vector 𝜙_𝑖 = normalize(𝑦_𝑖 + 𝛿_𝑖). We then obtain node features 𝑋 sampled from the above Gaussian Mixture Model. Based on 𝑋, we generate the edge between nodes 𝑣_𝑖 and 𝑣_𝑗 via 𝑒_{𝑖𝑗} ∼ Bern((cos(𝑥_𝑖, 𝑥_𝑗) + 1)/2) with a Bernoulli distribution to obtain graph G_1, which preserves the homophilous properties of graphs. Then, we expand G_1 to G_2 with explainability for the existence of edges. Note that the explanation in this paper is about selecting neighbors which are relevant to the existence of links, so we select 𝐾 neighbors of each node as the groundtruth explanation. Firstly, we assume that if a node 𝑣_𝑖 is similar to the neighbors of another connected or unconnected node 𝑣_𝑗, these two nodes are likely connected. Thus, we can get 𝑠_NO(𝑣_𝑖, 𝑣_𝑐, 𝑣_𝑗) = (1/|𝑁_𝑖|) ∑_{𝑐∈𝑁_𝑖} (cos(𝑣_𝑐, 𝑣_𝑗) + 1)/2. Secondly, we assume that a node 𝑣_𝑖 is near to the neighbors of another node 𝑣_𝑗. Specifically, we first count the number of paths of length at most 3 via 𝑆_ST = 𝐴 + (1/2)𝐴^2 + (1/3)𝐴^3 and normalize it by " }, { "figure_ref": [], "heading": "B RUNNING TIME COMPARISON", "publication_ref": [], "table_ref": [], "text": "We also conduct experiments to compare our training time with the baselines. For comparison, we consider three representative and state-of-the-art baselines that achieve strong performance on link prediction: CONPI, SEAL and WP. 
For SEAL and WP, which sample one subgraph per link, we include this sampling process in the running-time calculation. We conduct all running-time experiments on the same 64-bit machine with an NVIDIA RTX A6000 GPU (1410 MHz, 48 GB memory). Furthermore, we report the running time of each model over the number of epochs that achieves its best results. Table 5 shows the running times of all models. We observe that our model (ILP-GNN) has a shorter running time than CONPI and SEAL on most datasets.
Our model also runs faster than WP on larger datasets such as Photo, and has comparable running time to WP on Cora and Citeseer." } ]
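To make the synthetic-data construction of Appendix A concrete, the following minimal sketch builds the homophilous base graph G_1 (mixture-model features plus Bernoulli edges on cosine similarity). It is an illustrative reconstruction, not the authors' released code: the feature dimension, the number of mixture components, and all function names are assumptions chosen for the example.

```python
import numpy as np

def generate_g1(n_nodes=1000, n_groups=5, dim=16, noise=0.1, seed=0):
    """Sketch of the G1 construction: GMM-style node features and
    homophilous edges drawn from Bernoulli((cos(x_i, x_j) + 1) / 2)."""
    rng = np.random.default_rng(seed)

    # One Gaussian component per group; equal number of nodes per group.
    means = rng.normal(size=(n_groups, dim))
    groups = np.repeat(np.arange(n_groups), n_nodes // n_groups)

    # One-hot labels plus small uniform noise give the mixture weights phi_i.
    onehot = np.eye(n_groups)[groups]
    phi = onehot + rng.uniform(0.0, noise, size=onehot.shape)
    phi = phi / phi.sum(axis=1, keepdims=True)

    # Sample each node's feature from its weighted mixture of components N(mu_j, I).
    comp = np.array([rng.choice(n_groups, p=p) for p in phi])
    x = means[comp] + rng.normal(size=(len(groups), dim))

    # Edge (i, j) ~ Bernoulli((cos(x_i, x_j) + 1) / 2), sampled on the upper triangle.
    normed = x / np.linalg.norm(x, axis=1, keepdims=True)
    prob = (normed @ normed.T + 1.0) / 2.0
    upper = np.triu(rng.random(prob.shape) < prob, k=1)
    adj = upper | upper.T          # undirected adjacency of G1
    np.fill_diagonal(adj, False)
    return x, adj

x, adj = generate_g1()
print(x.shape, int(adj.sum()) // 2)   # feature matrix and edge count of G1
```

Expanding G_1 to G_2 then amounts to selecting, for each node, the neighbors with the highest combined node and structure similarity to the other endpoint, as described in Appendix A; that step is omitted from the sketch.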
Graph Neural Networks (GNNs) have achieved state-of-the-art performance for link prediction. However, GNNs suffer from poor interpretability, which limits their adoption in critical scenarios that require knowing why certain links are predicted. Despite the various methods proposed for the explainability of GNNs, most of them are post-hoc explainers developed for explaining node classification. Directly adopting existing post-hoc explainers for explaining link prediction is sub-optimal because: (i) post-hoc explainers usually adopt another strategy or model to explain a target model, which could misinterpret the target model; and (ii) GNN explainers for node classification identify crucial subgraphs around each node for the explanation, while for link prediction one needs to explain the prediction for each pair of nodes based on graph structure and node attributes. Therefore, in this paper, we study a novel problem of self-explainable GNNs for link prediction, which can simultaneously give accurate predictions and explanations. Concretely, we propose a new framework that finds various 𝐾 important neighbors of one node to learn pair-specific representations for links from this node to other nodes. These 𝐾 different neighbors represent important characteristics of the node and model various factors for links from it. Thus, the 𝐾 neighbors can provide explanations for the existence of links. Experiments on both synthetic and real-world datasets verify the effectiveness of the proposed framework for link prediction and explanation.
Self-Explainable Graph Neural Networks for Link Prediction
[ { "figure_caption": "Figure 1 :1Figure 1: Example of factors for the link between 𝑣 𝑖 and 𝑣 𝑗 .", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An overview of the proposed ILP-GNN.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4. 4 . 141Training Algorithm.", "figure_data": "", "figure_id": "fig_2", "figure_label": "41", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Results on fidelity scores.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "A link between nodes 𝑣 1 and 𝑣 2 without common neighbors A link between nodes 𝑣 1 and 𝑣 2 with common neighbors No link between nodes 𝑣 1 and 𝑣 2", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Case Study for node pairs with links or without links on Cora.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 55Figure 5: Hyperparameter Sensitivity Analysis", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Visualization of weighted scores.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Ablation Study on Cora and Photo datasets.probabilities to no-links. Therefore, it further demonstrates the effectiveness of our model to select relevant neighbors to improve the performance of link prediction.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "𝑣 𝑖 and 𝑣 𝑗 , respectively. Note that there are nodes whose |N 𝑖 | -|N 𝑟 𝑖 | is smaller than 𝐾 and we will not perform the following objective function. Then we will use {N rand 𝑣 𝑖 and 𝑣 𝑗 to predict link between (𝑣 𝑖 , 𝑣 𝑗 ). This predicted link probability should be smaller than that of using {N 𝑟 𝑖 , N 𝑟 𝑗 } as we expect {N 𝑟 𝑖 , N 𝑟 𝑗 } to be more effective than {N rand", "figure_data": "rand 𝑖 } for nodes 𝑖 , N rand 𝑗 , N rand 𝑗 } to learn node representation of𝑖, N rand 𝑗} . Specifically,the node representation of 𝑣 𝑖 by aggregating randomly selectedneighbors N rand", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "where 𝑑 is the embedding dimension and |N 𝑖 | is the number of neighbors for 𝑣 𝑖 . The time complexity of the proposed loss function is O (𝐾 |E 𝐿 |𝑑). Therefore, the overall time complexity for the training phase in each iteration is O ( 𝑒 𝑖 𝑗 ∈E 𝐿 |N 𝑖 ||N 𝑗 |𝑑 + 𝐾 |E 𝐿 |𝑑). The time complexity of the testing phase is O ( 𝑒 𝑖 𝑗 ∈E 𝑈 |N 𝑖 ||N 𝑗 |𝑑). A detailed training time comparison is given in Appendix B. Link Prediction performance (AUC(%) ± Std.) on all graphs. 
± 0.15 75.57 ± 0.13 91.18 ± 0.41 91.57 ± 0.69 91.87 ± 0.93 92.21 ± 1.23 89.69 ± 0.32 92.42 ± 1.1 93.21 ± 1.14 Citeseer 69.80 ± 0.22 69.70 ± 0.23 91.42 ± 0.96 92.51 ± 1.00 93.57 ± 0.64 90.52 ± 1.29 87.47 ± 0.10 91.37 ± 0.98 95.23 ± 1.33 Photo 96.59 ± 0.22 96.21 ± 0.04 97.03 ± 0.15 97.08 ± 0.13 96.47 ± 0.19 98.04 ± 0.70 96.45 ± 0.42 98.12 ± 0.14 98.23 ± 0.04 Ogbn-arxiv 82.41 ± 0.02 82.43 ± 0.01 95.05 ± 0.07 95.27 ± 0.06 94.79 ± 0.03 95.30 ± 0.04 94.22 ± 0.08 95.33 ± 0.02 95.42 ± 0.03 Ogbn-collab 58.09 ± 0.09 57.88 ± 0.02 96.27 ± 0.07 96.73 ± 0.02 96.64 ± 0.05 93.57 ± 0.04 92.31 ± 0.03 96.81 ± 0.02 97.17 ± 0.01", "figure_data": "MethodAACNVGAEGCNGATSEALCONPI-PairWPILP-GNNCora75.80", "figure_id": "tab_5", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Explanation Performance on Synthetic Dataset Results on Fidelity Scores. We first demonstrate the effectiveness of explanation in terms of fidelity scores. The fidelity score measures the link prediction performance drop for each pair of nodes when important neighbors of the pair of nodes are removed. Intuitively, if a model can capture important neighbors of a pair of nodes for link prediction, the removal of such neighbors would result in a significant performance drop. Specifically, for each node pair (𝑣 𝑖 , 𝑣 𝑗 ) in the test set, take node 𝑣 𝑖 as an example, we denote the delete top 𝑀 neighbors based on weight scores given by the model as N 𝑜 𝑖 . We delete will N 𝑜 𝑖 and obtain a new neighbor set N 𝑖 \\N 𝑜 𝑖 . We then aggregate neighbors 𝑣 𝑐 of 𝑣 𝑖 from this new set 𝑣 𝑐 ∈ N 𝑖 \\N 𝑜 𝑖 to obtain the representation vector h 𝑤 𝑖 via different aggregation methods from different models, i.e., GAT, CONPI, ILP-GNN. We do the same operations for node 𝑣 𝑗 to get the representation h 𝑤 𝑗 . Note that if |N 𝑖 | ≤ 𝑀 and the new neighbors set is empty, we only use their features. Then, we can obtain the new link prediction score p𝑤 𝑤 𝑗 by Eq.(8). The fidelity score Δ𝐴𝑈𝐶% is calculated as Δ𝐴𝑈 𝐶% = 𝑒 𝑖 𝑗 ∈ E 𝑈 (𝐴𝑈𝐶 p𝑖 𝑗 -𝐴𝑈𝐶 p𝑤", "figure_data": "MethodSyn-sparse Precision@1 Precision@2 Precision@1 Precision@2 Precision@1 Precision@2 Syn-medium Syn-denseRandom30.62 ± 1.7731.96 ± 1.0326.44± 1.0525.85± 0.9312.75± 0.9213.15 ± 0.82GAT33.92 ± 2.2139.40 ± 1.6828.39 ± 1.5828.17 ± 1.3124.05 ± 2.3423.74 ± 1.06CONPI39.29 ± 2.5346.22 ± 2.0330.75 ± 2.5315.23 ± 4.1232.46 ± 1.2119.22 ± 1.03GNNExplainer 24.46 ± 1.9734.01 ± 1.7227.22 ± 1.9732.75 ± 1.8529.90 ± 1.8435.60 ± 1.89ILP-GNN49.68 ± 3.87 65.25 ± 3.49 81.27 ± 2.67 84.31 ± 1.76 79.10 ± 0.15 82.22 ± 0.305.4.1 𝑖 𝑗using h 𝑤 𝑖 and h 𝑖 𝑗)%, where 𝐴𝑈𝐶 p𝑖 𝑗 and𝐴𝑈 𝐶 p𝑤 𝑖 𝑗", "figure_id": "tab_6", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Link Prediction Performance (AUC %) on Synthetic Dataset. Results on Synthetic Datasets. Secondly, we evaluate the explanation quality on synthetic datasets with groundtruth explanation. Specifically, in the synthetic datasets, for each node pair (𝑣 𝑖 , 𝑣 𝑗 ), 𝑣 𝑖 is similar to 𝐾 neighbors of node 𝑣 𝑗 and 𝑣 𝑗 are similar to 𝐾 neighbors of 𝑣 𝑖 , which lead to the link between them. We treat these 𝐾 neighbors of 𝑣 𝑖 and 𝐾 neighbors of 𝑣 𝑗 as explanation neighbors for the link between them. Therefore, for the task to find explanation neighbors in the synthetic datasets, the model should find the correct 𝐾 neighbors of 𝑣 𝑖 and 𝐾 neighbors of 𝑣 𝑗 for explainable link prediction. 
Specifically, for each node pair (𝑣_𝑖, 𝑣_𝑗), we rank the neighbors of 𝑣_𝑖 based on the weight scores assigned by each model, i.e., ILP-GNN, GAT and CONPI.", "figure_data": "Method | GCN | GAT | CONPI | ILP-GNN
Syn-sparse | 78.88 ± 1.46 | 79.11 ± 1.70 | 78.93 ± 1.51 | 80.42 ± 0.58
Syn-medium | 82.92 ± 1.90 | 83.04 ± 1.86 | 83.18 ± 1.72 | 83.44 ± 1.76
Syn-dense | 23.74 ± 1.06 | 84.64 ± 1.80 | 84.23 ± 2.67 | 84.85 ± 0.42", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" } ]
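The explanation metric reported on the synthetic datasets, Precision@k of the ranked neighbors against the groundtruth explanation neighbors, can be summarized by the short sketch below. The data layout and function names are illustrative assumptions, not the paper's evaluation code.

```python
def precision_at_k(ranked_neighbors, gt_neighbors, k):
    """Precision@k for one endpoint of a test link: fraction of the k
    highest-weighted neighbors that are groundtruth explanation neighbors."""
    top_k = ranked_neighbors[:k]
    return sum(1 for v in top_k if v in gt_neighbors) / k

def explanation_precision(scores, gt, k):
    """Average Precision@k over all (link, endpoint) keys.

    scores: dict mapping a (link, endpoint) key to a list of (neighbor, weight)
    gt:     dict mapping the same key to the set of explanation neighbors
    """
    vals = []
    for key, neigh_weights in scores.items():
        ranked = [v for v, _ in sorted(neigh_weights, key=lambda t: -t[1])]
        vals.append(precision_at_k(ranked, gt[key], k))
    return sum(vals) / len(vals)

# toy usage
scores = {("e1", "u"): [("a", 0.9), ("b", 0.4), ("c", 0.1)]}
gt = {("e1", "u"): {"a", "c"}}
print(explanation_precision(scores, gt, k=2))  # 0.5
```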
Huaisheng Zhu; Dongsheng Luo; Xianfeng Tang; Junjie Xu; Hui Liu; Suhang Wang
[ { "authors": "Evrim Acar; Tamara G Daniel M Dunlavy; Kolda", "journal": "IEEE", "ref_id": "b0", "title": "Link prediction on evolving data using matrix and tensor factorizations", "year": "2009" }, { "authors": "A Lada; Eytan Adamic; Adar", "journal": "Social networks", "ref_id": "b1", "title": "Friends and neighbors on the web", "year": "2003" }, { "authors": "Sergey Brin; Lawrence Page", "journal": "Computer networks and ISDN systems", "ref_id": "b2", "title": "The anatomy of a large-scale hypertextual web search engine", "year": "1998" }, { "authors": "Joan Bruna; Wojciech Zaremba; Arthur Szlam; Yann Lecun", "journal": "", "ref_id": "b3", "title": "Spectral networks and locally connected networks on graphs", "year": "2013" }, { "authors": "Jie Chen; Tengfei Ma; Cao Xiao", "journal": "", "ref_id": "b4", "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "year": "2018" }, { "authors": "Enyan Dai; Suhang Wang", "journal": "", "ref_id": "b5", "title": "Towards self-explainable graph neural network", "year": "2021" }, { "authors": "Wenqi Fan; Yao Ma; Qing Li; Yuan He; Eric Zhao; Jiliang Tang; Dawei Yin", "journal": "", "ref_id": "b6", "title": "Graph neural networks for social recommendation", "year": "2019" }, { "authors": "Hongyang Gao; Zhengyang Wang; Shuiwang Ji", "journal": "", "ref_id": "b7", "title": "Large-scale learnable graph convolutional networks", "year": "2018" }, { "authors": "Xavier Glorot; Yoshua Bengio", "journal": "", "ref_id": "b8", "title": "Understanding the difficulty of training deep feedforward neural networks", "year": "2010" }, { "authors": "Will Hamilton; Zhitao Ying; Jure Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b9", "title": "Inductive representation learning on large graphs", "year": "2017" }, { "authors": "Mingguo He; Zhewei Wei; Hongteng Xu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b10", "title": "Bernnet: Learning arbitrary graph spectral filters via bernstein approximation", "year": "2021" }, { "authors": "Weihua Hu; Matthias Fey; Marinka Zitnik; Yuxiao Dong; Hongyu Ren; Bowen Liu; Michele Catasta; Jure Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b11", "title": "Open graph benchmark: Datasets for machine learning on graphs", "year": "2020" }, { "authors": "Qiang Huang; Makoto Yamada; Yuan Tian; Dinesh Singh; Yi Chang", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b12", "title": "Graphlime: Local interpretable model explanations for graph neural networks", "year": "2022" }, { "authors": "Glen Jeh; Jennifer Widom", "journal": "", "ref_id": "b13", "title": "Simrank: a measure of structural-context similarity", "year": "2002" }, { "authors": "Leo Katz", "journal": "Psychometrika", "ref_id": "b14", "title": "A new status index derived from sociometric analysis", "year": "1953" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b15", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b16", "title": "Variational graph auto-encoders", "year": "2016" }, { "authors": "Johannes Klicpera; Stefan Weißenberger; Stephan Günnemann", "journal": "", "ref_id": "b17", "title": "Diffusion improves graph learning", "year": "2019" }, { "authors": "Linyuan Lü; Ci-Hang Jin; Tao Zhou", "journal": "Physical Review E", "ref_id": "b18", "title": "Similarity 
index based on local paths for link prediction of complex networks", "year": "2009" }, { "authors": "Linyuan Lü; Tao Zhou", "journal": "Physica A: statistical mechanics and its applications", "ref_id": "b19", "title": "Link prediction in complex networks: A survey", "year": "2011" }, { "authors": "Dongsheng Luo; Wei Cheng; Dongkuan Xu; Wenchao Yu; Bo Zong; Haifeng Chen; Xiang Zhang", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Parameterized explainer for graph neural network", "year": "2020" }, { "authors": "Charl Maree; Jan Erik Modal; Christian W Omlin", "journal": "IEEE", "ref_id": "b21", "title": "Towards responsible AI for financial transactions", "year": "2020" }, { "authors": "Julian Mcauley; Christopher Targett; Qinfeng Shi; Anton Van Den; Hengel", "journal": "", "ref_id": "b22", "title": "Image-based recommendations on styles and substitutes", "year": "2015" }, { "authors": " Mark Ej Newman", "journal": "Physical review E", "ref_id": "b23", "title": "Clustering and preferential attachment in growing networks", "year": "2001" }, { "authors": "Maximilian Nickel; Kevin Murphy; Evgeniy Volker Tresp; Gabrilovich", "journal": "Proc. IEEE", "ref_id": "b24", "title": "A review of relational machine learning for knowledge graphs", "year": "2015" }, { "authors": "Lawrence Page; Sergey Brin; Rajeev Motwani; Terry Winograd", "journal": "", "ref_id": "b25", "title": "The PageRank citation ranking: Bringing order to the web", "year": "1999" }, { "authors": "Liming Pan; Cheng Shi; Ivan Dokmanić", "journal": "", "ref_id": "b26", "title": "Neural Link Prediction with Walk Pooling", "year": "2022" }, { "authors": "Yanjun Qi; Ziv Bar-Joseph; Judith Klein-Seetharaman", "journal": "Proteins: Structure, Function, and Bioinformatics", "ref_id": "b27", "title": "Evaluation of different biological data and computational classification methods for use in protein interaction prediction", "year": "2006" }, { "authors": "Liang Qu; Huaisheng Zhu; Ruiqi Zheng; Yuhui Shi; Hongzhi Yin", "journal": "", "ref_id": "b28", "title": "Imgagn: Imbalanced network embedding via generative adversarial graph networks", "year": "2021" }, { "authors": "Andrea Rossi; Donatella Firmani; Paolo Merialdo; Tommaso Teofili", "journal": "", "ref_id": "b29", "title": "Explaining link prediction systems based on knowledge graph embeddings", "year": "2022" }, { "authors": "Nicola De Michael Sejr Schlichtkrull; Ivan Cao; Titov", "journal": "", "ref_id": "b30", "title": "Interpreting graph neural networks for nlp with differentiable edge masking", "year": "2020" }, { "authors": "Oleksandr Shchur; Maximilian Mumme; Aleksandar Bojchevski; Stephan Günnemann", "journal": "", "ref_id": "b31", "title": "Pitfalls of graph neural network evaluation", "year": "2018" }, { "authors": "Shanshan Tang; Bo Li; Haijun Yu", "journal": "", "ref_id": "b32", "title": "ChebNet: Efficient and stable constructions of deep neural networks with rectified power units using chebyshev approximations", "year": "2019" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Lio; Yoshua Bengio", "journal": "", "ref_id": "b33", "title": "Graph attention networks", "year": "2017" }, { "authors": "Kuansan Wang; Zhihong Shen; Chiyuan Huang; Chieh-Han Wu; Yuxiao Dong; Anshul Kanakia", "journal": "Quantitative Science Studies", "ref_id": "b34", "title": "Microsoft academic graph: When experts are not enough", "year": "2020" }, { "authors": "Zhen Wang; Bo Zong; Huan Sun", "journal": "", 
"ref_id": "b35", "title": "Modeling Context Pair Interaction for Pairwise Tasks on Graphs", "year": "2021" }, { "authors": "Teng Xiao; Zhengyu Chen; Donglin Wang; Suhang Wang", "journal": "", "ref_id": "b36", "title": "Learning how to propagate messages in graph neural networks", "year": "2021" }, { "authors": "Zhitao Ying; Dylan Bourgeois; Jiaxuan You; Marinka Zitnik; Jure Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b37", "title": "Gnnexplainer: Generating explanations for graph neural networks", "year": "2019" }, { "authors": "Jiliang Hao Yuan; Xia Tang; Shuiwang Hu; Ji", "journal": "", "ref_id": "b38", "title": "Xgnn: Towards modellevel explanations of graph neural networks", "year": "2020" }, { "authors": "Haiyang Hao Yuan; Jie Yu; Kang Wang; Shuiwang Li; Ji", "journal": "PMLR", "ref_id": "b39", "title": "On explainability of graph neural networks via subgraph explorations", "year": "2021" }, { "authors": "Muhan Zhang; Yixin Chen", "journal": "Advances in neural information processing systems", "ref_id": "b40", "title": "Link prediction based on graph neural networks", "year": "2018" }, { "authors": "Wen Zhang; Bibek Paudel; Wei Zhang; Abraham Bernstein; Huajun Chen", "journal": "", "ref_id": "b41", "title": "Interaction embeddings for prediction and explanation in knowledge graphs", "year": "2019" }, { "authors": "Zaixi Zhang; Qi Liu; Hao Wang; Chengqiang Lu; Cheekong Lee", "journal": "", "ref_id": "b42", "title": "Protgnn: Towards self-explaining graph neural networks", "year": "2022" }, { "authors": "Tianxiang Zhao; Xiang Zhang; Suhang Wang", "journal": "", "ref_id": "b43", "title": "Exploring edge disentanglement for node classification", "year": "2022" }, { "authors": "Jie Zhou; Ganqu Cui; Shengding Hu; Zhengyan Zhang; Cheng Yang; Zhiyuan Liu; Lifeng Wang; Changcheng Li; Maosong Sun", "journal": "AI Open", "ref_id": "b44", "title": "Graph neural networks: A review of methods and applications", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 413.91, 235.02, 144.3, 27.38 ], "formula_id": "formula_0", "formula_text": "S = ∞ ∑︁ 𝑘=0 𝜃 𝑘 T 𝑘 ,(1)" }, { "formula_coordinates": [ 4, 331.66, 337.37, 60.87, 13.43 ], "formula_id": "formula_1", "formula_text": "S = D -1/2 𝑆 SD -1/2 𝑆" }, { "formula_coordinates": [ 4, 402.34, 378.49, 155.86, 11.2 ], "formula_id": "formula_2", "formula_text": "𝑠 ST (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) = S𝑐 𝑗 ,(2)" }, { "formula_coordinates": [ 4, 353.24, 561.34, 201.79, 9.96 ], "formula_id": "formula_3", "formula_text": "H 𝑚 = MLP(X), H 𝑟 = 𝜎 ( Ã[H 𝑚 ∥X]W) + H 𝑚 , (3" }, { "formula_coordinates": [ 4, 555.03, 563.35, 3.17, 7.94 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 354.55, 650.58, 203.65, 10.17 ], "formula_id": "formula_5", "formula_text": "𝑠 NO (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) = sigmoid (h 𝑟 𝑗 ) 𝑇 h 𝑟 𝑐 , ∀ 𝑣 𝑐 ∈ N 𝑖(4)" }, { "formula_coordinates": [ 5, 72.75, 116.26, 201.89, 9.64 ], "formula_id": "formula_6", "formula_text": "𝑠 (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) = 𝛼 • 𝑠 ST (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 ) + (1 -𝛼) • 𝑠 NO (𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 )," }, { "formula_coordinates": [ 5, 114.28, 305.66, 179.77, 24.01 ], "formula_id": "formula_7", "formula_text": "𝑏 𝑖𝑐 = exp 𝑠 𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 𝑣 𝑐 ∈N 𝑟 𝑖 exp 𝑠 𝑣 𝑖 , 𝑣 𝑐 , 𝑣 𝑗 .(6)" }, { "formula_coordinates": [ 5, 130.59, 353.47, 163.45, 23.24 ], "formula_id": "formula_8", "formula_text": "h 𝑖 = h 𝑟 𝑖 + 𝛽 ∑︁ 𝑣 𝑐 ∈N 𝑟 𝑖 𝑏 𝑖𝑐 h 𝑟 𝑐 ,(7)" }, { "formula_coordinates": [ 5, 134.44, 448.18, 159.6, 9.75 ], "formula_id": "formula_9", "formula_text": "𝑝 𝑖 𝑗 = sigmoid(h 𝑇 𝑖 h 𝑗 ).(8)" }, { "formula_coordinates": [ 5, 117.33, 685.84, 176.71, 24.63 ], "formula_id": "formula_10", "formula_text": "h rand 𝑖 = h 𝑟 𝑖 + 𝛽 ∑︁ 𝑣 𝑐 ∈N rand 𝑖 𝑏 rand 𝑖𝑐 h 𝑟 𝑐 ,(9)" }, { "formula_coordinates": [ 5, 378.73, 139.92, 179.47, 11.38 ], "formula_id": "formula_11", "formula_text": "𝑝 rand 𝑖 𝑗 = sigmoid((h rand 𝑖 ) 𝑇 h rand 𝑗 ).(10)" }, { "formula_coordinates": [ 5, 361.56, 223.12, 196.64, 21.94 ], "formula_id": "formula_12", "formula_text": "L 𝑝 dis = ∑︁ 𝑒 𝑖 𝑗 ∈E 𝐿 ,𝑒 𝑖 𝑗 =1 max(0, 𝑝 rand 𝑖 𝑗 + 𝛿 -𝑝 𝑖 𝑗 ),(11)" }, { "formula_coordinates": [ 5, 328.65, 411.32, 44.27, 21.8 ], "formula_id": "formula_13", "formula_text": "L 𝑛 dis = ∑︁ 𝑒 𝑖" }, { "formula_coordinates": [ 5, 352.91, 585.57, 205.29, 21.8 ], "formula_id": "formula_14", "formula_text": "L cls = ∑︁ 𝑒 𝑖 𝑗 ∈E 𝐿 -log 𝑝 𝑖 𝑗 + ∑︁ 𝑒 𝑖 𝑗 ∈E 𝑁 -log 1 -𝑝 𝑖 𝑗 ,(13)" }, { "formula_coordinates": [ 5, 382.43, 659.39, 175.77, 16.11 ], "formula_id": "formula_15", "formula_text": "min Θ L = L cls + 𝜆(L 𝑝 dis + L 𝑛 dis ),(14)" }, { "formula_coordinates": [ 6, 74.23, 700.79, 119.46, 9.52 ], "formula_id": "formula_16", "formula_text": "𝑒 𝑖 𝑗 ∈ E is O ( 𝑒 𝑖 𝑗 ∈ E |N 𝑖 ||N 𝑗 |𝑑)," } ]
10.18653/v1/P17-1183
2023-10-30
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b21", "b7", "b4" ], "table_ref": [], "text": "Transformer-based encoder-decoder architectures (Bahdanau et al., 2014;Vaswani et al., 2017) that decode sequences from left to right have become dominant for sequence-to-sequence tasks. While this approach is quite straightforward and intuitive, some research has shown that models suffer from this arbitrary constraint. For example, models that decode left-to-right are often more likely to miss tokens near the end of the sequence, while rightto-left models are more prone to making mistakes near the beginning (Zhang et al., 2019;Zhou et al., 2019a). This is a result of the \"snowballing\" effect, whereby the model's use of its own incorrect predictions can lead future predictions to be incorrect (Bengio et al., 2015;Liu et al., 2016).\nwalk from the left and the suffix ed from the right.\nWe explore several methods for training models under this framework, and find that they are highly effective on the 2023 SIGMORPHON shared task on inflection (Goldman et al., 2023). Our method improves by over 4 points in average accuracy over a typical L2R model, and one of our loss functions is particularly adept at learning split points for words with a clear affix. We also set SOTA on both the 2022 and 2023 shared tasks (Kodner et al., 2022), which have very different data distributions." }, { "figure_ref": [], "heading": "Prior Bidirectional Decoders", "publication_ref": [ "b7", "b21", "b12", "b20", "b1", "b18", "b6" ], "table_ref": [], "text": "Various bidirectional decoding approaches have been proposed for tasks such as machine translation and abstractive summarization, including ones that use some form of regularization to encourage the outputs from both directions to agree (Liu et al., 2016;Zhang et al., 2019;Shan et al., 2019), or algorithms where the model first decodes the entire sequence in the R2L direction and then conditions on that sequence when decoding in the L2R direction (Zhang et al., 2018;Al-Sabahi et al., 2018). Still more methods utilize synchronous decoding, where the model decodes both directions at the same time and either meet in the center (Zhou et al., 2019b;Imamura and Sumita, 2020) or proceed until each direction's hypothesis is complete (Zhou et al., 2019a;Xu and Yvon, 2021). Lawrence et al. (2019) allows the model to look into the future by filling placeholder tokens at each timestep." }, { "figure_ref": [], "heading": "A Bidirectional Decoding Framework", "publication_ref": [], "table_ref": [], "text": "The following sections present a general framework for training and decoding models with bidirectional decoding that is irrespective of model architecture, subject to the constraints discussed in §3.3." }, { "figure_ref": [], "heading": "Probability Factorization", "publication_ref": [], "table_ref": [], "text": "For unidirectional models, the probability of an L2R sequence -→ y = y 1 • • • y n or an R2L sequence ←y = y n • • • y 1 given an input x is defined as\nP ( - → y |x) = |y| i=1 P ( - → y i | - → y <i , x)(1)\nP ( ← - y |x) = |y| j=1 P ( ← -y j | ← - y <j , x)(2)\nwhere -→ y i = y i or ←y j = y n-j+1 is the ith or jth token in a particular direction. Generation begins with a start-of-sentence token; at each step a token is chosen based on those preceding, and the process halts once an end-of-sentence token is predicted. In contrast, our bidirectional scheme starts with an empty prefix $ and suffix #. 
At each timestep, the model chooses to generate the next token of either the prefix or the suffix, and then whether or not to join the prefix and suffix. If a join is predicted, then generation is complete.\nWe define an ordering o = o (1) • • • o (n) as a sequence of left and right decisions: that is, o (t) ∈ {L, R}. We use y (t) to refer to the token generated at time t under a particular ordering, and -→ y (≤t) and ←y (≤t) to refer to the prefix and suffix generated up to (and including) time t. 3 An example derivation of the word walked is shown below:\nDropping the dependence on x for notational convenience, we define the joint probability of output sequence y and ordering o as P (y, o) = |y| t=1 P (o (t) | -→ y (<t) , ←y (<t) ) • P (y (t) | o (t) , -→ y (<t) , ← -\ny (<t) ) •Q (t) (3)\nwhere Q (t) is the probability of joining (or not joining) the prefix and suffix:\nQ (t) = P (join | -→ y (≤t) , ← - y (≤t) ) if t = |y| 1 -P (join | -→ y (≤t) , ← - y (≤t) ) otherwise" }, { "figure_ref": [], "heading": "Likelihood and MAP Inference", "publication_ref": [], "table_ref": [], "text": "To compute the likelihood of a particular sequence y, we need to marginalize over all orderings: P (y|x) = o P (y, o|x). Since we cannot enumerate all 2 |y| orderings, we have developed an exact O(|y| 2 ) dynamic programming algorithm, reminiscent of the forward algorithm for HMMs.\nTo simplify notation, let P L ( -→ y i | -→ y <i , ←y <j ) (or P R ( ←y j | -→ y <i , ←y <j )) be the probability of generating the ith token from the left (or the jth token from the right), conditioned on -→ y <i and ←y <j , the prefix and suffix generated thus far: PL( -→ y i| -→ y<i, ←y <j) = P (L | -→ y<i, ←y <j)•P ( -→ y i| L, -→ y<i, ← -y<j)\nPR( ←y j| -→ y<i, ← -\ny <j) = P (R | -→ y<i, ← - y <j)•P ( ← -y j|R, -→ y<i, ← - y<j)\nLet Q ij be the join probability for -→ y ≤i and ←y ≤j :\nQij = P (join | -→ y ≤i , ← - y ≤j ) if i + j = |y| 1 -P (join | -→ y ≤i , ← - y ≤j ) otherwise (4)\nFinally, denote the joint probability of a prefix -→ y ≤i and suffix ←y ≤j by f [i, j]. We set the probability of an empty prefix and suffix (the base case) to 1:\nf [0, 0] = 1\nThe probability of a non-empty prefix -→ y ≤i and empty suffix ϵ can be computed by multiplying f [i-1, 0] (the probability of prefix -→ y <i and empty suffix ϵ) by P L ( -→ y i | -→ y <i , ϵ) (the probability of generating -→ y i ) and the join probability Q i0 :\nf [i, 0] = f [i -1, 0] • P L ( - → y i | - → y <i , ϵ) • Q i0\nAnalogously, we define\nf [0, j] = f [0, j -1] • P R ( ← -y j |ϵ, ← - y <j ) • Q 0j\nFinally, f [i, j] represents the case where both prefix -→ y ≤i and suffix ←y ≤j are non-empty. This prefix-suffix pair can be produced either by appending -→ y i to the prefix -→ y <i and leaving the suffix unchanged, or by appending ←y j to the suffix ←y <j and leaving the prefix unchanged. 
The sum of the probabilities of these cases gives the recurrence:\nf [i, j] = f [i -1, j] • P L ( - → y i | - → y <i , ← - y ≤j ) • Q ij + f [i, j -1] • P R ( ← -y j | - → y ≤i , ← - y <j ) • Q ij\nAfter filling out the dynamic programming table f , the marginal probability P (y) can be computed by summing all entries f [i, j] where i + j = |y|:\nP (y) = i,j I(i + j = |y|) • f [i, j]\nIf all local probabilities can be calculated in constant time, the runtime of this algorithm is O(|y| 2 ).\nAs an aside, the MAP probability, or the probability of the best ordering for a given sequence, can be calculated by replacing each sum with a max:\nf [i, j] = max f [i -1, j]•PL( -→ y i| -→ y <i, ← - y ≤j )•Qij, f [i, j -1] • PR( ← -y j | -→ y ≤i , ← - y <j ) • Qij max o P (y, o) = max i,j I(i + j = |y|) • f [i, j]\nThe best ordering itself can be found with a backtracking procedure similar to Viterbi for HMM's." }, { "figure_ref": [], "heading": "Why does dynamic programming work?", "publication_ref": [], "table_ref": [], "text": "Dynamic programming (DP) only works for this problem if the local probabilities (i.e. the token, join, and order probabilities) used to compute f [i, j] depend only on the prefix and suffix corresponding to that cell, but not on a particular ordering that produced the prefix and suffix. This is similar to the how the Viterbi algorithm relies on the fact that HMM emission probabilities depend only on the hidden state and not on the path taken.\nTo satisfy this requirement, the model's architecture should be chosen carefully. Any model that simply takes a prefix and suffix as input and returns the corresponding local probabilities is sufficient. However, one must be careful if designing a model where the hidden representation is shared or reused across timesteps. This is particularly problematic if hidden states computed from both the prefix and suffix are reused. In this case, the internal representations will differ depending on the order in which the prefix and suffix were generated, which would cause a DP cell to rely on all possible paths to that cellthus breaking the polynomial nature of DP." }, { "figure_ref": [], "heading": "Training", "publication_ref": [ "b9" ], "table_ref": [], "text": "We propose two different loss functions to train a bidirectional model. Based on our probability factorization, we must learn the token, join, and order probabilities at each timestep.\nOur first loss function L xH (θ) trains each of these probabilities separately using cross-entropy loss. However, since ordering is a latent variable, it cannot be trained with explicit supervision. Hence, we fix the order probability to be 0.5 at each timestep, making all orderings equi-probable.\nWe then define S to contain the indices of all valid prefix-suffix pairs in a given sequence y:\nS = {(i, j) | 1 ≤ i, j, ≤ |y|; i + j ≤ |y|} Hence, S has O(|y| 2 ) elements.\nFinally, we define a simple loss L xH (θ) that averages the cross-entropy loss for the token probabilities (based on the next token in -→ y or ←y ) and join probabilities (based on whether the given prefix and suffix complete y): \nLxH (θ) = 1 3 -→ L (θ) + ← - L (θ) + L (join) (θ) -→ L (θ) = - 1 |S| (i,j)∈S log P ( -→ y i | -→ y <i, ← - y <j , x; θ) ← - L (θ) = - 1 |S| (i,j)∈S log P ( ← -y j | -→ y <i, ← - y <j , x; θ)\nL (join) (θ) = - 1 |S| (i,j)∈S log Qij\nwhere Q ij is defined as in Equation 4. Due to the size of S, this loss takes O(|y| 2 ) time to train. 
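To make the enumeration over S concrete, the sketch below spells out how L_xH averages the token and join cross-entropies over all valid prefix-suffix pairs of a target string y. The model(prefix, suffix) interface returning log-probability lookups is an assumption of this sketch, not the actual implementation.

```python
import math

def xent_bidi_loss(y, model):
    """Minimal sketch of L_xH, assuming `model(prefix, suffix)` returns
    (log_pL, log_pR, (log_join, log_nojoin)) for the given context:
    log_pL[c] = log P(next left token = c | prefix, suffix), log_pR likewise
    on the right, and the join log-probabilities for that prefix-suffix pair."""
    n = len(y)
    left_terms, right_terms, join_terms = [], [], []
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i + j > n:
                continue
            # context ->y_<i (prefix of length i-1) and <-y_<j (suffix of length j-1)
            log_pL, log_pR, _ = model(y[: i - 1], y[n - (j - 1):])
            left_terms.append(log_pL[y[i - 1]])   # target ->y_i
            right_terms.append(log_pR[y[n - j]])  # target <-y_j
            # join term Q_ij is defined on the extended pair ->y_<=i, <-y_<=j
            _, _, (log_join, log_nojoin) = model(y[:i], y[n - j:])
            join_terms.append(log_join if i + j == n else log_nojoin)
    m = len(left_terms)  # |S|
    return -(sum(left_terms) + sum(right_terms) + sum(join_terms)) / (3 * m)

# toy usage with a uniform dummy model over a 4-symbol alphabet
def dummy(prefix, suffix):
    logp = {c: math.log(1 / 4) for c in "abcd"}
    return logp, logp, (math.log(0.5), math.log(0.5))

print(round(xent_bidi_loss("abca", dummy), 4))
```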
Given that a typical unidirectional model takes O(|y|) time to train, we also propose an O(|y|) approach that involves sampling from S; this is presented in Appendix F.
An alternative is to train with Maximum Marginal Likelihood (MML) (Guu et al., 2017; Min et al., 2019), which learns the order probabilities via marginalization. This is more principled because it directly optimizes P(y | x), the quantity of interest. The loss is given by L_MML(θ) = -log P(y|x; θ), which is calculated with the dynamic programming algorithm described in §3.2. Learning the order probabilities enables the model to assign higher probability mass to orderings it prefers and ignore paths it finds unhelpful.
This loss also requires O(|y|^2) time to train." }, { "figure_ref": [], "heading": "Decoding", "publication_ref": [], "table_ref": [], "text": "The goal of decoding is to find ŷ such that ŷ = argmax_y P(y|x). Unfortunately, it is not computationally feasible to use the likelihood algorithm in §3.2 to find the best sequence ŷ, even with a heuristic like beam search. Instead, we use beam search to heuristically identify the sequence ŷ and ordering ô that maximize the joint probability P(y, o|x):
ŷ, ô = argmax_{y,o} P(y, o|x)
The formula for P(y, o|x) is given by Equation 3. Each hypothesis is a prefix-suffix pair. We start with a single hypothesis: an empty prefix and suffix, represented by start- and end-of-sentence tokens. At a given timestep, each hypothesis is expanded by considering the distribution over possible actions: adding a token on the left, adding a token on the right, or joining. The k best continuations are kept based on their (joint) probabilities. Generation stops once all hypotheses are complete (i.e. the prefix and suffix are joined)." }, { "figure_ref": [ "fig_0" ], "heading": "Model Architecture", "publication_ref": [ "b17" ], "table_ref": [], "text": "Our architecture (Figure 1) is based on the character-level transformer (Wu et al., 2021), which has proven useful for morphological inflection. First, the input sequence x is encoded with a typical Transformer encoder; for the inflection task, this consists of the lemma (tokenized by character) concatenated with a separator token and the set of tags.
Given a prefix →y_≤i and suffix ←y_≤j (as well as the encoder output), the decoder must produce each direction's token probabilities, the join probability, and the order probability. We construct the input to the decoder by concatenating the prefix and suffix tokens with some special classification tokens:
⟨c_J, c_O, →y_1, ..., →y_i, c_L2R, c_R2L, ←y_j, ..., ←y_1⟩
The tokens c_J, c_O, c_L2R, and c_R2L are special classification tokens that serve a purpose similar to the CLS embedding in BERT (Devlin et al., 2019). We feed this input to a Transformer decoder as follows:
s_J, s_O, ..., s_L2R, s_R2L, ... = Decoder(⟨· · ·⟩)
These vectors are fed through their own linear layers and a softmax, giving the desired probabilities:
P(order | →y_≤i, ←y_≤j) = Softmax(s_O V)
P(join | →y_≤i, ←y_≤j) = Softmax(s_J U)
P(→y_{i+1} | →y_≤i, ←y_≤j) = Softmax(s_L2R →W)
P(←y_{j+1} | →y_≤i, ←y_≤j) = Softmax(s_R2L ←W)
Since this architecture does have cross-attention between the prefix and suffix, the decoder hidden states for each prefix-suffix pair must be recomputed at each timestep to allow for DP (see §3.3)."
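Because this architecture satisfies the independence requirement of §3.3, the dynamic program of §3.2 can be implemented directly on top of it. The sketch below computes the marginal log-probability log P(y|x); the local_probs(prefix, suffix) callable, which would wrap the decoder of Figure 1, is an assumed interface used here only for illustration.

```python
import math

def marginal_log_prob(y, local_probs):
    """Minimal sketch of the O(|y|^2) dynamic program of Section 3.2.

    `local_probs(prefix, suffix)` returns (pL, pR, p_join), where pL[c] is
    P(generate c on the left | prefix, suffix), pR[c] the analogue on the
    right, and p_join is P(join | prefix, suffix)."""
    n = len(y)
    # f[i][j] = probability of prefix y[:i] and suffix y[n-j:], summed over orderings
    f = [[0.0] * (n + 1) for _ in range(n + 1)]
    f[0][0] = 1.0
    for i in range(n + 1):
        for j in range(n + 1):
            if (i == 0 and j == 0) or i + j > n:
                continue
            prefix, suffix = y[:i], y[n - j:] if j else ""
            total = 0.0
            if i > 0:  # last action added y_i on the left
                pL, _, _ = local_probs(y[:i - 1], suffix)
                total += f[i - 1][j] * pL[y[i - 1]]
            if j > 0:  # last action added the j-th-from-the-right token
                _, pR, _ = local_probs(prefix, y[n - j + 1:])
                total += f[i][j - 1] * pR[y[n - j]]
            _, _, p_join = local_probs(prefix, suffix)  # join factor Q_ij
            f[i][j] = total * (p_join if i + j == n else 1.0 - p_join)
    return math.log(sum(f[i][n - i] for i in range(n + 1)))

# toy check with uniform local probabilities over a 3-character alphabet
uniform = lambda p, s: ({c: 1 / 3 for c in "abc"}, {c: 1 / 3 for c in "abc"}, 0.5)
print(marginal_log_prob("aba", uniform))
```

In the experiments below, this same computation is what the reranking variants use to rescore beam candidates by their marginal probability P(y|x).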
}, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b4", "b4", "b10" ], "table_ref": [], "text": "Datasets. We experiment with inflection datasets for all 27 languages (spanning 9 families) from the SIGMORPHON 2023 shared task (Goldman et al., 2023). Each language has 10,000 training and 1,000 validation and test examples, and no lemma occurs in more than one of these partitions. We also show results on the 20 \"large\" languages from the SIGMORPHON 2022 shared task (Kodner et al., 2022), which has a very different sampling of examples in the train and test sets. A list of all languages can be found in Appendix A.\nTokenization. Both the lemma and output form are split by character; the tags are split by semicolon. For the 2023 shared task, where the tags are \"layered\" (Guriel et al., 2022), we also treat each open and closed parenthesis as a token. Appendix B describes the treatment of unknown characters. Model hyperparameters. Our models are implemented in fairseq (Ott et al., 2019). We experiment with small, medium, and large model sizes (ranging from ∼240k to ∼7.3M parameters). For each language, we select a model size based on the L2R and R2L unidirectional accuracies; this procedure is detailed in Appendix A.\nThe only additional parameters in our bidirectional model come from the embeddings for the 4 classification tokens (described in §4); hence, our unidirectional and bidirectional models have roughly the same number of parameters.\nTraining. We use a batch size of 800, an Adam optimizer (β 1 = 0.9, β 2 = 0.98), dropout of 0.3, and an inverse square root scheduler with initial learning rate 1e-07. Training is halted if validation accuracy does not improve for 7,500 steps. All validation accuracies are reported in Appepndix A.\nInference. Decoding maximizes joint probability P (y, o|x) using the beam search algorithm of §3.5 with width 5. In some experiments, we rerank the 5 best candidates according to their marginal probability P (y|x), which can be calculated with dynamic programming ( §3.2). Models. We experiment with the following models (see Appendices D and F for more variants):\n• L2R & R2L: Standard unidirectional transformer baselines, trained with the loss given in Equations 1 and 2.\n• BL2: A naive \"bidirectional\" baseline that returns either the best L2R or R2L hypothesis based on which has a higher probability. • xH & MML: Our bidirectional transformer ( §4) trained under the cross-entropy or MML loss of §3.4, and decoded under P (y, o|x).\n• xH-Rerank & MML-Rerank: These variants rerank the 5 candidates returned by beam search of the xH and MML models according to their marginal probability P (y|x).\n• BL2-xH & BL2-MML: These methods select the best L2R or R2L candidate, based on which has higher marginal probability under the xH or MML model.\n6 Empirical Results" }, { "figure_ref": [], "heading": "Comparison of Methods", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Accuracies averaged over languages are shown in Table 1; results by language are in Appendix D.\nBaselines. BL2, which selects the higher probability among the L2R and R2L hypotheses, improves by more than 2.3 points in average accuracy over the best unidirectional model. This simple scheme serves as an improved baseline against which to compare our fully bidirectional models.\nxH & MML. Our bidirectional xH model is clearly more effective than all baselines, having a statistically significant degradation in accuracy on only 3 languages. 
The MML method is far less effective, beating L2R and R2L but not BL2. MML may suffer from a discrepancy between training and inference, since inference optimizes joint probability while training optimizes likelihood.\nxH-& MML-Rerank. Reranking according to marginal probability generally improves both bidirectional models. xH-Rerank is the best method overall, beating BL2 by over 1.75 points in average accuracy. MML-Rerank is better than either unidirectional model but still underperforms BL2." }, { "figure_ref": [], "heading": "BL2-xH & BL2-MML. Selecting the best L2R", "publication_ref": [], "table_ref": [], "text": "or R2L hypothesis based on marginal probability under xH or MML is very effective. Both of these methods improve over BL2, which chooses between the same options based on unidirectional probability. BL2-xH stands out by not having a statistically significant degradation on any language.\nComparison with Prior SOTA. Goldman et al. ( 2023) presents the results of seven other systems submitted to the task; of these, five are from other universities and two are baselines provided by the organizers. The best of these systems is the neural baseline (a unidirectional transformer), which achieves an average accuracy of 81.6 points. Our best system, xH-Rerank, has an accuracy of 84.38 points, achieving an improvement of 2.7 points." }, { "figure_ref": [ "fig_2" ], "heading": "Improvement by Language", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 1 shows that the best methods are xH-Rerank (by average accuracy) and BL2-xH (improves upon BL2 on the most languages). Figure 3 illustrates this by showing the difference in accuracy between each of these methods and the best baseline BL2.\nThe plots show that accuracy difference with BL2 has a higher range for xH-Rerank (-2.6% to 8.7%) than for BL2-xH (-0.5% to 5.8%). This is because xH-Rerank has the ability to generate new hypotheses, whereas BL2-xH simply discriminates between the same two hypotheses as BL2." }, { "figure_ref": [], "heading": "Analysis of Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Length of Output Forms", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows the accuracies by output form length for BL2 and our best method xH-Rerank. xH-Rerank outperforms the baseline at every length (except 10), but especially excels for longer outputs (≥ 16 characters). This may be due to the bidirectional model's decreased risk of \"snowballing\": it can delay the prediction of an uncertain token by generating on the opposite side first, a property not shared with unidirectional models." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "How does generation order compare with", "publication_ref": [], "table_ref": [], "text": "the morphology of a word?\nIn this section we consider only forms that can be classified morphologically as prefix-only (e.g. will |walk) or suffix-only (e.g. walk|ed), because these words have an obvious split point. Ideally, the bidirectional model will exhibit the desired split point by decoding the left and right sides of the form from their respective directions.\nWe first classify all inflected forms in the test set as suffix-only, prefix-only, or neither. We do this by aligning each lemma-form pair using Levenshtein distance and considering the longest common substring that has length of at least 3 to be the stem. 
6 If the inflected form only has an affix attached to the stem, then it is classified as prefix-only or suffixonly; otherwise, it is considered neither.7 Finally, Figure 5 shows the percentage of words with a clear affix on which each bidirectional model has the correct analysis. A correct analysis occurs when the model joins the left and right sequences at the correct split point and returns the correct word.\nIt is immediately obvious that the MML models tend to exhibit the correct analysis, while the xH models generally have the wrong analysis. This make sense because MML learns the latent order- ing variable, unlike cross-entropy. Despite MML's success at learning this morphology, it tends to have lower accuracy than xH; we explore this by breaking down accuracy by word type in Figure 6.\nLearning the ordering seems to be harmful when there is no obvious affix: compared with BL2, MML barely drops in accuracy on prefix-and suffix-only forms but degrades greatly when there is no clear split. The xH model, which does not learn ordering, improves in all categories.\nWe conclude that MML models better reflect the stem-affix split than cross-entropy models but have lower accuracy. Improving the performance of MML models while maintaining their linguistic awareness is a promising direction for future work." }, { "figure_ref": [], "heading": "Ablation Study: Does bidirectional decoding help?", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze to what extent the bidirectional models' improvement is due to their ability to produce tokens from both sides and meet at any position. To this end, we force our trained xH and MML models to decode in a fully L2R or R2L manner by setting the log probabilities of tokens in the opposite direction to -∞ at inference time.\nThe results are shown in Table 4. The bidirectional models perform poorly when not permitted to decode from both sides. This is particularly detrimental for the MML model, which is expected as the marginalized training loss enables the model to assign low probabilities to some orderings. Clearly, our MML model does not favor unidirectional orderings.\nThe xH model, on the other hand, does not suffer as much from unidirectional decoding. Since it was trained to treat all orderings equally, we would expect it to do reasonably well on any given ordering. Nonetheless, it still drops by about 7 points for L2R decoding and about 13 points for R2L decoding. This shows that the full bidirectional generation procedure is crucial to the success of this model." }, { "figure_ref": [], "heading": "Results on 2022 Shared Task", "publication_ref": [ "b4", "b19", "b5" ], "table_ref": [ "tab_1", "tab_2" ], "text": "We also train our bidirectional cross-entropy model on the 2022 SIGMORPHON inflection task (Kodner et al., 2022), which, unlike the 2023 data, does have lemmas that occur in both the train and test sets. The results are shown in Table 2. All of our methods (including the baselines) outperform the best submitted system (Yang et al., 2022) on the 2022 data; our best method BL2-xH improves by over 4.7 points in average accuracy. However, only BL2-xH outperforms the baseline BL2 (barely), which is in stark contrast to the 2023 task, where all cross-entropy-based methods beat the baseline considerably. To make the comparison between the years more fair, we evaluate the 2022 models only on lemmas in the test set that did not occur in training. 
Again, only BL2-xH outperforms the baseline, this time by a wider margin; xH and xH-Rerank still underperform.\nWe posit that this discrepancy is likely due to the considerably different properties of the 2022 and 2023 datasets, which are shown in Table 3. The 2023 languages have far fewer unique lemmas and have many more forms per lemma. Hence, it seems that our bidirectional model improves much more compared with the baseline when there are fewer but more \"complete\" paradigms.\nThis investigation shows that the performance of inflection models depends substantially on the data sampling, which is not always controlled for. Kodner et al. (2023) Table 4: Ablation study on 2023 dataset. Macroaveraged accuracies for bidirectional models decoded using the method of §3.5 (Bidi), or when forced to decode in an L2R or R2L manner. Bidi-2 indicates the outcome when selecting between the forced unidirectional decodings based on which has a higher probability.\nThe unidirectional models (Uni) indicate the accuracies of standard unidirectional transformers and BL2.\ndoes not explicitly examine paradigm \"completeness\", which should be a focus in future studies." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have proposed a novel framework for bidirectional decoding that allows a model to choose the generation order for each sequence, a major difference from previous work. Further, our method enables an efficient dynamic programming algorithm for training, which arises due to an independence assumption that can be built into our transformer-based architecture. We also present a simple beam-search algorithm for decoding, the outputs of which can optionally be reranked using the likelihood calculation. Our model beats SOTA on both the 2022 and 2023 shared tasks without resorting to data augmentation. Further investigations show that our model is especially effective on longer output words and can implicitly learn the morpheme boundaries of output sequences.\nThere are several avenues for future research. One open question is the extent to which data augmentation can improve accuracy. We also leave open the opportunity to explore our bidirectional framework on other sequence tasks, such as machine translation, grapheme-to-phoneme conversion, and named-entity transliteration. Various other architectures could also be investigated, such as the bidirectional attention mechanism of Zhou et al. (2019b) or non-transformer based approaches. Finally, given the effectiveness of MML reranking, it could be worthwhile to explore efficient approaches to decode using marginal probability. " }, { "figure_ref": [], "heading": "A Datasets, Hyperparameter Tuning, & Validation Accuracies", "publication_ref": [], "table_ref": [ "tab_9", "tab_10", "tab_15", "tab_16" ], "text": "The languages in the SIGMORPHON 2022 and 2023 datasets are listed in Tables 7 and8. We experiment with small, medium, and large model sizes for each language, whose configurations and approximate number of parameters can be found in For each language, we train L2R and R2L models (with random initialization) for each hyperparameter size (a total of 6 models per language), and select a size based on the average of the L2R and R2L validation accuracies. The model sizes chosen for each language, along with each language's validation accuracies, are reported in Tables 13 and14.\nNote that the number of parameters vary slightly among languages due to different vocabulary sizes (i.e. 
number of unique characters in the training set), and the bidirectional models also have a small number of extra parameters due to the additional classification tokens described in §4." }, { "figure_ref": [], "heading": "B Handling Unknown Characters", "publication_ref": [], "table_ref": [], "text": "If an unknown character is encountered in a lemma at test time, then a special UNK character is used; however, this character is not explicitly trained. If an UNK character is predicted by the model, then we replace it with the first (leftmost) unknown character in the lemma; if no such character exists then it is ignored.\nWe adopt a special scheme for Japanese, which has a very high number of unknown characters. All characters that occur fewer than 100 times in the training set are considered \"unknown\". If a lemma has n unknown tokens, then these are replaced with UNK 1 , ..., UNK n ; the corresponding tokens in the inflected form are replaced as well. In this way, the model can learn to copy rare or unknown characters to their appropriate locations in the output. At test time, each predicted unknown token is replaced with its corresponding character in the lemma." }, { "figure_ref": [], "heading": "C Tempering the Order Distribution at Train Time", "publication_ref": [], "table_ref": [], "text": "Initial empirical results showed that training with MML loss caused the model to quickly reach a \"degenerate\" state, where every sequence was decoded in the same direction. To encourage the model to explore different orderings at an early stage, we temper the order probabilities over a warmup period. The temperature is degraded from initial temperature τ 0 to 1 over a period of W steps as follows:\nτ n = τ 0 -1 W a (W -n) a + 1\nThe parameter a controls how fast the shift occurs, and n corresponds to the training step. This temperature is applied to the softmax of order probabilities for the first W steps of training.\nIn our experiments, we set W = 4, 000, τ 0 = 50 and a = 2." }, { "figure_ref": [], "heading": "D All Results", "publication_ref": [], "table_ref": [ "tab_11", "tab_12" ], "text": "The accuracies for all languages in our study are shown in Table 9 (2023 data) andTable 10 (2022 data). These tables also display L2R-Rerank (which reranks the 5 candidates from the L2R model's beam search under the cross-entropy or MML model), R2L-Rerank, and (L2R+R2L)-Rerank (which reranks the 10 candidates returned from the L2R and R2L's beam search under the cross-entropy or MML model). " }, { "figure_ref": [], "heading": "E Oracle Scores", "publication_ref": [], "table_ref": [ "tab_13" ], "text": "Table 11 shows the oracle score for each method; this gives an upper bound for choosing among a set of hypotheses. We see that both xH-Rerank and BL2-Rerank approach their respective bounds: the average accuracy for xH-Rerank is within 1 point of its oracle score, and the average accuracy for BL2-xH is within 2 points of its oracle score." }, { "figure_ref": [], "heading": "F Cross-entropy with Random Path (xH-Rand)", "publication_ref": [], "table_ref": [ "tab_14" ], "text": "The cross-entropy loss presented in §3.4 requires enumerating all O(|y| 2 ) prefix-suffix pairs. Here, we propose an O(|y|) variant in which the join loss is averaged over a random set of prefix-suffix pairs for each word. Specifically, the set S is defined such that there is only one (i, j) pair for each 1 ≤ k ≤ |y| where i + j = k. Otherwise, this loss L xH-Rand (θ) is the same as the cross-entropy loss of §3.4. 
Since this loss has an O(|y|) runtime, it has the same complexity as a standard unidirectional loss (assuming all local probabilities take constant time to compute). Table 12 compares the accuracies of this model with the other bidirectional variants discussed in §6. Reranking xH-Rand is slightly better than not reranking, and this performs well: its average accuracy is almost 1 percentage point higher than BL2 and it improves on 15/27 languages. xH-Rand is better than MML but not as good as xH. Nonetheless, its faster runtime and competitive performance makes this a useful method. " }, { "figure_ref": [], "heading": "G Additional Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_7", "fig_8" ], "heading": "G.1 Accuracy by Length", "publication_ref": [], "table_ref": [], "text": "Figure 2 in §7.1 compares the accuracy of our bidirectional method xH-Rerank with that of the baseline BL2 by the length of the output form. Figure 7 shows a similar comparison for BL2-xH (our other best method) with BL2; consistent with the analysis of §6.2, there is less of a difference between these methods, but BL2-xH does equal or outperform BL2 at all lengths.\nFigure 8 shows the distribution of output form length across all languages." }, { "figure_ref": [ "fig_11" ], "heading": "G.2 Accuracy by Part-of-Speech", "publication_ref": [], "table_ref": [], "text": "Figures 10 and 9 compare the accuracies of xH-Rerank and BL2-xH (our best bidirectional methods) with the accuracy of BL2 by part-of-speech. We see that xH-Rerank maintains or improves accuracy over BL2 in all categories except V.MSDR (masdars), and BL2-xH maintains or improves accuracy in all categories except V.MSDR and V.PTCP (participles). These categories make up a small fraction of the data; this can be seen in Figure 11, which shows the distribution of part-of-speech categories across all languages." }, { "figure_ref": [ "fig_12" ], "heading": "G.3 What orderings does each method prefer?", "publication_ref": [], "table_ref": [], "text": "In this section, we investigate the ordering preferences for each method: does a model prefer to decode words entirely in the L2R or R2L direction, or partially in each direction? These results can be seen for each language in Figure 12.\nBoth the xH and MML methods have a strong tendency to decode words partially in each di- rection; however, MML models clearly have a higher proportion of words decoded from both directions than their xH counterparts. Out of the words decoded entirely in one direction, the xH model shows a slight preference for R2L generations, though most languages have words decoded from both directions. On the other hand, for the MML model, no language shows a preference for R2L generations over L2R generations; in fact, R2L generations are extremely rare for the MML models." }, { "figure_ref": [], "heading": "G.4 Empirical Inference Times", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "Given that our bidirectional model must recompute previous hidden states at each timestep during inference (see §4), we wish to compare the empirical slowdown in decoding for our bidirectional models compared with unidirectional models. The average number of seconds taken to decode 50 examples is shown in Table 6.\nRecomputing hidden states at each step slows down inference by a factor of about 3. However, in practice, we barely notice the difference on this task, as the test sets have only 1,000 examples each. 
Given the strong outperformance of the bidirectional methods over the unidirectional baselines (and even over the naive bidirectional baseline BL2), one must therefore make a tradeoff between time and performance. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [ "b3" ], "table_ref": [], "text": "This work utilizes resources supported by the National Science Foundation's Major Research Instrumentation program, grant #1725729, as well as the University of Illinois at Urbana-Champaign. In particular, we made significant use of the HAL computer system (Kindratenko et al., 2020). We would also like to acknowledge Weights & Biases (Biewald, 2020), which we utilized to manage our experiments." } ]
Transformer-based encoder-decoder models that generate outputs in a left-to-right fashion have become standard for sequence-to-sequence tasks. In this paper, we propose a framework for decoding that produces sequences from the "outside-in": at each step, the model chooses to generate a token on the left, on the right, or to join the left and right sequences. We argue that this is more principled than prior bidirectional decoders. Our proposal supports a variety of model architectures and includes several training methods, such as a dynamic programming algorithm that marginalizes out the latent ordering variable. Our model sets the state of the art (SOTA) on the 2022 and 2023 shared tasks, beating the next best systems by over 4.7 and 2.7 points in average accuracy, respectively. The model performs particularly well on long sequences, can implicitly learn the split point of words composed of stem and affix, and performs better relative to the baseline on datasets that have fewer unique lemmas (but more examples per lemma).
A Framework for Bidirectional Decoding: Case Study in Morphological Inflection
[ { "figure_caption": "Figure 1 :1Figure 1: Architecture for bidirectional decoding model. Depicts the token inputs for the verb walked at timestep t = 3 with -→ y ≤2 = $wa and ←y ≤1 = d#. All inputs are surrounded by a rectangle.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Accuracies of xH-Rerank and BL2 by Word Length. Average accuracies of BL2 and xH-Rerank models over all languages, grouped by length (number of characters) of the output form.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Accuracy Improvement by Language. Difference in accuracy between our best models (xH-Rerank and BL2-xH) and our best baseline BL2.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Morphology of words in test set. Percentage of forms that are suffix-only, prefix-only, or neither in the test set for each language.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Analysis for prefix-and suffix-only words. Percentage of forms for each training method that (1) are correct and whose ordering agrees with the form's morphology; (2) are correct but whose ordering does not agree with the form's morphology; and (3) are incorrect.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Accuracy of models by word type. Accuracy of words that are suffix-or prefix-only, or neither.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 44Figure 4 shows the percentage of words that are prefix-only, suffix-only, or neither for each language. Most languages favor suffix-only inflections, although Swahili strongly prefers prefixes and several other languages have a high proportion of words without a clear affix.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Accuracies of BL2-xH and BL2 by Word Length. Average accuracies of BL2 and BL2-xH models over all languages, grouped by length (number of characters) of the output form.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Number of Test Examples by Length. Number of test examples across all languages by number of characters in (correct) output form.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Accuracies of BL2-xH and BL2 by Part of Speech. Accuracies of BL2 and BL2-xH models averaged over all languages, grouped by part of speech.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Accuracies of xH-Rerank and BL2 by Part of Speech. Accuracies of BL2 and xH-Rerank models averaged over all languages, grouped by part of speech.", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Number of Test Examples by Part-ofspeech. 
Number of test examples across all languages by part-of-speech.", "figure_data": "", "figure_id": "fig_11", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Ordering choices. Percentage of examples for each language and training method that are decoded fully L2R, fully R2L, or partially from each direction.", "figure_data": "", "figure_id": "fig_12", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Accuracies of Methods. Accuracy averaged over all languages in the SIGMORPHON 2023 shared task, and number of languages whose accuracy equals or exceeds (≥) the best baseline BL2. The entry Goldman et al. (2023) shows the accuracy of the next best system submitted to the shared task. Also shows number of languages with a statistically significant improvement (>) or degradation (<), or no statistically significant change (=), in accuracy compared with BL2 using a paired-permutation test(Zmigrod et al., 2022) with α = 0.05. The best entry in each column is bold. See Table9in Appendix D for results by language.", "figure_data": "80.26----R2L 79.65----BL2 82.59----xH 84.25 19/27 12/27 12/27 3/27MML 81.439/275/27 11/27 11/27xH-Rerank 84.38 18/27 12/27 13/27 2/27MML-Rerank 81.509/275/27 12/27 10/27BL2-xH 84.00 24/27 12/27 15/27 0/27BL2-MML 83.54 18/277/27 17/27 3/27Goldman et al. (2023)81.6----", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of 2022 and 2023 results.", "figure_data": "20222023Overall Unseen Overall/UnseenL2R73.2074.9980.26R2L74.4875.7079.65BL275.9677.2382.59xH72.7674.8584.25xH-Rerank72.9174.7284.38BL2-xH76.0378.0284.00Yang et al. (2022)71.2674.96-Macro-averaged accuracies over all languages in theSIGMORPHON 2022 and 2023 shared tasks. Accura-cies on test lemmas that are unseen in the training dataare also reported (for 2023, all test lemmas are unseenin the training data). The average accuracies of the bestsystem (Yang et al., 2022) submitted to the 2022 sharedtask are also reported.", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Dataset Statistics. Number of unique lemmas, unseen lemmas, average number of forms per lemma, and average number of lemmas per tagset averaged over all languages for the 2022 and 2023 datasets. 2022 numbers are scaled to the 2023 size (10k train, ∼1k test examples) to allow for direct comparison.", "figure_data": "TrainTest2022202320222023Unique Lemma 3636.4 753.4 1492.0 94.1Unseen Lemma--619.094.1Forms per Lemma2.519.31.415.4Lemmas per Tagset100.9209.915.422.1", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "makes progress on this matter, but", "figure_data": "BidiForced L2RForced R2LBidi-2Uni-80.2679.6582.59xH 84.2571.0577.3178.42MML 81.434.680.072.33", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "David Guriel, Omer Goldman, and Reut Tsarfaty. 2022.Morphological reinflection with multiple arguments: An extended annotation schema and a Georgian case study. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 196-202, Dublin, Ireland. Association for Computational Linguistics.", "figure_data": "Ran Zmigrod, Tim Vieira, and Ryan Cotterell. 2022.Exact paired-permutation testing for structured teststatistics. 
In Proceedings of the 2022 Conference ofthe North American Chapter of the Association forComputational Linguistics: Human Language Tech-nologies, pages 4894-4902, Seattle, United States.Association for Computational Linguistics.Kenji Imamura and Eiichiro Sumita. 2020. Transformer-based double-token bidirectional autoregressive de-coding in neural machine translation. In Proceedingsof the 7th Workshop on Asian Translation, pages 50-57, Suzhou, China. Association for ComputationalLinguistics.Katharina Kann and Hinrich Schütze. 2017. The LMUsystem for the CoNLL-SIGMORPHON 2017 sharedtask on universal morphological reinflection. In Pro-ceedings of the CoNLL SIGMORPHON 2017 SharedTask: Universal Morphological Reinflection, pages40-48, Vancouver. Association for ComputationalLinguistics.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "SMLEmbed dim64128256FFN dim2565121024Num. layers234Num. heads248Learning rate0.0050.0010.001Num. params ∼ 240k ∼1.4M ∼7.3M", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Hyperparameters. Hyperparameters for small, medium, and large models.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Inference", "figure_data": "", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "2023 Dataset Information. Information on each language in the 2023 dataset, including language family and genus, baseline accuracies on test set, and model size chosen (based on validation accuracies).", "figure_data": "Veps Uralic FinnicTurkish Turkic OghuzSlovak Indo-European SlavicPomak Indo-European SlavicPolish Indo-European SlavicOld Norse Indo-European GermanicLudic Uralic FinnicKarelian Uralic FinnicKorean Koreanic -Khalkha Mongolian Mongolic -Kazakh Turkic KipchakGeorgian Kartvelian Karto-ZanArmenian Indo-European ArmenianHungarian Uralic UgricHebrew Afro-Asiatic SemiticGothic Indo-European GermanicEvenki Tungusic Northern TungusicAssamese Indo-European IndicArabic Afro-Asiatic SemiticOld English Indo-European GermanicLanguage Linguistic Information Family GenusvepturslkpomapolnonludkrlkorkhkkazkathyehunhebgotevnasmaraangCodeLanguage60.91 61.26 63.37 60.56 60.41 62.5794.15 93.85 93.70 93.65 95.40 92.9091.90 92.55 93.25 93.25 94.10 93.4566.98 65.68 67.83 66.88 64.23 67.8890.10 90.40 88.95 89.30 89.15 89.9082.92 84.43 84.73 82.92 82.32 85.8963.16 74.14 68.27 72.32 78.34 56.9865.18 71.49 67.94 70.79 70.89 72.3957.13 57.18 57.54 57.23 58.76 57.6446.26 48.59 48.69 46.97 48.59 48.4364.14 64.34 62.99 62.54 69.11 68.1584.15 86.10 84.50 83.85 88.15 88.9589.90 91.90 91.15 90.15 92.90 92.7068.20 78.40 75.80 72.65 76.40 75.8047.90 49.50 49.35 52.70 50.55 53.1072.52 73.47 73.62 70.11 68.51 74.7756.11 56.05 54.68 57.37 54.79 52.0981.46 80.75 81.11 74.42 84.87 83.5777.74 76.94 77.49 77.84 77.59 77.2456.78 59.93 62.06 60.49 63.03 64.55Small Medium Large Small Medium LargeL2R Test Accuracies R2L Test AccuracieslargemediumlargelargemediumlargemediummediumlargemediummediumlargemediummediumlargelargesmallmediumsmalllargeChosenModel Size", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "2022 Dataset Information. Information on each language in the 2022 dataset, including language family and genus, baseline accuracies on test set, and model size chosen (based on validation accuracies). Only large datasets (7,000 train examples) are used. 
78.70 * 84.60 * 84.90 * 81.50 79.20 83.40 * 90.60 * 87.30 * 85.80 * 82.90 * 89.30 84.70 * 89.10 83.60 * 83.60 * 87.50 86.30 * 85.00 * 83.60 * 89.50 89.60 89.30 85.70 * 86.80 * 85.70 * 88.30 85.40 * 75.70 * 80.30 81.50 * 80.20 81.90 * 53.50 * 56.00 * 55.70 * 53.60 * 53.30 * 53.70 * 55.10 * 54.60 * 55.80 * 41.50 * 43.10 * 56.20 * 59.30 * 84.09 * 93.55 91.04 * 91.74 90.53 * 88.62 * 84.99 * 91.64 86.20 * 75.00 * 83.60 * 83.60 * 81.50 * 75.00 * 79.30 * 78.20 79.30 * 85.00 * 85.10 * 82.30 * 80.10 * 79.80 * 81.20 * 93.70 * 92.10 * 95.30 92.70 * 97.20 * 96.40 90.10 * 90.60 * 93.90 * 95.40 .90 87.40 84.20 * 82.10 * 85.50 * 85.40 * 84.70 * 83.30 * 87.30 87.30 59.60 * 65.90 66.40 * 66.10 * 60.60 93.40 * 93.30 * 93.30 * 93.30 * 93.50 * 91.50 91.00 82.50 * 91.90 * 89.90 * 82.80 * 82.40 * 88.60 * 85.90 * 89.30 85.80 * 87.80 85.30 * 90.30 * 86.60 swa medium 92.70 92.90 93.10 96.60 * 90.50 * 96.60 * 95.60 * 91.40 * 90.50 * 93.10 93.00", "figure_data": "Number (p ≤ 0.05) < 3/27 11/27 2/27 3/27 7/27 10/27 0/27 3/27 7/27 9/27 6/27 10/27 1/27 4/27Number (p ≤ 0.05) = 12/27 11/27 13/27 12/27 12/27 12/27 15/27 17/27 11/27 15/27 18/27 15/27 14/27 16/27Number (p ≤ 0.05) > 12/27 5/27 12/27 12/27 8/27 5/27 12/27 7/27 9/27 3/27 3/27 2/27 12/27 7/27Number ≥ 19/27 9/27 18/27 18/27 17/27 9/27 24/27 18/27 16/27 16/27 11/27 8/27 19/27 14/27Average 80.26 79.65 82.59 84.25 81.43 84.38 84.26 82.90 81.50 84.00 83.54 82.64 81.85 81.81 81.29 84.11 83.00tur small 88.80 87.40 90.90 94.00 * 89.90 94.20 * 93.40 * 93.00 * 89.90 91.00 90.30 92.40 * 91.60 88.10 * 86.10 * 93.30 * 91.30sqi medium 85.00 84.40 87.60 91.00 93.10 92.80 93.30 92.90 93.50 93.00spa medium 90.30 91.20 90.90 93.20 91.50 91.20 91.80 91.70 92.30 * 92.00 *sme medium 63.40 65.30 69.90 67.40 75.60 * 67.30 69.00 70.00 75.20 * 71.80 * 70.90 66.60 * 70.00 67.90 71.10 69.90 75.00 *san small 61.50 60.70 63.30 67.70 69.10 * 68.80 * 67.10 * 65.50 61.90 60.50 * 66.60 * 65.20rus small 82.10 8483.30 * 81.20 * 86.30 86.00 86.30 84.30 *nav small 53.70 48.90 54.00 55.10 57.10 * 55.60 55.00 57.00 * 57.00 * 55.10 55.60 * 54.90 56.40 * 54.30 55.60 56.00 57.10 *mkd medium 89.70 92.00 91.90 92.10 91.40 92.40 92.40 93.20 91.50 92.40 91.90 93.80 * 92.00 93.20 92.70 92.80 91.90klr medium 99.40 98.30 99.40 99.40 99.40 99.40 99.40 99.40 99.40 99.40 99.40 99.40 99.40 99.10 99.00 99.40 99.40kat small 79.70 79.50 84.10 81.30 * 81.40 * 82.90 82.60 83.50 81.10 * 84.70 84.60 82.70 * 84.30 81.40 * 81.10 * 83.80 83.70jap medium 93.80 91.00 92.80 94.90 * 92.30 94.90 * 94.20 93.50 92.10 94.20 * 93.40 94.80 * 93.00 94.40 * 92.20 94.80 * 92.40ita small 89.30 93.90 95.80 94.40 92.70 94.30 * 94.70hye small 82.00 86.40 88.40 94.20 * 86.50 94.30 * 94.20 * 88.40 86.20 91.40 * 91.10 * 85.10 * 81.50 * 88.80 87.60 92.70 * 88.00hun small 77.70 66.10 76.30 84.30 81.70 * 79.10 * 75.20 76.10 81.40 * 81.10 *grc medium 53.20 39.10 48.90 56.00 heb large 91.14 87.41 92.95 92.45 84.09 * 92.45 92.25 86.10 hebu medium 78.50 74.10 77.30 83.70 79.30 * 77.50 79.40 * 74.20 * 82.40 * 76.20fra small 63.70 67.70 69.30 71.70 71.60 72.90 * 73.50 * 74.90 * 71.50 74.70 * 74.40 * 72.00 * 70.70 72.70 * 74.00 * 74.80 * 75.00 *fin medium 78.30 74.00 79.20 83.60 * 77.70 82.70 * 83.90 * 77.70 77.90 78.80 78.50 81.20 * 81.00 78.10 78.70 81.20 * 79.80dan medium 88.40 87.80 88.80 86.50 deu large 73.10 78.00 79.70 80.20 81.10 79.70 80.80 80.80 81.00 79.70 81.00 * 74.80 eng medium 94.50 94.00 95.60 95.70 95.80 95.70 96.20 96.00 95.80 95.90 95.80 95.50 95.60 94.90 94.80 95.50 95.80bel large 70.30 70.10 73.50 72.90 72.80 72.90 73.20 74.00 
72.90 74.70 74.40 71.80 73.60 71.60 73.80 72.10 74.10amh medium 84.40 89.30 88.90 88.90 83.40 * 88.60 87.90 85.90 arz small 87.20 88.10 89.20 89.10 87.50 88.70 88.90 87.30 * 87.40 * 88.70 89.10 89.20 88.50 88.30 89.20 89.10 88.90afb small 75.20 78.10 80.70 84.10 82.20 * 80.30 80.20 78.00 * 80.00 76.90 * 82.90 * 79.20Size L2R R2L BL2 xH MML xH MML xH MML xH MML xH MML xH MML xH MMLModel Baselines Standalone xH-Rerank MML-Rerank BL Discriminator L2R-Rerank R2L-Rerank (L2R+R2L)-Rerank", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "All Accuracies (2023 data). A number is starred (*) if it shows a statistically significant difference with the best baseline BL2; a number is colored in green if it improves over BL2 (regardless of significance) using a paired permutation test(Zmigrod et al., 2022); and a number is bold if it is the best for the language.", "figure_data": "Number ≥ 2/20 2/20 13/20 0/20 3/20 3/20Average 73.20 74.48 75.96 72.76 72.91 76.03 73.49 74.40 74.30vep large 63.37 62.57 65.43 63.67 63.32 65.48 64.33 63.87 64.53tur medium 93.85 95.40 95.75 95.60 95.60 95.60 95.20 95.80 95.75slk large 93.25 93.45 94.20 94.05 94.00 94.90 93.55 93.10 94.15poma large 67.83 67.88 69.78 67.88 67.93 70.14 69.73 68.83 69.03pol medium 90.40 89.15 90.95 89.40 89.15 90.80 89.70 89.40 89.70non large 84.73 85.89 87.95 84.88 85.03 88.05 84.83 85.84 85.64lud medium 74.14 78.34 82.19 64.37 63.82 80.62 70.34 77.13 71.26krl medium 71.49 70.89 73.40 64.38 65.93 72.34 69.59 68.99 68.49kor large 57.54 57.64 59.11 57.64 57.69 59.93 57.59 58.81 58.81khk medium 48.59 48.59 48.94 48.94 48.99 48.99 48.79 48.94 48.99kaz medium 64.34 69.11 70.36 65.70 66.70 70.96 63.19 69.46 69.01kat large 84.50 88.95 88.95 92.00 91.50 89.75 87.70 90.30 91.70hye medium 91.90 92.90 93.60 90.30 90.85 93.90 92.10 92.60 92.05hun medium 78.40 76.40 78.15 77.00 77.00 77.80 77.60 76.60 77.05heb large 49.35 53.10 53.95 49.95 49.95 52.00 49.35 50.80 50.35got large 73.62 74.77 75.33 72.17 72.62 75.83 74.32 74.02 74.07evn small 56.11 57.37 60.70 56.86 56.86 61.10 56.51 58.00 58.35asm medium 80.75 84.87 85.73 83.42 83.22 87.44 83.87 83.67 85.18ara small 77.74 77.84 79.45 76.19 77.09 79.20 78.70 77.59 78.45ang large 62.06 64.55 65.26 60.89 61.05 65.77 62.77 64.20 63.53Size L2R R2L BL2 xH xH-Rerank BL2-xH L2R-Rerank-xH R2L-Rerank-xH (L2R+R2L)-Rerank-xHModel Baselines Bidirectional Baselines Rerank", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "All Accuracies (2022 data). 
A number is colored in green if it improves over BL2, and a number is bold if it is the best for the language.", "figure_data": "ModelBaselinesBidirectionalSizeL2RR2LBL2BL10xHxH-RandMMLafbsmall89.0089.50 85.1093.4086.2083.8084.90amh medium89.3096.40 92.0096.7089.0089.0089.20arzsmall94.9094.70 90.7096.3089.2088.5089.60bellarge82.6082.40 77.4087.3075.1074.7078.50dan medium95.0092.40 92.5097.0088.5089.1087.80deularge81.6086.50 82.3090.3081.2081.3083.80eng medium98.2097.10 96.7098.7096.4096.7096.90fin medium89.5083.60 81.4091.0084.5086.4080.40frasmall86.0089.60 79.4094.9074.4073.2080.90grc medium63.0048.50 55.8069.9056.0049.1055.50heblarge95.0790.43 94.3696.6892.4589.4386.30hebu medium86.8083.50 82.4090.7085.6086.4088.80hunsmall88.3083.60 83.6091.3085.3084.5084.80hyesmall87.0090.60 92.0095.6094.3091.9088.70itasmall92.6097.60 97.9098.4094.8095.1095.80jap medium97.0094.50 94.7097.0094.9093.6093.60katsmall85.8085.20 85.8088.8083.2080.9084.90klr medium 100.00 99.60 99.40 100.00 99.4099.30100.00mkd medium96.1096.60 93.6098.2092.6093.2094.20navsmall63.7065.30 58.3072.4056.4053.6059.40russmall89.7093.50 90.2094.7086.1087.8087.30sansmall81.3072.90 72.4084.3068.2067.3075.60sme medium78.0077.20 73.8086.0070.8070.8077.50spa medium93.9093.30 92.3095.1093.3093.0093.70sqi medium95.2090.80 90.0097.5092.9089.4082.90swa medium96.4097.40 93.1097.7097.2097.7091.40tursmall94.7090.20 91.1095.8094.3090.2094.50Average88.5487.52 85.8692.4385.2784.2985.44", "figure_id": "tab_12", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Oracle Accuracies (2023 data). Accuracies of each method if an oracle were used to select among the hypotheses returned from beam search. In the case of BL10, an oracle chooses out of the 10 candidates returned from L2R and R2L's beam search; in the case of BL2, an oracle chooses between the best L2R and best R2L hypothesis.", "figure_data": "ModelBaselinesStandaloneRerankerSizeL2RR2LBL2xHxH-Rand MMLxHxH-Rand MMLafbsmall75.20 78.10 80.70 84.1081.0078.70 84.6082.4079.20amh medium 84.40 89.30 88.90 88.9088.5083.40 88.6088.4083.40arzsmall87.20 88.10 89.20 89.1087.8087.50 88.7087.5087.40bellarge70.30 70.10 73.50 72.9070.8072.80 72.9072.3072.90dan medium 88.40 87.80 88.80 86.5087.7083.60 87.5087.4083.60deularge73.10 78.00 79.70 80.2080.9081.10 79.7080.5081.00eng medium 94.50 94.00 95.60 95.7095.8095.80 95.7095.9095.80fin medium 78.30 74.00 79.20 83.6085.1077.70 82.7085.9077.90frasmall63.70 67.70 69.30 71.7072.1071.60 72.9072.3071.50grc medium 53.20 39.10 48.90 56.0048.9053.50 56.0049.0053.30heblarge91.14 87.41 92.95 92.4589.1284.09 92.4589.2284.09hebu medium 78.50 74.10 77.30 83.7086.1075.00 83.6086.2075.00hunsmall77.70 66.10 76.30 84.3083.8079.30 85.0084.3080.10hyesmall82.00 86.40 88.40 94.2090.6086.50 94.3091.3086.20itasmall89.30 93.90 95.80 94.4094.3092.70 93.7094.7092.70jap medium 93.80 91.00 92.80 94.9092.7092.30 94.9093.6092.10katsmall79.70 79.50 84.10 81.3079.8081.40 82.9080.8081.10klr medium 99.40 98.30 99.40 99.4099.2099.40 99.4099.2099.40mkd medium 89.70 92.00 91.90 92.1092.8091.40 92.4093.2091.50navsmall53.70 48.90 54.00 55.1050.4057.10 55.6052.1057.00russmall82.10 84.90 87.40 84.2085.3082.10 85.5086.8083.30sansmall61.50 60.70 63.30 67.7065.5059.60 65.9066.8060.60sme medium 63.40 65.30 69.90 67.4067.3075.60 67.3067.3075.20spa medium 90.30 91.20 90.90 93.2093.0093.40 93.3093.0093.50sqi medium 85.00 84.40 87.60 91.0088.3082.50 91.9087.9082.40swa medium 92.70 92.90 93.10 96.6096.9090.50 96.6097.3090.50tursmall88.80 87.40 90.90 94.0089.5089.90 94.2089.3089.90Average 80.26 
79.65 82.59 84.2583.0881.43 84.3883.5081.50Number ≥19/2714/279/2718/2715/279/27", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Random Cross-Entropy Accuracies (2023 data). A number is colored in green if it improves over BL2, and a number is bold if it is the best for the language.", "figure_data": "LanguageModel Size ChosenL2R Val. Accuracies R2L Val. Accuracies Bidirectional Val. Accuracies S M L S M L xH xH-Rand MMLafbsmall74.5 77.474.879.9 76.674.483.781.181.6amhmedium81.1 84.883.081.8 83.982.188.790.186.1arzsmall87.9 89.388.389.0 87.487.989.689.488.2bellarge73.1 73.374.573.2 75.574.876.077.176.4danmedium88.9 90.089.889.4 89.988.587.390.387.2deularge79.3 80.178.676.7 76.979.680.581.177.8engmedium94.8 95.994.894.9 94.693.895.295.995.0finmedium93.7 96.892.492.6 90.891.298.196.394.8frasmall76.3 80.773.579.7 74.972.483.282.580.0grcmedium57.1 62.557.456.5 57.554.767.263.366.6hebumedium91.3 92.790.792.8 92.491.694.393.993.5heblarge89.9 90.790.185.4 86.689.993.393.090.4hunsmall87.3 85.981.777.3 77.374.788.789.179.5hyesmall89.1 86.584.384.6 75.470.195.192.894.4itasmall95.8 94.794.294.0 85.488.297.396.396.8japmedium89.6 89.888.688.6 89.090.292.992.692.0katsmall81.6 79.479.876.6 74.771.380.079.180.4klrmedium99.6 99.498.898.7 99.499.399.899.899.8mkdmedium93.2 94.092.691.3 93.690.696.095.793.9navsmall59.3 53.550.156.3 54.454.957.659.059.5russmall88.3 87.586.387.9 87.385.788.687.586.5smemedium72.2 74.772.070.2 70.362.576.079.081.4spamedium95.2 96.091.394.5 94.094.198.597.595.9sqimedium89.2 89.779.590.4 90.674.893.993.489.5swamedium97.1 96.592.496.8 97.695.597.697.697.3tursmall92.1 92.187.590.7 82.470.797.194.690.0sansmall67.5 63.257.663.1 46.247.976.275.671.4", "figure_id": "tab_14", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Validation Accuracies (2023 data). Validation accuracies for each language in the 2023 dataset. Validation accuracies on unidirectional models are used for hyperparameter selection. The bidirectional validation accuracies(xH, xH-Rand, MML) are reported for the chosen model size for each language.", "figure_data": "LanguageModel Size ChosenL2R Val. Accuracies R2L Val. Accuracies S M L S M LxHanglarge59.5 64.163.962.7 64.564.964.0arasmall75.0 74.575.375.9 75.774.974.7asmmedium83.6 84.485.076.9 88.085.185.6evnsmall52.1 50.950.053.4 50.649.353.0gotlarge81.0 81.781.880.0 79.082.178.3heblarge26.1 27.727.631.2 31.331.728.2hunmedium67.1 75.073.470.6 75.975.076.5hyemedium91.2 93.492.191.0 93.993.491.7katlarge86.0 88.787.786.0 89.290.492.6kazmedium65.6 67.767.354.0 61.260.357.7khkmedium38.0 39.839.239.2 39.739.239.2korlarge56.9 57.658.657.8 59.658.956.5krlmedium64.6 66.864.765.5 65.767.062.7ludmedium59.8 71.565.271.5 76.655.360.5nonlarge85.7 88.986.987.4 86.588.889.1polmedium90.3 91.990.390.5 90.591.091.0pomalarge49.4 52.753.649.3 55.457.552.9slklarge90.7 91.492.492.8 92.792.992.7turmedium96.0 96.596.495.7 97.296.097.3veplarge63.0 61.663.559.7 59.660.964.1", "figure_id": "tab_15", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Validation Accuracies (2022 data). Validation accuracies for each language in the 2022 dataset. Validation accuracies on unidirectional models are used for hyperparameter selection. The xH validation accuracies are reported for the chosen model size for each language.", "figure_data": "", "figure_id": "tab_16", "figure_label": "14", "figure_type": "table" } ]
Marc E Canby; Julia Hockenmaier; Ryan Cotterell; Christo Kirov; John Sylak-Glassman; Géraldine Walther; Ekaterina Vylomova; Arya D McCarthy
[ { "authors": "Roee Aharoni; Yoav Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Morphological inflection generation with hard monotonic attention", "year": "2017" }, { "authors": "Kamal Al-Sabahi; Zhang Zuping; Yang Kang", "journal": "", "ref_id": "b1", "title": "Bidirectional attentional encoder-decoder model and bidirectional beam search for abstractive summarization", "year": "2018" }, { "authors": "Antonios Anastasopoulos; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Pushing the limits of low-resource morphological inflection", "year": "2019" }, { "authors": "Volodymyr Kindratenko; Dawei Mu; Yan Zhan; John Maloney; Sayed Hadi Hashemi; Benjamin Rabe; Ke Xu; Roy Campbell; Jian Peng; William Gropp", "journal": "Association for Computing Machinery", "ref_id": "b3", "title": "Hal: Computer system for scalable deep learning", "year": "2020" }, { "authors": "Jordan Kodner; Salam Khalifa; Khuyagbaatar Batsuren; Hossep Dolatian; Ryan Cotterell; Faruk Akkus; Antonios Anastasopoulos; Taras Andrushko; Aryaman Arora; Nona Atanalov; Gábor Bella; Elena Budianskaya; Yustinus Ghanggo Ate; Omer Goldman; David Guriel; Simon Guriel; Silvia Guriel-Agiashvili; Witold Kieraś; Andrew Krizhanovsky; Natalia Krizhanovsky; Igor Marchenko; Magdalena Markowska; Polina Mashkovtseva; Maria Nepomniashchaya; Daria Rodionova; Karina Scheifer; Alexandra Sorova; Anastasia Yemelina; Jeremiah Young; Ekaterina Vylomova", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "SIGMORPHON-UniMorph 2022 shared task 0: Generalization and typologically diverse morphological inflection", "year": "2022" }, { "authors": "Jordan Kodner; Sarah Payne; Salam Khalifa; Zoey Liu", "journal": "", "ref_id": "b5", "title": "Morphological inflection: A reality check", "year": "2023" }, { "authors": "Carolin Lawrence; Bhushan Kotnis; Mathias Niepert", "journal": "", "ref_id": "b6", "title": "Attending to future tokens for bidirectional sequence generation", "year": "2019" }, { "authors": "Lemao Liu; Andrew Finch; Masao Utiyama; Eiichiro Sumita", "journal": "", "ref_id": "b7", "title": "Agreement on targetbidirectional lstms for sequence-to-sequence learning", "year": "2016" }, { "authors": "D Arya; Ekaterina Mccarthy; Shijie Vylomova; Chaitanya Wu; Lawrence Malaviya; Garrett Wolf-Sonkin; Christo Nicolai; Miikka Kirov; Sabrina J Silfverberg; Jeffrey Mielke; Ryan Heinz; Mans Cotterell; Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and crosslingual transfer for inflection", "year": "2019" }, { "authors": "Sewon Min; Danqi Chen; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "A discrete hard EM approach for weakly supervised question answering", "year": "2019" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Tiago Pimentel; Maria Ryskina; Sabrina J Mielke; Shijie Wu; Eleanor Chodroff; Brian Leonard; Garrett Nicolai; Yustinus Ghanggo Ate; Salam Khalifa; Nizar Habash", "journal": "", "ref_id": "b11", "title": "Sigmorphon 2021 shared task on morphological reinflection: Generalization across 
languages", "year": "2021" }, { "authors": "Yong Shan; Yang Feng; Jinchao Zhang; Fandong Meng; Wen Zhang", "journal": "", "ref_id": "b12", "title": "Improving bidirectional decoding with dynamic target semantics in neural machine translation", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b13", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "Ekaterina Vylomova; Jennifer White; Elizabeth Salesky; Sabrina J Mielke; Shijie Wu; Maria Edoardo; Rowan Ponti; Ran Hall Maudslay; Josef Zmigrod; Svetlana Valvoda; Francis Toldova; Elena Tyers; Ilya Klyachko; Natalia Yegorov; Paula Krizhanovsky; Irene Czarnowska; Andrew Nikkarinen; Tiago Krizhanovsky; Lucas Pimentel; Christo Torroba Hennigen; Garrett Kirov; Adina Nicolai; Antonios Williams; Hilaria Anastasopoulos; Eleanor Cruz; Ryan Chodroff; Miikka Cotterell; Mans Silfverberg; Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "SIGMORPHON 2020 shared task 0: Typologically diverse morphological inflection", "year": "2020" }, { "authors": "Shijie Wu; Ryan Cotterell", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Exact hard monotonic attention for character-level transduction", "year": "2019" }, { "authors": "Shijie Wu; Ryan Cotterell; Mans Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Applying the transformer to character-level transduction", "year": "2021" }, { "authors": "Jitao Xu; François Yvon", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "One source, two targets: Challenges and rewards of dual decoding", "year": "2021" }, { "authors": "Changbing Yang; ( Ruixin; ) Ray; Garrett Yang; Miikka Nicolai; Silfverberg", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Generalizing morphological inflection systems to unseen lemmas", "year": "2022" }, { "authors": "Xiangwen Zhang; Jinsong Su; Yue Qin; Yang Liu; Rongrong Ji; Hongji Wang", "journal": "", "ref_id": "b20", "title": "Asynchronous bidirectional decoding for neural machine translation", "year": "2018" }, { "authors": "Zhirui Zhang; Shuangzhi Wu; Shujie Liu; Mu Li; Ming Zhou; Tong Xu", "journal": "", "ref_id": "b21", "title": "Regularizing neural machine translation by target-bidirectional agreement", "year": "2019" }, { "authors": "Long Zhou; Jiajun Zhang; Chengqing Zong", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b22", "title": "Synchronous bidirectional neural machine translation", "year": "2019" }, { "authors": "Long Zhou; Jiajun Zhang; Chengqing Zong; Heng Yu", "journal": "", "ref_id": "b23", "title": "Sequence generation: From both sides to the middle", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 110.78, 661.03, 179.09, 34.74 ], "formula_id": "formula_0", "formula_text": "P ( - → y |x) = |y| i=1 P ( - → y i | - → y <i , x)(1)" }, { "formula_coordinates": [ 2, 110.78, 700.92, 179.09, 34.74 ], "formula_id": "formula_1", "formula_text": "P ( ← - y |x) = |y| j=1 P ( ← -y j | ← - y <j , x)(2)" }, { "formula_coordinates": [ 2, 459.88, 504.88, 65.13, 10.33 ], "formula_id": "formula_2", "formula_text": "y (<t) ) •Q (t) (3)" }, { "formula_coordinates": [ 2, 315.51, 573.39, 189.38, 23.8 ], "formula_id": "formula_3", "formula_text": "Q (t) = P (join | -→ y (≤t) , ← - y (≤t) ) if t = |y| 1 -P (join | -→ y (≤t) , ← - y (≤t) ) otherwise" }, { "formula_coordinates": [ 3, 122.27, 127.22, 166.48, 12.27 ], "formula_id": "formula_4", "formula_text": "y <j) = P (R | -→ y<i, ← - y <j)•P ( ← -y j|R, -→ y<i, ← - y<j)" }, { "formula_coordinates": [ 3, 82.29, 164.68, 207.44, 24.33 ], "formula_id": "formula_5", "formula_text": "Qij = P (join | -→ y ≤i , ← - y ≤j ) if i + j = |y| 1 -P (join | -→ y ≤i , ← - y ≤j ) otherwise (4)" }, { "formula_coordinates": [ 3, 155.83, 258.99, 48.33, 9.57 ], "formula_id": "formula_6", "formula_text": "f [0, 0] = 1" }, { "formula_coordinates": [ 3, 86.09, 346.01, 187.33, 15.53 ], "formula_id": "formula_7", "formula_text": "f [i, 0] = f [i -1, 0] • P L ( - → y i | - → y <i , ϵ) • Q i0" }, { "formula_coordinates": [ 3, 82.89, 383.68, 193.25, 15.53 ], "formula_id": "formula_8", "formula_text": "f [0, j] = f [0, j -1] • P R ( ← -y j |ϵ, ← - y <j ) • Q 0j" }, { "formula_coordinates": [ 3, 73.4, 502.64, 212.73, 32.07 ], "formula_id": "formula_9", "formula_text": "f [i, j] = f [i -1, j] • P L ( - → y i | - → y <i , ← - y ≤j ) • Q ij + f [i, j -1] • P R ( ← -y j | - → y ≤i , ← - y <j ) • Q ij" }, { "formula_coordinates": [ 3, 103.73, 590.43, 152.53, 22.08 ], "formula_id": "formula_10", "formula_text": "P (y) = i,j I(i + j = |y|) • f [i, j]" }, { "formula_coordinates": [ 3, 80.29, 696.58, 194.36, 46.21 ], "formula_id": "formula_11", "formula_text": "f [i, j] = max f [i -1, j]•PL( -→ y i| -→ y <i, ← - y ≤j )•Qij, f [i, j -1] • PR( ← -y j | -→ y ≤i , ← - y <j ) • Qij max o P (y, o) = max i,j I(i + j = |y|) • f [i, j]" }, { "formula_coordinates": [ 3, 306.14, 585.93, 199.67, 31 ], "formula_id": "formula_12", "formula_text": "S = {(i, j) | 1 ≤ i, j, ≤ |y|; i + j ≤ |y|} Hence, S has O(|y| 2 ) elements." }, { "formula_coordinates": [ 3, 321.01, 700.25, 198.03, 74.93 ], "formula_id": "formula_13", "formula_text": "LxH (θ) = 1 3 -→ L (θ) + ← - L (θ) + L (join) (θ) -→ L (θ) = - 1 |S| (i,j)∈S log P ( -→ y i | -→ y <i, ← - y <j , x; θ) ← - L (θ) = - 1 |S| (i,j)∈S log P ( ← -y j | -→ y <i, ← - y <j , x; θ)" }, { "formula_coordinates": [ 4, 76.24, 264.92, 125.43, 23.99 ], "formula_id": "formula_14", "formula_text": "L (join) (θ) = - 1 |S| (i,j)∈S log Qij" }, { "formula_coordinates": [ 4, 316.82, 625.19, 196.92, 15.38 ], "formula_id": "formula_15", "formula_text": "⟨c J , c O , - → y 1 , ..., - → y i , c L2R , c R2L , ← -y j , ..., ← -y 1 ⟩" }, { "formula_coordinates": [ 4, 316.52, 710.25, 197.5, 10.72 ], "formula_id": "formula_16", "formula_text": "s J , s O , ..., s L2R , s R2L , ... 
= Decoder(⟨• • • ⟩)" }, { "formula_coordinates": [ 4, 329.23, 758.72, 177.1, 15.53 ], "formula_id": "formula_17", "formula_text": "P (join | - → y ≤i , ← - y ≤j ) = Softmax(s O V ) Avg # Langs # Langs (p ≤ 0.05) ≥ BL2 > BL2 = BL2 < BL2 L2R" }, { "formula_coordinates": [ 5, 97.9, 401.72, 174.9, 49.73 ], "formula_id": "formula_18", "formula_text": "y ≤j ) = Softmax(s J U ) P ( - → y i | - → y ≤i , ← - y ≤j ) = Softmax(s J -→ W ) P ( ← -y j | - → y ≤i , ← - y ≤j ) = Softmax(s J ← - W )" }, { "formula_coordinates": [ 12, 355.48, 504.94, 119.6, 24.43 ], "formula_id": "formula_19", "formula_text": "τ n = τ 0 -1 W a (W -n) a + 1" } ]
2023-05-21
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b31", "b24", "b25", "b42", "b18", "b39", "b53", "b38", "b40", "b43", "b44", "b51", "b28", "b36", "b40", "b40", "b8", "b21", "b0", "b45", "b33", "b32", "b29", "b6" ], "table_ref": [], "text": "Contemporary natural science and engineering is replete with data sets that are images, lattices, or grids of geometric objects. These might be observations of intensities (scalars), magnetic fields (pseudovectors), or polarizations (2-tensors) on a surface or in a volume. They might be the inputs or outputs of a simulation where the initial conditions or fields are specified on a regular grid; see Figure 1 for some examples. Any lattice of vectors or tensors can be seen as a generalization of the concept of an image in which the intensity in each pixel is replaced with a geometric object -scalar, vector, tensor, or their pseudo counterparts. These objects are geometric in the sense that they are defined in terms of their transformation properties under geometric operators such as rotation, translation, and reflection. Thus there is a need for machine learning methods designed for geometric images-lattices or grids of scalars, vectors, and tensors. There are already countless applications of machine learning in contexts in which the input data are geometric images, including examples in essentially all natural-science disciplines.\nAt the present day, the go-to tools for machine learning with images are convolutional neural networks (CNNs; [32]) and their many descendants, including residual networks (ResNets) [25], dense networks (DenseNets) [26], and attention mechanisms such as transformers [43]. Other recent tools for machine learning with images include generative adversarial networks (GANs) [19] for image synthesis and style transfer, and recurrent neural networks (RNNs) [40] for tasks such as image captioning and video analysis. Additionally, transfer learning [54] has emerged as a powerful technique for leveraging pre-trained models on large image datasets to improve performance on smaller or specialized datasets.\nTraditional CNNs are designed to work on one-or few-channel images in which the early layers of the network involve image convolutions with learned filters followed by the application of pointwise nonlinearities. In typical contexts, the channels of multi-channel input images will be something like the red, green, and blue channels of a color image; these can be combined arbitrarily in the layers of the CNN. When these CNN-based tools are applied to lattices of vectors, typically the components of the vectors are just treated as channels of the input image and then everything proceeds as with multi-channel color images. This ignores the inherent structure of the vectors, but, to the chagrin of the physicists, there are many projects that have had great success using this strategy on geometric images. However, there are better choices. Here we propose a set of tools that generalize the concept of convolution to apply to geometric images such that the outputs of the convolutions are also geometric images, obeying the same geometric transformation rules as the inputs.\nThe fundamental observation inspiring this work is that when an arbitrary function is applied to the components of vectors and tensors, the geometric structure of these objects is destroyed. 
There are strict rules, dating back to the early days of differential geometry [39], about how geometric objects can be combined to produce new geometric objects, consistent with coordinate freedom and transformation rules. These rules constitute a theme of [41], where they are combined into a geometric principle. In previous work [44,45,52] we have capitalized on the geometric principle to develop modified machine-learning methods that are restricted to exactly obey group-theoretic equivariances in physics contexts. More broadly, there is a growing field of physics-informed machine learning [29,37]. Here we use these rules to create a comprehensive set of tools that parameterize functions that take geometric images as input and produce geometric images as output.
Tensors can be defined-and distinguished from mere arrays of numbers-in two ways. In one, a tensor of order k is a k-multilinear function of k vector inputs that returns a scalar, an object whose value is invariant to changes in the coordinate system ([41] Section 1.3). In the other, a tensor of order k is defined by the way that its components transform under rotations ([41] Section 1.6). We will take the latter point of view, and this definition will be made precise in Section 2.1.
[Figure 1 caption: (a) a temperature field (a scalar or 0(+)-tensor) with whiskers showing the principal eigenvector direction of a two-dimensional 2(+)-tensor field, represented on a healpixel [22] grid on the sky (a 2-sphere) [9]; (b) two-dimensional maps of ocean current (a vector or 1(+)-tensor field) and ocean salinity (a scalar or 0(+)-tensor field) at a depth of 5 m [1]; (c) a three-dimensional map of temperature (a 0(+)-tensor field) from sensors distributed throughout the volume of a granary [46]; (d) a two-dimensional map of potential vorticity (a pseudoscalar or 0(-)-tensor field) in the Earth's atmosphere, measured for the purposes of predicting storms [34]; (e) two-dimensional maps on the sky of intensity I and the three independent polarization components Q, U, V (a 2(+)-tensor), from a simulation of a jet outflow from an accreting black hole [Davelaar et al., in preparation]; (f) components of the three-dimensional stress tensor (a 2(+)-tensor field) in a diamond anvil cell, which is used to study the behavior of samples at exceedingly high pressures [33].]
There are two ways to think about transformations-alias and alibi. In the former (alias), the idea is that the transformation is applied to the coordinate system, not the vectors and tensors themselves. This transformation leaves the geometric objects unchanged, but all of their components change because they are now being represented in a changed coordinate system. In the latter (alibi), the idea is that the coordinate system is fixed and all of the geometric objects are taken through an identical transformation. In either case-alias or alibi-the key idea is that all of the components of all of the vectors and tensors in play must be changed correspondingly, and at the same time. The geometric principle requires that for any function, all the inputs, constants, parameters, and outputs must undergo the same coordinate transformations simultaneously.
In other words, all valid functions will be fundamentally equivariant with respect to coordinate transformations.\nWe are motivated in this work to help solve problems in the natural sciences and engineering, where geometric images abound. However, we conjecture that these tools are probably very useful even for standard machine-learning image-recognition and image-regression tasks. After all, even standard images are measurements of a scalar, the intensity of light, at a regular grid of points on a two-dimensional surface. The laws of physics still govern the objects in a photograph and how light travels from the objects to the camera, so we may still expect to benefit from the rules of geometry.\nThese rules of geometry-the consequences of the geometric principle-are roughly as follows: A k-tensor object (tensor of order k) in d dimensions has k indices, each of which can take a value from 1 to d; that is, the k-tensor is an element of (R d ) ⊗k . A k-tensor and a k -tensor can be multiplied with the outer product to make a (k + k )-tensor object. To reduce the tensor order, a k-tensor can be contracted to a (k -2)-tensor object by identifying a pair of indices and summing over them. 1-tensor objects are called vectors and 0-tensor objects are called scalars. There are also negative-parity versions of all these (pseudoscalars, pseudovectors, and pseudotensors) and parity-changing contractions using the Levi-Civita symbol, so in what follows we will define k (p) -tensors that have k indices and a parity p ∈ {-1, +1} (sometimes denoted \"-\" and \"+\" below). Two objects can only be added or subtracted if they have the same order k and parity p. These rules define objects that can be given transformation rules under rotation and reflection such that functions made of these operations are coordinate-free, or equivariant to any change of coordinate system.\nThe symmetries that suggest these rules are continuous symmetries. But of course images are usually-and for our purposes-discrete grids of values. This suggests that in addition to the continuous symmetries respected by the tensor objects in the image pixels there will be discrete symmetries for each geometric image taken as a whole. We will define these discrete symmetry groups and use them to define a useful kind of group equivariance for functions of geometric images. This equivariance, it turns out, is very easy to enforce, even for nonlinear functions of geometric images, provided that we compose our nonlinear functions from simple geometric operations. When we enforce this equivariance, the convolution filters that appear look very much like the differential operators that appear in discretizations of vector calculus.\nOur contribution: The rest of the paper is organized in the following manner. Section 2 defines geometric objects, geometric images, and the operations on each. Section 3 discusses equivariance of functions of geometric images with some important results building off of [30] and [7]. Section 4 describes how to explicitly count these equivariant functions using a result of Molien from 1897. Sections 5 and 6 describe how to build a GeometricImageNet and present a couple of small problems with numerical results. Finally, Section 7 discusses related work. The majority of the supporting propositions and proofs have been sequestered to the Appendix, as has a larger exploration of related work." 
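Before moving on, the tensor rules sketched above can be made concrete with a few lines of array code (an illustrative numpy sketch, not the GeometricImageNet implementation of Sections 5 and 6): the outer product of a k-tensor and a k'-tensor has k + k' indices, and contracting a pair of indices reduces the order by two.

```python
import numpy as np

d = 3
a = np.random.randn(d, d)      # a 2-tensor: two indices, each running over d values
b = np.random.randn(d)         # a 1-tensor (vector)

# Outer product: a (2 + 1)-tensor with components a_ij * b_k.
T = np.einsum("ij,k->ijk", a, b)

# Contraction: identify the last two indices and sum over them, giving a
# (3 - 2)-tensor; here this reproduces the matrix-vector product a @ b.
v = np.einsum("ijj->i", T)

print(T.shape, v.shape)            # (3, 3, 3) (3,)
print(np.allclose(v, a @ b))       # True
```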
}, { "figure_ref": [], "heading": "Geometric Objects and Geometric Images", "publication_ref": [], "table_ref": [], "text": "We define the geometric objects and geometric images that we use to generalize classical images in scientific contexts in Section 2.1 and Section 2.2. The main point is that the channels of geometric images-which will be like the components of vectors and tensors-are not independent. There is a set of allowed operations on geometric objects that respect the structure and the coordinate freedom of these objects." }, { "figure_ref": [ "fig_0" ], "heading": "Geometric objects", "publication_ref": [ "b1", "b13", "b38", "b40" ], "table_ref": [], "text": "The geometric principle implies that geometric objects should be coordinate-free scalars, vectors, and tensors, or their negative-parity pseudo counterparts. To define these objects we start by stating the coordinate transformations, which, in this case, will be given by the orthogonal group.\nWe fix d, the dimension of the space, which will typically be 2 or 3. The geometric objects are vectors and tensors. The orthogonal group O(d) is the space of isometries of R d that fix the origin. It acts on vectors and pseudovectors v ∈ R d in the following way: The objects are defined by the actions that they carry in the following sense: if F is a function with geometric inputs, outputs, and parameters, then F must be coordinate-free. In other words F (g • v) = g • F (v) for all v and all g. This is the mathematical concept of equivariance which we will explore further in Section 3.\ng • v = det(M (g)) 1-p 2 M (g) v(1)\nDefinition 1 (k (p) -tensors). The space R d equipped with the action O(d) defined by ( 1) is the space of 1 (p) -tensors. Remark (terminology and notation). The parity p is a signed bit, either +1 for positive parity or -1 for negative parity. Note the distinction between the order k of the k (p) -tensor, and the rank of the tensor. We could have a 2 (p) -tensor of rank 1, like those we use in Definition 1.\nIf v i is a 1 (pi) -tensor, then T := v 1 ⊗ . . . ⊗ v k is a rank-1 k (p) -tensor, where p = k i=1 p i and the action of O(d) is defined as g • (v 1 ⊗ . . . ⊗ v k ) = (g • v 1 ) ⊗ . . . ⊗ (g • v k ) .(2\nRemark (universality of transformations). Critically, when a transformation is applied to any k (p) -tensor, it must be applied to every other geometric object-every scalar, vector, and tensor of both parities-involved in any relevant mathematical expression. This includes all constants, and all inputs and outputs to any scalar, vector, or tensor functions. Related to this, there are both alias and alibi points of view that can be taken towards (2); that is, it can be seen as defining a change made to every tensor in a fixed coordinate system, or it can be seen as a change to the coordinate system in which every tensor is represented.\nIn physics the 1 (+) -tensors (such as velocities) are known as vectors, the 1 (-)tensors (such as angular momenta) are known as pseudovectors, the 0 (+) -tensors (such as rest masses) are known as scalars, the 0 (-) -tensors (such as surface vorticities) are known as pseudoscalars, the k (-) -tensors with k ≥ 2 are known as pseudotensors, and finally the k (+) -tensors with k ≥ 2 are the things that are commonly known as tensors. 
In general, any k (p) -tensor can be written as a sum of outer products of order-1 tensors (vectors and pseudovectors), where each term in the sum is an outer product of k order-1 tensors and the parity p is the product of the parities of the input order-1 tensors.\nDefinition 2 (outer products of tensors). Given a ∈ T d,k,p and b ∈ T d,k ,p , the outer product, denoted a ⊗ b, is a tensor in T d,k+k ,p p defined as\n[a ⊗ b] i1,...,i k+k = [a] i1,...,i k [b] i k+1 ,...,i k+k .\nDefinition 3 (Einstein summation notation). We use Einstein summation notation where outer products are written in component form, and repeated indices are summed over. For example, in this notation, the product of two 2 (+) -tensors (represented as two d × d matrices A and B) is written as\n[A B] i,j = [A] i,k [B] k,j := d k=1 [A] i,k [B] k,j(3)\nwhere [A] i,k is the i, k element of matrix A, and the sum from 1 to d on repeated index k is implicit in the middle expression. This notation works for tensor expressions of any order, provided that every index appears either exactly once, so it isn't summed over, or exactly twice, so it is summed over.\nRemark (lower and upper indices). In the original Einstein summation notation [14], or Ricci calculus [39], a distinction is made between lower and upper indices, which correspond to covariant and contravariant components. The pairs of indices that are summed always have one member of the pair an upper index and one member a lower index. We drop the upper/lower distinction here because we will work with intrinsically flat images that implicitly have the Riemmannian metric given by the identity matrix, such that there is no numerical difference between covariant and contravariant component values for a given object. That said, there truly is a distinction (for example, if a spatial displacement is a contravariant vector, the gradient of a scalar function with respect to that spatial displacement is a covariant vector), so there might be advantages to reinstating this distinction.\nIn summation notation, the group action of (1) on k (p) -tensor b is explicitly written\n[g • b] i1,...,i k = det(M (g)) 1-p 2 [b] j1,...,j k [M (g)] i1,j1 • • • [M (g)] i k ,j k(4)\nfor all g ∈ O(d), where [b] i1,...,i k ∈ R is a component of b, [M (g)] i,j ∈ R is the i, j element of the matrix representation of g, and all the i m and j m are indices in the range 1, . . . , d. For example, a 2 (+) -tensor has the transformation property\n[g • b] i,j = [b] k, [M (g)] i,k [M (g)] j,\n, which, in normal matrix notation, is written as\ng • b = M (g) b M (g) .\nWe consider two special tensors that will be important for the definition of our models, the Kronecker delta and the Levi-Civita symbol.\nDefinition 4 (Kronecker delta). The Kronecker delta, δ, is a 2 (+) -tensor represented by the identity matrix, namely the object with two indices i, j such that it has the value +1 when the two indices have the same value (i = j), and 0 otherwise. Definition 5 (Levi-Civita symbol). The Levi-Civita symbol in dimension d ≥ 2 is a d (-) -tensor such that if the d indices are not repeated and in an even-permutation order the value is +1 and if the d indices are not repeated and in an odd-permutation order the value is -1, and it has the value 0 in all other cases. Definition 6 (contractions). 
Given tensor a ∈ T d,k,p , where k ≥ 2, and given µ, ν ∈ [k], µ = ν, the contraction T (a, µ, ν) ∈ T d,k-2,p is defined as:\n[T (a, µ, ν)] i1,...,i k \\{iµ,iν } = [δ] iµiν [a] i1,...,iµ,...,iν ,...,i k(5)\nThat is, we view the components of a with given fixed values for i µ and i ν as forming a (k -2) (p) -tensor, and then we take the sum of these tensors of order k -2 where i µ = i ν . We can also define the composition of multiple contractions as a multicontraction:\nT M (a, (µ 1 , µ 2 ), . . . , (µ , µ +1 )) = T (•, µ , µ +1 ) • . . . • T (a, µ 1 , µ 2 ) ,(6)\nwhere µ 1 , . . . , µ +1 ∈ [k] are all distinct. Note that because µ 1 , . . . , µ +1 are integers referring to the indices of the axes being contracted, the indices may change when swapping from a multicontraction to multiple contractions. For example, if k ≥ 4,\nT M (a, (1, 3), (2, 4)) = T (T (a, 1, 3), 1, 2)\nbecause axes i 1 and i 3 will disappear, so i 2 becomes the new i 1 and i 4 becomes the new i 2 . Finally, the Levi-Civita contraction is defined for k ≥ d -1 and µ 1 , . . . , µ d-1 ∈ [k] distinct as the following:\nT LC (a, µ 1 , . . . , µ d-1 ) = T M (a ⊗ , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1)) ,(7)\nwhere is the Levi-Civita symbol.\nRemark (negative-parity objects). With a slight modification of the Levi-Civita contraction, there is an invertible function that converts any negative-parity object to a positive-parity object. Thus it is possible to work without negative-parity objects at all. We will use this idea to improve the efficiency of our algorithms for certain settings in Section 5.2. However, since negative-parity objects are important in physics and engineering (see Figure 1), we retain them in the model.\nWe can combine multiplication with Kronecker and Levi-Civita symbols with contractions to define relevant operations. For example the 2 (+) -tensor formed by the outer product of 1 (+) -tensors a and b can be contracted with the Kronecker delta to give the standard dot product a b = [a] i [b] j [δ] ij , which is a 0 (+) -tensor or scalar. For another example, the same 2 (+) -tensor can (in d = 3 dimensions) be contracted with the Levi-Civita symbol to give the standard cross product [a×b] \nk = [a] i [b] j [ ] ijk , which is a 1 (-) -tensor or pseudovector.\nDefinition 7 (permutations of tensor indices). Given a ∈ T d,k,p and permutation σ ∈ S k , the permutation of tensor indices of a by σ, denoted a σ , is:\n[a σ ] i1,...,i k := [a] i σ -1 (1) ,...,i σ -1 (k)(8)\nRemark (tensors as linear functions). There is an alternative definition of k (p)tensors in terms of geometric functions (see, for example, [41] chapter 1): A k (+)tensor can be thought of as representing a multilinear function of k vectors (1 (+)tensors) that produces a scalar (0 (+) -tensor) output. For example, if A is a 4 (+)tensor, and u, v, w, x are vectors (1 (+) -tensors) then\nρ = [A] ijk [u] i [v] j [w] k [x](9)\nis a scalar (0 (+) -tensor). k (-) -tensors can be similarly defined in terms of input vectors and an output pseudoscalar." }, { "figure_ref": [], "heading": "Geometric images and operations", "publication_ref": [], "table_ref": [], "text": "We will start by considering square (or cubic or hyper-cubic) images on a d-torus.\nWe work on a d-torus to simplify the mathematical results; all the definitions and operations will be applicable with minor adjustments to rectangular, non-toroidal arrays as well. 
We consider an image A in with N equally spaced pixels in each dimension for N d pixels total \n(A + B)(ī) = A(ī) + B(ī)(10)\nfor pixel ī. That is, the sums of geometric images are performed pixel-wise.\nDefinition 10 (scalar multiplication of images). Given A ∈ A N,d,k,p and α ∈ R, the scalar product αA is defined as\n(αA)(ī) = αA(ī) .(11)\nSimilarly, we define contractions and permutations as applying an identical contraction or permutation to every pixel.\nWe now turn to the first major contribution of this paper, the generalization of convolution to take geometric images as inputs and return geometric images as outputs. The idea is that a geometric image of k (p) -tensors is convolved with a geometric filter of k (p ) -tensors to produce a geometric image that contains (k + k ) (p p )tensors, where each pixel is a sum of outer products. These (k + k ) (p p ) -tensors can then be contracted down to lower-order tensors using contractions (Definition 6). Note that the sidelength M of the geometric filter can be any positive odd number, but typically it will be much smaller than the sidelength N of the geometric image." }, { "figure_ref": [ "fig_3" ], "heading": "Definition 11 (geometric convolution). Given", "publication_ref": [], "table_ref": [], "text": "A ∈ A N,d,k,p on the d-torus, and C ∈ A M,d,k ,p where M = 2m + 1 for some m ∈ N, the geometric convolution A * C is a (k + k ) (p p ) -tensor image such that (A * C)(ī) = ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) ,(12)\nwhere ī -ā is the translation of ī by ā on the d-torus pixel grid (Z/N Z) d . Additionally, m is the This definition is on the torus, which we use to simplify the mathematical exposition. To define the convolution on [N ] d instead of the torus, we can pad the image out with zero tensors of the corresponding order and parity. See Figure 2 for examples with a scalar and vector filter.\nd length 1 (+) -tensor [m, . . . , m] T . For example, if d = 2 and ā = [0, 0] T , then ā + m = [m, m] T ,\nIn addition to contractions and index permutations that act pixel-wise in geometric images, it is possible to change the image size using pooling and unpooling operations. For both pooling and unpooling, there are alternative strategies to the ones we have defined below." }, { "figure_ref": [], "heading": "Definition 12 (average pooling). Given", "publication_ref": [], "table_ref": [], "text": "A ∈ A N,d,k,p and b ∈ Z + such that N is divisible by b, we define avg pool(A, b) ∈ A N/b,d,k,p for pixel index ī as: avg pool(A, b)(ī) = 1 b d ā∈[0,b-1] d A(bī + ā)(13)" }, { "figure_ref": [], "heading": "Definition 13 (nearest neighbor unpooling). Given", "publication_ref": [], "table_ref": [], "text": "A ∈ A N,d,k,p and b ∈ Z + , we define unpool(A, b) ∈ A N b,d,k,p for pixel index ī as: unpool(A, b)(ī) = A( ī/b ) (14\n)\nwhere ī/b denotes dividing each component of ī by b, then taking element-wise floor operator of the resulting vector.\nThe convolution, contraction, index-permutation, and pooling operators above effectively span a large class of linear functions from geometric images to geometric images. One way to construct nonlinear functions is using polynomials, which in this context will be sums of outer products of any of the linear function outputs, possibly followed by further geometric convolutions and contractions. Nonlinear functions can also be constructed by applying nonlinear functions to 0 (+) -tensors (scalars), or odd nonlinear functions to 0 (-) -tensors (pseudoscalars); we will return to these methods in Section 5." 
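A direct transcription of Definition 11 for d = 2 might look like the following (a minimal numpy sketch under the torus convention of (12), not the GeometricImageNet implementation of Sections 5 and 6): each output pixel is the sum over filter offsets of the outer product of an image pixel and a filter pixel, so a k-tensor image convolved with a k'-tensor filter yields a (k + k')-tensor image, which can then be contracted down as described above.

```python
import numpy as np

def geometric_convolve_2d(A, C):
    """Geometric convolution of eq. (12) for d = 2 on the torus.

    A: image of shape (N, N) followed by k axes of length d   -- a k-tensor per pixel
    C: filter of shape (M, M) followed by k' axes of length d -- M = 2m + 1
    Returns an image of shape (N, N) followed by k + k' axes of length d.
    """
    N, M = A.shape[0], C.shape[0]
    m = (M - 1) // 2
    out = np.zeros((N, N) + A.shape[2:] + C.shape[2:])
    for i1 in range(N):
        for i2 in range(N):
            for a1 in range(-m, m + 1):
                for a2 in range(-m, m + 1):
                    pixel = A[(i1 - a1) % N, (i2 - a2) % N]   # torus wrap-around
                    filt = C[a1 + m, a2 + m]
                    # outer product of the k-tensor pixel and the k'-tensor filter pixel
                    out[i1, i2] += np.multiply.outer(pixel, filt)
    return out

# Example: a vector (1-tensor) image convolved with a vector filter gives a 2-tensor image.
A = np.random.randn(8, 8, 2)
C = np.random.randn(3, 3, 2)
B = geometric_convolve_2d(A, C)
print(B.shape)                    # (8, 8, 2, 2)
# Contracting the two tensor indices in every pixel gives a scalar image:
S = np.einsum("xyii->xy", B)
print(S.shape)                    # (8, 8)
```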
}, { "figure_ref": [], "heading": "Definition 14 (outer products of images). Given", "publication_ref": [], "table_ref": [], "text": "A ∈ A N,d,k,p and B ∈ A N,d,k ,p , the outer product A ⊗ B ∈ A N,d,k+k ,p p is defined as (A ⊗ B)(ī) = A(ī) ⊗ B(ī) .(15)\nfor each pixel ī. That is, the outer products of geometric images are performed pixel-wise." }, { "figure_ref": [ "fig_11", "fig_12", "fig_11" ], "heading": "Functions of geometric images and equivariance", "publication_ref": [ "b29", "b6", "b31", "b42" ], "table_ref": [], "text": "We start by defining equivariance and invariance for a general group G, and then we will describe the groups of interest and several theoretical results. \nf (g • A) = g • f (A) (16) Likewise, f is invariant to G if f (g • A) = f (A) .(17)\nWe may also say a geometric image is invariant to\nG if g • A = A for all g ∈ G.\nConvolutional filters are widely used in machine learning for scalar images. The fundamental property of these operators are that they are translation equivariant, and that every translation equivariant linear function can be expressed as a convolution with a fixed filter, as long as the filter can be set to be as large as the image. The same property holds for geometric images. \n(L τ A)(ī) = A(ī -τ ) ,(18)\nwhere ī -τ is the translation of ī by τ on the d-torus pixel grid (Z/N Z) d .\nProposition 1. A function f : A N,d,k,p → A N,d,k+k ,p p is a translation equivariant linear function if and only if it is the convolution with a geometric filter C ∈ A M,d,2k+k ,p followed by k contractions. When N is odd, M = N , otherwise M = N + 1.\nNote that this proposition merely generalizes the result of [30] for geometric convolution when the group is discrete translations. See appendix A for the proof.\nIn addition to translation symmetries, we want to consider other natural symmetries occurring in the application domains where vectors and tensors arise. Ideally we would like to apply continuous rotations to the images, but the discretized nature of images makes this challenging. For simplicity, we focus on discrete rotations, and we extend the group action to the geometric objects in these images. \n(g • A)(ī) = g • A(g -1 • ī) . (19\n)\nSince ī is a 1 (+) -tensor, the action g -1 • ī is performed by centering ī, applying the operator, then un-centering the pixel index:\ng -1 • ī = M (g -1 )(ī -m) + m\nwhere m is the d-length\n1 (+) -tensor N -1 2 , . . . , N -12\nT . If the pixel index is already centered, such as ā ∈ [-m, m] d , then we skip the centering and un-centering.\nIt might be a bit surprising that the group element g -1 appears in the definition of the action of the group on images. One way to think about it is that the pixels in the transformed image are \"looked up\" or \"read out\" from the pixels in the original untransformed image. The pixel locations in the original image are found by going back, or inverting the transformation. Remark. We view the d-torus as the quotient of the d-hypercube obtained by identifying opposite faces. The torus obtains the structure of a flat (i.e., zero curvature) Riemannian manifold this way. Because the symmetries B d of the hypercube preserve pairs of opposite faces, they act in a well-defined way on this quotient, so we can also view B d as a group of isometries of the torus. 
We choose the common fixed point of the elements of B d as the origin for the sake of identifying the N Now that we have defined the group that we are working with, we can specify how to build convolution functions that are equivariant to G N,d . The following theorem generalizes the Cohen and Welling paper [7] for geometric convolutions. To prove this, we will first state and prove a key lemma. \n(g • (A * C))(ī) = g • (A * C) g -1 • ī = g •   ā∈[-m,m] d A g -1 • ī -ā ⊗ C(ā + m)   = ā∈[-m,m] d g • A g -1 • ī -ā ⊗ C(ā + m) = ā∈[-m,m] d g • A g -1 • ī -ā ⊗ g • C(ā + m) Now let ā = g • ā. Thus g -1 • ā = g -1 • g • ā = ā. Then: (g • (A * C))(ī) = ā∈[-m,m] d g • A g -1 • ī -ā ⊗ g • C(ā + m) = g -1 •ā ∈[-m,m] d g • A g -1 • ī -g -1 • ā ⊗ g • C g -1 • ā + m = g -1 •ā ∈[-m,m] d g • A g -1 • ī -g -1 • ā ⊗ g • C g -1 • ā + g -1 • m = g -1 •ā ∈[-m,m] d g • A g -1 • (ī -ā ) ⊗ g • C g -1 • (ā + m) = g -1 •ā ∈[-m,m] d (g • A)(ī -ā ) ⊗ (g • C)(ā + m) = ā ∈[-m,m] d (g • A)(ī -ā ) ⊗ (g • C)(ā + m) = ((g • A) * (g • C))(ī)\nFor the penultimate step, we note that\ng -1 • ā ∈ [-m, m] d compared to ā ∈ [-m, m] d is\njust a reordering of those indices in the sum. Thus we have our result for pixel ī, so it holds for all pixels.\nWith this lemma, the proof of Theorem 1 follows quickly.\nProof of Theorem 1. Let A ∈ A N,d,k,p be a geometric image and let C ∈ A M,d,k ,p be a convolution filter invariant to B d . It is well known that convolution is equivariant to translations, and we prove it again the appendix for our definition of convolution (32). Now suppose g ∈ B d . By Lemma 1 and the B d -invariance of C we have:\ng • (A * C) = (g • A) * (g • C) = (g • A) * C\nThus the convolution is equivariant to the generators of G N,d , so it is equivariant to the group.\nTheorem 1 provides the foundation for building our equivariant GeometricIma-geNet. Finding the set of B d -invariant k (p ) -tensor filters is straightforward using group averaging -see Section 5 for implementation details. See Figure 4 and Figure 5 for the invariant convolutional filters in d = 2 dimensions for filters of sidelength M = 3 and M = 5 respectively. We now show some important relationships between the invariant filters of different tensor orders and parities. Proposition 2. Let C ∈ A M,d,k ,p be a B d -invariant convolutional filter and let ∆ ∈ A M,d,2,+ be the geometric image with the Kronecker delta, δ, in every pixel.\nThen C ⊗ ∆ ∈ A M,d,k +2,p is a B d -invariant convolutional filter.\nProof. This proof follows quickly from the B d -invariance of the Kronecker delta, which holds because B d ⊂ O(d) (see Proposition 6 in the Appendix). With C and ∆ defined as above and pixel index ī, we have:\n(g • (C ⊗ ∆))(ī) = (g • C ⊗ g • ∆)(ī) = (g • C)(ī) ⊗ (g • ∆)(ī) = C(ī) ⊗ g • ∆(g -1 • ī) = C(ī) ⊗ g • δ = C(ī) ⊗ δ = C(ī) ⊗ ∆(ī) = (C ⊗ ∆)(ī) Proposition 3. Let C ∈ A M,d,k ,p , k ≥ d-1 be a B d -invariant convolutional filter and µ 1 , . . . , µ d-1 ∈ [k ] distinct. Then T LC (C, µ 1 , . . . , µ d-1 ) ∈ A M,d,k -d+2,-p is a B d -invariant filter of opposite parity of C.\nProof. Let C and µ 1 , . . . , µ d-1 be defined as above and let g ∈ B d . We can immediately see that T LC (C, µ 1 , . . . , µ d-1 ) is B d -invariant by the equivariance of the Levi-Civita contraction (43), so\ng • T LC (C, µ 1 , . . . , µ d-1 ) = T LC (g • C, µ 1 , . . . , µ d-1 ) = T LC (C, µ 1 , . . . , µ d-1 ) .\nThus we just have to verify that T LC (C, µ 1 , . . . , µ d-1 ) ∈ A M,d,k -d+2,-p . 
Since the outer product adds tensor orders and multiplies parities, at each pixel ī, C(ī) ⊗ ∈ T d,k +d,-p . Performing d -1 contractions reduces the tensor order by 2(d -1), so the resulting tensor order is k\n+ d -2(d -1) = k + d -2d + 2 = k -d + 2 as desired.\nThe consequence of Propositions 2 and 3 is a natural pairing between B dinvariant convolutional filters. See the caption of Figure 4 for further details. In practice, this allows us to dramatically reduce the number of filters we need to use in certain applications, as we will explore in Section 5.2." }, { "figure_ref": [], "heading": "Counting equivariant maps", "publication_ref": [ "b10", "b21", "b21", "b26", "b28" ], "table_ref": [ "tab_2" ], "text": "With an eye to understanding the expressive power of convolution-based functions, we show how to compute the dimension of the vector space of equivariant polynomial maps of given degree.\nSuppose a finite group G acts on a pair of real vector spaces V and W . Let F be the collection of all equivariant polynomial maps V → W , and let F ⊆ F be the homogeneous equivariant polynomials of degree . The set F forms a finitedimensional real vector space whose dimension is dependent on . Thus dim (F ) forms a nonnegative integer sequence indexed by = 0, 1, 2, . . . . The generating function of this sequence,\nH(F, t) := ≥0 dim (F ) t ,(21)\nis known as the Hilbert series of our set of functions. A variant [11,Remark 3.4.3] on a classical result known as Molien's theorem expresses this generating function as a finite sum of explicit rational functions:\nH(F, t) = 1 |G| g∈G tr M W g -1 det(I -M V (g) t) ,(22)\nwhere M V (g), M W (g -1 ) are matrices describing the actions of g, g -1 on V, W respectively. The trace in the numerator is also known as the character of W evaluated at g -1 .\nRemark. The set F is also known as the module of covariants and written (R[V ] ⊗ W ) G , or Mor G (V, W ), or Mor(V, W ) G . In this context, the word module refers to the fact that the set of equivariant polynomial maps is closed under multiplication by arbitrary G-invariant polynomial functions on V as well as closed under addition.\nCovariant is another word for equivariant map, coming from classical invariant theory. The right side of ( 22) is reasonable to compute in practice. To illustrate, we compute it for the group G N,2 of Definition 20, with V = W = A N,2,1,+ , the space of 2-dimensional geometric images whose pixels consist of vectors. We assume N is odd.\nWe first compute the character tr (M (g)) for g ∈ G N,2 . This can be done explicitly by writing down a basis for A N,2,1,+ and expressing the action of each element of G N,2 in terms of that basis. The computation is expedited by the choice of a basis in which the action of G N,2 is monomial, that is, for basis vector e i and any g ∈ G N,d , we have g • e i = α e j , where α ∈ R and e j is some basis vector which may be the same as e i . When this condition holds, only the basis eigenvectors contribute to the trace. The group B 2 acts monomially on the standard basis vectors for T 2,1,+ ∼ = R 2 , and it follows that G N,2 acts monomially on a basis for A N,2,1,+ consisting of maps [N ] d → T 2,1,+ mapping one pixel to one standard basis vector and all other pixels to zero. 
This situation generalizes in a straightforward fashion to higher dimensions d and higher order tensors.\nLet e 0 , e 1 be the standard basis for R 2 , and then for pixel index ī and q ∈ {0, 1}, let e q ī ∈ A N,2,1,+ be the geometric image where e q ī (ī) = e q and e q ī () = 0 for all other pixel indices  = ī. As stated above, G acts monomially on the basis of A N,2,1,+ consisting of these images e q ī . If g ∈ G N,2 , then e q ī is not an eigenvector for g unless g fixes the pixel ī, and even then, there is no contribution to the trace from pixel ī unless g acts with nonzero trace on the span e 0 ī , e 1 ī . In turn, if g does fix pixel ī, then its trace on span e 0 ī , e 1 ī is equal to the trace of the corresponding element g of B 2 under the canonical map G N,2 → B 2 on R 2 . This is zero unless g = ±I since in all other cases, g is either a π/2-rotation or a reflection. It follows that the only elements of G N,2 with nonzero trace on A N,2,1,+ are the identity (with trace 2N 2 = dim A N,2,1,+ ) and the π-rotations centered at each of the N 2 pixels (each with trace -2, coming from the fixed pixel ī, where e 0 ī and e 1 ī are both negated).\nThus the only nonzero terms in the sum (22) are those with g -1 as just described. Conveniently, g = g -1 in all those cases. We need to compute det(I -M (g) t) for such g. For g = I we have\ndet(I -M (g) t) = (1 -t) 2N 2 . (23\n)\nIf g is a π-rotation about the pixel ī, then e 0 ī , e 1 ī have their signs reversed, while all other pixels are transposed in pairs, say  ↔ ā, with the corresponding e q  sent to -e q ā and vice versa. Then the matrix I -M (g) t can be written block-diagonally with two 1 × 1 blocks of the form (1 + t) for the pixel ī that we are rotating about and N 2 -1 blocks of the form\n1 -t -t 1 . (24\n)\nfor the pixels that are being swapped. So we have\ndet(I -M (g) t) = (1 + t) 2 (1 -t 2 ) N 2 -1 . (25\n)\nPutting all of this together, (22) becomes\nH(F, t) = 1 8N 2 2N 2 (1 -t) 2N 2 + N 2 (-2) (1 + t) 2 (1 -t 2 ) N 2 -1 (26) = 1 4 1 (1 -t) 2N 2 - 1 (1 + t) 2 (1 -t 2 ) N 2 -1 .(27)\nExpanding (27) in a power series and extracting the coefficient of t , we find that the dimension of the space of G N,2 -equivariant maps A N,2,1,+ → A N,2,1,+ is 1 4\n  2N 2 + -1 + (-1) +1 /2 j=0 ( -2j + 1) N 2 + j -2 j   .(28)\nThis expression evaluated for = 1, 2, 3 is shown in Table For degrees 1 and 2, the green cells, we were able to confirm we found all the functions. For degree 3, the pink cells, we had insufficient computer memory to confirm. For N = 3, = 3 in particular, we were able to find 289 of the 290 functions by searching a subset of the candidate functions before memory limitations forced us to stop.\nWith the ability to explicitly count the number of G N,d -equivariant homogeneous polynomials on geometric images, we want to know whether the operations defined in Section 2 are sufficient to characterize all these functions. Let g i : A N,d,k,p → A N,d,ki,pi for i = 1, . . . , be a linear function on geometric images defined by the linear operations in sections 2.1 and 2.2, excluding pooling and unpooling. Let h : A N,d,k,p → A N,d,k ,p be a linear function defined by the same operations as the g i functions, where k = i=1 k i and p = i=1 p i . Let function f : A N,d,k,p → A N,d,k ,p be defined for all A ∈ A N,d,k,p :\nf (A) = h(g 1 (A) ⊗ . . . ⊗ g (A))(29)\nWhen = 1, we will only do f (A) = h(A) rather than f (A) = h(g 1 (A)). We conjecture that these steps will allow us to construct all equivariant maps of any degree. 
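Testing this conjecture requires the target dimensions, and equation (28) is cheap to evaluate numerically. A small sketch (plain Python, our function name, assuming N odd as above) is:

```python
from math import comb

def dim_equivariant_maps(N, degree):
    # Evaluates equation (28): the dimension of the space of G_{N,2}-equivariant
    # homogeneous degree-l polynomial maps from vector images to vector images.
    l = degree
    head = comb(2 * N**2 + l - 1, l)
    tail = sum((l - 2 * j + 1) * comb(N**2 + j - 2, j) for j in range(l // 2 + 1))
    return (head + (-1)**(l + 1) * tail) // 4

# For N = 3 the degree-3 count should be the 290 functions mentioned above.
print([dim_equivariant_maps(3, l) for l in (1, 2, 3)])
```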
To test this conjecture, we performed the following experiments to count the number of linear, quadratic, and cubic homogeneous polynomials from vector images to vector images. First we constructed all the B d -invariant k (p ) -tensor filters for k = 1, 2 and p = +1 and used those filters to construct all the homogeneous polynomials according to (29). We then generated a random vector image and applied all the functions to that image, and we want to know whether those output images are linearly independent. Thus we flattened all the resulting images into a giant matrix and performed a singular value decomposition; the number of non-zero singular values gives us the number of linearly independent functions. In the higher order polynomial cases we have to apply the various functions on multiple images to ensure separation. The results are given in Table 1." }, { "figure_ref": [ "fig_11", "fig_12" ], "heading": "GeometricImageNet Architectures", "publication_ref": [], "table_ref": [], "text": "Our GeometricImageNet model seeks to learn some function f : A N,d,k,p → A N,d,k ,p . The problem determines d, and therefore the groups B d and G N,d . After fixing these initial parameters, the modeler must decide the size, number, and type of layers that are described below. The first choice is the attributes of the convolution filters: size M , tensor order k , and parity p , all of which may be a single value or multiple values.\nA complete set of B d -invariant k (p ) -tensor filters can be found by group averaging. We first construct all group operators for the B d group by iterating the generators until the group is closed under all products of operators. The set of possible geometric filters is a vector space of dimension M d d k , so we can pick a basis of that many elements where each basis element C i has exactly one component of the tensor in a single pixel set to 1, and all other values are 0. Each of these basis elements is then group-averaged by applying all group operators and averaging the results:\nC i = 1 |B d | g∈B d g • C i ,(30)\nwhere |B d | is the number of group elements. The results of those group averages are unpacked into a matrix and the singular value decomposition is then run to find an \"eigen-set\" of orthogonal, non-zero filters. After the SVD, the filters can be normalized however seems appropriate. We normalized the filters such that the magnitudes of the non-zero filter values are as close to unity as possible, and the k = 1 filters are also reoriented such that non-zero divergences were set to be positive, and non-zero curls were set to be counter-clockwise. See Figure 4 and Figure 5 for the invariant convolutional filters in d = 2 dimensions for filters of sidelength M = 3 and M = 5 respectively. With the set of invariant filters in hand, we may build our equivariant neural networks using convolution layers, contraction layers, outer product layers, and nonlinear activation layers." }, { "figure_ref": [], "heading": "Architecture Components", "publication_ref": [ "b11", "b34", "b35", "b43" ], "table_ref": [], "text": "We think of each building block of our architecture as a layer whose input and output is a set of images grouped by tensor order and parity. The reason for grouping is two-fold: we can only add geometric images that share tensor order and parity, and we can more efficiently batch our operations in JAX when the shapes match. 
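Before describing the individual layers, here is an illustrative sketch of the group-averaging construction in equation (30), specialized for brevity to B_2-invariant scalar (k' = 0) filters of sidelength M = 3; the general procedure handles arbitrary d, tensor order, and parity, and all names below are ours.

```python
import numpy as np

def b2_group():
    # The 8 elements of B_2: powers of a 90-degree rotation, with and
    # without a reflection.
    r = np.array([[0, -1], [1, 0]])
    s = np.array([[1, 0], [0, -1]])
    rots = [np.linalg.matrix_power(r, k) for k in range(4)]
    return rots + [m @ s for m in rots]

def act_on_scalar_filter(g, C):
    # (g . C)(i) = C(g^{-1} . i) for scalar filters, on centered pixel indices.
    M = C.shape[0]
    m = (M - 1) // 2
    ginv = np.linalg.inv(g).round().astype(int)
    out = np.zeros_like(C)
    for i in range(M):
        for j in range(M):
            src = ginv @ (np.array([i, j]) - m) + m
            out[i, j] = C[src[0], src[1]]
    return out

def invariant_scalar_filters(M=3):
    group = b2_group()
    avgs = []
    for c in np.eye(M * M):                   # one-hot basis filters C_i
        C = c.reshape(M, M)
        avg = sum(act_on_scalar_filter(g, C) for g in group) / len(group)
        avgs.append(avg.ravel())
    # Right singular vectors with non-zero singular value span the
    # B_2-invariant filter space.
    _, s_vals, vt = np.linalg.svd(np.stack(avgs))
    return vt[s_vals > 1e-10].reshape(-1, M, M)
```

For M = 3 this yields a basis of the 3-dimensional space of invariant scalar filters, corresponding to the three B_2 orbits of pixels (center, edge centers, corners).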
The operation of each layer is either convolution, contraction, taking outer products, or applying a nonlinear activation function.\nA convolution layer takes a set of images and a set of convolution filters. For each filter, we take a weighted sum of the images of a particular tensor order and parity and apply the convolution1 on that sum with that filter. Unlike a traditional CNN where the filters are parameterized, our filters are fixed to enforce the equivariance, and the weights of the weighted sums are the learned parameters. A convolution layer can also have an optional dilation where the filters are dilated before convolving. Dilations are helpful for expanding the effective size of the filters without having to calculate the invariant filters for larger M , which grows quickly; see [12] for a description of dilated convolution. If we use filters with tensor order greater than 0, the tensor order of the images will continue to grow as we apply convolutions. Thus we need a way to reduce the tensor order -we do this with the contraction layer.\nGiven an input layer and a desired tensor order, the contraction layer performs all unique contractions (see Contraction Properties (35) (36)) to reduce the layer to that tensor order. We always end the neural network with a contraction layer to return the images to the proper tensor order. Since contractions can only reduce the tensor order by multiples of 2, the last convolution layer before the final contraction must result in images of order k +2n for any n ∈ N. We also may include contraction layers after each convolution to cap the tensor order of each layer to avoid running out of memory as the tensor order grows. In practice, k = 5 seems to be a good max tensor order.\nAn outer product layer takes a set of images and a degree and computes the full polynomial of all the images with each other using the outer product of geometric images, up to the specified degree. Typically, this will result in a combinatorial blowup of images; we can take parameterized sums along the way to reduce the number of images created. We could also do a smaller set of products if we have some special domain knowledge. However, in practice it is usually better to use nonlinear activation functions, as is standard in machine learning.\nThe final type of layer is a nonlinear activation layer. In order to maintain equivariance, we can either apply a nonlinearity to a scalar, or scale our tensors by a nonlinearity applied to the norm of the tensor [44]. For this paper, we used the first strategy. We apply all possible contractions to all even tensor order images to reduce them to scalars, then apply the nonlinearity. Any typical nonlinearity works -ReLU, leaky ReLu, sigmoid, etc. This layer will result in scalar images, which will then grow in order again as we apply more convolution layers." }, { "figure_ref": [], "heading": "Architecture Efficiency", "publication_ref": [], "table_ref": [], "text": "Without specialized knowledge of what B d -invariant convolution filters are relevant for our problem, we want to use all the filters at a specified tensor order in our convolution layers. Thus we can improve the efficiency of the GI-Net by eliminating any redundant filters. The first result follows from Proposition 2 and says that we may omit the k (p ) -tensor filters if we are using the (k + 2) (p ) -tensor filters followed by taking all contractions. 
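A minimal sketch of the parameterized convolution layer described above, in which the invariant filters are fixed and only the mixing weights are learned, might look as follows; the signature is illustrative, and `convolve` stands for any geometric convolution, such as the d = 2 sketch given after Definition 11.

```python
def conv_layer(images, fixed_filters, weights, convolve):
    # images: list of geometric images sharing tensor order and parity;
    # fixed_filters: B_d-invariant filters (never updated during training);
    # weights: array of shape (len(fixed_filters), len(images)), the only
    #   learned parameters of this layer;
    # convolve: a geometric convolution function.
    out = []
    for filt, w in zip(fixed_filters, weights):
        mixed = sum(w_i * img for w_i, img in zip(w, images))  # learned mix
        out.append(convolve(mixed, filt))                      # fixed filter
    return out
```

Only the mixing weights are trained; the filters stay fixed, which is what preserves the equivariance of the layer.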
Figure 6: One possible architecture for a GI-Net that maps a vector image to a vector image when using convolution filters with tensor order k ∈ {1, 2}, parity p = +1, max order k = 3, and ReLu nonlinearities. Each layer is a block of multiple images that share tensor order. The blue arrows represent convolutions and raise the tensor order by 1 or 2. Contractions are applied when the tensor order goes above 3 to bring it down to 2 or 3, and when contracting to k = 0 in order to apply the ReLu. This process continues until the final step where the only layer is order k = 1, which is then combined using a parameterized linear combination. See Appendix A for the proof. This proposition can be repeatedly applied so that if we conclude a GI-Net by taking all unique contractions, then we need only include filters of tensor order k and k -1 to include all smaller tensor orders as well. The next result says that if the input and output parities of our network are equal, we may omit the k (-) -tensor filters if we are using the (k + d) (+) -tensor filters followed by taking all contractions. Proposition 5. Let F be the set of functions that preserve parity f : A N,d,k,p → A N,d,k ,p where each f is a convolution with a negative-parity k (-) -tensor filter followed by a Levi-Civita contraction. Let G be the set of functions that preserve parity g : A N,d,k,p → A N,d,k ,p where each g is a convolution with a positive-parity (k + d) (+) -tensor followed by d -1 contractions. Then F ⊆ G.\nSee Appendix A for the proof. We will employ these results in our numerical experiments to dramatically reduce the number of filters required." }, { "figure_ref": [ "fig_15" ], "heading": "Numerical Experiments", "publication_ref": [], "table_ref": [], "text": "Code to reproduce these experiments and build your own GI-Net is available at https://github.com/WilsonGregory/GeometricConvolutions. The code is built in Python using JAX.\nThe most natural problems for this model are those that we expect to obey the symmetries of the group G N,d . We present two problems from physics that despite their simplicity, exhibit the powerful generalization properties of the equivariant model even in cases where we have few training points.\nFirst, suppose we have as input a scalar image of point masses, and we want to learn the gravitational vector field induced by these masses. For this problem, we will work in two dimensions with image sidelength of 16, and the point charges are placed only at pixel centers, so the GI-Net is learning a function f : A 16,2,0,+ → A 16,2,1,+ . To generate the data, we sampled the pixel locations uniformly without replacement 5 times to be point masses, and then we set their masses to be a uniform value between 0 and 1.\nFor a second problem, we consider several point charges in a viscous fluid. All the point charges have charge +1, so they repel each other. The position of the charges in the fluid would be described by an ordinary differential equation, and we can approximate that using Euler's method:\nx i (t + ∆t) = x i (t) + ∆t V (x i , t) ,(31)\nwhere x i is a point, ∆t is one time step, and V (x i , t) is the vector field at time t induced by all particles other than x i . We iterate this system some number of steps T , and the learning problem is the following: Given the initial charge field, can we predict the charge field after step T ? For this toy problem we will again use an image in two dimensions of sidelength 16, so the function we are trying to learn is f : A 16,2,1,+ → A 16,2,1,+ . 
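A rough sketch of this data-generation loop is given below. The inverse-square repulsion law, the time step, the number of charges, and all constants are our assumptions for illustration, not necessarily those used to build the actual data set.

```python
import jax
import jax.numpy as jnp

def pairwise_field(targets, sources, eps=1e-12):
    # Repulsive field at `targets` from unit positive charges at `sources`
    # (inverse-square law assumed here for illustration).
    diff = targets[:, None, :] - sources[None, :, :]     # (T, S, 2)
    r2 = jnp.sum(diff**2, axis=-1, keepdims=True)
    r2 = jnp.where(r2 < eps, 1.0, r2)                    # self terms contribute 0
    return jnp.sum(diff / r2**1.5, axis=1)               # (T, 2)

def euler_step(points, dt=0.01):
    # Equation (31): x_i(t + dt) = x_i(t) + dt * V(x_i, t).
    return points + dt * pairwise_field(points, points)

# Sample the charge field on the 16 x 16 pixel grid to form a vector image.
key = jax.random.PRNGKey(0)
points = 4.0 + 8.0 * jax.random.uniform(key, (3, 2))     # start in central 8 x 8
xs = jnp.arange(16) + 0.5
gx, gy = jnp.meshgrid(xs, xs, indexing='ij')
grid = jnp.stack([gx.ravel(), gy.ravel()], axis=-1)
vector_image = pairwise_field(grid, points).reshape(16, 16, 2)
```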
We took several precautions to make the data well behaved. Since the particles move freely in R 2 but we learn on a discrete vector field grid, the vectors act erratic when the charge passes very closely to the center of the pixel. Thus we applied a sigmoid transform on the charge field vectors on the input and output:\nQ( v, s) = 1 1 + e -v s - 1 2 v v ,\nwhere • is the usual Euclidean norm and s is a parameter that we set to 0.2. That is, the vector field is a nonlinear vector function of the original vector electric field. One advantage of learning on the charge field rather than the particles themselves is that vector field is discrete, but the vectors reflect the exact particle locations. However, if two particles start very close, it will appear that there is only a single particle on the charge vector field. To alleviate this problem, we iterated one step of Euler's method, and treated that as the input to the neural network. Additionally, we initialized points within the central 8 × 8 grid rather than the full 16 × 16 grid so that the charges would be unlikely to leave the bounds of the grid by step T . See Figure 7 for example inputs and outputs for the two problems." }, { "figure_ref": [], "heading": "Architectures", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "For the gravity problem, the architecture that we choose has 3 convolution layers followed by all contractions that reduce to k = 1, and then a parameterized linear combination of the resulting images. The second convolution layer uses dilations that range from 1 to 15, and the third convolution layer uses dilations from 1 to 7.\nFor our loss we use the root mean squared error (RMSE). The baseline architecture is built to have similar structure with 3 convolution layers with the same dilations, but also have a number parameters on the same order of magnitude. We treat the dilations as creating a separate channel. The sequence of channel depth is the following: to the problems respectively. The third column shows the model predicted output, and the fourth column shows the difference between the ground truth and the predicted output. To put the numerical results in context, the loss for the top row is 0.0177, while the loss for the bottom row is 2.061.\n1 → 2 → (15 * 2) → (7 * 2) → 2 (a) (b)\nTo get from the output of the 3rd convolution which is 2 filters across 7 dilations, we take a parameterized sum across the dilations to get an image with 2 channels, which is the number of channels we need for a vector image. See Table 2 The values of M, k , p represent the sidelength, tensor order, and parity of the convolutional filters. The baseline models do not have a parity because those filters are learned rather than fixed ahead of time.\nThe moving charges problem is more difficult, so we choose a more complex architecture. We have 9 convolution layers, with dilations of 1, 2, 4, 2, 1, 1, 2, 1, 1 in that order. Each convolution is followed by a nonlinear activation layer with a Leaky ReLu with negative slope of 0.01. This non-linearity seemed to perform best among the ones we tried. We then finish with the usual contraction layer and linear combination, and our loss is again the RMSE. For the baseline model, we use identical number of convolution layers, dilations, nonlinearities, and loss. The only difference is that we only use scalar filters, so we increase the depth of each layer to 20 except for the final output layer which must have a depth of 2 because we are learning a vector image. 
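For reference, the squashing transform Q from the data preparation above can be written per pixel as follows (a sketch; the eps guard against zero vectors is ours):

```python
import jax.numpy as jnp

def squash(v, s=0.2, eps=1e-12):
    # Q(v, s) = (1 / (1 + exp(-|v| / s)) - 1/2) * v / |v|, applied to the
    # vector in each pixel; s = 0.2 as in the experiments above.
    norm = jnp.linalg.norm(v, axis=-1, keepdims=True)
    scale = 1.0 / (1.0 + jnp.exp(-norm / s)) - 0.5
    return scale * v / (norm + eps)

# e.g. squash(vector_image), with vector_image from the charge-field sketch above.
```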
" }, { "figure_ref": [], "heading": "Training", "publication_ref": [], "table_ref": [], "text": "For all models, we trained the network using stochastic gradient descent with the Adam optimizer and an exponential learning rate decay that started at 0.005, has transition steps equal to the number of batches per epoch, and has a decay of 0.995. The one exception was the baseline model for the moving charges problem where we started with a learning rate of 0.001. These values were found with a limited grid search. For both problems and both models we initialized the parameters as Gaussian noise with mean 0 and standard deviation 0.1.\nFor the gravitational field problem, we created a test set of 10 images, a validation set of 5 images, and training sets ranging in size from 5 to 50 images. For the moving charges problem we created a test set of 10 images, a validation set of 10 images, and training sets ranging in size from 5 to 100 images. We used training batch sizes equal to 0.2 the training set size. For all models we trained them until the error on the validation set had not improved in 20 epochs." }, { "figure_ref": [ "fig_15" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Given sufficient data, both the GI-Net and the baseline model are able to perform well on the test data set. The RMSE carries very little information without further context, but we can see from the examples in Figure 7 that low error corresponds to only small differences between the ground truth output and the predicted output. By comparing the GI-Net to our simple CNN baseline we can see the advantages of the equivariant model.\nIn Figure 8(a), with only 10 data points, the test error of the GI-Net is almost equal to the training error, suggesting that the model is able to generalize well from just a few small examples. By contrast, the baseline model requires at least 50 training points to get its test error as close to its training error. Additionally, even when the baseline model has enough points, its error is still higher overall compared to the GI-Net.\nLikewise, in Figure 8(b) the gap between the test error and training error for the baseline model is much larger than the same gap for the GI-Net. In this case, the GI-Net and the baseline model reach the same test error when training off 100 points, despite the baseline model having smaller training error. This again suggests that the GI-Net does a better job generalizing, especially when the training data set is small." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b15", "b14", "b46", "b48", "b52", "b12", "b2", "b30", "b6", "b47", "b3", "b49", "b35", "b7", "b6", "b5", "b9", "b16", "b1", "b43", "b44", "b51", "b22", "b19" ], "table_ref": [], "text": "Restricting machine learning models to use functions that are equivariant to some group action is a powerful idea that has the potential to improve generalization and reduce training requirements. When we expect our target function to be equivariant to that group, this restriction improves the model's generalization and accuracy (see for instance [16,15,47]) and is a powerful remedy for data scarcity (see [49]). Equivariant networks, in certain cases, can approximate any continuous equivariant function (see [53,13,3,31]).\nThere is a wide variety of strategies to design equivariant maps (e.g. [7,48]). One class of methods, employed in this work, is group convolution, either on the group or on the homogeneous space where the features lie. 
Our generalization of convolution replaces arbitrary products with the outer product of tensors. Our approach is related to [4], which employs a Clifford Algebra.\nOther strategies to design equivariant maps use irreducible representations or invariant theory to parameterize the space of equivariant functions. The first work using representation theory for invariant and equivariant neural networks was the paper of Wood et al. [50] from 1996. More recent incarnations of these ideas include [36,8,7,6,10,17]. One can use classical invariant theory to compute the generators of the algebra of invariant polynomials. For instance, in [2] we show how to use the generators of the algebra of invariant polynomials to produce a parameterization of equivariant functions for certain groups and actions. This approach is inspired by the physical sciences, where the data is subject to rules coming from coordinate freedoms and conservation laws. In previous work [44,45,52] we used these geometric rules to develop modified machine-learning methods that are restricted to exactly obey group-theoretic equivariances in physics contexts. Similar ideas have been explored in [23,20].\nSee appendix B for a more in depth description of the mathematical details of the related work." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "This paper presents a flexible new model, the GeometricImageNet, which parameterizes functions that map geometric images to geometric images. It capitalizes on the vector or tensor structure of the image contents. The flexibility, coupled with the easy restriction to G N,d -equivariant functions, makes the model ideal for tackling many problems from the natural sciences in a principled way.\nThe added flexibility of tensors comes with a cost. Taking repeated convolutions with tensor filters grows the the tensor order and thus, with naive representations, the memory requirements of the network. We see the consequences of this issue when trying to numerically demonstrate that we have characterized all the equivariant homogeneous polynomials (Section 4). We also encounter this issue when solving problems in Section 6; fortunately, it appears that capping the maximum tensor order limits the memory requirements without much associated loss in performance.\nAnother shortcoming of this work is that we work with discrete symmetries rather than continuous symmetries. We expect invariance and equivariance with respect to rotations other than 90 degrees to appear in nature, but the images that we work with are always going to be d-cube lattices of points. Thus we use the group G N,d to avoid interpolating rotated images and working with approximate equivariances. This simplifies the mathematical results, and we see empirically that we still have the benefits of rotational equivariance. However, there are other possible image representations that might create more continuous concepts of images. For example, if the data is on the surface of a sphere, it could be represented with tensor spherical harmonics, and be subject to transformations by a continuous rotation group.\nThere are many possible future directions that could be explored.\nIt is an open question to understand the full expressive power of the model outside the few cases we were able to test in Section 4. Additionally, there are likely improvements to be made to both the time and space efficiency of the GeometricImageNet. 
Finally, there are many exciting applications in fluid dynamics, astronomy, climate science, biology, and others that could benefit from this physics-informed machine learning approach." }, { "figure_ref": [], "heading": "A Propositions and Proofs", "publication_ref": [ "b0" ], "table_ref": [], "text": "This section provides propositions and their proofs that are necessary for some of the results earlier in the paper. In many of these proofs we will show that the property holds for some arbitrary pixel index ī, so it must hold for all pixels. First we state two well known results from tensor analysis. Proposition 6. The Kronecker delta, Definition 4, is invariant to the group B d .\nProof. Let g ∈ B d with matrix representation M (g). The Kronecker delta δ is a 2 (+) -tensor so the action of g is by conjugation. Thus,\ng • δ = M (g)δM (g) T = M (g)M (g) T = δ\nsince the Kronecker delta is also the d × d identity matrix and the matrix representations of B d are orthogonal.\nProposition 7. The Levi-Civita symbol, Definition 5, is invariant to the group B d .\nProof. The Levi-Civita tensor ∈ (R d ) ⊗d is defined so as to satisfy the identity σ = sgn(σ) , where σ ∈ S d is a permutation of the indices (cf. Definition 7). Thus it is an alternating tensor of order d. It is well-known (e.g., [21, p. 160] or [35, p. 13]) that the subspace of these is (one-dimensional and) stable under the action of GL(R d ) on (R d ) ⊗d given by linear extension of M\n•(u 1 ⊗• • •⊗u d ) := (M u 1 )⊗• • •⊗(M u d ), where M ∈ GL(R d\n) is an arbitrary invertible matrix, and that M acts on this subspace by multiplication by det(M ). Thus\nM • = det(M ) ,\nwhere the action on the left is the one just described. Using instead the action under consideration throughout this paper, i.e., the action of O(d) ⊂ GL(R d ) defined by equation (1) and Definition 1, we get an additional determinant factor on the right side because is a d (-) -tensor. In other words, g • = det(M (g)) 2 . Since det(M (g)) 2 = 1 for all g ∈ O(d), we conclude that g • = .\nNext we state several properties of geometric convolution that are well known for the general mathematical definition of convolution. A,B ∈ A N,d,k,p ,C,S ∈ A M,d,k ,p , τ ∈ (Z/N Z) d and α, β ∈ R, then the following properties hold:" }, { "figure_ref": [], "heading": "Properties (Convolution). Given", "publication_ref": [ "b31", "b32", "b33", "b35", "b36", "b37", "b37", "b38", "b38", "b40", "b41", "b38", "b39" ], "table_ref": [], "text": "1. The convolution operation is translation equivariant:\n(L τ A) * C = L τ (A * C)(32)\n2. The convolution operation is linear in the geoemetric image:\n(αA + βB) * C = α(A * C) + β(B * C)(33)\nIt is also linear in the filters:\nA * (αC + βS) = α(A * C) + β(A * S)(34)\nProof. First we will prove (32). Let A, C, and τ be as above and let ī be a pixel index of L τ A * C. Then:\n(L τ A * C)(ī) = ā∈[-m,m] d (L τ A)(ī -ā) ⊗ C(ā + m) = ā∈[-m,m] d A(ī -ā -τ ) ⊗ C(ā + m) = ā∈[-m,m] d A((ī -τ ) -ā) ⊗ C(ā + m) = (A * C)(ī -τ ) = L τ (A * C)(ī)\nNow we will prove (33). Let A, B, C, α, and β be as above and let ī be a pixel index of (αA + βB) * C. Then:\n((αA + βB) * C)(ī) = ā∈[-m,m] d (αA + βB)(ī -ā) ⊗ C(ā + m) = ā∈[-m,m] d (αA(ī -ā) + βB(ī -ā)) ⊗ C(ā + m) = ā∈[-m,m] d αA(ī -ā) ⊗ C(ā + m) + βB(ī -ā) ⊗ C(ā + m) = α ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) + β ā∈[-m,m] d B(ī -ā) ⊗ C(ā + m) = α(A * C)(ī) + β(B * C)(ī)\nNow we will prove (34). Let A, C, S, α, and β be as above and let ī be a pixel index. 
Then:\n(A * (αC + βS))(ī) = ā∈[-m,m] d A(ī -ā) ⊗ (αC + βS)(ā + m) = ā∈[-m,m] d A(ī -ā) ⊗ αC(ā + m) + A(ī -ā) ⊗ βS(ā + m) = α ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) + β ā∈[-m,m] d A(ī -ā) ⊗ S(ā + m) = α(A * C)(ī) + β(A * S)(ī)\nThus we have our result. Now we will state several properties of the contraction.\nProperties (Contraction). Given A, B ∈ A N,d,k,p , S ∈ A N,d,k ,p , C ∈ C M,d,k ,p , g ∈ G N,d , µ, ν, ρ, σ ∈ [k]\nall distinct, and α, β ∈ R, then the following properties hold:\n1. Contraction indices can be swapped:\nT (A, µ, ν) = T (A, ν, µ)(35)\n2. Contractions on distinct indices commute:\nT M (A, (µ, ν), (ρ, σ)) = T M (A, (ρ, σ), (µ, ν))(36)\n3. Contractions are equivariant to G N,d :\ng • T (A, µ, ν) = T (g • A, µ, ν)(37)\n4. Contractions are linear functions:\nT (αA + βB, µ, ν) = α T (A, µ, ν) + β T (B, µ, ν)(38)\n5. Contractions commute with the outer product. If µ, ν ∈ [k], µ = ν then:\nT (A ⊗ S, µ, ν) = T (A, µ, ν) ⊗ S(39)\nOtherwise, if µ, ν ∈ {k + 1, . . . , k + k }, µ = ν then:\nT (A ⊗ S, µ, ν) = A ⊗ T (S, µ -k, ν -k)(40)\n6. Contractions commute with convolutions. If µ, ν ∈ [k] distinct then:\nT (A * C, µ, ν) = T (A, µ, ν) * C(41)\nOtherwise, if µ, ν ∈ {k + 1, . . . , k + k } distinct then:\nT (A * C, µ, ν) = A * T (C, µ -k, ν -k)(42)\nProof. Equation ( 35) follows directly from the definition, 6. Now we will prove (36). Let A, µ, ν, ρ, and σ be defined as above, let ī be a pixel index of A, and let a ∈ T d,k,p be the tensor at that index. Since contractions are applied the same to all pixels, it suffices to show that this proposition is true for this pixel. The result then follows quickly from the definition:\n[T M (a, (µ, ν), (ρ, σ))] {i1,...,i k }\\{iµ,iν ,iρ,iσ} = [δ] iρ,iσ [δ] iµ,iν [a] i1,...,i k = [δ] iµ,iν [δ] iρ,iσ [a] i1,...,i k = [δ] iµ,iν [δ] iρ,iσ [a] i1,...,i k = [T M (a, (ρ, σ), (µ, ν))] {i1,...,i k }\\{iµ,iν ,iρ,iσ}\nNext we will prove (37). Let A, µ, and ν be defined as above and let ī be a pixel of A. First we will show that contractions are equivariant to translations. Let τ ∈ (Z/N Z) d . Then\nT (L τ A, µ, ν)(ī) = T ((L τ A)(ī), µ, ν) = T (A(ī -τ ), µ, ν) = T (A, µ, ν)(ī -τ ) = (L τ T (A, µ, ν))(ī)\nThus contractions are equivariant to translations. Now we will show that contractions are equivariant to B d . Let g ∈ B d , and denote A(g -1 ī) = a. Then by equation ( 4) we have:\nT (g • A, µ, ν)(ī) = T ((g • A)(ī), µ, ν) = T g • A(g -1 • ī), µ, ν = T (g • a, µ, ν) = [δ] iµ,iν [g • a] i1,...,iµ,...,iν ,...i k = [δ] iµ,iν [a] j1,...,j k q∈[k] [M (g)] iq,jq = [δ] iµ,iν [a] j1,...,j k [M (g)] iµ,jµ [M (g)] iν ,jν q∈[k]\\{µ,ν} [M (g)] iq,jq = [δ] iµ,iν [M (g)] iµ,jµ [M (g)] iν ,jν [a] j1,...,j k q∈[k]\\{µ,ν} [M (g)] iq,jq = [δ] jµ,jν [a] j1,...,j k q∈[k]\\{µ,ν} [M (g)] iq,jq\nNote that [δ] iµ,iν [M (g)] iµ,jµ [M (g)] iν ,jν is the action of g on δ. Thus it equals [δ] jµjν by Proposition 6, as shown in the last step. Hence:\nT (g • A, µ, ν)(ī) = [T (a, µ, ν)] {j1,...,j k }\\{jµ,jν } q∈[k]\\{µ,ν} [M (g)] iq,jq = g • T (A(g -1 • ī), µ, ν) = g • T (A, µ, ν)(g -1 • ī) = (g • T (A, µ, ν))(ī)\nTherefore, since contractions are equivariant to the generators of G N,d , it is equivariant to the group.\nNext we will prove (38). Let A, B, µ, and ν be defined as above, let ī be a pixel index of (αA + βB), and let a, b ∈ T d,k,p be the tensors of A and B at that pixel index. 
Then:\nT (αA + βB, µ, ν)(ī) {i1,...,i k }\\iµ,iν = T (αA(ī) + βB(ī), µ, ν) {i1,...,i k }\\iµ,iν = T (αa + βb, µ, ν) {i1,...,i k }\\iµ,iν = δ iµ,iν (αa + βb) i1,...,i k = δ iµ,iν (αa) i1,...,i k + δ iµ,iν (βb) i1,...,i k = α δ iµ,iν a i1,...,i k + β δ iµ,iν b i1,...,i k = α • T (a, µ, ν) {i1,...,i k }\\iµ,iν + β • T (b, µ, ν) {i1,...,i k }\\iµ,iν = α • T (A, µ, ν)(ī) {i1,...,i k }\\iµ,iν + β • T (B, µ, ν)(ī) {i1,...,i k }\\iµ,iν\nThus we have shown (38). Now we will prove (39). Let A, S be as described, let µ, ν ∈ [k] distinct, let ī be a pixel index of A and S, and let A(ī) = a and S(ī) = s be the tensors at that pixel index. Then:\n[T (A ⊗ S, µ, ν)(ī)] i1,...,i k+k \\{iµ,iν } = [T (A(ī) ⊗ S(ī), µ, ν)] i1,...,i k+k \\{iµ,iν } = [T (a ⊗ s, µ, ν)] i1,...,i k+k \\{iµ,iν } = [δ] iµ,iν [a ⊗ s] i1,...,i k+k = [δ] iµ,iν [a] i1,...i k [s] i k+1 ,...,i k+k = [T (a, µ, ν)] i1,...,i k \\{iµ,iν } [s] i k+1 ,...,i k+k = [T (a, µ, ν) ⊗ s] i1,...,i k+k \\{iµ,iν } = [(T (A, µ, ν) ⊗ S)(ī)] i1,...,i k+k \\{iµ,iν }\nThis gives us (39). Now suppose instead, µ, ν ∈ {k + 1, . . . , k + k }. Skipping a few of the initial steps that are the same as above, we have:\n[T (A ⊗ S, µ, ν)(ī)] i1,...,i k+k \\{iµ,iν } = [δ] iµ,iν [a] i1,...i k [s] i k+1 ,...,i k+k = [a] i1,...i k [δ] iµ,iν [s] i k+1 ,...,i k+k = [a] i1,...i k [T (s, µ -k, ν -k)] i k+1 ,...,i k+k \\{iµ,iν } = [a ⊗ T (s, µ -k, ν -k)] i1,...,i k+k \\{iµ,iν } = [(A ⊗ T (S, µ -k, ν -k))(ī)] i1,...,i k+k \\{iµ,iν }\nThus we have our result.\nThe properties (41) and (42) follow from (39) and (40). Let A, C be as described above, let µ, ν ∈ [k] distinct, and let ī be a pixel index of A * C. Then by the previous results and the linearity of contraction we have:\nT (A * C, µ, ν)(ī) = T ((A * C)(ī), µ, ν) = T   ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m), µ, ν   = ā∈[-m,m] d T (A(ī -ā) ⊗ C(ā + m), µ, ν) = ā∈[-m,m] d T (A(ī -ā), µ, ν) ⊗ C(ā + m) = T (A, µ, ν) * C Likewise, if µ, ν ∈ {k + 1, . . . , k + k } distinct then: T (A * C, µ, ν)(ī) = ā∈[-m,m] d T (A(ī -ā) ⊗ C(ā + m), µ, ν) = ā∈[-m,m] d A(ī -ā) ⊗ T (C(ā + m), µ -k, ν -k) = A * T (C, µ -k, ν -k)\nThus we have our result.\nNext we will state one property of the Levi-Civita contraction, but many properties of regular contractions follow for Levi-Civita Contractions. " }, { "figure_ref": [], "heading": "Properties (Levi-Civita Contraction). Let", "publication_ref": [ "b36", "b37", "b33" ], "table_ref": [], "text": "Proof. W will prove (43) using the equivariance of the contraction (37). Let A and µ 1 , . . . , µ be as described and let ī be a pixel index. Then\n(g • T LC (A, µ 1 , . . . , µ d-1 ))(ī) = g • T LC (A, µ 1 , . . . , µ d-1 )(g -1 • ī) = g • T LC (A(g -1 • ī), µ 1 , . . . , µ d-1 ) = g • T M A g -1 • ī ⊗ , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1) = T M g • A g -1 • ī ⊗ , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1) = T M g • A g -1 • ī ⊗ g • , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1) = T M ((g • A)(ī) ⊗ , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1)) = T LC (g • A, µ 1 , . . . , µ d-1 )(ī)\nThis proof relies on the fact that g • = given by Proposition 7. Thus we have our result.\nBefore proving Proposition 1, we must prove a short lemma about performing a convolution with a filter of size N + 1 on an image of size N . Lemma 2 in both the even and odd case. The reason this is an inequality rather than an equality is because it is possible that two linearly independent convolution filters result in identical functions g. 
We will now show that this is not possible.\nSuppose C 1 , C 2 ∈ A M,d,2k+k ,+ are linearly independent, so for α, β ∈ R we have that αC 1 + βC 2 = 0 if and only if α = β = 0. Now let g 1 , g 2 ∈ G be defined with convolution filters C 1 and C 2 respectively. Thus it suffices to show that αg 1 + βg 2 is equal to the function that sends all inputs to the 0 vector, in this case the zero image, if and only if α = β = 0. Let A ∈ A N,d,k,p and by the linearity of contraction (38) and convolution (34) we have:\nαg 1 (A) + βg 2 (A) = αT M (A * C 1 , (1, k + 1), . . . , (k, 2k)) + αT M (A * C 2 , (1, k + 1), . . . , (k, 2k)) = T M (α(A * C 1 ) + β(A * C 2 ), (1, k + 1), . . . , (k, 2k)) = T M (A * (αC 1 + βC 2 ), (1, k + 1), . . . , (k, 2k)) If α = β = 0 then clearly αg 1 (A) + βg 2 (A) is the zero geometric image for all A.\nNow suppose that α and β are not both equal to 0. Then by our linear independence assumption, αC 1 + βC 2 is not equal to the all zeros filter. Thus there must be at least one component of one pixel that is nonzero. Suppose this is at pixel index b + m and (αC 1 + βC 2 )( b + m) = c. Suppose the nonzero component is at index j 1 , . . . , j 2k+k . Let a ∈ T d,k,p where [a] i1,...,i k is nonzero and all other indices are 0. Now suppose A ∈ A N,d,k,p such that for pixel index ī of A, A(ī -b) = a and all other pixels are the zero tensor. Then:\n(αg 1 (A) + βg 2 (A))(ī) = T M (A * (αC 1 + βC 2 ), (1, k + 1), . . . , (k, 2k))(ī) = T M ((A * (αC 1 + βC 2 ))(ī), (1, k + 1), . . . , (k, 2k)) = T M   ā∈[-m,m] d A(ī -ā) ⊗ (αC 1 + βC 2 )(ā + m), (1, k + 1), . . . , (k, 2k)   = T M A(ī -b) ⊗ (αC 1 + βC 2 )( b + m), (1, k + 1), . . . , (k, 2k) = T M (a ⊗ c, (1, k + 1), . . . , (k, 2k))\nNote that the penultimate step removing the sum is because A(ī -ā) = 0 the zero tensor everywhere other than A(ī -b). Since the only nonzero entry of a is at index i 1 , . . . , i k , then at index j k+1 , . . . , j 2k+k of the resulting tensor we have:\n[(αg 1 (A) + βg 2 (A))(ī)] j k+1 ,...,j 2k+k = [a] i1...i k [c] j1,...,j 2k+k\nSince [a] i1,...,i k is nonzero and [c] j1,...,j 2k+k is nonzero, this index is nonzero. Thus the function is not identically 0, so g 1 and g 2 are linearly independent. Therefore, dim(G) = N d d 2k+k and since G ⊆ F we have F = G." }, { "figure_ref": [], "heading": "Proof of Proposition 4", "publication_ref": [], "table_ref": [], "text": "Proposition. Let F be the set of functions f : A N,d,k,p → A N,d,k ,p where each f is a convolution with a B d -invariant k (p ) -tensor filter. Let G be the set of functions g : A N,d,k,p → A N,d,k ,p where each g is a convolution with a B d -invariant (k + 2) (p ) -tensor filter followed by a contraction. Then F ⊆ G.\nProof. Let F and G be defined as above, let f ∈ F with its associated filter C ∈ A M,d,k ,p , and let A ∈ A N,d,k,p . Then by Proposition 2, the filter\nC = 1 d C ⊗ ∆ ∈ A M,d,k +2,p is B d -invariant.\nThen by Propositions 40 and 42,\nf = A * C = A * C ⊗ 1 d d = A * C ⊗ 1 d T (∆, 1, 2) = A * T 1 d C ⊗ ∆, k + 1, k + 2 = A * T (C , k + 1, k + 2) = T (A * C , k + k + 1, k + k + 2) ∈ G Thus f ∈ G, so F ⊆ G." }, { "figure_ref": [], "heading": "Proof of Proposition 5", "publication_ref": [], "table_ref": [], "text": "Proposition. Let F be the set of functions that preserve parity f : A N,d,k,p → A N,d,k ,p where each f is a convolution with a negative-parity k (-) -tensor filter followed by a Levi-Civita contraction. 
Let G be the set of functions that preserve parity g : A N,d,k,p → A N,d,k ,p where each g is a convolution with a positive-parity (k + d) (+) -tensor followed by d -1 contractions. Then F ⊆ G.\nProof. Let F and G be as described. \nT LC A * C, µ 1 , . . . , µ d-1 (ī) = T LC A * C (ī), µ 1 , . . . , µ d-1 = T M A * C (ī) ⊗ , (µ 1 , k + k + 1), . . . , (µ d-1 , k + k + d -1) = T M     ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m)   ⊗ , (µ 1 , k + k + 1), . . .   = T M   ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) ⊗ , (µ 1 , k + k + 1), . . .   = T M   ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) ⊗ E(ā + m), (µ 1 , k + k + 1), . . .   = T M   ā∈[-m,m] d A(ī -ā) ⊗ C ⊗ E (ā + m), (µ 1 , k + k + 1), . . .   = T M A * C ⊗ E , (µ 1 , k + k + 1), . . . , (µ d-1 , k + k + d -1) (ī)\nNote that in some of the middle steps we omit some of the multicontraction indices for readability. Now we note that C ⊗E has tensor order k +d and parity -1 * -1 = +1. Finally, we can show that C ⊗ E is B d -invariant. For pixel index ī and g ∈ B d :\n(g • E)(ī) = g • E(g -1 • ī) = g • = = E(ī)\nwhich follows from the B d invariance of the Levi-Civita symbol, Propostion 7. Thus\ng • C ⊗ E = g • C ⊗ g • E = C ⊗ E. Thus C ⊗ E ∈ A M,d,k +d,+ is B d -invariant so f ∈ G." }, { "figure_ref": [], "heading": "B Mathematical details of related work", "publication_ref": [], "table_ref": [], "text": "The most common method to design equivariant maps is via group convolution, on the group or on the homogeneous space where the features lie. Regular convolution of a vector field f : (Z/N Z) d → R c and a filter φ : (Z/N Z) d → R c is defined as \n(f * φ)(x) =\nOur generalization of convolution replaces this scalar product of vectors by the outer product of tensors." }, { "figure_ref": [], "heading": "B.1 Clifford Convolution", "publication_ref": [ "b3", "b43" ], "table_ref": [], "text": "Probably the most related work is by Brandstetter et al. [4], which replaces the scalar product in (44) by the geometric product of multivector inputs and multivector filters of a Clifford Algebra. It considers multivector fields, i.e.: vector fields f : Z 2 → (Cl p,q (R)) c . The real Clifford Algebra Cl p,q (R) is an associative algebra generated by p + q = d orthonormal basis elements: e 1 , . . . , e p+q ∈ R d with the relations:\ne i ⊗ e i = +1 (i ≤ p),(45)\ne j ⊗ e j = -1 (p < j ≤ n),\ne i ⊗ e j = -e j ⊗ e i (i = j).\nFor instance, Cl 2,0 (R) has the basis {1, e 1 , e 2 , e 1 ⊗ e 2 } and is isomorphic to the quaternions H. The Clifford convolution replaces the elementwise product of scalars of the usual convolution of (44) by the geometric product of multivectors in the Clifford Algebra:\nf * φ(x) = y∈(Z/N Z) d c j=1 f j (y) ⊗ φ j (y -x) ∈Clp,q(R) ,(48)\nwhere f : Z 2 → (Cl p,q (R)) c and φ : Z 2 → (Cl p,q (R)) c\nThe Clifford Algebra Cl p,q (R) is a quotient of the tensor algebra\nT (R d ) = k≥0 R d ⊗ . . . ⊗ R d k times = k≥0 (R d ) ⊗k ,(49)\nby the two-side ideal {v ⊗ v -Q(v) : v ∈ R d } , where the quadratic form Q is defined by Q(e i ) = +1,if i ≤ p, and Q(e j ) = -1, else p < j ≤ n. Our geometric images are functions A : (Z/N Z) d → T d,k,p , where T d,k,p = (R d ) ⊗k ⊂ T (R d ). They can be related with the Clifford framework by seeing them as N -periodic functions from Z d whose image is projected via the quotient map on the Clifford Algebra. This projection can be seen as a contraction of tensors. The Clifford convolution is not equivariant under multivector rotations or reflections. 
But the authors derive a constraint on the filters for d = 2 which allows to build generalized Clifford convolutions which are equivariant with respect to rotations or reflections of the multivectors. That is, they prove equivariance of a Clifford layer under orthogonal transformations if the filters satisfies the constraint: φ i (Rx) = Rφ i (x)." }, { "figure_ref": [], "heading": "B.2 Unified Fourier Framework", "publication_ref": [ "b50", "b29", "b17", "b4" ], "table_ref": [], "text": "Part of our work can be studied under the unified framework for group equivariant networks on homogeneous spaces derived from a Fourier perspective proposed in [51]. The idea is to consider general tensor-valued feature fields, before and after a convolution. Their fields are functions f : G/H → V over the homogeneous space G/H taking values in the vector space V and their filters are kernels κ : G → Hom(V, V ). Essentially, their convolution replaces the scalar product of vectors of traditional convolution by appliying an homomorphism. In particular, if G is a finite group and H = {0}, they define convolution as\nκ * f (x) = 1 |G| y∈G κ(x -1 y) f (y) ∈V .(50)\n[51] gives a complete characterization of the space of kernels for equivariant convolutions. In our framework, the group is Z/N Z and the kernel is an outer product by a filter C: κ(g)A(g) = A(g) ⊗ C(g). Note that Z/N Z is neither a homogeneous space of O(d) nor of B d .\nWe can analyze our problem from a spectral perspective, in particular we can describe all linear equivariant using representation theory, using similar tools as in the proof of Theorem 1 in [30]. This theorem states that convolutional structure is a sufficient and a necessary condition for equivariance to the action of a compact group. Some useful references about group representation theory are [18], a classical book about the theory of abstract harmonic analysis and [5], about the particular applications of it. " }, { "figure_ref": [], "heading": "B.3 Linear equivariant maps", "publication_ref": [ "b7", "b17" ], "table_ref": [], "text": "That is, there is a basis of the Hilbert space T d,k,p in which the action of G is defined via a linear sparse map. In the case of G finite, for all g ∈ G there is a matrix P splitting the representation in the Hilbert space into its irreducible components\nP -1 Φ d,k,p (g) P = π∈ Ĝ m d,k,p (π) π(g)(53)\nConsider now linear maps between Tensor images: The power of representation theory is not limited to compact groups. Mackey machinery allow us to study for instance semidirect products of compact groups and other groups, and in general to relate the representations of a normal subgroup with the ones of the whole group. This is the spirit of [8], which makes extensive use of the induced representation theory. An introduction to this topic can be found in Chapter 7 in [18].\nC : T d,k,p → T d ,k ,p(54)" }, { "figure_ref": [], "heading": "B.4 Steerable CNNs", "publication_ref": [ "b7" ], "table_ref": [], "text": "The work in [8] deals exclusively with signals f : Z 2 → R k . They consider the action of G = p4m on Z 2 by translations, rotations by 90 degrees around any point, and reflections. This group is a semidirect product of Z 2 and B 2 , so every x ∈ p4m can be written as x = t r, for t ∈ Z 2 and r ∈ B 2 . They show that equivariant maps with respect to representations ρ and ρ of rotations and reflections B 2 lead to equivariant maps with respect to certain representations of G, π and π . 
This means that if we find a linear map φ : f → φ f such that φ ρ(h) f = ρ (h) φ f for all h ∈ B 2 , then for the representation of G π defined by\nπ (t r) f (y) = ρ(r) [f ((t r) -1 y)], t r ∈ G, y ∈ Z 2 , (56\n)\nwe automatically have that φ π(g) f = π (g) φ f for all g ∈ G. This is the representation of G induced by the representation ρ of B 2 Note the similarity between the definition of the action of B d on tensor images 18 and equation (56). The convolution with a symmetric filter produces easily an equivariant map with respect to the action of the semidirect product of Z d and B d on the tensor images." }, { "figure_ref": [], "heading": "B.5 Approximate symmetries", "publication_ref": [ "b47" ], "table_ref": [], "text": "The recent work [48] studies approximately equivariant networks which are biased towards preserving symmetry but are not strictly constrained to do so. They define a relaxed group convolution which is approximately equivariant in the sense that\nρ X (g) f * G Ψ(x) -f * G Ψ(ρ Y (y) x < .(57)\nThey use a classical convolution but with different kernels for different group elements." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b41", "b23", "b27", "b37" ], "table_ref": [], "text": "Acknowlegements: It is a pleasure to thank Roger Blandford (Stanford), Drummond Fielding (Flatiron), Leslie Greengard (Flatiron), Ningyuan (Teresa) Huang (JHU), Kate Storey-Fisher (NYU), and the Astronomical Data Group at the Flatiron Institute for valuable discussions and input. This project made use of Python 3 [42], numpy [24], matplotlib [28], and cmastro [38]. All the code used for making the data and figures in this paper is available at https://github.com/WilsonGregory/ GeometricConvolutions. Funding: WG was supported by an Amazon AI2AI Faculty Research Award. BBS was supported by ONR N00014-22-1-2126. MTA was supported by H2020-MSCA-RISE-2017, Project 777822, and from Grant PID2019-105599GB-I00, Ministerio de Ciencia, Innovación y Universidades, Spain. SV was partly supported by the NSF-Simons Research Collaboration on the Mathematical and Scientific Foundations of Deep Learning (MoDL) (NSF DMS 2031985), NSF CISE 2212457, ONR N00014-22-1-2126 and an Amazon AI2AI Faculty Research Award." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b10" ], "table_ref": [], "text": "Proof. Let A and C be defined as above. Thus\nConsider the convolution definition (11) where we have A(ī-ā) where ī ∈ [0, N -1] d and ā ∈ [-m, m] d . Since A is on the d-torus, then whenever the th index of ā = -m we have:\nThus, any time there is an index ā with a value ±m, we have an equivalence class under the torus with all other indices with flipped sign of the m in any combination. If {ā} is this equivalence class, we may group these terms in the convolution sum:\nThus, we may pick a single pixel of the convolutional filter C, set it equal to ā ∈{ā} C(ā + m), and set all other pixels of the equivalence class to the zero k (p ) -tensor without changing the convolution. We choose the nonzero pixel to be the one whose index has all -m instead of m. Thus we can define the filter C by N d pixels rather than (N + 1) d pixels, and we have our result. First we will show that G ⊆ F. Let g ∈ G. By properties ( 33) and ( 38) both convolutions and contractions are linear. Additionally, by properties ( 32) and ( 37 " } ]
Convolutional neural networks and their ilk have been very successful for many learning tasks involving images. These methods assume that the input is a scalar image representing the intensity in each pixel, possibly in multiple channels for color images. In natural-science domains, however, image-like data sets might have vectors (velocity, say), tensors (polarization, say), pseudovectors (magnetic field, say), or other geometric objects in each pixel. Treating the components of these objects as independent channels in a CNN neglects their structure entirely. Our formulation, the GeometricImageNet, combines a geometric generalization of convolution with outer products, tensor index contractions, and tensor index permutations to construct geometric-image functions of geometric images that use and benefit from the tensor structure. The framework permits, with a very simple adjustment, restriction to function spaces that are exactly equivariant to translations, discrete rotations, and reflections. We use representation theory to quantify the dimension of the space of equivariant polynomial functions on 2-dimensional vector images. We give partial results on the expressivity of GeometricImageNet on small images. In numerical experiments, we find that GeometricImageNet generalizes well for a small simulated physics system, even when trained with a small training set. We expect this tool will be valuable for scientific and engineering machine learning, for example in cosmology or ocean dynamics.
GeometricImageNet: Extending convolutional neural networks to vector and tensor images
[ { "figure_caption": "Figure 1 :1Figure 1: Examples of geometric images in the natural sciences. (a) A visualization of a temperature map and a polarization map from the ESA Planck Mission[9]. The color map shows a temperature field (a scalar or 0 (+) -tensor) on the sphere, and the whiskers show the principal eigenvector direction of a 2 (+)tensor field in two dimensions. In detail the underlying data are represented on a pixel grid (healpixel[22]) on the sky (a 2-sphere). (b) Two-dimensional maps of ocean current (shown with arrows; a vector or 1 (+) -tensor field) and ocean salinity (shown with color; a scalar or 0 (+) -tensor field) at a depth of 5 m[1]. (c) A three-dimensional map of temperature (a scalar or 0 (+) -tensor field) based on sensors distributed throughout the volume of a granary[46]. (d) A two-dimensional map of potential vorticity (a pseudoscalar or 0 (-) -tensor field) in the Earth's atmosphere, measured for the purposes of predicting storms[34]. (e) Two-dimensional maps on the sky of intensity I and the three independent components Q, U, V of the electromagnetic polarization 2 (+) -tensor, from a simulation of a jet outflow from an accreting black hole [Davelaar et al, in preparation]. (f) Components of the three-dimensional stress tensor (a 2 (+)tensor field) in a diamond anvil cell, which is used to study the behavior of samples at exceedingly high pressures[33].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "where for g ∈ O(d), M (g) ∈ R d×d is the standard matrix representation of g (i.e. M (g) M (g) = I) and p ∈ {-1, +1} is the parity of v. If p = +1 we obtain the standard O(d) action on R d vectors. If p = -1 we obtain the O(d) action on what in physics are known as pseudovectors.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ")Higher rank k (p) -tensors are defined as linear combinations of rank-1 k (p) -tensors where the action of O(d) is extended linearly. The set of k (p) -tensors in d dimensions is denoted T d,k,p .", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Convolution of a scalar image with a scalar filter and with a vector filter. Note that convolution with the vector filter results in a vector image that looks like the gradient.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Definition 16 (16Translation of k (p) -tensor images). Given a k (p) -tensor image A on the d-torus, and a translation τ ∈ (Z/N Z) d , the action L τ A produces a k (p) -tensor image on the d-torus such that", "figure_data": "", "figure_id": "fig_4", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Definition 17 (17Group B d of symmetries of a d-hypercube). We denote by B d the group of Euclidean symmetries of the d-dimensional hypercube. The group B d is often called the hyperoctahedral group since the d-dimensional hyperoctahedron is dual to the hypercube, so they have the same group of symmetries. The notation B d is standard nomenclature coming from the classification", "figure_data": "", "figure_id": "fig_5", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 3 : 1 .31Figure 3: The elements of B 2 on the vector 2 1 . The original vector is blue and the transformed vector is red. Rotations are in degrees counterclockwise, flips are over the axis specified. 
The group B d has d!2 d elements, so d = 3 has 48 and d = 4 has 384 elements.", "figure_data": "", "figure_id": "fig_6", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "Definition 20 (20The group G N,d , and its action on k (p) -tensor images). G N,d is the group generated by the elements of B d and the discrete translations on the N d -pixel lattice on the d-torus.", "figure_data": "", "figure_id": "fig_7", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "d pixel lattice with the group T N,d ∼ = (Z/N Z) d of discrete translations of this lattice; then the action of B d on the torus induces an action of B d on T N,d by automorphisms. The group G N,d is the semidirect product T N,d B d with respect to this action. Thus there is a canonical group homomorphism G N,d → B d with kernel T N,d . In concrete terms, every element of G N,d can be written in the form τ • b, where b ∈ B d and τ ∈ T N,d . Then the canonical map G N,d → B d sends τ • b to b.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Theorem 1 .1A k (p ) -tensor convolution filter C produces convolutions that are equivariant with respect to the big group G N,d if C is invariant under the small group B d .", "figure_data": "", "figure_id": "fig_9", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Lemma 1 .1Given g ∈ B d , A ∈ A N,d,k,p , and C ∈ A M,d,k ,p , the action g distributes over the convolution of A with C: g • (A * C) = (g • A) * (g • C) (20) Proof. Let A ∈ A N,d,k,p be a geometric image, let C ∈ A M,d,k ,p , let g ∈ B d , and let ī be any pixel index of A. By Definition 19 we have", "figure_data": "", "figure_id": "fig_10", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: All the filters for d = 2, M = 3, k ∈ [0, 1, 2]. Notes: Scalars and pseudo-scalars are shown with signed colors; where there is no symbol in the box the value is zero. The 2 (p ) -tensor filters are shown via the action of the tensor on an image of a letter \"R\"; the transformation properties of the 2 (p ) -tensor filters are such that these filters may not look obviously invariant to rotations. There are no invariant pseudoscalar (0 (-) -tensor) filters available at d = 2, M = 3. Note that scalar 0 and scalar 2 are paired with 2 (+) -tensor 0 and 2 (+) -tensor 3 respectively by multiplication with the Kronecker delta symbol, per Proposition 2. Likewise, if we added 2 (+) -tensor 1 and 2 (+) -tensor 2 together, they would be paired with scalar 1. Also note that each 1 (+) -tensor filter is paired with a 1 (-) -tensor filter and likewise for each 2 (+) -tensor filter and 2 (-) -tensor filter by Proposition 3. We don't show the k (p ) -tensor filters at k > 2 because visualizing them is difficult, even the k = 2 case is potentially misleading. Note that the vector (1 (+) -tensor) filters look like pure divergence and the pseudovector (1 (-) -tensor) filters look like pure curl.", "figure_data": "", "figure_id": "fig_11", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: All the filters for d = 2, M = 5, k ∈ [0, 1]. The symbols and colors are as in Figure 4. We don't show the 2 (p ) -tensor filters at m = 2 because there are 26 of them. At (d, m) = (2, 2) a pseudoscalar filter appears. 
Again, the vector and pseudovectors look like pure divergence and pure curl, respectively.", "figure_data": "", "figure_id": "fig_12", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Proposition 4 .4Let F be the set of functions f : A N,d,k,p → A N,d,k ,p where each f is a convolution with a B d -invariant k (p ) -tensor filter. Let G be the set of functions g : A N,d,k,p → A N,d,k ,p where each g is a convolution with a B dinvariant (k + 2) (p ) -tensor filter followed by a contraction. Then F ⊆ G.", "figure_data": "", "figure_id": "fig_14", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: One test point each for the (a) gravitational field problem and the (b)moving point charge problem. The first two columns show the input and output to the problems respectively. The third column shows the model predicted output, and the fourth column shows the difference between the ground truth and the predicted output. To put the numerical results in context, the loss for the top row is 0.0177, while the loss for the bottom row is 2.061.", "figure_data": "", "figure_id": "fig_15", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "( a )Figure 8 :a8Figure 8: Comparison between baseline model and GI-Net as a function of the number of points in the training data set. The GI-Net have smaller testing loss when the training data set is small, even when the baseline model achieves better training error in the case of the moving charges problem.", "figure_data": "", "figure_id": "fig_16", "figure_label": "a8", "figure_type": "figure" }, { "figure_caption": "A ∈ A N,d,k,p for k ≥ d -1, C ∈ A M,d,k ,p , and µ 1 , . . . , µ d-1 ∈ [k] distinct. Then the following properties hold: 1. Levi-Civita Contractions are equivariant to G N,d : g • T LC (A, µ 1 , . . . , µ d-1 ) = T LC (g • A, µ 1 , . . . , µ d-1 )", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Lemma 2 .2Given A ∈ A N,d,k,p a geometric image and C ∈ A N +1,d,k ,p a geometric filter where M = N + 1, there exists C ∈ A N +1,d,k ,p such that A * C = A * C and C (ī) is the zero k (p ) -tensor, for ī ∈ [0, N ] d \\ [0, N -1] d . That is, C is totally defined by N d pixels, and every pixel with an N in the index is equal to the zero k (p ) -tensor.", "figure_data": "", "figure_id": "fig_18", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Let A ∈ A N,d,k,p , C ∈ A M,d,k ,-, and µ 1 , . . . , µ d-1 ∈ [k + k ] all distinct. Also let ī be a pixel index and let E ∈ A M,d,d,- be the geometric image with the Levi-Civita symbol in every pixel. Then if f (A) = T LC A * C, µ 1 , . . . , µ d-1 we have:", "figure_data": "", "figure_id": "fig_19", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "y∈(Z/N Z) d f (y), φ(x -y) scalar product of vectors = y∈(Z/N Z) d c j=1 f j (y)φ j (x -y) ∈R", "figure_data": "", "figure_id": "fig_20", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "In this work we define an action over tensor images of O(d), by rotation of tensors in each pixel; of B d by rotating the grid of pixels and each tensor in the pixel; and of (Z/N Z) d by translation of the grid of pixels. 
The action of each one of these groups G over T d,k,p Φ d,k,p : G → GL con (T d,k,p ), (51) can be decomposed into irreducible representations of G: Φ d,k,p ≡ π∈ Ĝ m d,k,p (π) π.", "figure_data": "", "figure_id": "fig_21", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Linear equivariant maps satisfy thatC • Φ d,k,p = Φ d ,k ,p • C. That is, if C is the representation of C in the above basis, C • π∈G m d,k,p (π) π = π∈G m d ,k ,p (π) π • C. (55)By Schur's Lemma, this implies that C ≡ π∈G m d,k,p (π) Id dπ .", "figure_data": "", "figure_id": "fig_22", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ". Each pixel contains a k (p) -tensor where k and p are the same for each pixel. Let T d,k,p be the set of k (p) -tensors in R d . We define the geometric images as follows.", "figure_data": "Definition 8 (geometric image). A geometric image is a function A : [N ] d → T d,k,p ,where [N ] = {0, 1, . . . , N -1}. The set of geometric images is denoted A N,d,k,p . Wewill also consider k (p) -tensor images on the d-torus, where [N ] d is given the algebraicstructure of (Z/N Z) d . The pixel index of a geometric image, often ī, is naturally a1 (+) -tensor of length d.Definition 9 (sums of images). Given A, B ∈ A N,d,k,p , the sum A + B ∈ A N,d,k,pis defined as", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Definition 15 (Equivariance of a geometric image function). Given a function on geometric images f : A N,d,k,p → A N,d,k ,p , and a group G equipped with actions on A N,d,k,p and A N,d,k ,p , we say that f is equivariant to G if for all g ∈ G and", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The number of equivariant maps A N,2,1,+ → A N,2,1,+ for different values of sidelength N and degree .", "figure_data": "1.", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of the different model architectures used for the two problems.", "figure_data": "for additional", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" } ]
Wilson Gregory; David W Hogg; Ben Blum-Smith; Maria Teresa Arias; Kaze W K Wong; Soledad Villar
[ { "authors": "", "journal": "UCAR", "ref_id": "b0", "title": "Climate Data Guide", "year": "2015" }, { "authors": "Ben Blum; - Smith; Soledad Villar", "journal": "", "ref_id": "b1", "title": "Machine learning and invariant theory", "year": "2022" }, { "authors": "Georg Bökman; Fredrik Kahl; Axel Flinth", "journal": "", "ref_id": "b2", "title": "Zz-net: A universal rotation equivariant architecture for 2d point clouds", "year": "2022" }, { "authors": "Johannes Brandstetter; Rianne Van Den; Max Berg; Jayesh K Welling; Gupta", "journal": "", "ref_id": "b3", "title": "Clifford neural layers for pde modeling", "year": "2022" }, { "authors": "Gregory S Chirikjian; Alexander B Kyatkin", "journal": "CRC PRESS", "ref_id": "b4", "title": "Engineering applications of noncommutative harmonic analysis: With emphasis on rotation and motion groups", "year": "2021" }, { "authors": "Taco Cohen; Maurice Weiler; Berkay Kicanaoglu; Max Welling", "journal": "PMLR", "ref_id": "b5", "title": "Gauge equivariant convolutional networks and the icosahedral cnn", "year": "2019" }, { "authors": "Taco Cohen; Max Welling", "journal": "PMLR", "ref_id": "b6", "title": "Group equivariant convolutional networks", "year": "2016" }, { "authors": "S Taco; Max Cohen; Welling", "journal": "", "ref_id": "b7", "title": "Steerable cnns", "year": "2016" }, { "authors": " ", "journal": "A&A", "ref_id": "b8", "title": "Planck 2015 results -i. overview of products and scientific results", "year": "2016" }, { "authors": "Pim De Haan; Maurice Weiler; Taco Cohen; Max Welling", "journal": "", "ref_id": "b9", "title": "Gauge equivariant mesh cnns: Anisotropic convolutions on geometric graphs", "year": "2020" }, { "authors": "Harm Derksen; Gregor Kemper", "journal": "Springer", "ref_id": "b10", "title": "Computational invariant theory", "year": "2015" }, { "authors": "Vincent Dumoulin; Francesco Visin", "journal": "", "ref_id": "b11", "title": "A guide to convolution arithmetic for deep learning", "year": "2016" }, { "authors": "Nadav Dym; Haggai Maron", "journal": "", "ref_id": "b12", "title": "On the universality of rotation equivariant point cloud networks", "year": "2020" }, { "authors": "Albert Einstein", "journal": "Annalen der Physik", "ref_id": "b13", "title": "Die Grundlage der allgemeinen Relativitätstheorie", "year": "1916-01" }, { "authors": "Bryn Elesedy", "journal": "", "ref_id": "b14", "title": "Provably strict generalisation benefit for invariance in kernel methods", "year": "2021" }, { "authors": "Bryn Elesedy; Sheheryar Zaidi", "journal": "", "ref_id": "b15", "title": "Provably strict generalisation benefit for equivariant models", "year": "2021" }, { "authors": "Carlos Esteves; Ameesh Makadia; Kostas Daniilidis", "journal": "", "ref_id": "b16", "title": "Spin-weighted spherical cnns", "year": "2020" }, { "authors": "G B Folland", "journal": "CRC Press", "ref_id": "b17", "title": "A course in abstract harmonic analysis", "year": "2016" }, { "authors": "Ian J Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b18", "title": "Generative adversarial networks", "year": "2014" }, { "authors": "Ben Gripaios; Ward Haddadin; Christopher G Lester", "journal": "Journal of Physics A: Mathematical and Theoretical", "ref_id": "b19", "title": "Lorentz-and permutation-invariants of particles", "year": "2021-03" }, { "authors": "Victor Guillemin; Alan Pollack", "journal": "American Mathematical Soc", "ref_id": "b20", "title": "Differential 
topology", "year": "2010" }, { "authors": "K M Górski; E Hivon; A J Banday; B D Wandelt; F K Hansen; M Reinecke; M Bartelmann", "journal": "The Astrophysical Journal", "ref_id": "b21", "title": "Healpix: A framework for high-resolution discretization and fast analysis of data distributed on the sphere", "year": "2005-04" }, { "authors": "Ward Haddadin", "journal": "", "ref_id": "b22", "title": "Invariant polynomials and machine learning", "year": "2021" }, { "authors": "Charles R Harris", "journal": "Nature", "ref_id": "b23", "title": "Array programming with NumPy", "year": "2020-09" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b24", "title": "Deep residual learning for image recognition", "year": "2015" }, { "authors": "Gao Huang; Zhuang Liu; Laurens Van Der Maaten; Kilian Q Weinberger", "journal": "", "ref_id": "b25", "title": "Densely connected convolutional networks", "year": "2018" }, { "authors": "James E Humphreys", "journal": "Cambridge university press", "ref_id": "b26", "title": "Reflection groups and Coxeter groups. Number 29", "year": "1990" }, { "authors": "J D Hunter", "journal": "Computing in Science & Engineering", "ref_id": "b27", "title": "Matplotlib: A 2d graphics environment", "year": "2007" }, { "authors": "George Em Karniadakis; G Ioannis; Lu Kevrekidis; Paris Lu; Sifan Perdikaris; Liu Wang; Yang", "journal": "Nature Reviews Physics", "ref_id": "b28", "title": "Physics-informed machine learning", "year": "2021" }, { "authors": "Risi Kondor; Shubhendu Trivedi", "journal": "", "ref_id": "b29", "title": "On the generalization of equivariance and convolution in neural networks to the action of compact groups", "year": "2018" }, { "authors": "Wataru Kumagai; Akiyoshi Sannai", "journal": "", "ref_id": "b30", "title": "Universal approximation theorem for equivariant maps by group cnns", "year": "2020" }, { "authors": "Yann Lecun; Bernhard Boser; John S Denker; Donnie Henderson; Richard E Howard; Wayne Hubbard; Lawrence D Jackel", "journal": "Neural Computation", "ref_id": "b31", "title": "Backpropagation applied to handwritten zip code recognition", "year": "1989" }, { "authors": "Mehdi Valery I Levitas; Biao Kamrani; Feng", "journal": "NPJ Computational Materials", "ref_id": "b32", "title": "Tensorial stress-strain fields and large elastoplasticity as well as friction in diamond anvil cell up to 400 gpa", "year": "2019" }, { "authors": "S Lossow; M Khaplanov; J Gumbel; Jacek Stegman; Georg Witt; Peter Dalin; Sheila Kirkwood; F Schmidlin; K Fricke; U A Blum", "journal": "Atmospheric Chemistry and Physics", "ref_id": "b33", "title": "Middle atmospheric water vapour and dynamics in the vicinity of the polar vortex during the hygrosonde-2 campaign", "year": "" }, { "authors": "H Ib; Jxrgen Madsen; Tornehave", "journal": "Cambridge university press", "ref_id": "b34", "title": "From calculus to cohomology: de Rham cohomology and characteristic classes", "year": "1997" }, { "authors": "Ameesh Makadia; Christopher Geyer; Kostas Daniilidis", "journal": "Int. J. Comput. 
Vision", "ref_id": "b35", "title": "Correspondencefree structure from motion", "year": "2007-12" }, { "authors": "Marvin Pförtner; Ingo Steinwart; Philipp Hennig; Jonathan Wenger", "journal": "", "ref_id": "b36", "title": "Physics-informed gaussian process regression generalizes linear pde solvers", "year": "2022" }, { "authors": "Adrian M Price-Whelan", "journal": "", "ref_id": "b37", "title": "cmastro: colormaps for astronomers", "year": "2021" }, { "authors": "M M G Ricci; Tullio Levi-Civita", "journal": "Mathematische Annalen", "ref_id": "b38", "title": "Méthodes de calcul différentiel absolu et leurs applications", "year": "1900" }, { "authors": "Robin M Schmidt", "journal": "", "ref_id": "b39", "title": "Recurrent neural networks (rnns): A gentle introduction and overview", "year": "2019" }, { "authors": "Kip S Thorne; Roger D Blandford", "journal": "Princeton University Press", "ref_id": "b40", "title": "Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics", "year": "2017" }, { "authors": "Guido Van Rossum; Fred L Drake", "journal": "CreateSpace", "ref_id": "b41", "title": "Python 3 Reference Manual", "year": "2009" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b42", "title": "Attention is all you need", "year": "2017" }, { "authors": "Soledad Villar; David W Hogg; Kate Storey-Fisher; Weichi Yao; Ben Blum-Smith", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "Scalars are universal: Equivariant machine learning, structured like classical physics", "year": "2021" }, { "authors": "Soledad Villar; Weichi Yao; David W Hogg; Ben Blum-Smith; Bianca Dumitrascu", "journal": "", "ref_id": "b44", "title": "Dimensionless machine learning: Imposing exact units equivariance", "year": "2022" }, { "authors": "Di Wang; Xi Zhang", "journal": "Applied Sciences", "ref_id": "b45", "title": "Modeling of a 3d temperature field by integrating a physics-specific model and spatiotemporal stochastic processes", "year": "2019" }, { "authors": "Rui Wang; Robin Walters; Rose Yu", "journal": "", "ref_id": "b46", "title": "Incorporating symmetry into deep dynamics models for improved generalization", "year": "2021" }, { "authors": "Rui Wang; Robin Walters; Rose Yu", "journal": "", "ref_id": "b47", "title": "Approximately equivariant networks for imperfectly symmetric dynamics", "year": "2022" }, { "authors": "Rui Wang; Robin Walters; Rose Yu", "journal": "", "ref_id": "b48", "title": "Data augmentation vs. 
equivariant networks: A theory of generalization on dynamics forecasting", "year": "2022" }, { "authors": "Jeffrey Wood; John Shawe-Taylor", "journal": "Discrete Applied Mathematics", "ref_id": "b49", "title": "Representation theory and invariant neural networks", "year": "1996" }, { "authors": "Yinshuang Xu; Jiahui Lei; Edgar Dobriban; Kostas Daniilidis", "journal": "", "ref_id": "b50", "title": "Unified fourier-based kernel and nonlinearity design for equivariant networks on homogeneous spaces", "year": "2022" }, { "authors": "Weichi Yao; Kate Storey-Fisher; David W Hogg; Soledad Villar", "journal": "", "ref_id": "b51", "title": "A simple equivariant machine learning method for dynamics based on scalars", "year": "2021" }, { "authors": "Dmitry Yarotsky", "journal": "", "ref_id": "b52", "title": "Universal approximations of invariant maps by neural networks", "year": "2018" }, { "authors": "Fuzhen Zhuang; Zhiyuan Qi; Keyu Duan; Dongbo Xi; Yongchun Zhu; Hengshu Zhu; Hui Xiong; Qing He", "journal": "", "ref_id": "b53", "title": "A comprehensive survey on transfer learning", "year": "2020" } ]
[ { "formula_coordinates": [ 5, 244.28, 385.98, 241.72, 13.55 ], "formula_id": "formula_0", "formula_text": "g • v = det(M (g)) 1-p 2 M (g) v(1)" }, { "formula_coordinates": [ 5, 126, 537.78, 360, 46.88 ], "formula_id": "formula_1", "formula_text": "If v i is a 1 (pi) -tensor, then T := v 1 ⊗ . . . ⊗ v k is a rank-1 k (p) -tensor, where p = k i=1 p i and the action of O(d) is defined as g • (v 1 ⊗ . . . ⊗ v k ) = (g • v 1 ) ⊗ . . . ⊗ (g • v k ) .(2" }, { "formula_coordinates": [ 6, 126, 270.46, 360, 24.01 ], "formula_id": "formula_2", "formula_text": "[a ⊗ b] i1,...,i k+k = [a] i1,...,i k [b] i k+1 ,...,i k+k ." }, { "formula_coordinates": [ 6, 218.72, 361.78, 267.28, 30.55 ], "formula_id": "formula_3", "formula_text": "[A B] i,j = [A] i,k [B] k,j := d k=1 [A] i,k [B] k,j(3)" }, { "formula_coordinates": [ 6, 170.93, 639.11, 315.07, 15.13 ], "formula_id": "formula_4", "formula_text": "[g • b] i1,...,i k = det(M (g)) 1-p 2 [b] j1,...,j k [M (g)] i1,j1 • • • [M (g)] i k ,j k(4)" }, { "formula_coordinates": [ 6, 126, 704.12, 143.27, 9.65 ], "formula_id": "formula_5", "formula_text": "[g • b] i,j = [b] k, [M (g)] i,k [M (g)] j," }, { "formula_coordinates": [ 6, 126, 717.03, 94.17, 8.74 ], "formula_id": "formula_6", "formula_text": "g • b = M (g) b M (g) ." }, { "formula_coordinates": [ 7, 197.13, 218.13, 288.87, 10.63 ], "formula_id": "formula_7", "formula_text": "[T (a, µ, ν)] i1,...,i k \\{iµ,iν } = [δ] iµiν [a] i1,...,iµ,...,iν ,...,i k(5)" }, { "formula_coordinates": [ 7, 163.26, 302.61, 322.74, 9.65 ], "formula_id": "formula_8", "formula_text": "T M (a, (µ 1 , µ 2 ), . . . , (µ , µ +1 )) = T (•, µ , µ +1 ) • . . . • T (a, µ 1 , µ 2 ) ,(6)" }, { "formula_coordinates": [ 7, 222.55, 374.18, 166.91, 9.65 ], "formula_id": "formula_9", "formula_text": "T M (a, (1, 3), (2, 4)) = T (T (a, 1, 3), 1, 2)" }, { "formula_coordinates": [ 7, 157.46, 445.75, 328.54, 9.65 ], "formula_id": "formula_10", "formula_text": "T LC (a, µ 1 , . . . , µ d-1 ) = T M (a ⊗ , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1)) ,(7)" }, { "formula_coordinates": [ 7, 126, 639.51, 360, 22.87 ], "formula_id": "formula_11", "formula_text": "k = [a] i [b] j [ ] ijk , which is a 1 (-) -tensor or pseudovector." }, { "formula_coordinates": [ 7, 239.36, 707.01, 246.64, 13.72 ], "formula_id": "formula_12", "formula_text": "[a σ ] i1,...,i k := [a] i σ -1 (1) ,...,i σ -1 (k)(8)" }, { "formula_coordinates": [ 8, 248.4, 146.87, 237.61, 9.65 ], "formula_id": "formula_13", "formula_text": "ρ = [A] ijk [u] i [v] j [w] k [x](9)" }, { "formula_coordinates": [ 8, 251.34, 417.54, 234.66, 8.74 ], "formula_id": "formula_14", "formula_text": "(A + B)(ī) = A(ī) + B(ī)(10)" }, { "formula_coordinates": [ 8, 267.6, 484.29, 218.4, 8.74 ], "formula_id": "formula_15", "formula_text": "(αA)(ī) = αA(ī) .(11)" }, { "formula_coordinates": [ 8, 126, 637.65, 360, 69.05 ], "formula_id": "formula_16", "formula_text": "A ∈ A N,d,k,p on the d-torus, and C ∈ A M,d,k ,p where M = 2m + 1 for some m ∈ N, the geometric convolution A * C is a (k + k ) (p p ) -tensor image such that (A * C)(ī) = ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) ,(12)" }, { "formula_coordinates": [ 8, 126, 728.37, 360, 23.22 ], "formula_id": "formula_17", "formula_text": "d length 1 (+) -tensor [m, . . . , m] T . 
For example, if d = 2 and ā = [0, 0] T , then ā + m = [m, m] T ," }, { "formula_coordinates": [ 9, 126, 475.96, 360, 62.9 ], "formula_id": "formula_18", "formula_text": "A ∈ A N,d,k,p and b ∈ Z + such that N is divisible by b, we define avg pool(A, b) ∈ A N/b,d,k,p for pixel index ī as: avg pool(A, b)(ī) = 1 b d ā∈[0,b-1] d A(bī + ā)(13)" }, { "formula_coordinates": [ 9, 126, 550.75, 360, 46.1 ], "formula_id": "formula_19", "formula_text": "A ∈ A N,d,k,p and b ∈ Z + , we define unpool(A, b) ∈ A N b,d,k,p for pixel index ī as: unpool(A, b)(ī) = A( ī/b ) (14" }, { "formula_coordinates": [ 9, 481.57, 588.11, 4.43, 8.74 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 10, 126, 76.02, 360, 43.99 ], "formula_id": "formula_21", "formula_text": "A ∈ A N,d,k,p and B ∈ A N,d,k ,p , the outer product A ⊗ B ∈ A N,d,k+k ,p p is defined as (A ⊗ B)(ī) = A(ī) ⊗ B(ī) .(15)" }, { "formula_coordinates": [ 10, 126, 279.79, 360, 49.65 ], "formula_id": "formula_22", "formula_text": "f (g • A) = g • f (A) (16) Likewise, f is invariant to G if f (g • A) = f (A) .(17)" }, { "formula_coordinates": [ 10, 348.44, 343.04, 121.11, 8.74 ], "formula_id": "formula_23", "formula_text": "G if g • A = A for all g ∈ G." }, { "formula_coordinates": [ 10, 259.3, 483.76, 226.7, 9.65 ], "formula_id": "formula_24", "formula_text": "(L τ A)(ī) = A(ī -τ ) ,(18)" }, { "formula_coordinates": [ 10, 126, 526.52, 360, 47.5 ], "formula_id": "formula_25", "formula_text": "Proposition 1. A function f : A N,d,k,p → A N,d,k+k ,p p is a translation equivariant linear function if and only if it is the convolution with a geometric filter C ∈ A M,d,2k+k ,p followed by k contractions. When N is odd, M = N , otherwise M = N + 1." }, { "formula_coordinates": [ 11, 250.2, 555.29, 231.37, 10.81 ], "formula_id": "formula_26", "formula_text": "(g • A)(ī) = g • A(g -1 • ī) . (19" }, { "formula_coordinates": [ 11, 481.57, 557.37, 4.43, 8.74 ], "formula_id": "formula_27", "formula_text": ")" }, { "formula_coordinates": [ 11, 238.13, 613.95, 135.74, 10.81 ], "formula_id": "formula_28", "formula_text": "g -1 • ī = M (g -1 )(ī -m) + m" }, { "formula_coordinates": [ 11, 232.95, 639.58, 115.5, 13.47 ], "formula_id": "formula_29", "formula_text": "1 (+) -tensor N -1 2 , . . . , N -12" }, { "formula_coordinates": [ 12, 174.26, 492.3, 263.48, 110.77 ], "formula_id": "formula_30", "formula_text": "(g • (A * C))(ī) = g • (A * C) g -1 • ī = g •   ā∈[-m,m] d A g -1 • ī -ā ⊗ C(ā + m)   = ā∈[-m,m] d g • A g -1 • ī -ā ⊗ C(ā + m) = ā∈[-m,m] d g • A g -1 • ī -ā ⊗ g • C(ā + m) Now let ā = g • ā. Thus g -1 • ā = g -1 • g • ā = ā. Then: (g • (A * C))(ī) = ā∈[-m,m] d g • A g -1 • ī -ā ⊗ g • C(ā + m) = g -1 •ā ∈[-m,m] d g • A g -1 • ī -g -1 • ā ⊗ g • C g -1 • ā + m = g -1 •ā ∈[-m,m] d g • A g -1 • ī -g -1 • ā ⊗ g • C g -1 • ā + g -1 • m = g -1 •ā ∈[-m,m] d g • A g -1 • (ī -ā ) ⊗ g • C g -1 • (ā + m) = g -1 •ā ∈[-m,m] d (g • A)(ī -ā ) ⊗ (g • C)(ā + m) = ā ∈[-m,m] d (g • A)(ī -ā ) ⊗ (g • C)(ā + m) = ((g • A) * (g • C))(ī)" }, { "formula_coordinates": [ 13, 126, 290.16, 360, 23.22 ], "formula_id": "formula_31", "formula_text": "g -1 • ā ∈ [-m, m] d compared to ā ∈ [-m, m] d is" }, { "formula_coordinates": [ 13, 215.54, 422.42, 180.21, 8.74 ], "formula_id": "formula_32", "formula_text": "g • (A * C) = (g • A) * (g • C) = (g • A) * C" }, { "formula_coordinates": [ 13, 126, 580.55, 281.52, 9.65 ], "formula_id": "formula_33", "formula_text": "Then C ⊗ ∆ ∈ A M,d,k +2,p is a B d -invariant convolutional filter." 
}, { "formula_coordinates": [ 13, 220.65, 643.58, 170.7, 104.14 ], "formula_id": "formula_34", "formula_text": "(g • (C ⊗ ∆))(ī) = (g • C ⊗ g • ∆)(ī) = (g • C)(ī) ⊗ (g • ∆)(ī) = C(ī) ⊗ g • ∆(g -1 • ī) = C(ī) ⊗ g • δ = C(ī) ⊗ δ = C(ī) ⊗ ∆(ī) = (C ⊗ ∆)(ī) Proposition 3. Let C ∈ A M,d,k ,p , k ≥ d-1 be a B d -invariant convolutional filter and µ 1 , . . . , µ d-1 ∈ [k ] distinct. Then T LC (C, µ 1 , . . . , µ d-1 ) ∈ A M,d,k -d+2,-p is a B d -invariant filter of opposite parity of C." }, { "formula_coordinates": [ 14, 141.02, 192.3, 329.97, 9.65 ], "formula_id": "formula_35", "formula_text": "g • T LC (C, µ 1 , . . . , µ d-1 ) = T LC (g • C, µ 1 , . . . , µ d-1 ) = T LC (C, µ 1 , . . . , µ d-1 ) ." }, { "formula_coordinates": [ 14, 126, 253.91, 360, 21.65 ], "formula_id": "formula_36", "formula_text": "+ d -2(d -1) = k + d -2d + 2 = k -d + 2 as desired." }, { "formula_coordinates": [ 14, 247.35, 500.96, 238.65, 20.14 ], "formula_id": "formula_37", "formula_text": "H(F, t) := ≥0 dim (F ) t ,(21)" }, { "formula_coordinates": [ 14, 224.41, 579.27, 261.6, 29.01 ], "formula_id": "formula_38", "formula_text": "H(F, t) = 1 |G| g∈G tr M W g -1 det(I -M V (g) t) ,(22)" }, { "formula_coordinates": [ 17, 240.46, 121.01, 241.11, 12.44 ], "formula_id": "formula_39", "formula_text": "det(I -M (g) t) = (1 -t) 2N 2 . (23" }, { "formula_coordinates": [ 17, 481.57, 124.72, 4.43, 8.74 ], "formula_id": "formula_40", "formula_text": ")" }, { "formula_coordinates": [ 17, 285.8, 220.07, 195.77, 21.65 ], "formula_id": "formula_41", "formula_text": "1 -t -t 1 . (24" }, { "formula_coordinates": [ 17, 481.57, 226.43, 4.43, 8.74 ], "formula_id": "formula_42", "formula_text": ")" }, { "formula_coordinates": [ 17, 220.28, 273.9, 261.29, 12.44 ], "formula_id": "formula_43", "formula_text": "det(I -M (g) t) = (1 + t) 2 (1 -t 2 ) N 2 -1 . (25" }, { "formula_coordinates": [ 17, 481.57, 277.6, 4.43, 8.74 ], "formula_id": "formula_44", "formula_text": ")" }, { "formula_coordinates": [ 17, 189.99, 320.46, 296.01, 51.94 ], "formula_id": "formula_45", "formula_text": "H(F, t) = 1 8N 2 2N 2 (1 -t) 2N 2 + N 2 (-2) (1 + t) 2 (1 -t 2 ) N 2 -1 (26) = 1 4 1 (1 -t) 2N 2 - 1 (1 + t) 2 (1 -t 2 ) N 2 -1 .(27)" }, { "formula_coordinates": [ 17, 174.55, 419.1, 311.45, 33.53 ], "formula_id": "formula_46", "formula_text": "  2N 2 + -1 + (-1) +1 /2 j=0 ( -2j + 1) N 2 + j -2 j   .(28)" }, { "formula_coordinates": [ 18, 239.92, 111.81, 246.09, 9.65 ], "formula_id": "formula_47", "formula_text": "f (A) = h(g 1 (A) ⊗ . . . 
⊗ g (A))(29)" }, { "formula_coordinates": [ 18, 258.15, 513.24, 227.85, 27.47 ], "formula_id": "formula_48", "formula_text": "C i = 1 |B d | g∈B d g • C i ,(30)" }, { "formula_coordinates": [ 21, 234.65, 276.58, 251.36, 9.65 ], "formula_id": "formula_49", "formula_text": "x i (t + ∆t) = x i (t) + ∆t V (x i , t) ,(31)" }, { "formula_coordinates": [ 21, 225.94, 430.1, 160.12, 25.09 ], "formula_id": "formula_50", "formula_text": "Q( v, s) = 1 1 + e -v s - 1 2 v v ," }, { "formula_coordinates": [ 21, 237.92, 742.86, 136.16, 8.74 ], "formula_id": "formula_51", "formula_text": "1 → 2 → (15 * 2) → (7 * 2) → 2 (a) (b)" }, { "formula_coordinates": [ 29, 219.22, 209.58, 173.18, 10.81 ], "formula_id": "formula_52", "formula_text": "g • δ = M (g)δM (g) T = M (g)M (g) T = δ" }, { "formula_coordinates": [ 29, 126, 373.68, 360, 21.65 ], "formula_id": "formula_53", "formula_text": "•(u 1 ⊗• • •⊗u d ) := (M u 1 )⊗• • •⊗(M u d ), where M ∈ GL(R d" }, { "formula_coordinates": [ 29, 268.79, 422.38, 74.42, 8.74 ], "formula_id": "formula_54", "formula_text": "M • = det(M ) ," }, { "formula_coordinates": [ 29, 267.59, 609.37, 218.41, 9.65 ], "formula_id": "formula_55", "formula_text": "(L τ A) * C = L τ (A * C)(32)" }, { "formula_coordinates": [ 29, 234.05, 659.11, 251.95, 8.74 ], "formula_id": "formula_56", "formula_text": "(αA + βB) * C = α(A * C) + β(B * C)(33)" }, { "formula_coordinates": [ 29, 235.6, 704.86, 250.4, 8.74 ], "formula_id": "formula_57", "formula_text": "A * (αC + βS) = α(A * C) + β(A * S)(34)" }, { "formula_coordinates": [ 30, 192.76, 109.5, 226.47, 113.46 ], "formula_id": "formula_58", "formula_text": "(L τ A * C)(ī) = ā∈[-m,m] d (L τ A)(ī -ā) ⊗ C(ā + m) = ā∈[-m,m] d A(ī -ā -τ ) ⊗ C(ā + m) = ā∈[-m,m] d A((ī -τ ) -ā) ⊗ C(ā + m) = (A * C)(ī -τ ) = L τ (A * C)(ī)" }, { "formula_coordinates": [ 30, 126, 268.4, 391.11, 126.42 ], "formula_id": "formula_59", "formula_text": "((αA + βB) * C)(ī) = ā∈[-m,m] d (αA + βB)(ī -ā) ⊗ C(ā + m) = ā∈[-m,m] d (αA(ī -ā) + βB(ī -ā)) ⊗ C(ā + m) = ā∈[-m,m] d αA(ī -ā) ⊗ C(ā + m) + βB(ī -ā) ⊗ C(ā + m) = α ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) + β ā∈[-m,m] d B(ī -ā) ⊗ C(ā + m) = α(A * C)(ī) + β(B * C)(ī)" }, { "formula_coordinates": [ 30, 126, 440.15, 388.01, 96.65 ], "formula_id": "formula_60", "formula_text": "(A * (αC + βS))(ī) = ā∈[-m,m] d A(ī -ā) ⊗ (αC + βS)(ā + m) = ā∈[-m,m] d A(ī -ā) ⊗ αC(ā + m) + A(ī -ā) ⊗ βS(ā + m) = α ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) + β ā∈[-m,m] d A(ī -ā) ⊗ S(ā + m) = α(A * C)(ī) + β(A * S)(ī)" }, { "formula_coordinates": [ 30, 126, 588.06, 360, 22.59 ], "formula_id": "formula_61", "formula_text": "Properties (Contraction). 
Given A, B ∈ A N,d,k,p , S ∈ A N,d,k ,p , C ∈ C M,d,k ,p , g ∈ G N,d , µ, ν, ρ, σ ∈ [k]" }, { "formula_coordinates": [ 30, 269.25, 653.52, 216.75, 8.74 ], "formula_id": "formula_62", "formula_text": "T (A, µ, ν) = T (A, ν, µ)(35)" }, { "formula_coordinates": [ 30, 225.96, 698.19, 260.04, 9.65 ], "formula_id": "formula_63", "formula_text": "T M (A, (µ, ν), (ρ, σ)) = T M (A, (ρ, σ), (µ, ν))(36)" }, { "formula_coordinates": [ 30, 256.67, 742.86, 229.33, 8.74 ], "formula_id": "formula_64", "formula_text": "g • T (A, µ, ν) = T (g • A, µ, ν)(37)" }, { "formula_coordinates": [ 31, 216.82, 95.12, 269.18, 8.74 ], "formula_id": "formula_65", "formula_text": "T (αA + βB, µ, ν) = α T (A, µ, ν) + β T (B, µ, ν)(38)" }, { "formula_coordinates": [ 31, 250.39, 136.57, 235.61, 8.74 ], "formula_id": "formula_66", "formula_text": "T (A ⊗ S, µ, ν) = T (A, µ, ν) ⊗ S(39)" }, { "formula_coordinates": [ 31, 232.99, 174.78, 253.01, 8.74 ], "formula_id": "formula_67", "formula_text": "T (A ⊗ S, µ, ν) = A ⊗ T (S, µ -k, ν -k)(40)" }, { "formula_coordinates": [ 31, 252.01, 216.22, 233.99, 8.74 ], "formula_id": "formula_68", "formula_text": "T (A * C, µ, ν) = T (A, µ, ν) * C(41)" }, { "formula_coordinates": [ 31, 234.61, 254.43, 251.39, 8.74 ], "formula_id": "formula_69", "formula_text": "T (A * C, µ, ν) = A * T (C, µ -k, ν -k)(42)" }, { "formula_coordinates": [ 31, 127.2, 347.51, 357.1, 61.59 ], "formula_id": "formula_70", "formula_text": "[T M (a, (µ, ν), (ρ, σ))] {i1,...,i k }\\{iµ,iν ,iρ,iσ} = [δ] iρ,iσ [δ] iµ,iν [a] i1,...,i k = [δ] iµ,iν [δ] iρ,iσ [a] i1,...,i k = [δ] iµ,iν [δ] iρ,iσ [a] i1,...,i k = [T M (a, (ρ, σ), (µ, ν))] {i1,...,i k }\\{iµ,iν ,iρ,iσ}" }, { "formula_coordinates": [ 31, 229.92, 461.32, 152.15, 57.35 ], "formula_id": "formula_71", "formula_text": "T (L τ A, µ, ν)(ī) = T ((L τ A)(ī), µ, ν) = T (A(ī -τ ), µ, ν) = T (A, µ, ν)(ī -τ ) = (L τ T (A, µ, ν))(ī)" }, { "formula_coordinates": [ 31, 136.62, 573.06, 337.59, 175.22 ], "formula_id": "formula_72", "formula_text": "T (g • A, µ, ν)(ī) = T ((g • A)(ī), µ, ν) = T g • A(g -1 • ī), µ, ν = T (g • a, µ, ν) = [δ] iµ,iν [g • a] i1,...,iµ,...,iν ,...i k = [δ] iµ,iν [a] j1,...,j k q∈[k] [M (g)] iq,jq = [δ] iµ,iν [a] j1,...,j k [M (g)] iµ,jµ [M (g)] iν ,jν q∈[k]\\{µ,ν} [M (g)] iq,jq = [δ] iµ,iν [M (g)] iµ,jµ [M (g)] iν ,jν [a] j1,...,j k q∈[k]\\{µ,ν} [M (g)] iq,jq = [δ] jµ,jν [a] j1,...,j k q∈[k]\\{µ,ν} [M (g)] iq,jq" }, { "formula_coordinates": [ 32, 165.49, 113.81, 279.85, 68.67 ], "formula_id": "formula_73", "formula_text": "T (g • A, µ, ν)(ī) = [T (a, µ, ν)] {j1,...,j k }\\{jµ,jν } q∈[k]\\{µ,ν} [M (g)] iq,jq = g • T (A(g -1 • ī), µ, ν) = g • T (A, µ, ν)(g -1 • ī) = (g • T (A, µ, ν))(ī)" }, { "formula_coordinates": [ 32, 151.69, 271.14, 307.28, 137.83 ], "formula_id": "formula_74", "formula_text": "T (αA + βB, µ, ν)(ī) {i1,...,i k }\\iµ,iν = T (αA(ī) + βB(ī), µ, ν) {i1,...,i k }\\iµ,iν = T (αa + βb, µ, ν) {i1,...,i k }\\iµ,iν = δ iµ,iν (αa + βb) i1,...,i k = δ iµ,iν (αa) i1,...,i k + δ iµ,iν (βb) i1,...,i k = α δ iµ,iν a i1,...,i k + β δ iµ,iν b i1,...,i k = α • T (a, µ, ν) {i1,...,i k }\\iµ,iν + β • T (b, µ, ν) {i1,...,i k }\\iµ,iν = α • T (A, µ, ν)(ī) {i1,...,i k }\\iµ,iν + β • T (B, µ, ν)(ī) {i1,...,i k }\\iµ,iν" }, { "formula_coordinates": [ 32, 147.5, 482.83, 314.1, 117.77 ], "formula_id": "formula_75", "formula_text": "[T (A ⊗ S, µ, ν)(ī)] i1,...,i k+k \\{iµ,iν } = [T (A(ī) ⊗ S(ī), µ, ν)] i1,...,i k+k \\{iµ,iν } = [T (a ⊗ s, µ, ν)] i1,...,i k+k \\{iµ,iν } = [δ] iµ,iν [a ⊗ s] i1,...,i k+k = [δ] iµ,iν [a] i1,...i k [s] i k+1 ,...,i k+k = [T (a, µ, ν)] 
i1,...,i k \\{iµ,iν } [s] i k+1 ,...,i k+k = [T (a, µ, ν) ⊗ s] i1,...,i k+k \\{iµ,iν } = [(T (A, µ, ν) ⊗ S)(ī)] i1,...,i k+k \\{iµ,iν }" }, { "formula_coordinates": [ 32, 131, 647.86, 349.49, 82.22 ], "formula_id": "formula_76", "formula_text": "[T (A ⊗ S, µ, ν)(ī)] i1,...,i k+k \\{iµ,iν } = [δ] iµ,iν [a] i1,...i k [s] i k+1 ,...,i k+k = [a] i1,...i k [δ] iµ,iν [s] i k+1 ,...,i k+k = [a] i1,...i k [T (s, µ -k, ν -k)] i k+1 ,...,i k+k \\{iµ,iν } = [a ⊗ T (s, µ -k, ν -k)] i1,...,i k+k \\{iµ,iν } = [(A ⊗ T (S, µ -k, ν -k))(ī)] i1,...,i k+k \\{iµ,iν }" }, { "formula_coordinates": [ 33, 126, 121.31, 323.55, 223.15 ], "formula_id": "formula_77", "formula_text": "T (A * C, µ, ν)(ī) = T ((A * C)(ī), µ, ν) = T   ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m), µ, ν   = ā∈[-m,m] d T (A(ī -ā) ⊗ C(ā + m), µ, ν) = ā∈[-m,m] d T (A(ī -ā), µ, ν) ⊗ C(ā + m) = T (A, µ, ν) * C Likewise, if µ, ν ∈ {k + 1, . . . , k + k } distinct then: T (A * C, µ, ν)(ī) = ā∈[-m,m] d T (A(ī -ā) ⊗ C(ā + m), µ, ν) = ā∈[-m,m] d A(ī -ā) ⊗ T (C(ā + m), µ -k, ν -k) = A * T (C, µ -k, ν -k)" }, { "formula_coordinates": [ 33, 126, 510.08, 388.78, 107.13 ], "formula_id": "formula_79", "formula_text": "(g • T LC (A, µ 1 , . . . , µ d-1 ))(ī) = g • T LC (A, µ 1 , . . . , µ d-1 )(g -1 • ī) = g • T LC (A(g -1 • ī), µ 1 , . . . , µ d-1 ) = g • T M A g -1 • ī ⊗ , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1) = T M g • A g -1 • ī ⊗ , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1) = T M g • A g -1 • ī ⊗ g • , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1) = T M ((g • A)(ī) ⊗ , (µ 1 , k + 1), . . . , (µ d-1 , k + d -1)) = T LC (g • A, µ 1 , . . . , µ d-1 )(ī)" }, { "formula_coordinates": [ 35, 126, 202.19, 402.76, 64.33 ], "formula_id": "formula_80", "formula_text": "αg 1 (A) + βg 2 (A) = αT M (A * C 1 , (1, k + 1), . . . , (k, 2k)) + αT M (A * C 2 , (1, k + 1), . . . , (k, 2k)) = T M (α(A * C 1 ) + β(A * C 2 ), (1, k + 1), . . . , (k, 2k)) = T M (A * (αC 1 + βC 2 ), (1, k + 1), . . . , (k, 2k)) If α = β = 0 then clearly αg 1 (A) + βg 2 (A) is the zero geometric image for all A." }, { "formula_coordinates": [ 35, 126, 370.12, 412.71, 98.2 ], "formula_id": "formula_81", "formula_text": "(αg 1 (A) + βg 2 (A))(ī) = T M (A * (αC 1 + βC 2 ), (1, k + 1), . . . , (k, 2k))(ī) = T M ((A * (αC 1 + βC 2 ))(ī), (1, k + 1), . . . , (k, 2k)) = T M   ā∈[-m,m] d A(ī -ā) ⊗ (αC 1 + βC 2 )(ā + m), (1, k + 1), . . . , (k, 2k)   = T M A(ī -b) ⊗ (αC 1 + βC 2 )( b + m), (1, k + 1), . . . , (k, 2k) = T M (a ⊗ c, (1, k + 1), . . . , (k, 2k))" }, { "formula_coordinates": [ 35, 182.64, 530.24, 242.94, 12.59 ], "formula_id": "formula_82", "formula_text": "[(αg 1 (A) + βg 2 (A))(ī)] j k+1 ,...,j 2k+k = [a] i1...i k [c] j1,...,j 2k+k" }, { "formula_coordinates": [ 35, 410.06, 692.55, 75.94, 13.47 ], "formula_id": "formula_83", "formula_text": "C = 1 d C ⊗ ∆ ∈ A M,d,k +2,p is B d -invariant." }, { "formula_coordinates": [ 36, 126, 95.67, 256.82, 160.77 ], "formula_id": "formula_84", "formula_text": "f = A * C = A * C ⊗ 1 d d = A * C ⊗ 1 d T (∆, 1, 2) = A * T 1 d C ⊗ ∆, k + 1, k + 2 = A * T (C , k + 1, k + 2) = T (A * C , k + k + 1, k + k + 2) ∈ G Thus f ∈ G, so F ⊆ G." }, { "formula_coordinates": [ 36, 126, 424.59, 441.73, 212.89 ], "formula_id": "formula_85", "formula_text": "T LC A * C, µ 1 , . . . , µ d-1 (ī) = T LC A * C (ī), µ 1 , . . . , µ d-1 = T M A * C (ī) ⊗ , (µ 1 , k + k + 1), . . . , (µ d-1 , k + k + d -1) = T M     ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m)   ⊗ , (µ 1 , k + k + 1), . . .   = T M   ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) ⊗ , (µ 1 , k + k + 1), . . . 
  = T M   ā∈[-m,m] d A(ī -ā) ⊗ C(ā + m) ⊗ E(ā + m), (µ 1 , k + k + 1), . . .   = T M   ā∈[-m,m] d A(ī -ā) ⊗ C ⊗ E (ā + m), (µ 1 , k + k + 1), . . .   = T M A * C ⊗ E , (µ 1 , k + k + 1), . . . , (µ d-1 , k + k + d -1) (ī)" }, { "formula_coordinates": [ 36, 213.22, 692.36, 185.55, 10.81 ], "formula_id": "formula_86", "formula_text": "(g • E)(ī) = g • E(g -1 • ī) = g • = = E(ī)" }, { "formula_coordinates": [ 36, 126, 728.47, 360, 23.13 ], "formula_id": "formula_87", "formula_text": "g • C ⊗ E = g • C ⊗ g • E = C ⊗ E. Thus C ⊗ E ∈ A M,d,k +d,+ is B d -invariant so f ∈ G." }, { "formula_coordinates": [ 37, 137.75, 149.05, 53, 8.74 ], "formula_id": "formula_88", "formula_text": "(f * φ)(x) =" }, { "formula_coordinates": [ 37, 254.22, 316.8, 231.78, 9.65 ], "formula_id": "formula_90", "formula_text": "e i ⊗ e i = +1 (i ≤ p),(45)" }, { "formula_coordinates": [ 37, 213.56, 428.11, 272.44, 35.12 ], "formula_id": "formula_93", "formula_text": "f * φ(x) = y∈(Z/N Z) d c j=1 f j (y) ⊗ φ j (y -x) ∈Clp,q(R) ,(48)" }, { "formula_coordinates": [ 37, 216.86, 504.97, 269.14, 24.74 ], "formula_id": "formula_94", "formula_text": "T (R d ) = k≥0 R d ⊗ . . . ⊗ R d k times = k≥0 (R d ) ⊗k ,(49)" }, { "formula_coordinates": [ 38, 233.71, 157, 252.29, 30.99 ], "formula_id": "formula_95", "formula_text": "κ * f (x) = 1 |G| y∈G κ(x -1 y) f (y) ∈V .(50)" }, { "formula_coordinates": [ 38, 226.04, 534, 259.96, 23.98 ], "formula_id": "formula_97", "formula_text": "P -1 Φ d,k,p (g) P = π∈ Ĝ m d,k,p (π) π(g)(53)" }, { "formula_coordinates": [ 38, 263.83, 585.2, 222.17, 9.65 ], "formula_id": "formula_98", "formula_text": "C : T d,k,p → T d ,k ,p(54)" }, { "formula_coordinates": [ 39, 198.71, 206.55, 282.87, 11.37 ], "formula_id": "formula_99", "formula_text": "π (t r) f (y) = ρ(r) [f ((t r) -1 y)], t r ∈ G, y ∈ Z 2 , (56" }, { "formula_coordinates": [ 39, 481.57, 208.62, 4.43, 8.74 ], "formula_id": "formula_100", "formula_text": ")" }, { "formula_coordinates": [ 39, 219.92, 393.1, 266.08, 9.65 ], "formula_id": "formula_101", "formula_text": "ρ X (g) f * G Ψ(x) -f * G Ψ(ρ Y (y) x < .(57)" } ]
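The contraction identities recorded in the formula list above reduce tensor order by pairing indices with the Kronecker delta, and the Levi-Civita contraction additionally flips parity. A small NumPy sketch of the two operations for d = 2 follows; the axis conventions and function names are our own assumptions for illustration and are not taken from the paper's code.

```python
import numpy as np

def contract(a, mu, nu):
    """Kronecker-delta contraction T(a, mu, nu): sum over a pair of tensor axes."""
    return np.trace(a, axis1=mu, axis2=nu)

def levi_civita_2d():
    """The Levi-Civita symbol epsilon_{ij} for d = 2."""
    eps = np.zeros((2, 2))
    eps[0, 1], eps[1, 0] = 1.0, -1.0
    return eps

def levi_civita_contract_vector(v):
    """T_LC for a 1-tensor in d = 2: contract v with epsilon; the output transforms as a pseudovector."""
    return np.einsum("i,ij->j", v, levi_civita_2d())

if __name__ == "__main__":
    a = np.arange(16.0).reshape(2, 2, 2, 2)                   # a 4-tensor in d = 2
    print(contract(a, 0, 2).shape)                            # (2, 2): two axes removed
    print(levi_civita_contract_vector(np.array([1.0, 0.0])))  # [0. 1.]
```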
10.18653/v1/2021.naacl-main.105
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b46", "b20", "b14", "b51", "b6", "b0", "b54", "b51", "b48", "b50", "b31", "b6", "b39", "b49", "b34", "b45", "b45", "b5", "b29", "b25", "b21", "b33", "b24", "b42", "b11", "b47", "b51", "b25", "b34" ], "table_ref": [], "text": "Question answering using structured knowledge source is a critical function of information retrieval systems that act as an interface between humans and vast structured data repositories. Extracting and aggregating information accurately is a fundamental requirement of these systems and is thus a primary goal in their design. In recent years, the neural symbolic design approach (Berant et al., 2013;Yao and Van Durme, 2014;Liang et al., 2017;Gardner et al., 2018;Yu et al., 2018;Cheng et al., 2023) has become the preferred choice for such systems for two main reasons. First, neural models have inherent limitations, including a limited working memory that is costly to access during inference and a long-term memory that is unreliable to read from or write to, making it impractical to have them directly read from large-scale knowledge sources. Second, understanding how a system decides which information to retrieve and how to aggregate it is crucial for assessing its reliability and robustness.\nRecent investigations have demonstrated the effectiveness of the neural symbolic approach in producing transparent reasoning process in formal language sequence (such as Text-to-SQL) for question answering tasks based on databases or knowledge graphs (Berant et al., 2013;Zhong et al., 2017;Yu et al., 2018;Yin and Neubig, 2018;Yu et al., 2019;Ren et al., 2021;Cheng et al., 2023). A typical system comprises a neural semantic parsing module that translates user queries in natural language to formal language sequences (e.g., logical forms or executable code) and a symbolic reasoner module, such as database management system (DBMS), that executes the code on structured knowledge sources to extract the result. The primary objective of this work is to improve the semantic parsing module, as it is essential in extracting answers from relational databases using SQL as the formal language.\nCurrent semantic parsing modules can be broadly categorized based on their learning strategies. State-of-the-art systems involve fine-tuning a pretrained language models on a large corpus of {question, SQL} pairs, enabling the model to generate code (Wang et al., 2020;Yin et al., 2020;Scholak et al., 2021;Xie et al., 2022). Alternatively, the in-context learning (ICL) approach exploits the inherent capabilities of large language models (LLMs) to directly produce SQL code by providing a well-defined task prompt (Xie et al., 2022;Chen et al., 2022;Rajkumar et al., 2022;Ni et al., 2023). Existing research indicates that LLMs using prompt-based semantic parsing underperform their fine-tuned counterparts (Liu et al., 2023), while recent studies also suggest that performance of ICLtrained LLMs is significantly affected by the structure of the demonstration prompt (Liu et al., 2022a;Rubin et al., 2022;Lu et al., 2022;Wei et al., 2022;Fu et al., 2023;Ye et al., 2023). This motivates us to examine various prompt configurations for semantic parsing tasks, taking advantage of the latest advancements of LLMs pertaining to our domain of interest.\nOur study focused on exploring various prompt design strategies for semantic parsing tasks in the Text-to-SQL domain. 
We conducted a systematic investigation into different demonstration example selection criteria and instruction formats on Text-to-SQL datasets. Specifically, we propose to employ an example's SQL syntactic structure as the basis for retrieving demonstrations, thereby facilitating a more accurate representation of the problem structure. Our approach revealed that selecting demonstration examples with a dual emphasis on diversity and similarity objectives yields maximized gain in performance. Our study also showed that LLMs benefit from database-related knowledge augmentation in certain circumstances. Through experiments, we identified the most effective strategy, which resulted in an Execution Accuracy score of 84.4 on the Spider dataset (Yu et al., 2018). This score is 2.5 points higher than the current state-of-the-art system (Ni et al., 2023) and 5.1 points higher than the best fine-tuned system (Scholak et al., 2021). These results demonstrate the effectiveness of our in-context learning scheme in adapting LLMs to our target task. Furthermore, we present the empirical findings and analysis on the factors that contributed to the success of our strategy.1 " }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "To design prompts for in-context learning in zeroshot or few-shot settings, it is important to find an optimal way to represent, augment, and arrange all resources in the input-output mapping. Additionally, the task instructions should be formulated to align with these resources. When few-shot learning is employed, the selection of a subset of demonstrations from a pool of annotated examples for each test instance is another critical design choice that can impact the ICL performance. We proposed enhancements for each of these components and evaluated them against existing methods." }, { "figure_ref": [], "heading": "Demonstration Selection", "publication_ref": [], "table_ref": [], "text": "The goal is to select a subset of annotated examples from a pool that offers the best context for solving the test problem. While random selection from the pool is one option, Liu et al. (2022a) proposed kNN-augmented example selection (KATE), which retrieves k nearest neighbors from the pool based on the input of the compared instances. To achieve this, all the pool instances are first transformed into continuous vectors using a sentence encoder. During inference, the input of a test instance is projected into a latent space using the same encoder and then compared to the pool of vectors using a similarity measure, such as negative Euclidean distance or cosine similarity. Finally, the top k most similar annotated examples are selected from the pool." }, { "figure_ref": [], "heading": "Structured Prediction as Basis for Retrieval", "publication_ref": [], "table_ref": [], "text": "We propose utilizing the output SQL queries to select the demonstration examples, rather than using the input questions. This is because, unlike many tasks where the output is a classification label or extracted entity with little information about the problem structure, Text-to-SQL demands structured prediction which contains more explicit information about the problem structure than that provided in the input question. Furthermore, unlike natural language questions that can only be converted into continuous semantic vectors, SQL queries can be easily transformed into discrete feature vectors based on their syntax, making their comparison more efficient and transparent. 
To implement our proposal, we begin by converting the SQL queries of all pool instances into discrete syntax vectors. This is done by parsing the queries and identifying their syntactic elements, including keywords, operators, and identifiers. These elements are then mapped to binary features that indicate their presence in the query. During inference, we first generate a draft of the SQL query using a preliminary predictor. We then apply the same process to convert this draft query into a discrete vector, which is used to represent the test instance for retrieving demonstration examples." }, { "figure_ref": [], "heading": "Balancing Diversity and Similarity", "publication_ref": [ "b51" ], "table_ref": [], "text": "We propose a new demonstration selection strategy that differs from Liu et al. (2022a), which retrieves the most similar examples with continuous-valued measurements for each test instance. In contrast, our strategy seeks to balance similarity and diversity of the demonstrations. This is achieved by changing the representation of the given example from a continuous-valued vector denoting the question semantics to a discrete-valued vector that captures the SQL syntax. To obtain demonstration examples that are similar to the given example, we first split the pool of annotated examples into disjoint partitions that represent different categories. Specifically, we use the difficulty-level based categorization derived from the Spider dataset (Yu et al., 2018) " }, { "figure_ref": [], "heading": "Schema Representation in Instruction", "publication_ref": [ "b9" ], "table_ref": [], "text": "Instructions are crucial to designing prompts, as they define the task by clarifying how provided resources can aid the inference process (Dong et al., 2023). Our primary focus lies in determining the optimal way to represent a structured knowledge source within the instruction and identifying supplementary resources that can enhance the inference process." }, { "figure_ref": [], "heading": "Linearization of Structured Knowledge", "publication_ref": [ "b45" ], "table_ref": [], "text": "We begin by altering the linearization of structured knowledge. In prior research (Xie et al., 2022), structured knowledge sources such as databases or tables have been linearized into a \"text\" sequence. Instead, we propose representing the database using a \"code\" sequence, specifically the CREATE query employed to construct the table initially, as illustrated in listing 1 and 2 of the Appendix. This linearization approach provides data type information for each column and encompasses all foreign key constraint details within the database. More- T i .SQL = initial_predictor(T i ); c i = get_category(T i .SQL); P i = build_prompt(D c i , T i ); end return P over, we modify other resources in the instructions, such as the question and example entries in the database, to conform to the code sequence style by appending them as comments." }, { "figure_ref": [ "fig_10" ], "heading": "Schema-related Knowledge Augmentation", "publication_ref": [], "table_ref": [], "text": "The ontology of a database delineates the structure and semantics of the database by offering definitions for a set of classes (tables), their attributes (columns), and the relationships among them. We initially enhance the semantics of each class and attribute by elaborating on their meanings within the context of the entire database. 
Specifically, we employ OpenAI's gpt-3.5-turbo engine 2 to generate a natural language definition for each column in every table, considering all its values and other columns. We then incorporate these definitions into the input either by appending them as a block comment or inserting them within the CREATE query as inline comments. Furthermore, we suggest augmenting the representation of the database structure by providing an Entity-Relationship summary that outlines the connections between tables and specifies how they can be joined. As depicted in Figure 9 of the Appendix, an Entity-Relationship diagram of a database is utilized to enumerate all possible paths between distinct tables. These paths are subsequently arranged in descending order based on their respective lengths. The resulting summary has shown to be useful in our experiments for test instances where multiple tables need to be combined. Listing 5 further demonstrates our augmentations and how we arrange them to construct the prompt." }, { "figure_ref": [], "heading": "Integrated Strategy for Text-to-SQL", "publication_ref": [ "b40" ], "table_ref": [], "text": "Upon examination, we found that models trained with ICL exhibit sensitivity to the number of demonstration examples, resulting in noticeable variance in performance across models provided with various numbers of demonstrations. To establish substantial conclusions when comparing distinct prompting approaches, we present the mean and standard deviation for models sharing identical configurations except for the varying number of demonstrations. In addition, we employ a majority vote on these models exhibiting diverse performances. Specifically, we obtain the execution results of different models' greedy decoding predictions, eliminate those with execution errors by deterministic database management system (DBMS), and choose the prediction that receives the majority vote. Alternative integration methods, such as the self-consistency sampling (Wang et al., 2023), are also available, but we reserve their exploration for future research. The comprehensive results are available in Figures 10, 11, 12 of the Appendix for reader's perusal.\nWe propose the following procedure for constructing prompts for the Text-to-SQL task. Given a set A of annotated examples, we first establish a categorization that divides the pool into disjoint partitions A α , A β , . . . ,, with each partition containing examples whose SQL queries share a relatively similar syntax structure. Next, we apply the k-Means strategy detailed in Section 2.1 to obtain \nP n i = build_prompt(D c i [: n], T i ); P n * i = augment_schema(P n i ); SP n i = Model(P n * i ); ER n i = DBMS(SP n i ); end ER * i = Remove_Exec_Errors(ER i ); SP i = Majority_Vote(ER * i ); end return SP\ndiverse demonstration examples D j for partition A j . For each example, the demonstration is constructed by transforming the database into multiple CREATE queries and augmenting with schemarelated knowledge. During inference, we employ a preliminary model to generate a draft SQL query, which is used to determine the problem category and thus the corresponding D j for building the prompt. We obtain multiple predictions using various numbers of shots in D j and perform majority voting to arrive at the final prediction. Details of this approach are shown in Algorithm 2." 
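As a complement to the description above, the sketch below mirrors the integrated procedure of Algorithm 2 in plain Python. It is a schematic reconstruction rather than the authors' code: the LLM call, the execution engine, the difficulty categorizer, and the per-category demonstration lists are treated as injected dependencies, and every name is a placeholder.

```python
from collections import Counter
from typing import Callable, Dict, List, Sequence

def integrated_text_to_sql(
    question: str,
    schema_prompt: str,                          # CREATE-query style schema plus augmentations
    demos_by_category: Dict[str, List[str]],     # category -> ordered demonstration strings
    draft_model: Callable[[str], str],           # preliminary predictor producing a draft SQL
    llm: Callable[[str], str],                   # main model: full prompt -> SQL prediction
    execute: Callable[[str], object],            # DBMS call: SQL -> result, raises on error
    categorize: Callable[[str], str],            # draft SQL -> difficulty category
    shot_range: Sequence[int] = range(1, 11),
) -> str:
    # 1) Use a draft prediction to pick the demonstration partition.
    draft_sql = draft_model(schema_prompt + "\n-- Question: " + question)
    demos = demos_by_category[categorize(draft_sql)]

    # 2) One prediction per shot count; keep only predictions that execute.
    executable: Dict[str, object] = {}
    for n in shot_range:
        prompt = "\n\n".join(demos[:n] + [schema_prompt, "-- Question: " + question])
        sql = llm(prompt)
        try:
            executable[sql] = execute(sql)
        except Exception:
            continue  # the deterministic DBMS filters out invalid predictions

    if not executable:
        return draft_sql  # nothing executed; fall back to the draft

    # 3) Majority vote over execution results; return one SQL that produced the winner.
    winner = Counter(str(r) for r in executable.values()).most_common(1)[0][0]
    return next(sql for sql, r in executable.items() if str(r) == winner)
```

In practice `llm` would wrap the OpenAI completion API used in the experiments and `execute` the evaluation DBMS; keeping them behind callables makes the voting logic easy to test in isolation.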
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b51", "b3", "b25" ], "table_ref": [], "text": "Dataset We conduct comprehensive experiments on the following four semantic parsing datasets:\n• Spider (Yu et al., 2018) Model We evaluate different ICL strategies with Codex (Chen et al., 2021), a GPT-3 variant that was finetuned on code data on the web and has demonstrated state-of-the-art performance as the time of writing (Ni et al., 2023). Specifically, we use the code-davinci-002 engine and present the results of systems with prompts ranging from 1 to 10-shot. Additionally, we report the few-shot results utilizing the ChatGPT (gpt-3.5-turbo) model. However, due to its maximum context length limitation of 4096, we only obtain results for systems provided with prompts ranging from 1 to 5-shot. 3 Evaluation Metric We use execution accuracy as the evaluation metric for all experiments, which measures the percentage of system predictions leading to the gold execution result.\nBaselines We compare the following prompting strategies for generating SQL queries in few-shot and zero-shot settings. 3 Public API available at https://openai.com/api/." }, { "figure_ref": [], "heading": "Zero-shot", "publication_ref": [], "table_ref": [], "text": "• Baseline -DB as text-seq: Standard prompt for Text-to-SQL task, where structured knowledge is linearized as text sequence. • Baseline -DB as code-seq: Improve instructions by linearizing structured knowledge source as multiple SQL CREATE queries. • Baseline -DB as code-seq + SA: Enhance instructions with schema knowledge." }, { "figure_ref": [ "fig_2", "fig_6" ], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "In this section, we present a comprehensive analysis of various prompting strategies, assessing their efficacy across multiple datasets. The evaluation of demonstration sampling strategies in a fewshot setting testing on code-davinci-002 is illustrated in Figure 1a, and more few-shot results of gpt-3.5-turbo are shown in Figure 2. We compared different demonstration selection strategies, including random selection, k-nearest neighbors selection (similarity sampling)4 , k-means selection (diversity sampling), and our proposed approach, which combines both similarity and diversity. Moreover, we examined the impact of augmenting schema representation within the task instructions and assessed the performance of our integrated strategy. Our findings indicate that employing similarity and diversity objectives in the sampling process leads to better performance on average across all datasets. Furthermore, incorporating schema representation within the instructions enhances performance, and the implementation of voting of models with different shot results in a marked improvement in overall performance. The efficacy of schema augmentation is further supported by experiments in a zero-shot setting, as illustrated in Figure 1b. We compared systems using different linearization methods for prompts: one that transforms the database into a text sequence, and another that uses multiple CREATE queries to represent the database. The latter method shows noticeable improvement in performance. 
We also contrasted two separate techniques for augmenting schema representation: one that adds semantic information to each column within each table, and another that incorporates entity-relationship knowledge into the schema. The results suggest that structural augmentation (add ontology summary) brings a slightly greater improvement in the few-shot setting for Codex (shown in Figure 6), while semantic augmentation (add column summary as block comments) proves more beneficial in the zero-shot setting for Codex and also in the few-shot setting for ChatGPT (gpt-3.5-turbo). We hypothesize that this difference arises from the less descriptive nature of structural augmentation, which requires more demonstrations to be understood and used effectively. In future work, we will explore how to adjust structural schema augmentation to better align with the zero-shot setting." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "Prediction-Syntax based Retrieval", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "The existing method for selecting demonstrations relies on the semantic representations of the question and the database. We propose an alternative method, specific to code generation tasks, that focuses on the syntax of the solution code. We examined the syntax coverage and syntax similarity of the prompts produced with different strategies. Syntax coverage is computed by counting the occurrences of syntactic elements (keywords, operators, and identifiers) and dividing by the total number of syntactic elements. Syntax similarity, in turn, is measured as the mean Euclidean distance between the discrete vector representation of the predicted SQL and the vectors representing the gold SQLs of the selected demonstrations. As indicated in Table 1, both metrics contribute to the quality of the selected examples. Furthermore, a simple summation of the two measurements correlates with system performance, as illustrated in Figure 3. We attribute the efficacy of our strategy to the following rationale: (1) when the pool of annotated examples covers only a limited diversity of problem structures, certain test problems may lack similar examples to retrieve; and (2) neither the semantic representation of the question/database nor the distance metric inherently supports encapsulating and comparing problem structures, whereas SQL syntax measures problem structure directly. Given these constraints, the optimal strategy is to select similar examples while covering as many syntactic elements as feasible, so as to mitigate potential failures of similarity-based retrieval."
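For concreteness, the two measures can be computed roughly as in the sketch below; the element inventory and function names are assumptions of ours, and the exact implementation behind Table 1 may differ.

```python
# Rough sketch (assumed element list): syntax coverage and syntax similarity of
# the demonstrations selected for a prompt.
import re
import numpy as np

ELEMENTS = ["select", "from", "where", "join", "group", "by", "order", "having",
            "limit", "distinct", "count", "sum", "avg", "min", "max", "like",
            "in", "not", "union", "intersect", "except"]

def syntax_vector(sql: str) -> np.ndarray:
    tokens = set(re.findall(r"[a-zA-Z_]+", sql.lower()))
    return np.array([1.0 if e in tokens else 0.0 for e in ELEMENTS])

def syntax_coverage(demo_sqls) -> float:
    """Share of tracked syntactic elements that occur in at least one demonstration."""
    covered = np.clip(np.sum([syntax_vector(s) for s in demo_sqls], axis=0), 0, 1)
    return float(covered.sum() / len(ELEMENTS))

def syntax_similarity(predicted_sql: str, demo_sqls) -> float:
    """Mean Euclidean distance between the predicted SQL's syntax vector and the
    demonstrations' vectors (smaller distance means higher similarity)."""
    p = syntax_vector(predicted_sql)
    return float(np.mean([np.linalg.norm(p - syntax_vector(s)) for s in demo_sqls]))
```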
}, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Comparative Analysis of Retrieval Methods", "publication_ref": [], "table_ref": [], "text": "We conducted an examination of various similaritybased retrieval methods and presented a comparative analysis of their performance in Figure 4. The primary variable in this investigation was the 4) embeddings that encode questions, databases and predicted SQL using text-embedding-ada-002.\nThe following conclusions can be drawn Additionally, we conducted a comparison between multiple embeddings utilized for diversitybased demonstration selection, encompassing embeddings that encode the semantics of questions, databases and predicted SQL, as well as embeddings that capture the syntactic features of predicted SQL. As depicted in Figure 5, the syntactic embeddings of predicted SQL serve as the most effective basis for contrasting different examples for diversity-based retrieval purposes." }, { "figure_ref": [ "fig_6" ], "heading": "Schema Augmentation", "publication_ref": [], "table_ref": [], "text": "Figure 6 presents the outcomes of various schema augmentations applied to the instruction. It is observed that improvement is not apparent in the fewshot setting; however, in the zero-shot setting, the semantic augmentation incorporating descriptions of all table columns proves to be beneficial." }, { "figure_ref": [ "fig_7" ], "heading": "Effectiveness Analysis", "publication_ref": [], "table_ref": [], "text": "In order to determine the problem types that benefit most or least from our proposed methods, we also evaluate the performance of different models across various problem categories within the Spider dataset. As indicated in Figure 7, our similaritydiversity strategy proves beneficial for most problem types, with the exception of the medium split, which includes the most diverse problems. This is the case where similarity-based retrieval fails and syntax coverage becomes more crucial. Furthermore, we observe that augmenting schema semantics is more effective for the easy and medium splits (albeit with high variance), while augmenting schema structure is more effective for more complex problems. This obvervation leads us to hypothesize that challenging problems necessitate addressing a higher number of tables, thus requiring a more comprehensive understanding of the entire database structure. Lastly, the integrated approach is effective across all examples, offering increased benefits especially for those difficult problems." }, { "figure_ref": [ "fig_8" ], "heading": "Preliminary Models", "publication_ref": [], "table_ref": [], "text": "To assess the impact of the choice of preliminary model used to generate the draft SQL on our approach, we conducted tests involving our methods for preliminary models with varying performance levels. Figure 8 reveals that the preliminary models have a relatively minor effect on the performance of the similarity-diversity or integrated approaches, exhibiting gradual improvements as higher-performing preliminary models are utilized.\n5 Related Work" }, { "figure_ref": [], "heading": "In-Context Learning", "publication_ref": [ "b1", "b27" ], "table_ref": [], "text": "Existing literature indicates the ability of large language models to adapt to new tasks at inference time by learning from a few example demonstrations (Brown et al., 2020;Radford et al., 2019). This new capability has been referred to as incontext learning. 
In this paper, we expand on previous works that investigate the optimal representations for prompt inputs." }, { "figure_ref": [], "heading": "Prompt Organization", "publication_ref": [ "b35", "b15", "b44", "b17", "b24", "b33", "b52", "b36" ], "table_ref": [], "text": "Prompt organization investigates the task of selecting and organizing in-context examples, a critical aspect of enhancing model performance. Several studies (Sorensen et al., 2022;Gonen et al., 2022;Wu et al., 2022;Hu et al., 2022;Lu et al., 2022) have proposed metrics to measure the suitability of examples with respect to the target objective and to determine the optimal ordering of them. Liu et al. (2022a) suggest selecting examples that are semantically similar to the test example by employing a k-NN approach in the embedding space. Rubin et al. (2022) train a prompt retriever based on contrastive learning, wherein examples are classified as either positive or negative if they are ranked among the top-k or bottom-k probabilities of a language model generating the target output, conditioned on the retrieved example and the input. Zhang et al. (2022) suggests to actively select demonstrations using Q-Learning. Su et al. (2023) introduces the Vote-k approach to selectively annotate diverse and representative examples for pool construction, then retrieve based on the similarity. In contrast, our approach retrieve a diverse set of examples given a pre-established pool. As the authors demonstrate that having a diverse and representative pool is important for the success of ICL, we posit that a similar characteristic is equally important when composing the prompt, as this approach increases the likelihood of including various syntactical usages or similar problem structures within the prompt." }, { "figure_ref": [], "heading": "Prompt Formatting", "publication_ref": [ "b7", "b19", "b40", "b26", "b55" ], "table_ref": [], "text": "Prompt engineering is concerned with investigating the impact of prompt structure on downstream task performance. For tasks that involve multi-step reasoning and higher complexity, Chain-of-thought prompting has been developed (Wei et al., 2023;Kojima et al., 2023). This approach involves laying out the generation process over multiple steps and using the model's own intermediate process as input. Wang et al. (2023) proposes to sample multiple different chain-of-thoughts then selects the most consistent answer through marginalization of all possible reasoning paths. Press et al. (2023) suggests that prompting LLMs to ask follow-up questions is an effective way to construct the chainof-thoughts process. Zhou et al. (2023) proposes an automatic approach to identify the optimal prompt by searching over a pool of model generated instructions, assigning scores to them, and selecting the prompt with the highest score." }, { "figure_ref": [], "heading": "Table-related task Encoding", "publication_ref": [ "b16", "b18", "b53", "b4", "b51", "b8" ], "table_ref": [], "text": "Encoding structured data is fundamental for various table-related tasks, including Table QA and Textto-SQL. In the case of Table QA, a commonly used method is to first employ a weakly-supervised table parser to extract relevant table cells and, optionally, apply a corresponding aggregation operator to the retrieved data. For example, TAPAS (Herzig et al., 2020) incorporates additional embedding layers into a BERT model to capture both the table structure and numerical information. 
To obtain an answer for a given question, TAPAS uses two classification layers that predict aggregation functions and corresponding table cells. More recent works (Liu et al., 2022b;Jiang et al., 2022;Zhao et al., 2022;Chen, 2023) Text-to-SQL is a task that aims to convert natural language questions into SQL queries that can be executed on a database (Yu et al., 2018;Gan et al., 2021b;Deng et al., 2021). In this task, structured data in the form of a table schema is also provided as input. The encoder should be able to align entity mentions in the NL question to the schema, and also understand schema structure information (e.g., foreign/primary keys and the column types)." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this study, we investigated various prompt design approaches for semantic parsing tasks in the Text-to-SQL domain. We proposed an approach that leverages an example's SQL syntactic structure for demonstration examples selection, emphasising both diversity and similarity as the sampling objectives. Additionally, We found that LLMs gain benefits from database-related knowledge augmentations. Future research can build upon our findings to examine the transferability of our approach to other domains. Through ongoing improvement of LLMs' capabilities in semantic parsing, we aim to contribute to the development of QA systems that are more accurate, robust and comprehensible." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b38", "b37", "b7" ], "table_ref": [], "text": "One of the main limitations of this study is the reproducibility problem. The experiments presented in this paper relied on the use of OpenAI APIs, which were available at the time of our research but have since been or will be deprecated. This means that the results of our experiments cannot be replicated using the same APIs, which hinders the reproducibility of our findings. To address this limitation, we will focus on providing experiments results that are based on open-sourced LLMs (Touvron et al., 2023;Taori et al., 2023;Chiang et al., 2023) for greater transparency and reproducibility.\nAnother limitation is that it is not clear how our approach will benefit LLMs given smaller or more constrained pools of annotated examples. Although we postulate that our approach offers the advantage of providing a prompt with maximal coverage of similar problem structures when identically structured problems cannot be found in the pool, we could not substantiate this due to our limited budget and access to the OpenAI APIs. select t1 . total_points from gymnast as t1 join people as t2 on t1 . gymnast_id = t2 . people_id order by t2 . age asc limit 1\nListing 1: Baseline prompt with text representation of the database. 
{ department : { Department_ID : a unique identifier for a department , Name : the name of the department , Creation : the date the department was created , Ranking : the ranking of the department within the organization , Budget_in_Billions : \" the department s budget in billions of dollars \", Num_Employees : the number of employees in the department } , head : { head_ID : a unique identifier for the head of a department , name : the name of the head of the department , born_state : the state where the head of the department was born , age : the age of the head of the department } , management : { department_ID : the unique identifier for the department being managed , head_ID : the unique identifier for the head of the department , temporary_acting : whether the head of the department is serving in a temporary or acting capacity }} */ Listing 5: Prompt with structure augmentation of the schema. " } ]
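Complementing Listing 5 and the join paths in Figure 9, the sketch below outlines one plausible way to derive such an entity-relationship summary from foreign-key constraints; the toy foreign-key list and all function names are illustrative assumptions rather than the authors' tooling.

```python
# Sketch: enumerate join paths between tables from foreign-key constraints and
# sort them by length (longest first) to form an entity-relationship summary.
from itertools import combinations

# Each foreign key: (child_table, child_column, parent_table, parent_column).
FOREIGN_KEYS = [
    ("countries", "continent", "continents", "contid"),
    ("car_makers", "country", "countries", "countryid"),
    ("model_list", "maker", "car_makers", "id"),
]

def build_graph(fks):
    graph = {}
    for child, ccol, parent, pcol in fks:
        label = f"{parent}.{pcol} -> {child}.{ccol}"
        graph.setdefault(child, []).append((parent, label))
        graph.setdefault(parent, []).append((child, label))
    return graph

def join_paths(graph, start, goal):
    """All simple paths between two tables, each as a list of join-edge labels."""
    paths, stack = [], [(start, [start], [])]
    while stack:
        node, visited, edges = stack.pop()
        if node == goal and edges:
            paths.append(edges)
            continue
        for nxt, label in graph.get(node, []):
            if nxt not in visited:
                stack.append((nxt, visited + [nxt], edges + [label]))
    return paths

def er_summary(fks):
    graph = build_graph(fks)
    all_paths = []
    for a, b in combinations(sorted(graph), 2):
        all_paths.extend(join_paths(graph, a, b))
    all_paths.sort(key=len, reverse=True)  # descending length, as in the paper
    return [", ".join(p) for p in all_paths]

if __name__ == "__main__":
    for line in er_summary(FOREIGN_KEYS):
        print(line)
```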
In-context learning (ICL) has emerged as a new approach to various natural language processing tasks, utilizing large language models (LLMs) to make predictions based on context that has been supplemented with a few examples or task-specific instructions. In this paper, we aim to extend this method to question answering tasks that utilize structured knowledge sources, and improve Text-to-SQL systems by exploring various prompt design strategies for employing LLMs. We conduct a systematic investigation into different demonstration selection methods and optimal instruction formats for prompting LLMs in the Text-to-SQL task. Our approach involves leveraging the syntactic structure of an example's SQL query to retrieve demonstrations, and we demonstrate that pursuing both diversity and similarity in demonstration selection leads to enhanced performance. Furthermore, we show that LLMs benefit from database-related knowledge augmentations. Our most effective strategy outperforms the state-of-the-art system by 2.5 points (Execution Accuracy) and the best fine-tuned system by 5.1 points on the Spider dataset. These results highlight the effectiveness of our approach in adapting LLMs to the Text-to-SQL task, and we present an analysis of the factors contributing to the success of our strategy.
Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies
[ { "figure_caption": "Few- shot •shotRandom sampling (R): Select demonstration examples randomly from the pool. • Similarity sampling (S) • Diversity sampling (D): Select diverse examples from k-Means clusters of the pool. • Similarity-Diversity sampling (SD): Select examples based on Algorithm 1. • SD + schema augmentation (SA): Enhance instructions with schema knowledge (semantic augmentation or structure augmentation). • SD + SA + Voting: Integrated strategy described in Algorithm 2.", "figure_data": "", "figure_id": "fig_0", "figure_label": "shot", "figure_type": "figure" }, { "figure_caption": "Figure1: Few-shot and zero-shot results of Codex for all datasets. In the few-shot setting, error bars indicate means and standard deviations over performances of systems provided with prompts ranging from 4-shot to 10shot. To obtain the error bars for the random sampling approach, we conducted 3 independent runs using different random seeds. Schema augmentation utilized for the reported results in (a) is structure augmentation -add ontology summary. In the zero-shot setting, the error bars indicate means and standard deviations over 3 independent runs. Our results suggest that 1) using similarity and diversity objectives in the sampling process, 2) including schema representation in instructions, and 3) employing model voting with different shot outcomes both contribute to the improvement of ICL performance.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Few-shot results of gpt-3.5-turbo for Spider. Error bars indicate means and standard deviations over performances of systems provided with 1-shot to 5-shot prompts. Schema augmentation utilized for the reported results is semantic augmentation -add column summary as block-comment.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Correlation between syntax coverage and similarity measures of prompts and execution accuracy.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison between various similarity based demonstration selection methods. Q indicates the embedding model employed to extract representation for the question; D stands for database, and S stands for SQL query.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Comparison between various diversity based demonstration selection methods.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparison between various schema augmentations in few-shot and zero-shot settings.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Effects of various prompting strategies on Text-to-SQL problems of different difficulty levels.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Effects of preliminary model on proposed strategies.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "29/*Answer the following : What are the distinct creation years of the departments managed by a secretary born in state Alabama ? */ select distinct t1 . 
creation from department as t1 join management as t2 on t1 . department_id = t2 . department_id join head as t3 on t2 . head_id = t3 . head_id where t3 . born_state = Alabama Listing 4: Prompt with semantic augmentation of the schema as block comment. continents.contid -> countries.continent, countries.countryid -> car_makers.country, car_makers.id -> model_list.maker, model_list.model -> car_names.model, car_names.makeid -> cars_data.id employee.emp_num -> department.emp_num, department.dept_code -> course.dept_code, course.crs_code -> class.crs_code, class.class_code -> enroll.class_code department.dept_code -> student.dept_code, student.stu_num -> enroll.stu_num employee.emp_num -> class.prof_num employee.emp_num -> professor.emp_num department.dept_code -> professor.dept_code", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Examples of schema structure representation construction.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Few-shot results for comparing different sampling strategies with different number of demonstration examples selected for the prompt.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Few-shot results for comparing different schema representation augmentation methods with different number of demonstration examples selected for the prompt.", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Few-shot results for comparing different sampling strategies on Text-to-SQL problems of different difficulty levels, with different number of demonstration examples selected for the prompt.", "figure_data": "", "figure_id": "fig_13", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Algorithm 2: Integrated Strategy Input: Set of annotated examples A, test examples T , # demonstrations k, categorization {α, β, ...}, and from Algorithm 1: disjoint partitions {A α , A β , ...} and corresponding demonstrations {D α , D β , ...} Result: Set of SQL predictions SP, where SP i is the final prediction for test example T i for T i in test set T do T i .SQL = initial_predictor(T i ); c i = get_category(T i .SQL); for n = 4 to k do", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Average syntax coverage and similarity measures of the prompt for different demonstration selection strategies and the corresponding execution accuracies.", "figure_data": "Cov.Sim.ScoreRandom0.380.2476.03Similarity0.350.3078.33Diversity0.430.2378.64Similarity-Diversity0.500.2680.32", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "have considered Table QA as a se-quence generation task. They flatten the table into a text sequence and use special tokens to indicate the table structure while encoding the tabular data.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Listing 2: Baseline prompt with code representation of the database. 
Prompt with semantic augmentation of the schema as inline comment.", "figure_data": "1 2 3 4 5 6 7 8 9 10 11 13 14 15 16 17 18 19 20 21 22 23 24 25 26 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28/* Given the following database schema : */ CREATE TABLE IF NOT EXISTS \" gymnast \" ( \" Gymnast_ID \" int , \" Floor_Exercise_Points \" real , \" Pommel_Horse_Points \" real , \" Rings_Points \" real , \" Vault_Points \" real , \" Parallel_Bars_Points \" real , \" Horizontal_Bar_Points \" real , \" Total_Points \" real , PRIMARY KEY (\" Gymnast_ID \") , \" head_ID \" int , --a unique identifier for the head of a department \" name \" text , --the name of the head of the department \" born_state \" text , --the state where the head of the department was born \" age \" real , --the age of the head of the department PRIMARY KEY (\" head_ID \") ) ; CREATE TABLE IF NOT EXISTS \" management \" ( \" department_ID \" int , --the unique identifier for the department being managed \" head_ID \" int , --the unique identifier for the head of the department \" temporary_acting \" text , --whether the head of the department is serving in a temporary or acting capacity PRIMARY KEY (\" Department_ID \" , \" head_ID \") FOREIGN KEY (\" Department_ID \") REFERENCES department ( \" Department_ID \") FOREIGN KEY (\" head_ID \") REFERENCES head ( \" head_ID \") ) ; /* Answer the following : What are the distinct creation years of the departments managed by a secretary born in state Alabama ? */ select distinct t1 . creation from department as t1 join management as t2 on t1 . department_id = t2 . department_id join head as t3 on t2 . head_id = t3 . head_id where t3 . born_state = Alabama \" Department_ID \" int , \" Name \" text , \" Creation \" text , \" Ranking \" int , \" Budget_in_Billions \" real , \" Num_Employees \" real , PRIMARY KEY (\" Department_ID \") ) ; CREATE TABLE IF NOT EXISTS \" head \" ( \" head_ID \" int , \" name \" text , \" born_state \" text , \" age \" real , PRIMARY KEY (\" head_ID \") ) ; CREATE TABLE IF NOT EXISTS \" management \" ( \" department_ID \" int , \" head_ID \" int , \" temporary_acting \" text , PRIMARY KEY (\" Department_ID \" ,\" head_ID \") , FOREIGN KEY (\" Department_ID \") REFERENCES department ( \" Department_ID \") , FOREIGN KEY (\" head_ID \") REFERENCES head ( \" head_ID \") ) ; /* Table column descriptions : Listing 3: CREATE TABLE IF NOT EXISTS \" department \" (21) ;22/* Answer the following : Return the total points of the gymnast with the lowest age .*/select t1 . total_points from gymnast as t1 join people as t2 on t1 . gymnast_id = t2 .people_id order by t2 . 
age asc limit 1", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "TABLE IF NOT EXISTS \" continents \" (", "figure_data": "3\" ContId \" INTEGER PRIMARY KEY ,4\" Continent \" TEXT5) ;6CREATE TABLE IF NOT EXISTS \" countries \" (7\" CountryId \" INTEGER PRIMARY KEY ,8\" CountryName \" TEXT ,9\" Continent \" INTEGER ,10FOREIGN KEY ( Continent ) REFERENCES continents ( ContId )11) ;12CREATE TABLE IF NOT EXISTS \" car_makers \" (13\" Id \" INTEGER PRIMARY KEY ,14\" Maker \" TEXT ,15\" FullName \" TEXT ,16\" Country \" TEXT ,17FOREIGN KEY ( Country ) REFERENCES countries ( CountryId )18) ;19CREATE TABLE IF NOT EXISTS \" model_list \" (20\" ModelId \" INTEGER PRIMARY KEY ,21\" Maker \" INTEGER ,22\" Model \" TEXT UNIQUE ,23FOREIGN KEY ( Maker ) REFERENCES car_makers ( Id )2425) ;42) ;4344/*select count (*) from continents ;", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
Linyong Nan; Yilun Zhao; Weijin Zou; Narutatsu Ri; Jaesung Tae; Ellen Zhang; Arman Cohan; Dragomir Radev
[ { "authors": "Jonathan Berant; Andrew Chou; Roy Frostig; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Semantic parsing on Freebase from question-answer pairs", "year": "2013" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b2", "title": "", "year": "" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman; Alex Ray; Raul Puri; Gretchen Krueger; Michael Petrov; Heidy Khlaaf; Girish Sastry; Pamela Mishkin; Brooke Chan; Scott Gray; Nick Ryder; Mikhail Pavlov; Alethea Power; Lukasz Kaiser; Mohammad Bavarian; Clemens Winter; Philippe Tillet; Felipe Petroski Such; Dave Cummings; Matthias Plappert; Fotios Chantzis; Elizabeth Barnes; Ariel Herbert-Voss; William Hebgen Guss; Alex Nichol; Alex Paino; Nikolas Tezak; Jie Tang; Igor Babuschkin; Suchir Balaji; Shantanu Jain; William Saunders; Christopher Hesse; Andrew N Carr; Jan Leike; Josh Achiam; Vedant Misra; Evan Morikawa; Alec Radford; Matthew Knight; Miles Brundage; Mira Murati; Katie Mayer; Peter Welinder; Bob Mcgrew; Dario Amodei; Sam Mccandlish; Ilya Sutskever; Wojciech Zaremba", "journal": "", "ref_id": "b3", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Wenhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Large language models are few(1)-shot table reasoners", "year": "2023" }, { "authors": "Wenhu Chen; Xueguang Ma; Xinyi Wang; William W Cohen", "journal": "", "ref_id": "b5", "title": "Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks", "year": "2022" }, { "authors": "Zhoujun Cheng; Tianbao Xie; Peng Shi; Chengzu Li; Rahul Nadkarni; Yushi Hu; Caiming Xiong; Dragomir Radev; Mari Ostendorf; Luke Zettlemoyer; Noah A Smith; Tao Yu", "journal": "", "ref_id": "b6", "title": "Binding language models in symbolic languages", "year": "2023" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b7", "title": "Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "Xiang Deng; Ahmed Hassan Awadallah; Christopher Meek; Oleksandr Polozov; Huan Sun; Matthew Richardson", "journal": "", "ref_id": "b8", "title": "Structure-grounded pretraining for text-to-SQL", "year": "2021" }, { "authors": "Qingxiu Dong; Lei Li; Damai Dai; Ce Zheng; Zhiyong Wu; Baobao Chang; Xu Sun; Jingjing Xu; Lei Li; Zhifang Sui", "journal": "", "ref_id": "b9", "title": "A survey on in-context learning", "year": "2023" }, { "authors": "Zhangyin Feng; Daya Guo; Duyu Tang; Nan Duan; Xiaocheng Feng; Ming Gong; Linjun Shou; Bing Qin; Ting Liu; Daxin Jiang; Ming Zhou", "journal": "Association 
for Computational Linguistics", "ref_id": "b10", "title": "Code-BERT: A pre-trained model for programming and natural languages", "year": "2020" }, { "authors": "Yao Fu; Hao Peng; Ashish Sabharwal; Peter Clark; Tushar Khot", "journal": "", "ref_id": "b11", "title": "Complexity-based prompting for multi-step reasoning", "year": "2023" }, { "authors": "Yujian Gan; Xinyun Chen; Qiuping Huang; Matthew Purver; John R Woodward; Jinxia Xie; Pengsheng Huang; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Towards robustness of text-to-SQL models against synonym substitution", "year": "2021" }, { "authors": "Yujian Gan; Xinyun Chen; Matthew Purver", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Exploring underexplored limitations of cross-domain text-to-SQL generalization", "year": "2021" }, { "authors": "Matt Gardner; Pradeep Dasigi; Srinivasan Iyer; Alane Suhr; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Neural semantic parsing", "year": "2018" }, { "authors": "Srini Hila Gonen; Terra Iyer; Noah A Blevins; Luke Smith; Zettlemoyer", "journal": "", "ref_id": "b15", "title": "Demystifying prompts in language models via perplexity estimation", "year": "2022" }, { "authors": "Jonathan Herzig; Krzysztof Pawel; Thomas Nowak; Francesco Müller; Julian Piccinno; Eisenschlos", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "TaPas: Weakly supervised table parsing via pre-training", "year": "2020" }, { "authors": "Yushi Hu; Chia-Hsuan Lee; Tianbao Xie; Tao Yu; Noah A Smith; Mari Ostendorf", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Incontext learning for few-shot dialogue state tracking", "year": "2022" }, { "authors": "Zhengbao Jiang; Yi Mao; Pengcheng He; Graham Neubig; Weizhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "OmniTab: Pretraining with natural and synthetic data for few-shot tablebased question answering", "year": "2022" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b19", "title": "Large language models are zero-shot reasoners", "year": "2023" }, { "authors": "Chen Liang; Jonathan Berant; Quoc Le; Kenneth D Forbus; Ni Lao", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision", "year": "2017" }, { "authors": "Aiwei Liu; Xuming Hu; Lijie Wen; Philip S Yu", "journal": "", "ref_id": "b21", "title": "A comprehensive evaluation of chatgpt's zeroshot text-to-sql capability", "year": "2023" }, { "authors": "Jiachang Liu; Dinghan Shen; Yizhe Zhang; Bill Dolan; Lawrence Carin; Weizhu Chen; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "What makes good in-context examples for GPT-3?", "year": "2022" }, { "authors": "Qian Liu; Bei Chen; Jiaqi Guo; Morteza Ziyadi; Zeqi Lin; Weizhu Chen; Jian-Guang Lou", "journal": "", "ref_id": "b23", "title": "TAPEX: Table pre-training via learning a neural SQL executor", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov; ; Yao Lu; Max Bartolo; Alastair Moore; Sebastian Riedel; Pontus Stenetorp", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": 
"Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity", "year": "2020" }, { "authors": "Ansong Ni; Srini Iyer; Dragomir Radev; Ves Stoyanov; Wen Tau Yih; Sida I Wang; Xi Victoria; Lin ", "journal": "", "ref_id": "b25", "title": "Lever: Learning to verify language-to-code generation with execution", "year": "2023" }, { "authors": "Ofir Press; Muru Zhang; Sewon Min; Ludwig Schmidt; Noah A Smith; Mike Lewis", "journal": "", "ref_id": "b26", "title": "Measuring and narrowing the compositionality gap in language models", "year": "2023" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b27", "title": "Language Models are Unsupervised Multitask Learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b28", "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "year": "2020" }, { "authors": "Nitarshan Rajkumar; Raymond Li; Dzmitry Bahdanau", "journal": "", "ref_id": "b29", "title": "Evaluating the text-to-sql capabilities of large language models", "year": "2022" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "year": "2019" }, { "authors": "Hanjun Hongyu Ren; Bo Dai; Xinyun Dai; Michihiro Chen; Haitian Yasunaga; Dale Sun; Jure Schuurmans; Denny Leskovec; Zhou", "journal": "", "ref_id": "b31", "title": "Lego: Latent execution-guided reasoning for multi-hop question answering on knowledge graphs", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b32", "title": "", "year": "" }, { "authors": "Ohad Rubin; Jonathan Herzig; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Learning to retrieve prompts for in-context learning", "year": "2022" }, { "authors": "Torsten Scholak; Nathan Schucher; Dzmitry Bahdanau", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "PICARD: Parsing incrementally for constrained auto-regressive decoding from language models", "year": "2021" }, { "authors": "Taylor Sorensen; Joshua Robinson; Christopher Rytting; Alexander Shaw; Kyle Rogers; Alexia Delorey; Mahmoud Khalil; Nancy Fulda; David Wingate", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "An information-theoretic approach to prompt engineering without ground truth labels", "year": "2022" }, { "authors": "Hongjin Su; Jungo Kasai; Chen Henry Wu; Weijia Shi; Tianlu Wang; Jiayi Xin; Rui Zhang; Mari Ostendorf; Luke Zettlemoyer; Noah A Smith; Tao Yu", "journal": "", "ref_id": "b36", "title": "Selective annotation makes language models better few-shot learners", "year": "2023" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b37", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b38", "title": "Llama: Open and 
efficient foundation language models", "year": "2023" }, { "authors": "Bailin Wang; Richard Shin; Xiaodong Liu; Oleksandr Polozov; Matthew Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers", "year": "2020" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; V Quoc; Ed H Le; Sharan Chi; Aakanksha Narang; Denny Chowdhery; Zhou", "journal": "", "ref_id": "b40", "title": "Self-consistency improves chain of thought reasoning in language models", "year": "2023" }, { "authors": "Yue Wang; Weishi Wang; Shafiq Joty; Steven C H Hoi", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "CodeT5: Identifier-aware unified pretrained encoder-decoder models for code understanding and generation", "year": "2021" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; Denny Quoc V Le; Zhou", "journal": "", "ref_id": "b42", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b43", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2023" }, { "authors": "Zhiyong Wu; Yaoxiang Wang; Jiacheng Ye; Lingpeng Kong", "journal": "", "ref_id": "b44", "title": "Self-adaptive in-context learning", "year": "2022" }, { "authors": "Tianbao Xie; Chen Henry Wu; Peng Shi; Ruiqi Zhong; Torsten Scholak; Michihiro Yasunaga; Chien-Sheng Wu; Ming Zhong; Pengcheng Yin; I Sida; Victor Wang; Bailin Zhong; Chengzu Wang; Connor Li; Ansong Boyle; Ziyu Ni; Dragomir Yao; Caiming Radev; Lingpeng Xiong; Rui Kong; Noah A Zhang; Luke Smith; Tao Zettlemoyer; Yu", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Unified-SKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models", "year": "2022" }, { "authors": "Xuchen Yao; Benjamin Van Durme", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "Information extraction over structured data: Question answering with Freebase", "year": "2014" }, { "authors": "Seonghyeon Ye; Hyeonbin Hwang; Sohee Yang; Hyeongu Yun; Yireun Kim; Minjoon Seo", "journal": "", "ref_id": "b47", "title": "In-context instruction learning", "year": "2023" }, { "authors": "Pengcheng Yin; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation", "year": "2018" }, { "authors": "Pengcheng Yin; Graham Neubig; Wen-Tau Yih; Sebastian Riedel", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "TaBERT: Pretraining for joint understanding of textual and tabular data", "year": "2020" }, { "authors": "Tao Yu; Rui Zhang; Heyang Er; Suyi Li; Eric Xue; Bo Pang; Victoria Xi; Yi Lin; Tianze Chern Tan; Zihan Shi; Youxuan Li; Michihiro Jiang; Sungrok Yasunaga; Tao Shim; Alexander Chen; Zifan Fabbri; Luyao Li; Yuwen Chen; Shreya Zhang; Vincent Dixit; Caiming Zhang; Richard Xiong; Walter Socher; Dragomir Lasecki; Radev", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases", "year": "2019" }, { "authors": "Tao Yu; Rui 
Zhang; Kai Yang; Michihiro Yasunaga; Dongxu Wang; Zifan Li; James Ma; Irene Li; Qingning Yao; Shanelle Roman; Zilin Zhang; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-SQL task", "year": "2018" }, { "authors": "Yiming Zhang; Shi Feng; Chenhao Tan", "journal": "", "ref_id": "b52", "title": "Active example selection for in-context learning", "year": "2022" }, { "authors": "Yilun Zhao; Linyong Nan; Zhenting Qi; Rui Zhang; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b53", "title": "ReasTAP: Injecting table reasoning skills during pre-training via synthetic reasoning examples", "year": "2022" }, { "authors": "Victor Zhong; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b54", "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "year": "2017" }, { "authors": "Yongchao Zhou; Andrei Ioan Muresanu; Ziwen Han; Keiran Paster; Silviu Pitis; Harris Chan; Jimmy Ba", "journal": "", "ref_id": "b55", "title": "Large language models are human-level prompt engineers", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 317.05, 265.56, 189.13, 124.2 ], "formula_id": "formula_0", "formula_text": "P n i = build_prompt(D c i [: n], T i ); P n * i = augment_schema(P n i ); SP n i = Model(P n * i ); ER n i = DBMS(SP n i ); end ER * i = Remove_Exec_Errors(ER i ); SP i = Majority_Vote(ER * i ); end return SP" } ]
10.18653/v1/2020.findings-emnlp.347
2023-05-21
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b28", "b15", "b24", "b10", "b30", "b39", "b12", "b12", "b31", "b2", "b32", "b21", "b39", "b16", "b3", "b12", "b39", "b24", "b21", "b30", "b17", "b1", "b42", "b40" ], "table_ref": [], "text": "Dialogue systems are playing an increasingly important role in our daily lives. They can serve as intelligent assistants to help users accomplish tasks and answer questions or as social companion bots to converse with users for entertainment (Ni et al., 2022;Fu et al., 2022). In recent years, the research and development of dialogue systems has made remarkable progress. However, due to the complexity of human communication, the latest dialogue systems may still fail to understand users' intents and generate inappropriate responses (Liang et al., 2021;Deng and Lin, 2022;Pan et al., 2022). These deficiencies pose huge challenges to deploying dialogue systems to real-life applications, especially high-stakes ones such as finance and health. In light of this, it is crucial to evaluate the performance of dialogue systems adequately in their development phase (Sun et al., 2021;Deriu et al., 2021). Generally speaking, there are two types of evaluation methods, human evaluation and automatic evaluation (Deriu et al., 2021). Human evaluation is fairly effective, but costly and hard to scale up. By contrast, automatic evaluation is more scalable. However, due to the ambiguity of what constitutes a high-quality dialogue, there are currently no universally accepted evaluation metrics. Existing commonly used metrics such as BLEU (Papineni et al., 2002) usually do not agree with human judgment. Nonetheless, user satisfaction estimation (USE) has been proposed as an alternative (Bodigutla et al., 2019;Park et al., 2020;Kachuee et al., 2021;Sun et al., 2021). USE assumes that the performance of a dialogue system can be approximated by the satisfaction of its users and simulates users' satisfaction with an estimator. In this regard, USE performs automatic evaluation and is thus scalable.\nAside from helping developers find the defects of a dialogue system, USE also makes it possible to carry out timely human intervention for dissatisfied users and continuously optimize the system from human feedback (Hancock et al., 2019;Bodigutla et al., 2020;Deriu et al., 2021;Deng et al., 2022). In essence, USE is a multi-class classification problem and the goal is to predict user satisfaction at each turn. Take the dialogue shown in Figure 1 as an example, where user satisfaction is measured on a three-point scale. At the first two turns, the system responds appropriately. However, at the third turn, even though the response seems to be reasonable, the system asks for information that the user has already provided at the first turn, which may lead to dissatisfaction.\nAs a model-based metric, the evaluation quality of USE relies heavily on the satisfaction estimator used. In order to train a robust estimator, different approaches have been proposed (Sun et al., 2021;Liang et al., 2021;Kachuee et al., 2021;Pan et al., 2022;Deng et al., 2022). Despite the effectiveness of these approaches, they estimate user satisfaction at each turn independently and ignore the dynamics of user satisfaction across turns within a dialogue. 
Given that a user's satisfaction is not only related to the current dialogue context, but may also be related to the satisfaction states at previous turns, we argue that modeling user satisfaction dynamics is valuable for training a more powerful estimator.\nTo achieve this, we propose ASAP (sAtisfaction eStimation via HAwkes Process), a novel approach that leverages Hawkes process (Hawkes, 2018) to capture the dynamics of user satisfaction. Hawkes process is a self-exciting point process and it has been widely adopted to model sequential data such as financial transactions (Bacry et al., 2015) and healthcare records (Wang et al., 2018). In particular, we make the following contributions:\n• We first propose a base estimator to predict user satisfaction based solely on the dialogue context. We then incorporate a Hawkes process module to model user satisfaction dynamics by treating the satisfaction scores across turns within a dialogue as an event sequence.\n• We propose a discrete version of the continuous Hawkes process to adapt it to the USE task and implement this module with a Transformer architecture (Vaswani et al., 2017).\n• We conduct extensive experiments on four dialogue datasets. The results show that ASAP substantially outperforms baseline methods." }, { "figure_ref": [], "heading": "Problem Statement", "publication_ref": [ "b39" ], "table_ref": [], "text": "Suppose that we are provided with a dialogue session X containing T interaction turns, denoted as\nX = {(R 1 , U 1 ), (R 2 , U 2 ), . . . , (R T , U T )}.\nEach interaction turn t (1 ≤ t ≤ T ) consists of a response R t by the system and an utterance U t by the user. The goal of USE is to predict the user satisfaction score s t at each turn t based on the dialogue context\nX t = {(R 1 , U 1 ), (R 2 , U 2 ), . . . , (R t , U t )}.\nHence, our task is to learn an estimator E : X t → s t that can accurately estimate the user's satisfaction throughout the entire dialogue session.\nPrevious studies have shown that adding user action recognition (UAR) as an auxiliary task can facilitate the training of a stronger satisfaction estimator (Sun et al., 2021;Deng et al., 2022). When user action labels are available, our task shifts to learning an estimator E : X t → (s t , a t ) that predicts user satisfaction and user action simultaneously. Here, a t denotes the user action at turn t." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we first describe how to build a base USE model leveraging only the dialogue context and without modeling the dynamics of user satisfaction. Then, we extend this model by integrating the Hawkes process to capture the dynamic changes of user satisfaction across dialogue turns. The overall model architecture is illustrated in Figure 2." }, { "figure_ref": [], "heading": "Base Satisfaction Estimator", "publication_ref": [], "table_ref": [], "text": "Similar to Deng et al. (2022), we utilize a hierarchical transformer architecture to encode the dialogue context X t into contextual semantic representations. A hierarchical architecture enables us to handle long dialogues. This architecture consists of a token-level encoder and a turn-level encoder." }, { "figure_ref": [], "heading": "Token-Level Encoder", "publication_ref": [ "b13" ], "table_ref": [], "text": "The token-level encoder takes as input the concatenation of the system response R t and user utterance U t at each turn t and yields a single vector h t as their semantic vector representation. 
To be specific, we adopt the pre-trained language model BERT (Devlin et al., 2019) to encode each (R t , U t ) pair:\nh t = BERT([CLS]R t [SEP ]U t [SEP ]). (1)" }, { "figure_ref": [], "heading": "Turn-Level Encoder", "publication_ref": [ "b40", "b0" ], "table_ref": [], "text": "The token-level encoder can only capture the contextual information within each turn. In order to capture the global contextual information across turns, we develop a turn-level encoder that takes the semantic representations {h 1 , h 2 , . . . , h t } of all turns in the dialogue context X t as input. We implement this encoder as a unidirectional Transformer encoder with L layers. Similar to the standard Transformer encoder layer (Vaswani et \nH (0) = [h 1 + pe(1), . . . , h t + pe(t)],\n(2)\nH * = MultiHead(H (l) , H (l) , H (l) ),(3)\nH (l+1) = FFN(H * + H (l) ) + H * + H (l) ,(4)\nwhere H (0) is the input of the first layer, in which we add positional encodings pe(•) to retain the turn order information. We calculate pe(•) in the same way as Vaswani et al. (2017).\nH (L) = [c 1 , . . . , c t ]\nis the output of the last layer with c t denoting the final contextualized representation of the t-th turn.\nNotice that layer normalization (Ba et al., 2016) is omitted in the formulae above for simplicity." }, { "figure_ref": [], "heading": "Satisfaction Estimation", "publication_ref": [ "b35" ], "table_ref": [], "text": "After acquiring the contextual representation c t , we can readily compute the probability distribution of user satisfaction at turn t by applying an MLP network (Rumelhart et al., 1986) with softmax normalization to c t , as shown below:\np U SE t = softmax(MLP(c t )),(5)\nwhere p U SE t ∈ R K , and K is the number of satisfaction classes. The class with the highest probability is selected as the prediction." }, { "figure_ref": [], "heading": "Hawkes Process Integration", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries on Hawkes Process", "publication_ref": [ "b27", "b43", "b18", "b49", "b45", "b47", "b49" ], "table_ref": [], "text": "The Hawkes process is a self-exciting point process. It models the self-excitation of events having the same type and the mutual excitation of events with different types in an additive way. A Hawkes process is characterized by its conditional intensity function, which is defined as:\nλ(t) = µ(t) + t i :t i <t ψ(t -t i ).(6)\nHere, t i denotes the occurrence time of a past event, µ(t) > 0 is the background intensity or base intensity, and ψ(•) ≥ 0 is a pre-specified triggering kernel function. Typically, ψ(•) is chosen to be a time-decaying function (e.g., the exponential function exp(-t)), indicating that the impacts of past events on the current event decrease through time.\nWhile being able to model the influence of past events, the formulation in Eq. ( 6) is too simple to capture the complicated dynamics of many real-life event sequences. For example, it assumes that each of the past events has a positive effect on the occurrence of the current event, which can be unrealistic in numerous complex scenarios. To improve its capability, neural Hawkes process models have been devised (Mei and Eisner, 2017;Xiao et al., 2017). These models generalize the standard Hawkes process by parameterizing its intensity function with recurrent neural networks (RNNs) such as LSTM (Hochreiter and Schmidhuber, 1996). 
More concretely, the new intensity function is calculated in the following way:\nλ(t) = M m=1 λ m (t) = M m=1 f m (w T m x t ),(7)\nwhere M is the total number of event types, x t is the hidden state of the event sequence, and w m is a parameter vector that converts x t to a scalar. f m (•) is the softplus function with a \"softness\" parameter β m , i.e., f m (y) = β m log(1 + exp(y/β m )). It guarantees that the intensity λ(t) is always positive. In addition to the stronger expressiveness, this formulation of the intensity function has another advantage in that the probability of each event type m can be simply calculated as λ m (t)/λ(t).\nThe RNNs-based Hawkes process models inherit the intrinsic weaknesses of RNNs. Inspired by the superiority of Transformers over RNNs in dealing with sequential data, several Transformer Hawkes process models have been proposed recently (Zuo et al., 2020;Zhang et al., 2020;Zhou et al., 2022). For these models, one representative definition of the type-specific intensity function λ m (t) takes the form (Zuo et al., 2020):\nλ m (t) = f m α m t -t i t i + w T m x t i + b m . (8)\nIn Eq. ( 8), b m represents the base intensity and α m is introduced to modulate the importance of time interpolation. This interpolation enables λ m (t) to be continuous over time. The overall intensity function λ(t) is still defined as λ(t) = M m=1 λ m (t)." }, { "figure_ref": [], "heading": "Adapting Hawkes Process for Satisfaction Estimation", "publication_ref": [], "table_ref": [], "text": "Intuitively, the user satisfaction scores across turns within a dialogue can be regarded as an event sequence and each score corresponds to one type of event. Therefore, it is a natural fit to adopt Hawkes process to model the dynamics of user satisfaction. However, it is infeasible to apply the standard Hawkes process or its neural variants mentioned above directly. This is because these Hawkes processes are continuous in time, i.e., the domain of their intensity function λ(t) is the interval (0, T ].\nA continuous Hawkes process models both what the next event type will be and when the next event will happen. By comparison, the satisfaction score sequence in our case is discrete in time. We only need to predict the next event type (i.e., the satisfaction score) and there is no need to predict when it will happen as we estimate user satisfaction at every turn. This difference inspires us to design a discrete version of the Hawkes process.\nIt is worth emphasizing that one satisfaction prediction is supposed to be made at every dialogue turn, meaning that one event regardless of its type will certainly happen at each turn. To achieve this, we constrain the intensity function λ(t) to always take the value 1. Furthermore, following Eq. ( 7), λ(t) is decomposed into:\nλ(t) = K k=1 λ k (t) = 1, t ∈ {1, 2, . . . , T }, s.t. λ k (t) > 0, ∀k = 1, 2, . . . , K. (9)\nRecall that K represents the number of satisfaction classes. Due to λ(t) = 1, λ k (t) can be regarded as the probability that event type k happens (i.e., the satisfaction score is k). In Eq. ( 9), λ(t) is defined on the discrete domain {1, 2, . . . 
, T } rather than the continuous interval (0, T ].\nWe propose to calculate each λ k (t) by the following formula:\nλ k (t) = exp f k (MLP k (c t ) + MLP k (x t )) K j=1 exp f j (MLP j (c t ) + MLP j (x t ))\n, (10) where the term associated with c t characterizes the contribution of the dialogue context X t to the intensity (i.e., base intensity) and the term corresponding to x t reveals the contribution of the satisfaction sequence. Different from Eqs. ( 7) and ( 8), we perform non-linear rather than linear transformations to convert both c t and x t into scalars using MLP networks. Note that f k (•) is the softplus function.\nNext, we describe how to compute x t , the hidden state of the satisfaction score sequence. Given the strong capability of Transformer Hawkes process models, we choose to employ a Transformer architecture (named score-level encoder) to compute x t . In particular, we adopt a unidirectional Transformer with N layers. Same as the turn-level encoder (refer to §3.1.2), each layer contains two sub-layers, the multi-head attention sub-layer and the position-wise feed-forward sub-layer.\nThe input to its first layer is the satisfaction score sequence. To convert this sequence into vector representations, we introduce an embedding matrix Z ∈ R d×K whose k-th column is a d-dimensional embedding for satisfaction class k. In principle, if we have the ground-truth score s t for turn t, we can calculate the embedding vector of this turn as Ze st , where e st is the one-hot encoding of score s t . In practice, however, we need to predict the satisfaction scores for all turns. Let ŝt be the predicted score of turn t and Ze ŝt the corresponding embedding vector. Then, we can feed [Ze ŝ1 , . . . , Ze ŝt ] to the score-level encoder to learn the dynamics of user satisfaction up to turn t and to obtain x t . This approach, albeit straightforward, has a severe limitation that there is no feedback from the score-level encoder to help train the base model because the gradients from the score-level encoder cannot be back-propagated to the base model. To overcome this limitation, we take the probability distribution of satisfaction classes p U SE t , as shown in Eq. ( 5), as the predicted \"soft\" score. Then, the embedding vector of turn t is computed by:\nv t = Zp U SE t . (11\n)\nIt can be seen that v t is a weighted sum of the em-beddings of all satisfaction scores and the weights are the predicted probability by the base model. Based on v t , the score-level encoder functions as follows to yield x t :\nV (0) = [v 1 + pe(1), . . . , v t + pe(t)],(12)\nV * = MultiHead(V (n) , V (n) , V (n) ),(13)\nV (n+1) = FFN(V * + V (n) ) + V * + V (n) . (14\n)\nSimilar to the turn-level encoder, we add positional encodings into the input of the first layer V (0) to retain the temporal information. The output of the last layer is symbolized as\nV (N ) = [x 1 , . . . , x t ]." }, { "figure_ref": [], "heading": "Training Objective", "publication_ref": [], "table_ref": [], "text": "We employ the cross-entropy loss as our training objective. Recall that λ k (t) represents the probability of the satisfaction score being k at turn t. Thus, the training objective of USE is defined as:\nL U SE = -log p(s t |X t ) = -log λ st (t),(15)\nwhere s t is the ground-truth satisfaction label.\nAs stated in §2, adding UAR as an auxiliary task has the potential to help us train a more powerful satisfaction estimator. 
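Before turning to this auxiliary task, the Hawkes process integration module of Eqs. (10)-(15) can be summarized in code. The sketch below is illustrative only: it reuses the sinusoidal positional-encoding helper from the earlier base-estimator sketch, realizes the score-level encoder with nn.TransformerEncoder under a causal mask, and implements MLP_k(c_t) and MLP_k(x_t) as per-class two-layer networks; none of these names correspond to a released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HawkesIntegration(nn.Module):
    """Discrete Hawkes process integration module, sketching Eqs. (10)-(15)."""

    def __init__(self, d_model: int = 768, n_classes: int = 3,
                 n_layers: int = 2, n_heads: int = 12, max_turns: int = 64):
        super().__init__()
        # Embedding matrix Z, stored transposed: row k embeds satisfaction class k.
        self.score_emb = nn.Parameter(torch.randn(n_classes, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=3072, batch_first=True)
        self.score_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # sinusoidal_pe as defined in the earlier sketch of the base estimator.
        self.register_buffer("pe", sinusoidal_pe(max_turns, d_model))
        def per_class_mlps():
            return nn.ModuleList([nn.Sequential(nn.Linear(d_model, 192), nn.ReLU(),
                                                nn.Linear(192, 1))
                                  for _ in range(n_classes)])
        self.ctx_mlps, self.seq_mlps = per_class_mlps(), per_class_mlps()

    def forward(self, ctx: torch.Tensor, p_use: torch.Tensor) -> torch.Tensor:
        # ctx:   (batch, t, d) contextual turn representations c_1..c_t from the base model.
        # p_use: (batch, t, K) base-model satisfaction distributions of Eq. (5).
        v = p_use @ self.score_emb                                  # Eq. (11): v_t = Z p_t
        t = v.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(v.device)
        x = self.score_encoder(v + self.pe[:t], mask=mask)          # Eqs. (12)-(14): x_1..x_t
        vals = torch.stack([F.softplus(cm(ctx).squeeze(-1) + sm(x).squeeze(-1))
                            for cm, sm in zip(self.ctx_mlps, self.seq_mlps)], dim=-1)
        return torch.softmax(vals, dim=-1)                          # Eq. (10): lambda_k(t)

# Eq. (15): the USE loss at each turn is the negative log-intensity of the gold score, e.g.
#   loss = F.nll_loss(torch.log(lambdas.flatten(0, 1) + 1e-9), gold_scores.flatten())
```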
Even though the proposed Transformer Hawkes process model is expected to improve the performance of USE significantly, it is still meaningful to study if adding this auxiliary task can further improve the performance. To this end, we leverage an MLP network with softmax normalization on top of the turn-level encoder to calculate the probability distribution of user action when the ground-truth labels are provided:\np U AR t = softmax(MLP(c t )).(16)\nLet p U AR t,at be the probability corresponding to the ground-truth action label a t at turn t. The training objective of UAR is then defined as:\nL U AR = -log p(a t |X t ) = -log p U AR t,at . (17\n)\nWe jointly optimize USE and UAR by minimizing the following loss:\nL joint = L U SE + γL U AR .(18)\nHere, γ is a hyper-parameter that controls the contribution of the UAR task." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "In what follows, we detail the experimental setup." }, { "figure_ref": [], "heading": "Datasets & Evaluation Metrics", "publication_ref": [ "b14", "b34", "b7", "b23", "b39", "b30", "b39", "b5", "b38", "b9" ], "table_ref": [], "text": "We conduct our experiments on four publicly available dialogue datasets, including MultiWOZ 2.1 (MWOZ) (Eric et al., 2020), Schema Guided Dialogue (SGD) (Rastogi et al., 2020), JDDC (Chen et al., 2020), and Recommendation Dialogues (Re-Dial) (Li et al., 2018). In particular, we perform evaluations on the subsets of these datasets with user satisfaction annotations, which are provided on a five-point scale by Sun et al. (2021). Following existing works (Deng et al., 2022;Pan et al., 2022), the satisfaction annotations are mapped into three-class labels {dissatisfied, neutral, satisfied}. MWOZ, SGD, and ReDial are in English and all contain 1000 dialogues. While JDDC is a Chinese dataset and has 3300 dialogues. Except for ReDial, all the other three datasets have user action labels.\nThe number of action types in MWOZ, SGD, and JDDC is 21, 12, and 236, respectively. For more details about these datasets, refer to Sun et al. (2021).\nFollowing previous studies (Cai and Chen, 2020;Song et al., 2019;Choi et al., 2019;Deng et al., 2022), we use Accuracy (Acc) and Macro-averaged Precision (P), Recall (R), and F1 score (F1) as the evaluation metrics in our experiments." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b20", "b8", "b44", "b13", "b6", "b33", "b41", "b18", "b22" ], "table_ref": [], "text": "We compare our proposed method ASAP with several state-of-the-art baseline methods in both singletask learning and multi-task learning settings. 1In the single-task learning setting, we only consider the USE task and the selected baselines are:\nHiGRU (Jiao et al., 2019), which utilizes a hierarchical GRU structure (Cho et al., 2014) to encode the dialogue context.\nHAN (Yang et al., 2016), which adds a two-level attention mechanism to HiGRU.\nBERT (Devlin et al., 2019), which concatenates all the utterances in the dialogue context as a flat sequence. In addition, long sequences with more than 512 tokens are truncated automatically.\nUSDA (Deng et al., 2022), which leverages a hierarchical Transformer architecture to encode the dialogue context.\nIn the multi-task learning setting, we consider both the USE task and UAR task. And we compare ASAP to the following baseline methods: JointDAS (Cerisara et al., 2018), which jointly performs UAR and sentiment classification. 
We replace sentiment classification with the USE task.\nCo-GAT (Qin et al., 2021), which leverages graph attention networks (Veličković et al., 2017) to perform UAR and sentiment classification. We also replace sentiment classification with the USE task.\nJointUSE (Bodigutla et al., 2020), which adopts LSTM (Hochreiter and Schmidhuber, 1996) for learning temporal dependencies across turns.\nUSDA (Deng et al., 2022), which uses CRF (Lafferty et al., 2001) to model the sequential dynamics of user actions to facilitate USE.\nOur method ASAP is closely related to USDA. The main difference is that USDA focuses on modeling user action dynamics while ASAP focuses on modeling user satisfaction dynamics. Given that user action labels may not be available in practice, our method is more applicable." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Baseline Comparison", "publication_ref": [], "table_ref": [ "tab_1", "tab_3", "tab_1" ], "text": "Single-Task Learning. The results of single-task learning are summarized in Table 1 andTable 3 can be observed that our proposed method ASAP consistently outperforms all baseline methods on all datasets. Notably, ASAP shows substantially higher performance than USDA over all four metrics even though USDA conducts in-domain pretraining to strengthen its capability of representation learning. For example, ASAP achieves 9.6%, 3.9%, 4.3%, and 6.9% F1 score improvements on MWOZ, SGD, JDDC, and ReDial, respectively.\nMulti-Task Learning. The results of USE in the multi-task learning setting are reported in Table 2. For Co-GAT and JointUSE, we include results when the BERT model is leveraged. It can also be observed that the performance of ASAP is consistently higher than all baseline methods over all four metrics. For example, when compared to USDA, we observe that ASAP achieves 7.7%, 3.8%, and 3.2% absolute point improvements in terms of F1 score on MWOZ, SGD, and JDDC, respectively. Single-Task Learning vs. Multi-Task Learning.\nFrom Tables 1 and2, we can find that ASAP tends to perform better in the multi-task learning setting on MWOZ and SGD. This indicates that adding UAR as an auxiliary task is beneficial for improving performance. However, it is worth noting that the performance gain is relatively low. To be specific, the improvements of F1 score on MWOZ and SGD are merely 0.6% and 0.5%, respectively. Besides, on the JDDC dataset, ASAP even performs worse in the multi-task learning setting due to the large number (i.e., 236) of action types. The strong performance of ASAP in the single-task learning setting verifies the significance of modeling user satisfaction dynamics, especially considering that it is costly to collect user action labels.\nIn summary, our proposed method ASAP is able to outperform baseline methods in both the singletask learning setting and multi-task learning setting. Most importantly, it can achieve highly competitive performance in the single-task learning setting." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Effectiveness of Hawkes Process Integration", "publication_ref": [], "table_ref": [], "text": "The above results have demonstrated the effectiveness of our method ASAP as a whole. 
However, it is unclear how much the Hawkes process integration module (i.e., the satisfaction dynamics modeling module) contributes to the overall performance.\nTo better understand the effectiveness of this module, we conduct an ablation study where we compare the performance of ASAP with that of the base satisfaction estimator (refer to §3.1). Recall that the base estimator leverages only the dialogue context for USE. The results on SGD and ReDial are shown in Figure 3. For SGD, we report the results of both single-task learning and multi-task learning. From Figure 3, it can be observed that ASAP consistently outperforms the base estimator over all four metrics on both datasets. This observation validates the effectiveness of the Hawkes process integration module." }, { "figure_ref": [], "heading": "Contribution of Satisfaction Sequence to Intensity Function", "publication_ref": [], "table_ref": [], "text": "As shown in Eq. ( 10), the dialogue context and satisfaction sequence both contribute to the intensity function of the Hawkes process. Here, we explore how much contribution should be attributed to the satisfaction sequence. This study is a supplement to the analysis in the previous section and can provide more insights into the effectiveness of satisfaction dynamics modeling. Considering that the softplus function is monotonically increasing, we can measure the importance of the satisfaction sequence by the value exp(MLP st (x t ))/(exp(MLP st (x t )) + exp(MLP st (c t ))). The larger this value is, the more the satisfaction sequence contributes. We calculate this value for all samples in the test set and employ a box plot to show the distribution of these values.\nThe detailed results are provided in Figure 4, where the triangle marker indicates the mean value. We see that the importance of the satisfaction sequence depends on the dataset. For MWOZ and SGD, the dialogue context tends to contribute more than the satisfaction sequence. In contrast, for JDDC and ReDial, the satisfaction sequence tends to be more important. Despite the variance across datasets, we can conclude that the satisfaction sequence generally plays a critical role." }, { "figure_ref": [], "heading": "Performance over Dialogue Turn", "publication_ref": [], "table_ref": [], "text": "Given that longer dialogues tend to be more challenging, we further investigate the relationship between the depth of dialogue and the performance of our method. Specifically, we study how the performance changes over dialogue turn. The results of ASAP on ReDial are illustrated in Figure 5, where we also report the results of the base estimator for comparison. We omit the results of the first three turns because of their short dialogue context. From Figure 5, it can be seen that ASAP outperforms the base estimator in most turns, which again verifies the effectiveness of the Hawkes process integration module. However, we observe that the performance of ASAP and the base estimator degrades when the dialogue is deep. Nonetheless, the performance of ASAP is more robust to the increase of dialogue depth, which should be attributed to the modeling of user satisfaction dynamics." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Effects of Parameter γ", "publication_ref": [], "table_ref": [], "text": "Figure 6 shows the impacts of the parameter γ on the performance of our method in the multi-task learning setting. Note that γ is used to adjust the weight of the UAR task. 
From Figure 6, we observe that when γ takes small values, the performance is relatively stable. However, the performance drops drastically when γ becomes large. This is because when γ takes large values, the training objective is dominated by the UAR task. As a consequence, our method fails to optimize the satisfaction estimator." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b39", "b12", "b21", "b31", "b25", "b29", "b24", "b2", "b39", "b30", "b19", "b9", "b32", "b12", "b17", "b36", "b43", "b45", "b49", "b37", "b4" ], "table_ref": [], "text": "We briefly review related work on user satisfaction estimation and Hawkes process. User Satisfaction Estimation. Evaluation is crucial for the development of dialogue systems (Sun et al., 2021). However, evaluating a dialogue system comprehensively can prove to be challenging due to the lack of a clear definition of what constitutes a high-quality dialogue (Deriu et al., 2021). Typically, a user study is carried out to collect feedback from end users. However, human evaluation is costly and time-intensive.\nAnother line of approaches is to perform evaluation from the language point of view. The main objective is to measure how natural and syntactically and semantically correct the system responses are (Kachuee et al., 2021). For example, several machine translation metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) can be used to measure if system responses are consistent with a set of provided answers. These approaches, albeit efficient, suffer from misalignment with human judgment (Novikova et al., 2017).\nMore recently, user satisfaction estimation has been proposed as an alternative (Liang et al., 2021;Bodigutla et al., 2019;Sun et al., 2021;Deng et al., 2022;Pan et al., 2022). It leverages human annotations regarding turn-level satisfaction to train an estimator. The estimator is then utilized to perform automatic evaluation by simulating users. Due to this, the evaluation quality depends heavily on the performance of the estimator. In the literature, different approaches have been proposed to train robust estimators (Jiang et al., 2015;Choi et al., 2019;Park et al., 2020;Deriu et al., 2021;Deng et al., 2022). However, none of them considered satisfaction dynamics, which we have shown is a severe deficiency in fully simulating users. Hawkes Process. Hawkes process (Hawkes, 2018) is a self-exciting process and has been widely used to model sequential data (Salehi et al., 2019). To enhance the capacity of the standard Hawkes process, several RNNs-based and Transformer-based variants have been proposed (Xiao et al., 2017;Zhang et al., 2020;Zuo et al., 2020). All these Hawkes processes are continuous over time. There are also studies on discrete Hawkes processes (Seol, 2015;Browning et al., 2021). However, these discrete versions still predict when the next event happens." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed a new estimator ASAP that adopts the Hawkes process to efficiently capture user satisfaction dynamics across turns within a dialogue. Specifically, we devised a discrete version of the continuous Hawkes process to adapt it to the USE task and implemented this discrete version with a Transformer architecture. Extensive experiments on four benchmark datasets demonstrated the superiority of ASAP over baseline USE methods and the effectiveness of the Hawkes process module in modeling user satisfaction dynamics." 
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although our proposed method ASAP is able to outperform baseline estimators, an important factor it ignores is the subjectivity of user satisfaction. In practice, different users may have different degrees of satisfaction with the same dialogue. This implies that ASAP may be effective for some users, but it may also fail to predict true satisfaction for others. In order to adequately simulate a user, it is essential to take the issue of subjectivity into account. Given this, we would like to extend ASAP for personalized satisfaction estimation by incorporating user profile information in the future." }, { "figure_ref": [], "heading": "A Implementation & Training Details", "publication_ref": [ "b26" ], "table_ref": [], "text": "In our experiments, we follow the same procedure as Deng et al. (2022) to pre-process all datasets.\nFor the token-level BERT encoder, we employ the pre-trained BERT-base-uncased model to initialize its weights for MWOZ, SGD, and ReDial. For the JDDC dataset, we use the pre-trained BERT-base-Chinese model for initialization. Both pre-trained models are available from HuggingFace2 . For the turn-level encoder, we fix the number of attention heads at 12 and set the number of layers (i.e., L) to 2. For the score-level encoder (i.e., the Transformer Hawkes process module), we also fix the number of attention heads at 12. But we treat the number of its layers (i.e., N ) as a hyper-parameter and choose the value from {2, 4, 6, 8, 10, 12}. The dimension d of the embedding of each satisfaction class is fixed at 768. For both the turn-level encoder and scorelevel encoder, the hidden size of the Transformer FFN inner representation layer is set to 3072. All the other involved MLP networks contain only one hidden layer with the hidden size set to 192. The size of their output layers is either the number of satisfaction classes or the number of action types. The \"softness\" parameter β of the softplus function is fixed at 1.\nAdamW (Loshchilov and Hutter, 2017) is exploited as the optimizer, and a linear schedule with warmup is created to adjust the learning rate dynamically. The peak learning rate is chosen from {1e-5, 2e-5}. The warmup proportion is set to 0.1. The dropout ratio is also set to 0.1. For all datasets, we train the model for up to 5 epochs. For MWOZ and SGD, we adopt a batch size of 16. While we set the batch size to 24 for ReDial and JDDC. In the multi-task learning setting, we set the parameter γ for MWOZ, SGD, and JDDC to 0.5, 1.0, and 0.1, respectively. The best model checkpoints are selected based on the F1 score on the validation set. For all experiments, we use a fixed random seed 42. And it took us around 300 GPU hours to finish the experiments.\nTo justify that the performance improvements of our proposed method are significant, we apply the SciPy package's stats.ttest_rel function3 to perform a paired t-test against the most competitive baseline USDA and calculate the p-value." }, { "figure_ref": [ "fig_4" ], "heading": "B Performance of User Action Recognition", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Recall that in the multi-task learning setting, our method ASAP is trained to predict user satisfaction and user action simultaneously. We have presented the results on USE. In this part, we further investigate the performance on the UAR task. 
The results on MWOZ, SGD, and JDDC are summarized in Table 4, from which we can see that while ASAP slightly underperforms USDA on MWOZ and SGD according to the official USDA results, its performance is on par with that of USDA based on our reproduced results. Compared to other baselines, ASAP consistently achieves better results on both MWOZ and SGD. However, on the JDDC dataset, we find that the performance of ASAP is relatively low. This is because we have used a small value of 0.1 for γ on this dataset. Because of this, during the training phase, ASAP is mainly optimized for the USE task rather than the UAR task. It is worth emphasizing that our focus is on improving the performance of USE instead of UAR in this work. Thus, the reported UAR results are based on the checkpoints which achieve the best USE performance. These checkpoints may not fully demonstrate the capabilities of ASAP on the UAR task.\nIn fact, we empirically found that by setting γ to larger values, ASAP can achieve much higher performance on action recognition. But this sacrifices the performance on satisfaction estimation.\nC Effects of Number of Layers N in the Score-Level Encoder Given that the score-level encoder (i.e., the Transformer Hawkes process module) consists of N layers, it is worth studying the impacts of N on performance by varying its value. For this purpose, we conduct another experiment on the SGD dataset and choose the value of N from {2, 4, 6, 8, 10, 12}.\nWe carry out this experiment in both the single-task learning setting and the multi-task learning setting.\nThe results are shown in Figure 7. It can be observed that although different values of N lead to different results, the performance is relatively stable. Even so, the performance tends to be higher when N takes smaller values. When N is larger, it is harder to optimize the model because there are more parameters. Additionally, the model is also more prone to overfitting the data." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was funded by the EPSRC Fellowship titled \"Task Based Information Retrieval\" (grant reference number EP/P024289/1) and the Alan Turing Institute." } ]
Dialogue systems have received increasing attention while automatically evaluating their performance remains challenging. User satisfaction estimation (USE) has been proposed as an alternative. It assumes that the performance of a dialogue system can be measured by user satisfaction and uses an estimator to simulate users. The effectiveness of USE depends heavily on the estimator. Existing estimators independently predict user satisfaction at each turn and ignore satisfaction dynamics across turns within a dialogue. In order to fully simulate users, it is crucial to take satisfaction dynamics into account. To fill this gap, we propose a new estimator ASAP (sAtisfaction eStimation via HAwkes Process) that treats user satisfaction across turns as an event sequence and employs a Hawkes process to effectively model the dynamics in this sequence. Experimental results on four benchmark dialogue datasets demonstrate that ASAP can substantially outperform state-of-the-art baseline estimators.
Modeling User Satisfaction Dynamics in Dialogue via Hawkes Process
[ { "figure_caption": "Hello, how may I help you today? I want to cancel my handbag order.It can only be canceled if the following conditions are met… Yes, I meet all the conditions. Good. You have two orders. Which one do you want to cancel? Wait, didn't I just mention that I want to cancel the handbag order?", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: An example dialogue showing the dynamics of user satisfaction across different interaction turns.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance comparison between ASAP and the proposed base satisfaction estimator on SGD and ReDial.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 4: Contribution of satisfaction sequence to intensity function.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Effects of the number of layers (i.e., N ) in the score-level encoder on SGD.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "al., ", "figure_data": "Hawkes Process IntegrationScore-Level Encoder: Unidirectional Transformer (* layers)3 (pe(1) 2MLP -# + 4 # 567pe(2)+ MLP -' 4 ' 567… …pe(t) UAR MLP+ MLP -( 4 ( 567MLPk MLPk 0 / (1) . /, #, ', (Turn-Level Encoder: Unidirectional Transformer () layers)pe(1)+ + #pe(2)++ 'pe(t)+ ( +BERTBERT…BERT(\" # , % # )(\" ' , % ' )(\" ( , % ( )Base ModelFigure 2: The architecture of our model ASAP. It con-sists of a base estimator module and a Hawkes processintegration module. Both modules leverage positionalencodings to retain temporal information. Note that asingle BERT model is shared by all turns and the (op-tional) UAR component is depicted in dashed lines.2017), each layer includes two sub-layers. The firstsub-layer is a masked multi-head attention module(MultiHead). The second sub-layer is a position-wise feed-forward network which is composed oftwo linear transformations with a ReLU activationin between (FFN).Formally, each layer of the turn-level encoderoperates as follows:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "‡ 55.1 ‡ 55.4 ‡ 55.0 ‡ 64.5 ‡ 62.4 ‡ 61.9 ‡ 62.1 ‡ 65.4 ‡ 64.2 ‡ 68.5 ‡ 65.3 ‡ Single-task performance comparison. † indicates our reproduced results. ‡ means significant performance improvements over USDA (measured by a paired t-test at p < 0.05). IDPT is short for in-domain pre-training. 
‡ 58.1 ‡ 54.7 ‡ 55.6 ‡ 64.8 ‡ 63.0 ‡ 62.3 ‡ 62.6 ‡ 64.1 ‡ 62.6 ‡ 67.3 ‡ 63.9", "figure_data": "Models IDPTMWOZSGDJDDCAccPRF1AccPRF1AccPRF1HiGRU44.6 43.7 44.3 43.7 50.0 47.3 48.4 47.5 59.7 57.3 50.4 52.0HAN39.0 37.1 37.1 36.8 47.7 47.1 44.8 44.9 58.4 54.2 50.1 51.2BERT46.1 45.5 47.4 45.9 56.2 55.0 53.7 53.7 60.4 59.8 58.8 59.5USDA49.9 49.2 49.0 48.9 61.4 60.1 55.7 57.0 61.8 62.8 63.7 61.7USDA †47.0 45.4 45.6 45.4 60.2 60.1 57.6 58.2 60.2 60.9 66.0 61.0ASAP 56.3 Models IDPTMWOZSGDJDDCAccPRF1AccPRF1AccPRF1JointDAS44.8 42.7 43.0 42.8 55.7 52.2 52.4 52.3 58.5 55.8 55.1 55.4Co-GAT46.8 44.8 44.0 44.2 56.8 55.9 55.9 55.6 60.2 59.3 62.9 60.1+BERT47.0 46.4 47.2 46.3 58.6 55.2 55.7 55.5 60.6 60.6 63.7 61.0JointUSE47.6 44.6 44.9 44.7 57.4 55.0 54.8 54.7 58.3 56.6 58.7 57.2+BERT48.9 47.2 48.0 47.3 59.0 57.4 57.1 57.3 63.8 60.8 58.6 59.2USDA52.9 51.8 50.2 50.6 62.5 60.3 59.9 60.1 63.0 61.4 65.7 62.6USDA †49.2 47.7 48.3 47.9 61.3 58.4 59.5 58.8 61.6 60.0 62.3 60.7ASAP58.1", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": ". It", "figure_data": "Models AccPRF1HiGRU 46.144.444.043.5HAN46.340.040.340.0BERT53.650.551.350.0USDA57.354.352.953.4USDA † 58.155.754.554.7ASAP66.0 ‡ 62.0 ‡ 61.3 ‡ 61.6 ‡", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Performance comparison on ReDial.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of performance on user action recognition. † indicates our reproduced results. The best results are shown in bold and the second-best results are underlined.", "figure_data": "49.2 48.7 47.3", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Fanghua Ye; Zhiyuan Hu; Emine Yilmaz
[ { "authors": "Jimmy Lei Ba; Jamie Ryan Kiros; Geoffrey E Hinton", "journal": "", "ref_id": "b0", "title": "Layer normalization", "year": "2016" }, { "authors": "Emmanuel Bacry; Iacopo Mastromatteo; Jean-François Muzy", "journal": "Market Microstructure and Liquidity", "ref_id": "b1", "title": "Hawkes processes in finance", "year": "2015" }, { "authors": "Praveen Kumar Bodigutla; Lazaros Polymenakos; Spyros Matsoukas", "journal": "", "ref_id": "b2", "title": "Multi-domain conversation quality evaluation via user satisfaction estimation", "year": "2019" }, { "authors": "Praveen Kumar Bodigutla; Aditya Tiwari; Spyros Matsoukas; Josep Valls-Vargas; Lazaros Polymenakos", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Joint turn and dialogue level user satisfaction estimation on multi-domain conversations", "year": "2020" }, { "authors": "Raiha Browning; Deborah Sulem; Kerrie Mengersen; Vincent Rivoirard; Judith Rousseau", "journal": "PloS one", "ref_id": "b4", "title": "Simple discrete-time self-exciting models can describe complex dynamic processes: A case study of covid-19", "year": "2021" }, { "authors": "Wanling Cai; Li Chen", "journal": "", "ref_id": "b5", "title": "Predicting user intents and satisfaction with dialogue-based conversational recommendations", "year": "2020" }, { "authors": "Christophe Cerisara; Somayeh Jafaritazehjani; Adedayo Oluokun; Hoa T Le", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Multi-task dialog act and sentiment recognition on mastodon", "year": "2018" }, { "authors": "Meng Chen; Ruixue Liu; Lei Shen; Shaozu Yuan; Jingyan Zhou; Youzheng Wu; Xiaodong He; Bowen Zhou", "journal": "European Language Resources Association", "ref_id": "b7", "title": "The JDDC corpus: A largescale multi-turn Chinese dialogue dataset for Ecommerce customer service", "year": "2020" }, { "authors": "Kyunghyun Cho; Bart Van Merriënboer; Dzmitry Bahdanau; Yoshua Bengio", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "year": "2014" }, { "authors": "Jason Ingyu Choi; Ali Ahmadvand; Eugene Agichtein", "journal": "", "ref_id": "b9", "title": "Offline and online satisfaction prediction in open-domain conversational systems", "year": "2019" }, { "authors": "Jianyang Deng; Yijia Lin", "journal": "Frontiers in Computing and Intelligent Systems", "ref_id": "b10", "title": "The benefits and challenges of chatgpt: An overview", "year": "2022" }, { "authors": "Yang Deng; Wenxuan Zhang; Wai Lam; Hong Cheng; Helen Meng", "journal": "", "ref_id": "b11", "title": "User satisfaction estimation with sequential dialogue act modeling in goaloriented conversational systems", "year": "2022" }, { "authors": "Jan Deriu; Alvaro Rodrigo; Arantxa Otegi; Guillermo Echegoyen; Sophie Rosset; Eneko Agirre; Mark Cieliebak", "journal": "Artificial Intelligence Review", "ref_id": "b12", "title": "Survey on evaluation methods for dialogue systems", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Mihail Eric; Rahul Goel; Shachi Paul; Abhishek Sethi; Sanchit Agarwal; Shuyang Gao; Adarsh Kumar; Anuj Goyal; Peter Ku; Dilek Hakkani-Tur", "journal": "European Language Resources Association", "ref_id": 
"b14", "title": "MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines", "year": "2020" }, { "authors": "Tingchen Fu; Shen Gao; Xueliang Zhao; Ji-Rong Wen; Rui Yan", "journal": "AI Open", "ref_id": "b15", "title": "Learning towards conversational ai: A survey", "year": "2022" }, { "authors": "Braden Hancock; Antoine Bordes; Pierre-Emmanuel Mazare; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Learning from dialogue after deployment: Feed yourself, chatbot!", "year": "2019" }, { "authors": "Alan G Hawkes", "journal": "Quantitative Finance", "ref_id": "b17", "title": "Hawkes processes and their applications to finance: a review", "year": "2018" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Lstm can solve hard long time lag problems", "year": "1996" }, { "authors": "Jiepu Jiang; Ahmed Hassan Awadallah; Rosie Jones; Umut Ozertem; Imed Zitouni; Ranjitha Gurunath Kulkarni; Omar Zia Khan", "journal": "", "ref_id": "b19", "title": "Automatic online evaluation of intelligent assistants", "year": "2015" }, { "authors": "Wenxiang Jiao; Haiqin Yang; Irwin King; Michael R Lyu", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "HiGRU: Hierarchical gated recurrent units for utterance-level emotion recognition", "year": "2019" }, { "authors": "Mohammad Kachuee; Hao Yuan; Young-Bum Kim; Sungjin Lee", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Self-supervised contrastive learning for efficient user satisfaction prediction in conversational agents", "year": "2021" }, { "authors": "Andrew John D Lafferty; Fernando Cn Mccallum; Pereira", "journal": "", "ref_id": "b22", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "year": "2001" }, { "authors": "Raymond Li; Samira Ebrahimi Kahou; Hannes Schulz; Vincent Michalski; Laurent Charlin; Chris Pal", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Towards deep conversational recommendations", "year": "2018" }, { "authors": "Runze Liang; Ryuichi Takanobu; Feng-Lin Li; Ji Zhang; Haiqing Chen; Minlie Huang", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Turn-level user satisfaction estimation in Ecommerce customer service", "year": "2021" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b26", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Hongyuan Mei; Jason M Eisner", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "The neural hawkes process: A neurally self-modulating multivariate point process", "year": "2017" }, { "authors": "Jinjie Ni; Tom Young; Vlad Pandelea; Fuzhao Xue; Erik Cambria", "journal": "Artificial intelligence review", "ref_id": "b28", "title": "Recent advances in deep learning based dialogue systems: A systematic survey", "year": "2022" }, { "authors": "Jekaterina Novikova; Ondřej Dušek; Amanda Cercas Curry; Verena Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Why we need new evaluation metrics for NLG", "year": 
"2017" }, { "authors": "Yan Pan; Mingyang Ma; Bernhard Pflugfelder; Georg Groh", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "User satisfaction modeling with domain adaptation in task-oriented dialogue systems", "year": "2022" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Dookun Park; Hao Yuan; Dongmin Kim; Yinglei Zhang; Matsoukas Spyros; Young-Bum Kim; Ruhi Sarikaya; Edward Guo; Yuan Ling; Kevin Quinn", "journal": "", "ref_id": "b32", "title": "Large-scale hybrid approach for predicting user satisfaction with conversational agents", "year": "2020" }, { "authors": "Libo Qin; Zhouyang Li; Wanxiang Che; Minheng Ni; Ting Liu", "journal": "", "ref_id": "b33", "title": "Co-gat: A co-interactive graph attention network for joint dialog act recognition and sentiment classification", "year": "2021" }, { "authors": "Abhinav Rastogi; Xiaoxue Zang; Srinivas Sunkara; Raghav Gupta; Pranav Khaitan", "journal": "", "ref_id": "b34", "title": "Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset", "year": "2020" }, { "authors": "Geoffrey E David E Rumelhart; Ronald J Hinton; Williams", "journal": "nature", "ref_id": "b35", "title": "Learning representations by backpropagating errors", "year": "1986" }, { "authors": "Farnood Salehi; William Trouleau; Matthias Grossglauser; Patrick Thiran", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Learning hawkes processes from a handful of events", "year": "2019" }, { "authors": "Youngsoo Seol", "journal": "Statistics & Probability Letters", "ref_id": "b37", "title": "Limit theorems for discrete hawkes processes", "year": "2015" }, { "authors": "Kaisong Song; Lidong Bing; Wei Gao; Jun Lin; Lujun Zhao; Jiancheng Wang; Changlong Sun; Xiaozhong Liu; Qiong Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Using customer service dialogues for satisfaction analysis with contextassisted multiple instance learning", "year": "2019" }, { "authors": "Weiwei Sun; Shuo Zhang; Krisztian Balog; Zhaochun Ren; Pengjie Ren; Zhumin Chen; Maarten De Rijke", "journal": "", "ref_id": "b39", "title": "Simulating user satisfaction for the evaluation of task-oriented dialogue systems", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b40", "title": "Attention is all you need", "year": "2017" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Lio; Yoshua Bengio", "journal": "", "ref_id": "b41", "title": "Graph attention networks", "year": "2017" }, { "authors": "Lu Wang; Wei Zhang; Xiaofeng He; Hongyuan Zha", "journal": "", "ref_id": "b42", "title": "Supervised reinforcement learning with recurrent neural network for dynamic treatment recommendation", "year": "2018" }, { "authors": "Shuai Xiao; Junchi Yan; Xiaokang Yang; Hongyuan Zha; Stephen Chu", "journal": "", "ref_id": "b43", "title": "Modeling the intensity function of point process via recurrent neural networks", "year": "2017" }, { "authors": "Zichao Yang; Diyi Yang; Chris Dyer; Xiaodong He; Alex Smola; Eduard Hovy", "journal": "Association for 
Computational Linguistics", "ref_id": "b44", "title": "Hierarchical attention networks for document classification", "year": "2016" }, { "authors": "Qiang Zhang; Aldo Lipani; Omer Kirnap; Emine Yilmaz", "journal": "", "ref_id": "b45", "title": "Self-attentive hawkes process", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b46", "title": "", "year": "" }, { "authors": "Zihao Zhou; Xingyi Yang; Ryan Rossi; Handong Zhao; Rose Yu", "journal": "", "ref_id": "b47", "title": "Neural point process for learning spatiotemporal event dynamics", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b48", "title": "", "year": "" }, { "authors": "Simiao Zuo; Haoming Jiang; Zichong Li; Tuo Zhao; Hongyuan Zha", "journal": "PMLR", "ref_id": "b49", "title": "Transformer hawkes process", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 70.87, 736.47, 189.69, 10.69 ], "formula_id": "formula_0", "formula_text": "X = {(R 1 , U 1 ), (R 2 , U 2 ), . . . , (R T , U T )}." }, { "formula_coordinates": [ 2, 342.09, 101.47, 184.23, 10.63 ], "formula_id": "formula_1", "formula_text": "X t = {(R 1 , U 1 ), (R 2 , U 2 ), . . . , (R t , U t )}." }, { "formula_coordinates": [ 2, 322.32, 614.32, 202.09, 10.67 ], "formula_id": "formula_2", "formula_text": "h t = BERT([CLS]R t [SEP ]U t [SEP ]). (1)" }, { "formula_coordinates": [ 3, 78.28, 447.88, 171.89, 13.13 ], "formula_id": "formula_3", "formula_text": "H (0) = [h 1 + pe(1), . . . , h t + pe(t)]," }, { "formula_coordinates": [ 3, 78.28, 466.41, 210.85, 12.37 ], "formula_id": "formula_4", "formula_text": "H * = MultiHead(H (l) , H (l) , H (l) ),(3)" }, { "formula_coordinates": [ 3, 78.28, 484.94, 210.85, 12.37 ], "formula_id": "formula_5", "formula_text": "H (l+1) = FFN(H * + H (l) ) + H * + H (l) ,(4)" }, { "formula_coordinates": [ 3, 200.32, 548.33, 88.81, 12.58 ], "formula_id": "formula_6", "formula_text": "H (L) = [c 1 , . . . , c t ]" }, { "formula_coordinates": [ 3, 116.57, 713.23, 172.56, 14.19 ], "formula_id": "formula_7", "formula_text": "p U SE t = softmax(MLP(c t )),(5)" }, { "formula_coordinates": [ 3, 347.51, 200.29, 176.9, 22.55 ], "formula_id": "formula_8", "formula_text": "λ(t) = µ(t) + t i :t i <t ψ(t -t i ).(6)" }, { "formula_coordinates": [ 3, 325.13, 542.25, 199.28, 33.58 ], "formula_id": "formula_9", "formula_text": "λ(t) = M m=1 λ m (t) = M m=1 f m (w T m x t ),(7)" }, { "formula_coordinates": [ 4, 78.82, 151.75, 210.31, 25.5 ], "formula_id": "formula_10", "formula_text": "λ m (t) = f m α m t -t i t i + w T m x t i + b m . (8)" }, { "formula_coordinates": [ 4, 82.89, 659.08, 206.24, 50.32 ], "formula_id": "formula_11", "formula_text": "λ(t) = K k=1 λ k (t) = 1, t ∈ {1, 2, . . . , T }, s.t. λ k (t) > 0, ∀k = 1, 2, . . . , K. (9)" }, { "formula_coordinates": [ 4, 309.71, 135.25, 201.91, 29.57 ], "formula_id": "formula_12", "formula_text": "λ k (t) = exp f k (MLP k (c t ) + MLP k (x t )) K j=1 exp f j (MLP j (c t ) + MLP j (x t ))" }, { "formula_coordinates": [ 4, 384.08, 739.93, 135.78, 14.19 ], "formula_id": "formula_13", "formula_text": "v t = Zp U SE t . (11" }, { "formula_coordinates": [ 4, 519.87, 742.78, 4.54, 9.46 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 5, 84.45, 135.59, 204.69, 13.13 ], "formula_id": "formula_15", "formula_text": "V (0) = [v 1 + pe(1), . . . , v t + pe(t)],(12)" }, { "formula_coordinates": [ 5, 84.45, 154.12, 204.69, 12.37 ], "formula_id": "formula_16", "formula_text": "V * = MultiHead(V (n) , V (n) , V (n) ),(13)" }, { "formula_coordinates": [ 5, 84.45, 172.66, 200.14, 25.85 ], "formula_id": "formula_17", "formula_text": "V (n+1) = FFN(V * + V (n) ) + V * + V (n) . (14" }, { "formula_coordinates": [ 5, 284.59, 189.05, 4.54, 9.46 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 5, 188.13, 249.37, 95.08, 12.58 ], "formula_id": "formula_19", "formula_text": "V (N ) = [x 1 , . . . , x t ]." 
}, { "formula_coordinates": [ 5, 81.25, 354.66, 207.88, 10.69 ], "formula_id": "formula_20", "formula_text": "L U SE = -log p(s t |X t ) = -log λ st (t),(15)" }, { "formula_coordinates": [ 5, 116.31, 547.34, 172.82, 14.19 ], "formula_id": "formula_21", "formula_text": "p U AR t = softmax(MLP(c t )).(16)" }, { "formula_coordinates": [ 5, 80.74, 620.57, 203.85, 14.19 ], "formula_id": "formula_22", "formula_text": "L U AR = -log p(a t |X t ) = -log p U AR t,at . (17" }, { "formula_coordinates": [ 5, 284.59, 623.42, 4.54, 9.46 ], "formula_id": "formula_23", "formula_text": ")" }, { "formula_coordinates": [ 5, 120.77, 682.75, 168.36, 10.68 ], "formula_id": "formula_24", "formula_text": "L joint = L U SE + γL U AR .(18)" } ]
[ { "figure_ref": [ "fig_4" ], "heading": "Introduction", "publication_ref": [ "b17", "b9", "b18", "b1", "b14", "b23", "b5", "b19", "b32", "b26", "b22", "b10", "b15", "b13", "b26", "b27", "b20", "b20" ], "table_ref": [], "text": "The iris is a thin, circular structure in the eye that controls the amount of light that enters the eye by adjusting the size of the pupil. It is located in front of the lens and behind the cornea and is composed of muscles and pigmented tissue. The distinctive nature of the iris pattern has led to its use as a reliable biometric cue in identification and authentication systems [18]. With the advent of technology, iris sensors are now available in commercial and personal devices, paving the way for secure authentication and access control [10,19]. However, the accuracy of iris recognition systems relies heavily on the quality and size of the dataset used for training. The limited availability of large-scale iris datasets due to the difficulty in collecting operational quality iris images, has become a major challenge in this field. For example, most of the iris datasets available in the literature have frontal view images [2,15], and the number of subjects and total number of samples in these datasets are limited. Further, in some instances, collecting and sharing iris datasets may be stymied due to privacy or legal concerns [24]. Therefore, researchers have been studying the texture and morphology of the iris in order to model its unique patterns and to create large-scale synthetic iris datasets. For example, Cui et al. [6] utilized principal component analysis with super-resolution to generate synthetic iris images. Shah and Ross [20] used a Markov model to capture and synthesize the iris texture followed by embedding of elements such as spots and stripes to improve visual realism. In [33], Zuo et al. analyzed various features of real iris images, such as texture, boundary regions, eyelashes, etc. and used these features to create a generative model based on the Hidden Markov Model for synthetic iris image generation. These methods while successfully generating synthetic iris images, are found lacking in terms of quality (visual realism and good-resolution) and diversity in the generated samples [27].\nOver the past few years, deep learning-based approaches have set a benchmark in various fields including synthetic image generation and attribute editing, using Convolutional Autoencoders (CAEs) [23] and Generative Adversarial Networks (GANs) [11,16]. In [14,27,28], authors proposed GAN-based synthetic image generation methods that input a random noise vector and output a synthetic iris image. While these methods address some of the concerns mentioned previously, the generated images are often similar to each other [21]. Additionally, due to insufficient number of training samples, the generator is often over-trained to synthesize images with patterns seen during training [21], which affects the uniqueness of the synthesized iris images (as shown in Figure 4).\nIn this paper, we address the following limitations of current synthetic iris generators: (1) difficulty in generating good quality synthetic iris images, (2) failure to incorporate inter and intra class variations in the generated images, (3) generating images that are similar to the training data, and (4) utilizing domain-knowledge to guide the synthetic generation process. 
We achieve this by proposing iWarpGAN that aims to disentangle identity and style using two transformation pathways: (1) Identity Transformation and (2) Style Transformation. The goal of Identity Transformation pathway is to transform the identity of the input iris image in the latent space to generate identities that are different from the training set. This is achieved by learning RBF-based warp function, f p , in the latent space of a GAN, whose gradient gives non-linear paths along the p th family of paths for each latent code z ∈ R. The Style Transformation pathway aims to generate images with different styles, which are extracted from a reference iris image, without changing the identity. Therefore, by concatenating the reference style code with the transformed identity code, iWarpGAN generates iris images with both inter and intra-class variations. Thus, the contributions of this research are as follows: (a) We propose a synthetic image generation method, iWarpGAN, which aims to disentangle identity and style in two steps: identity transformation and style transformation. (b) We evaluate the quality and realism of the generated iris images using ISO/IEC 29794-6 Standard Quality Metrics which uses a non-reference single image quality evaluation method. (c) We show the utility of the generated iris dataset in training deep-learning based iris matchers by increasing the number of identities and overall images in the dataset.\nIn the remainder of this paper, we will discuss the proposed method in more detail and demonstrate the advantages of the proposed method in generating good quality, unique iris images compared to other GAN-based methods." }, { "figure_ref": [ "fig_4" ], "heading": "Background", "publication_ref": [ "b10", "b13", "b26", "b24", "b24", "b12", "b4", "b27", "b7", "b30", "b28", "b6", "b8", "b6", "b31" ], "table_ref": [], "text": "Generative Adversarial Networks (GANs) [11] are generative models that typically take a random noise vector as input and output a visually realistic synthetic image. A GAN consists of two main components: (1) Generative Network known as Generator (G), and (2) Discriminative Network known as Discriminator (D) that are in competition with each other. The Generator aims to generate realistic looking images that can fool the discriminator, while Discriminator (D) aims to distinguish between real and synthetic images generated by G. In the literature, different methods have been proposed to generate generate good quality biometric images such as face, iris and fingerprint. Some of these methods are discussed below: Generation using Random Noise: Kohli et. al. [14] proposed a GAN-based approach to synthesize cropped iris images using iris Deep Convolution Generative Adversarial Network (iDCGAN). While this method generates good quality cropped iris images of size 64×64, unrealistic distortions and noise were observed when trained to generate high resolution images. In [27], Yadav et. al. overcame this issue by utilizing Relativistic Average Standard Generative Network (RaSGAN) that aims to generate good quality high resolution iris images. However, since RaSGAN generates synthetic images from a random noise vector, it is hard to generate irides with intra-class variations. Also, as shown in Figure 4, the uniqueness of generated images is limited and the network was often observed to repeat certain patterns, restricting the diversity in the generated dataset. Wang et. al. 
[25] proposed a method for generating iris images that exhibit a wide range of intraand inter-class variations. Their approach incorporates contrastive learning techniques to effectively disentangle identity-related features, such as iris texture and eye orientation, from condition-variant features, such as pupil size and iris exposure ratio, in the generated images. While their method seems promising but the experiments presented in their paper [25] are not sufficient to comment on quality of iris and uniqueness of the generated images.\nGeneration via Image Translation: Image translation refers to the process of translating an image from one domain to another by learning the mapping between various domains. Therefore, image translation GANs focus on translating a source image to the target domain with the purpose of either changing some style attribute in the source image or adding/mixing different styles together. For example, StyleGAN [13] learns a mapping to different styles in face images (such as hair color, gender, expression, etc.) using a non-linear mapping function that embeds the style code of the target domain into the generated image. Unlike StyleGAN, StarGAN [5] and CIT-GAN [28] require paired training data to translate a source image to an image with the attributes of the target domain using style code of a reference image. This forces the generator to learn mappings across various domains, making it scalable to multiple domains. However, when trained using real iris images, Star-GAN and CIT-GAN were seen to assume the identity of the source image (as shown in Figures 6 and7). So, both methods fail to generate irides whose identities are not present in the training dataset.\nThere are other GAN-based methods in the literature that aim to edit certain portions of the image using warp fields or color transformations. Warp fields have been widely used for editing images such as modifying eye-gaze [8], semantically adding objects to an image [31], reconstructing facial features [29], etc. Dorta et. al [7] argues that warp fields are more comprehensive than pixel differences that allow more flexibility in terms of partial edits. Geng et. al. [9] proposed WG-GAN that aims to fit a dense warp field to an input source image to translate it according to the target image. This method showed good results at low resolution, but the quality of synthetic data deteriorates at high resolution. Also, as mentioned earlier, the source-target relationship in WG-GAN can restrict the uniqueness of the output image. Dorta et. al. [7] overcame these issues by proposing Warp-GAN that allows partial edits without the dependency on the source-target image pair. The generator takes as input a source image and a target attribute vector and then learns the warp field to make the desired edits in the source image. This method has been proven to make more realistic semantic edits in the input image than StarGAN and CycleGAN [32]. Further, with the ability of controlled or partial edits, WarpGAN provides the mechanism to generate images with intra-class variations. However, using a real image as input to the generator restricts the number of unique images that can be generated from this network." 
}, { "figure_ref": [ "fig_1" ], "heading": "Proposed Method", "publication_ref": [ "b4", "b4", "b9", "b9" ], "table_ref": [], "text": "In this section, we will discuss the proposed method, iWarpGAN, that has the capability to synthesize an iris dataset in such a way that: (1) it contains iris images with unique identities that are not seen during training, (2) generates multiple samples per identity, (3) it is scalable to hundred thousand unique identities, and (4) images are generated in real-time.\nLet x d1 s1 ∈ P be an input image with identity d1 and style s1, and another input image x d2 s2 ∈ P with identity d2 and style s2. Here, s1 and s2 denote image with attribute y. The attribute vector y is a 12-bit binary vector, where the first 5 bits correspond to a one-hot encoding of angle, the next 5 bits correspond to a one-hot encoding of position shift, and the last 2 bits denote contraction and dilation, respectively. Here, angle and position define eye orientation and the shift of iris center in the given image. The possible angles are 0 o , 10 o , 12 o , 15 o , 18 o and the possible position shifts are [0,0], [5,5], [10,10], [-10,10], [-10,-10]. For example, an image with angle 10 o , position shift [0,0] and dilation, the attribute vector y will be [0,1,0,0,0,1,0,0,0,0,0,1]. The angle value defines the image orientation and position defines the offset of the iris center from the image center. Given x d1 s1 and x d2 s2 , our aim is to synthesize a new iris image x d3 s2 with identity d3 different from the training data and possessing the style attribute s2 from x d2 s2 . To achieve this, as shown in Figure 2, the framework of iWarpGAN has been divided into five parts: (1) Style Encoder, E S , that encodes style of the input image, (2) Identity Encoder, E D , that learns an encoding to generate an identity different from the input image, (3) Generative Network, G, that uses encoding from both E D and E S to generate an image with a unique identity and the given style attribute, (4) Discriminator, D, that predicts whether the image is real or synthetic and emits an attribute vector y ′ and (5) Pre-trained Classifier, C, that returns the distance score between a real input image and new the identity generated by G." }, { "figure_ref": [ "fig_1" ], "heading": "Disentangling Identity and Style to Generate New Iris Identities", "publication_ref": [ "b0", "b29", "b21", "b21" ], "table_ref": [], "text": "Generally, the number of samples available in the training dataset is limited. This restricts the latent space learned by G thereby limiting the number of unique identities generated by the trained GAN. Some GANs focus too much on editing or modifying style attributes in the images while generating previously seen identities in the training dataset. This motivated us to divide the problem into two parts: (1) Learning new identities that are different from those in the training dataset, and (2) Editing style attributes for ensuring intra-class variation. Inspired by [30], we achieve this by training the proposed GAN using two pathways -Style Transformation Pathway and Identity Transformation Pathway. Style Transformation Pathway: Similar to StyleGAN, this pathway entirely focuses on learning the transformation of the style. Therefore, this sub-path aims to train the networks E S , D and G, while keeping the networks E D and C fixed. Input to the generator G is the concatenated latent vector d and s to generate an iris image with style attribute y. 
G tries to challenge D by maximizing,\n$L_{G\text{-}Sty} = \mathbb{E}_{x^{d_i}_{s_i}, x^{d_j}_{s_j} \sim P_{real}}\big[D(G(E_D(x^{d_i}_{s_i}), E_S(x^{d_j}_{s_j}, y)))\big]$ (1)\nHere, $\bar{x} = G(E_D(x^{d_i}_{s_i}), E_S(x^{d_j}_{s_j}, y))$ is the image generated by G. At the same time, D competes with G by minimizing,\n$L_{D\text{-}Sty} = \mathbb{E}_{x^{d_i}_{s_i}, x^{d_j}_{s_j} \sim P_{real}}\big[D(G(E_D(x^{d_i}_{s_i}), E_S(x^{d_j}_{s_j}, y)))\big] - \mathbb{E}_{x}\big[D(x)\big]$ (2)\nIn order to enforce that an iris image is generated with style attributes y, the following loss function is utilized:\n$L_{Sty\text{-}Recon} = ||E_S(\bar{x}) - E_S(x^{d_j}_{s_j})||_2^2$ (3)\nIdentity Transformation Pathway: This pathway focuses on learning identities in latent space that are different from those in the training dataset. Therefore, this sub-path aims to train the networks E_D, D and G, while keeping the networks E_S and C fixed. Therefore,\n$L_{G\text{-}ID} = \mathbb{E}_{x^{d_i}_{s_i}, x^{d_j}_{s_j} \sim P_{real}}\big[D(G(E_D(x^{d_i}_{s_i}), E_S(x^{d_j}_{s_j}, y)))\big]$ (4)\n$L_{D\text{-}ID} = \mathbb{E}_{x^{d_i}_{s_i}, x^{d_j}_{s_j} \sim P_{real}}\big[D(G(E_D(x^{d_i}_{s_i}), E_S(x^{d_j}_{s_j}, y)))\big] - \mathbb{E}_{x}\big[D(x)\big]$ (5)\nHere, the goal is to learn encodings that represent identities different from those in the training dataset. For this, E_D is divided into two parts (as shown in Figure 2): an Encoder E that extracts the latent code from the given input image and passes it on to a Warping Network W, which aims to learn M warping functions (f_1, ..., f_M) to discover M non-linear paths in the latent space of G. The gradients of these functions can be utilized to define a direction at each latent code z [22] such that the shifted z represents the encoding of an identity different from that of the input image.\nFor a vector space $\mathbb{R}^d$, the warping function $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is defined as,\n$f(z) = \sum_{k=1}^{K} b_k \exp(-u_k ||z - v_k||^2)$ (6)\nHere, $v_k \in \mathbb{R}^d$ represents the center, $b_k \in \mathbb{R}$ the weight, and $u_k \in \mathbb{R}$ the scale of the k-th RBF. This warping function is differentiable, and for a specific value of z, the direction given by $\nabla f$ can be used to define a curve in $\mathbb{R}^d$ by shifting z as [22]:\n$\delta z = \epsilon \frac{\nabla f(z)}{||\nabla f(z)||}$ (7)\nHere, $\epsilon$ is the shift magnitude that determines the shift from z to the new code $z + \delta z$ via the above equation. The Warping Network, W, contains two components: a warper and a reconstructor R. The warper is parameterized by the triplets $(V_m, B_m, U_m)$ denoting the centers, weights and scales, where m = 1, 2, ..., M, and each triplet helps warp the latent space in $\mathbb{R}^d$. The reconstructor is utilized to estimate the support set and the shift magnitude that led to the transformation at hand. Therefore, the objective function for the Warping Network can be defined as,\n$\min_{V,B,U,R} \; \mathbb{E}_{z,\epsilon}\big[L_{W\text{-}Reg}(\epsilon, \hat{\epsilon})\big]$ (8)\nHere, $L_{W\text{-}Reg}$ refers to a regression loss. To further emphasize the uniqueness of the identity learned by G in latent space, we maximize,\n$L_{Ident\text{-}Recon} = ||E_D(\bar{x}) - E(x^{d_i}_{s_i})||_2^2$ (9)\nHere, Feat(x) denotes the features extracted by the trained iris classifier (i.e., matcher) C.\nBy employing distinct pathways for style and identity, the proposed method enables the manipulation of identity features to generate synthetic images with distinct identities that diverge from the training dataset. Additionally, this methodology allows for the generation of images with varied styles for each identity. This is achieved by keeping the input image to the identity pathway constant and varying the input image to the style pathway, enforcing that the generated images have the same identity d but different styles (i.e., intra-class variation) s_1, s_2, ..., s_n."
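To make the warping step of Eqs. (6)-(7) more concrete, the following PyTorch sketch implements a single RBF warper and the normalized-gradient shift of a latent code. It is a simplified stand-in written under our own assumptions (latent dimensionality, parameter initialization, and the omission of the reconstructor R and the M parallel warpers), not the authors' implementation.

```python
import torch

class RBFWarp(torch.nn.Module):
    """One warper: f(z) = sum_k b_k * exp(-u_k * ||z - v_k||^2), as in Eq. (6)."""
    def __init__(self, latent_dim, num_rbfs):
        super().__init__()
        self.v = torch.nn.Parameter(torch.randn(num_rbfs, latent_dim))  # centers v_k
        self.b = torch.nn.Parameter(torch.randn(num_rbfs))              # weights b_k
        self.u = torch.nn.Parameter(torch.ones(num_rbfs))               # scales u_k

    def forward(self, z):
        # z: (batch, latent_dim) -> f(z): (batch,)
        d2 = ((z.unsqueeze(1) - self.v) ** 2).sum(dim=-1)
        return (self.b * torch.exp(-self.u * d2)).sum(dim=-1)

def warp_latent(f, z, eps):
    """Shift z by eps along the normalized gradient of f, as in Eq. (7)."""
    z = z.detach().clone().requires_grad_(True)
    grad = torch.autograd.grad(f(z).sum(), z)[0]
    return (z + eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)).detach()

# Usage: shift a batch of identity codes produced by the encoder E.
warper = RBFWarp(latent_dim=512, num_rbfs=32)
z = torch.randn(4, 512)
z_shifted = warp_latent(warper, z, eps=0.5)
```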
}, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b1", "b25", "b2" ], "table_ref": [], "text": "In this work, we utilized three publicly available iris datasets for conducting experiments and performing our analysis: D1: CASIA-Iris-Thousand This dataset [2] released by the Chinese Academy of Sciences Institute of Automation has been widely used to study distinctiveness of iris fea-tures and to develop state-of-the-art iris recognition methods. It contains 20,000 irides from 1,000 subjects (2,000 unique identities with left and right eye) captured using an iris scanner with a resolution of 640×480. The dataset is divided into train and test sets using a 70-30 split based on unique identities, i.e., 1,400 identities in the training set and 600 in the test set. D2: CASIA Cross Sensor Iris Dataset (CSIR) For this work, we had access to only the train set of the CASIA-CSIR dataset [26] released by the Chinese Academy of Sciences Institute of Automation. This dataset consists of 7,964 iris images from 100 subjects (200 unique identities with left and right eye), which is divided into train and test sets using a 70-30 split on unique identities for training and testing deep learning based iris recognition methods, i.e., training set contains 5,411 images and test set contains 2,553 images. D3: IITD-iris This dataset [3] was released by the Indian Institute of Technology, Delhi, and was acquired in an indoor environment. It contains 1,120 iris images from 224 subjects captured using JIRIS, JPC1000 and digital CMOS cameras with a resolution of 320×240. This dataset is divided into train and test sets using 70-30 split based Training Data for Proposed Method The proposed method is trained using cropped iris images of size 256×256, where the style of each image is represented using the attribute vector y. Current datasets do not contain balanced number of iris images across these attributes. Therefore, variations such as angle and position is added via image transformations on randomly selected images from the dataset. In order to achieve this, first iris coordinates are first obtained using the VeriEye iris matcher, images are then translated to different angles and positions with respect to these centers, and cropped iris image of size 256×256 extracted. This helps create a training dataset with balanced samples across different attributes. Since the proposed method uses an image translation GAN, during image synthesis two images x d1 s1 , x d2 s2 and an attribute vector y of image x d2 s2 are used as input to synthesize a new iris image x d3 s2 with identity d3 which is different from the training data and possesses the style attribute s2 of x d2 s2 ." }, { "figure_ref": [], "heading": "Experiments & Results", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss different experiments utilized to study and analyze the performance of the proposed method. First, three sets of 20,000 number of iris images corresponding to 2,000 identities are generated. The three sets correspond to three different training datasets, D1, D2 and D3. For some of the experiments below, a subset of the Figure 6. This figure shows the uniqueness of iris images generated using iWarpGAN when the GANs are trained using CASIA-CS iris dataset. The y-axis represents the similarity scores obtained using VeriEye. Here, R=Real, S=Synthetic, Gen=Genuine and Imp=Impostor.\nFigure 7. 
This figure shows the uniqueness of iris images generated using iWarpGAN when the GANs are trained using IITD iris dataset. The y-axis represents the similarity scores obtained using VeriEye. Here, R=Real, S=Synthetic, Gen=Genuine and Imp=Impostor. generated images were used in order to be commensurate with the corresponding real dataset." }, { "figure_ref": [ "fig_4" ], "heading": "Experiment-1: Quality of Generated Images", "publication_ref": [ "b0", "b3", "b26", "b27" ], "table_ref": [], "text": "ISO/IEC 29794-6 Standard Quality Metrics The quality of generated images is compared with the real images using ISO/IEC 29794-6 Standard Quality Metrics [1]. We also evaluated the quality of images generated by other techniques, viz., WGAN [4], RaSGAN [27] and CITGAN [28] and compared them with the images generated using iWarp-GAN. The ISO metric evaluates the quality of an iris image using factors such as usable iris area, iris-sclera contrast, sharpness, iris-pupil contrast, pupil circularity, etc. to generate an overall quality score. The quality score ranges from [0-100] with 0 representing poor quality and 100 representing the highest quality. The images that cannot be processed by this method (either due to extremely poor quality or error during segmentation) are given a score of 255.\nAs shown in Figure 4, the quality scores of iris images generated by iWarpGAN and CITGAN are comparable with real irides. On the other hand, WGAN and RaSGAN have many images with a score of 255 due to the poor image quality. Also, when comparing the images in the three datasets, it can be seen that CASIA-CSIR dataset has more images with a score of 255 than IITD-iris and CASIA-Iris-Thousand dataset.\nVeriEye Rejection Rate To further emphasize the superiority of the proposed method in generating good quality iris images, we compare the rate of rejection of the generated images by a commercial iris matcher known as VeriEye. We compare the rejection rate for images generated by iWarp-GAN with the real images as well as those generated by WGAN, RaSGAN and CITGAN:\n(a) IITD-Iris-Dataset: This dataset contains a total of 1,120 iris images out of which 0.18% images are rejected by Ver-iEye. For comparison, we generated 1,120 iris images each using iWarpGAN, WGAN, RaSGAN and CITGAN. For the generated images, the rejection rate is as high as 9.73% and 4.55% for WGAN and RaSGAN, respectively. However, the rejection rate for CITGAN and iWarpGAN is 2.85% and 0.73%, respectively.\n(b) CASIA-CS Iris Dataset: This dataset contains a total of 7,964 iris images out of which 2.81% images are rejected by VeriEye. For comparison, we generated 7,964 iris images each using iWarpGAN, WGAN, RaSGAN and CITGAN. For the generated images, the rejection rate is as high as 4.17% and 2.06% for WGAN and RaSGAN, respectively. However, the rejection rate for CITGAN and iWarpGAN is 2.71% and 2.74%, respectively.\n(c) CASIA-Iris-Thousand Dataset: This dataset contains a total of 20,000 iris images out of which 0.06% images are rejected by VeriEye. For comparison, we generated 20,000 iris images each using iWarpGAN, WGAN, RaSGAN and CITGAN. For the generated images, the rejection rate is as high as 0.615% and 0.34% for WGAN and RaSGAN, respectively. However, the rejection rate for CITGAN and iWarpGAN is 0.24% and 0.18%, respectively." 
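The statistics reported above can be summarized with simple bookkeeping along the following lines. This is an illustrative sketch only: it assumes the ISO/IEC 29794-6 quality scores (0-100, with 255 marking unprocessable images) and the matcher's per-image accept/reject decisions have already been computed, and it does not call any specific quality-assessment or matcher SDK.

```python
# Summarize ISO/IEC 29794-6 quality scores (255 = processing failure) and a
# matcher's rejection decisions for one set of images; toy values shown.

def summarize_quality(scores):
    failed = [s for s in scores if s == 255]
    usable = [s for s in scores if s != 255]
    return {
        "failure_rate_pct": 100.0 * len(failed) / len(scores),
        "mean_quality": sum(usable) / len(usable) if usable else float("nan"),
    }

def rejection_rate_pct(rejected_flags):
    return 100.0 * sum(rejected_flags) / len(rejected_flags)

print(summarize_quality([72, 68, 255, 80]))
print(rejection_rate_pct([False, False, True, False]))  # 25.0
```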
}, { "figure_ref": [], "heading": "Experiment-2: Uniqueness of Generated Images", "publication_ref": [], "table_ref": [], "text": "This experiment analyzes the uniqueness of the synthetically generated images, i.e., we evaluate whether iWarp-GAN is capable of generating unique identities with intraclass variations.\nExperiment-2A: Experiment-2A focuses on studying the uniqueness in the synthetic iris dataset generated using different GAN methods with respect to training samples. For this, we studied the genuine and impostor distribution of real iris images used to train GAN methods and compared it with the distribution of synthetically generated iris images. We utilized VeriEye matcher in this experiment to evaluate the similarity score between a pair of iris image. The score ranges from [0, 1557] where a higher score denotes a better match.\nExperiment-2B: Experiment-2B focuses on studying the uniqueness and intra-class variations within the generated iris dataset. For this, we studied the genuine and impostor distributions of the generated iris images and compare it with the distribution of real iris datasets. As mentioned earlier, this study is done for various unique generated identities to study both uniqueness and scalability. We utilized VeriEye matcher in this experiment to evaluate the similarity score between a pair of iris images." }, { "figure_ref": [ "fig_5" ], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "As shown in the Figures 5, 6 and7, unlike other GAN methods, the iris images generated by iWarpGAN do not share high similarity with the real iris images used in training. This shows that iWarpGAN is capable of generating irides with identities that are different from the training dataset. Further, looking at the impostor distribution of synthetically generated images, which overlaps with the impostor distribution of real iris images, we can conclude that the generated identities are different from each other. Note that low similarity scores in WGAN for real v/s synthetic and synthetic v/s synthetic distributions are due to poor quality iris images generated by WGAN." }, { "figure_ref": [], "heading": "Experiment-3: Utility of Synthetic Images", "publication_ref": [ "b11", "b16" ], "table_ref": [], "text": "In this experiment, we analyze the performance of deep learning algorithms trained and tested for iris recognition using a triplet training method, and compare it with the performance when these algorithms are trained using real and synthetically generated iris images.\nExperiment-3A: Baseline Analysis This is a baseline experiment where EfficientNet [12] and Resnet-101 [17] are trained with the training set of CASIA-Iris-Thousand, CASIA-CSIR and IITD-iris datasets using the triplet training method. The trained networks are tested for iris recognition on the test set of the above mentioned datasets (as mentioned in Section IV)." }, { "figure_ref": [ "fig_6", "fig_7" ], "heading": "Experiment-3B: Cross-Dataset Analysis", "publication_ref": [], "table_ref": [], "text": "In this experiment, we analyze the benefits of synthetically generated iris datasets in improving the performance of deep learning based iris recognition methods. 
EfficientNet and Resnet-101 are trained using the training sets of the CASIA-Iris-Thousand, CASIA-CSIR and IITD-iris datasets, as well as the synthetically generated iris dataset from iWarpGAN.\nAnalysis: As shown in Figures 8 and 9, the performance of the deep learning based iris recognition system improves when it is trained with more data, i.e., when combining real and synthetically generated iris images from iWarpGAN. While there is a slight improvement in the performance of ResNet-101, a significant improvement in performance is seen for EfficientNet." }, { "figure_ref": [], "heading": "Summary & Future Work", "publication_ref": [], "table_ref": [], "text": "The results in Section 5 show that, unlike current GANs, the proposed method is capable of generating good quality iris images with identities that are different from those in the training dataset. Also, the generated identities are unique with respect to each other, with some variations. We also showed the usefulness of the generated dataset in improving the performance of deep learning-based iris recognition methods by providing additional synthetic training data with numerous unique identities. The proposed method is based on image transformation, i.e., the network needs an input and a reference image to transform the identity and the style and produce an output image. This can limit the feature space explored by iWarpGAN. For future work, we would like to extensively study the capacity of the proposed method in terms of the number of unique identities it can generate, and further explore how to make the proposed method more generalizable so that the new identities learnt by iWarpGAN are not limited by the training set." } ]
Generative Adversarial Networks (GANs) have shown success in approximating complex distributions for synthetic image generation. However, current GAN-based methods for generating biometric images, such as iris, have certain limitations: (a) the synthetic images often closely resemble images in the training dataset; (b) the generated images lack diversity in terms of the number of unique identities represented in them; and (c) it is difficult to generate multiple images pertaining to the same identity. To overcome these issues, we propose iWarpGAN that disentangles identity and style in the context of the iris modality by using two transformation pathways: Identity Transformation Pathway to generate unique identities from the training set, and Style Transformation Pathway to extract the style code from a reference image and output an iris image using this style. By concatenating the transformed identity code and reference style code, iWarpGAN generates iris images with both inter- and intra-class variations. The efficacy of the proposed method in generating such iris DeepFakes is evaluated both qualitatively and quantitatively using ISO/IEC 29794-6 Standard Quality Metrics and the VeriEye iris matcher. Further, the utility of the synthetically generated images is demonstrated by improving the performance of deep learning based iris matchers that augment synthetic data with real data during the training process.
iWarpGAN: Disentangling Identity and Style to Generate Synthetic Iris Images
[ { "figure_caption": "Figure 1 .1Figure 1. Examples of real cropped iris images from publicly available datasets [2][3][26].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. The proposed iWarpGAN consists of five parts: (1) Style Encoder, ES, that aims to encode the style of the input image as s, (2) Identity Encoder, ED, that aims to learn encoding d that generates an identity different from the input image, (3) Generative Network, G, that uses encoding from both ED and ES to generate an image with a unique identity and the given style attribute, (4) Discriminator, D, that inputs either a real or synthetic image and predicts whether the image is real or synthetic and also emits an attribute vector y ′ ∈ {angle, position, contraction, dilation of pupil}, and (5) Pre-trained Classifier, C, that computes the distance score between the real input image and the new identity generated by G.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Examples of images generated using iWarpGAN with unique identities and intra-class variations. A total of 20,000 irides corresponding to 2,000 identities were generated for each of the three training datasets. The figure shows the average similarity score (SScore) for both inter and intra class.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "(a) CASIA-Iris-Thousand dataset v/s synthetically generated images from different GANs (b) CASIA-CSIR dataset v/s synthetically generated images from different GANs (c) IITD-iris dataset v/s synthetically generated images from different GANs", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Histograms showing the quality scores of real iris images from three different datasets and the synthetically generated iris images. The quality scores were are generated using ISO/IEC 29794-6 Standard Quality Metrics[1] in the score range of [0-100]. Higher the score, better the quality. Iris images that failed to be processed by this method are given the score of 255.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. This figure shows the uniqueness of iris images generated using iWarpGAN when the GANs are trained using CASIA-Iris-Thousand dataset. The y-axis represents the similarity scores obtained using VeriEye. Here, R=Real, S=Synthetic, Gen=Genuine and Imp=Impostor.on unique identities, i.e., images from 314 identities in the training set and images from 134 identities in the testing set.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. This figure shows the performance of Resnet-101 in the cross-dataset evaluation scenario. (a) Trained using train set of CASIA-CSIR & IIT-Delhi datasets and tested using test set of CASIA-Iris-Thousand. (b) Trained using CASIA-Iris-Thousand & IIT-Delhi datasets and tested using test set of CASIA-CSIR dataset. (c) Trained using CASIA-Iris-Thousand & CASIA-CSIR datasets and tested using test set of IIT-Delhi iris dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. 
This figure shows the performance of EfficientNet in the cross-dataset evaluation scenario. (a) Trained using train set of CASIA-CSIR & IIT-Delhi datasets and tested using test set of CASIA-Iris-Thousand. (b) Trained using CASIA-Iris-Thousand & IIT-Delhi datasets and tested using test set of CASIA-CSIR dataset. (c) Trained using CASIA-Iris-Thousand & CASIA-CSIR datasets and tested using test set of IIT-Delhi iris dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" } ]
Shivangi Yadav; Arun Ross
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Information technology Biometric sample quality Part 6: Iris image data. Standard, International Organization for Standardization", "year": "2014" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "Casia Iris Image Database Version 4.0 (Casia Iris Thousand", "year": "2017" }, { "authors": "", "journal": "IIT Delhi Database", "ref_id": "b2", "title": "", "year": "2017" }, { "authors": "M Arjovsky; S Chintala; L Bottou", "journal": "", "ref_id": "b3", "title": "Wasserstein generative adversarial networks", "year": "2017" }, { "authors": "Y Choi; Y Uh; J Yoo; J.-W Ha", "journal": "", "ref_id": "b4", "title": "Stargan v2: Diverse image synthesis for multiple domains", "year": "2020" }, { "authors": "J Cui; Y Wang; J Huang; T Tan; Z Sun", "journal": "", "ref_id": "b5", "title": "An iris image synthesis method based on PCA and super-resolution", "year": "2004" }, { "authors": "G Dorta; S Vicente; N D Campbell; I J Simpson", "journal": "", "ref_id": "b6", "title": "The GAN that warped: Semantic attribute editing with unpaired data", "year": "2020" }, { "authors": "Y Ganin; D Kononenko; D Sungatullina; V Lempitsky", "journal": "Springer", "ref_id": "b7", "title": "Deepwarp: Photorealistic image resynthesis for gaze manipulation", "year": "2016" }, { "authors": "J Geng; T Shao; Y Zheng; Y Weng; K Zhou", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b8", "title": "Warpguided GANs for single-photo facial animation", "year": "2018" }, { "authors": "E Gent", "journal": "IEEE Spectrum", "ref_id": "b9", "title": "A cryptocurrency for the masses or a universal id?: Worldcoin aims to scan all the world's eyeballs", "year": "2023" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b10", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "C.-S Hsiao; C.-P Fan; Y.-T Hwang", "journal": "IEEE", "ref_id": "b11", "title": "Design and analysis of deep-learning based iris recognition technologies by combination of u-net and efficientnet", "year": "2021" }, { "authors": "T Karras; S Laine; T Aila", "journal": "", "ref_id": "b12", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "N Kohli; D Yadav; M Vatsa; R Singh; A Noore", "journal": "", "ref_id": "b13", "title": "Synthetic iris presentation attack using iDCGAN", "year": "2017" }, { "authors": "A Kumar; A Passi", "journal": "Pattern recognition", "ref_id": "b14", "title": "Comparison and combination of iris matchers for reliable personal authentication", "year": "2010" }, { "authors": "M B Lee; J K Kang; H S Yoon; K R Park", "journal": "IEEE Access", "ref_id": "b15", "title": "Enhanced iris recognition method by generative adversarial networkbased image reconstruction", "year": "2021" }, { "authors": "S Minaee; A Abdolrashidi", "journal": "", "ref_id": "b16", "title": "Deepiris: Iris recognition using a deep learning approach", "year": "2019" }, { "authors": "I Nigam; M Vatsa; R Singh", "journal": "Information Fusion", "ref_id": "b17", "title": "Ocular biometrics: A survey of modalities and fusion approaches", "year": "2015" }, { "authors": "A Perala", "journal": "", "ref_id": "b18", "title": "Princeton identity tech powers galaxy s8 iris scanning", "year": "2017" }, { "authors": "S Shah; A Ross", "journal": "", "ref_id": "b19", "title": "Generating synthetic irises by feature 
agglomeration", "year": "2006" }, { "authors": "P Tinsley; A Czajka; P J Flynn", "journal": "", "ref_id": "b20", "title": "Haven't I Seen You Before? Assessing Identity Leakage in Synthetic Irises", "year": "2022" }, { "authors": "C Tzelepis; G Tzimiropoulos; I Patras", "journal": "", "ref_id": "b21", "title": "WarpedGANSpace: Finding non-linear RBF paths in GAN latent space", "year": "2021" }, { "authors": "A Van Den Oord; N Kalchbrenner; L Espeholt; O Vinyals; A Graves", "journal": "", "ref_id": "b22", "title": "Conditional image generation with CNN decoders", "year": "2016" }, { "authors": "P Voigt; A Von; Bussche", "journal": "Springer International Publishing", "ref_id": "b23", "title": "The EU General Data Protection Regulation (GDPR). A Practical Guide", "year": "2017" }, { "authors": "C Wang; Z He; C Wang; Q Tian", "journal": "", "ref_id": "b24", "title": "Generating intra-and inter-class iris images by identity contrast", "year": "2022" }, { "authors": "L Xiao; Z Sun; R He; T Tan", "journal": "", "ref_id": "b25", "title": "Coupled feature selection for cross-sensor iris recognition", "year": "2013" }, { "authors": "S Yadav; C Chen; A Ross", "journal": "", "ref_id": "b26", "title": "Synthesizing iris images using rasgan with application in presentation attack detection", "year": "2019" }, { "authors": "S Yadav; A Ross", "journal": "", "ref_id": "b27", "title": "CIT-GAN: Cyclic image translation generative adversarial network with application in iris presentation attack detection", "year": "2021" }, { "authors": "R Yeh; Z Liu; D B Goldman; A Agarwala", "journal": "", "ref_id": "b28", "title": "Semantic facial expression editing using autoencoded flow", "year": "2016" }, { "authors": "B Zeno; I Kalinovskiy; Y Matveev", "journal": "Springer", "ref_id": "b29", "title": "IP-GAN: learning identity and pose disentanglement in generative adversarial networks", "year": "2019" }, { "authors": "T Zhou; S Tulsiani; W Sun; J Malik; A A Efros", "journal": "Springer", "ref_id": "b30", "title": "View synthesis by appearance flow", "year": "2016" }, { "authors": "J.-Y Zhu; T Park; P Isola; A A Efros", "journal": "", "ref_id": "b31", "title": "Unpaired imageto-image translation using cycle-consistent adversarial networks", "year": "2017" }, { "authors": "J Zuo; N A Schmid; X Chen", "journal": "IEEE Transactions on Information Forensics and Security (TIFS)", "ref_id": "b32", "title": "On generation and analysis of synthetic iris images", "year": "2007" } ]
[ { "formula_coordinates": [ 4, 58.7, 519.38, 227.67, 15.18 ], "formula_id": "formula_0", "formula_text": "LG-Sty = E x di si ,x dj sj ∼P real [D(G(ED(x di si ), ES(x dj sj , y)))](1)" }, { "formula_coordinates": [ 4, 58.46, 595.82, 227.9, 28.57 ], "formula_id": "formula_1", "formula_text": "LD-Sty = E x di si ,x dj sj ∼P real [D(G(ED(x di si ), ES(x dj sj , y)))] -Ex[D(x)](2)" }, { "formula_coordinates": [ 4, 98.58, 665.37, 187.78, 11.88 ], "formula_id": "formula_2", "formula_text": "LSty-Recon = ||ES(x) -ES(x dj sj )|| 2 2(3)" }, { "formula_coordinates": [ 4, 318.11, 115.86, 227, 64.57 ], "formula_id": "formula_3", "formula_text": "LG-ID = E x di si ,x dj sj ∼P real [D(G(ED(x di si ), ES(x dj sj , y)))] (4) LD-ID = E x di si ,x dj sj ∼P real [D(G(ED(x di si ), ES(x dj sj , y)))] -Ex[D(x)](5)" }, { "formula_coordinates": [ 4, 358.69, 366.93, 186.42, 30.55 ], "formula_id": "formula_4", "formula_text": "f (z) = K k=1 b i exp(-u i ||z -v i || 2 )(6)" }, { "formula_coordinates": [ 4, 393.12, 473.49, 151.99, 22.31 ], "formula_id": "formula_5", "formula_text": "δz = ϵ ∆f (z) ||∆f (z)||(7)" }, { "formula_coordinates": [ 4, 371.77, 634.32, 173.35, 14.58 ], "formula_id": "formula_6", "formula_text": "min V,B,U,R E z,ϵ [L W -Reg (ϵ, ε)](8)" }, { "formula_coordinates": [ 4, 350.18, 702.12, 194.94, 12.69 ], "formula_id": "formula_7", "formula_text": "L Ident-Recon = ||E D (x) -E(x di si )|| 2 2(9)" } ]
10.1162/tacl_a_00416
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b24", "b25", "b26", "b31" ], "table_ref": [], "text": "Language models such as BERT (Devlin et al., 2019) and other Transformer-based (Vaswani et al., 2017) language models (TLMs) are notoriously difficult to understand. Evaluation datasets such as SuperGLUE (Wang et al., 2019), BLiMP (Warstadt et al., 2020), and others have been essential resources for understanding and comparing different models' capabilities. By measuring two models' performance on a question-answering task, for example, we are able to make an assessment about the models' capabilities relative to each other. Unfortunately, these evaluation tasks almost always require annotated data produced by a human being, and these datasets are therefore very scarce except for the most well-resourced languages, especially English. This scarcity of evaluation datasets has been a significant hindrance for research on TLMs for low-resource languages, as it is much harder to assess the quality and properties of models without them.\nHere, we present PrOnto, a dataset consisting of projections of OntoNotes' New Testament annotations into New Testament translations in 859 different languages. OntoNotes (Weischedel et al., 2013) is a corpus with many annotation types covering a wide variety of phenomena in grammar and meaning. A subset of the English portion of OntoNotes contains the Easy-to-Read Version (ERV) translation of the New Testament, complete with a segmentation of each sentence into the book, chapter, and verse of the Bible that it appeared in. Using these verse alignments, we can create new annotations for a given target language, yielding high-quality annotated data for the target language, ready to use in an evaluation, without requiring more human annotation. We focus on annotations which do not require token alignments (e.g., number of referential noun phrases that appear in a verse), as this avoids a source of noise (poor alignments) in annotation projection.\nIn this work, we describe our methods for creating the PrOnto dataset, and also provide experimental results demonstrating its utility as an evaluation resource. We summarize our contributions as follows:\n• We publish evaluation datasets for 5 tasks across 1051 New Testament translations in 859 languages.1 \n• We publish the system we used to create this dataset, which can be used by anyone to extend this dataset to any language that has a New Testament translation or a part of one.\n• We perform experiments covering a wide range of languages with respect to typological variables and data-richness which demonstrate the utility of this dataset for assessing pretrained language model quality." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b7", "b7", "b10", "b6", "b11", "b14", "b1", "b4", "b5", "b13", "b18", "b28", "b12" ], "table_ref": [], "text": "Beginning with the publication of the first modern TLM, BERT (Devlin et al., 2019), pretrained TLMs have had their quality assessed by applying them to a wide array of downstream tasks. It is typical to apply the TLM in question to as many downstream evaluations as practically possible, since downstream tasks vary considerably in which properties of language they are sensitive to. A syntactic parsing task, for example, is presumably more discriminative of formal aspects of grammar, while a sentiment analysis task is presumably more discriminative of meaning-related aspects of grammar. 
All 11 of the tasks used to evaluate BERT in Devlin et al. (2019) are meaning-oriented tasks, with natural language understanding (NLU) and question answering (QA) being heavily represented. Most post-BERT English TLMs have followed its lead in favoring meaning-related tasks (e.g. Liu et al., 2019;Zhang, 2022, inter alia). The English TLM evaluation dataset ecosystem has continued to grow, and some evaluation dataset suites have grown to encompass over 200 tasks (BIG-bench collaboration, 2021). Among other high-resource languages, there is more variation: MacBERT (Cui et al., 2020), a Mandarin Chinese BERT, is evaluated using tasks comparable in kind and quantity to those used with BERT, while CamemBERT (Martin et al., 2020), a French BERT, is evaluated with a large proportion of Universal Dependencies (UD) (Nivre et al., 2016) tasks.\nThe situation for low-resource languages is quite different. Since annotated datasets are so rare and small for low-resource languages, most lowresource TLM evaluation has been centered on just a few datasets, all of which are fairly formoriented in terms of what they are assessing models for. Occasionally, a family of low-resource languages might have a high-quality evaluation dataset: for example, Ogueji et al. (2021a) train a low-resource TLM for 11 African languages, and evaluate on named-entity recognition (NER) using the MasakhaNER dataset (Adelani et al., 2021). However, more often, low-resource languages do not have resources like this.\nMuch recent work on low-resource TLMs (Chau et al., 2020;Chau and Smith, 2021;Muller et al., 2021;Gessler and Zeldes, 2022, inter alia) uses only two datasets. The first is UD corpora, which consist of human-annotated syntactic trees and tags which can be used for form-related tasks such as part-of-speech tagging and syntactic dependency parsing. The second is the WikiAnn (Pan et al., 2017) dataset, an NER dataset that was automatically generated for 282 languages based on the structure of Wikipedia hyperlinks. While evaluations that use both of these datasets have proven to be useful, the UD dataset and to a lesser extent the WikiAnn dataset are both more form-than meaning-based in terms of what they assess in models. This could mean that many low-resource TLM evaluations are missing important dimensions of model quality that cannot be assessed well by existing evaluation datasets.\nAnnotation projection is a technique at least as old as Yarowsky and Ngai (2001), where token alignments are used to project noun phrase boundaries and part-of-speech tags across languages. 2018) (named entity recognition). It is also worth noting that the idea of using a large collection of Bible data for NLP/CL is not a new idea (McCarthy et al., 2020)." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "OntoNotes", "publication_ref": [ "b31" ], "table_ref": [], "text": "Before we describe our work, we briefly describe some important details of OntoNotes (Weischedel et al., 2013). OntoNotes is a multilayer annotated corpus whose English portion contains the Easy-to-Read Version (ERV) translation of the New Testament of the Christian Bible. OntoNotes' major annotation types include coreference, Penn Treebankstyle constituency syntax, NER, WordNet sense annotations, and PropBank argument structure annotations. 
The ERV New Testament subcorpus of OntoNotes has all of these major annotation types with the notable exception of NER and WordNet sense annotation, which was not done for the New Testament.\nAn example annotation of John 11:35 is given in Figure 1. The \"Tree\" annotation has a Penn Treebank-style parse which includes an analysis of the sentence's syntactic structure as well as partof-speech tags. The \"Leaves\" section contains multiple annotation types which are anchored on the annotation's head token. The coref type indicates a coreference annotation, which is then followed by coreference type, coreference chain ID, and token span information. The annotation in Figure 1 tells us that: token 0, Jesus, is the beginning of a new coreference mention; the coreference type of this mention is IDENT; the mention belongs to coreference chain 16; and this mention begins at token 0 and ends at token 0.\nThe prop type indicates the a PropBank annotation headed at the exponent of a predicate, typically a verb, and gives the PropBank sense ID of the predicate as well as the arguments of the predicate. In the example in Figure 1, the annotation tells us that: cried is the head of a PropBank predicate; the sense of the predicate is cry.02; the beginning of the v argument is headed at token 1, and its corresponding constituent is 0 levels up in the parse tree; and the beginning of the ARG0 argument is headed at token 0 and its corresponding constituent is 1 level up in the parse tree.\nFor full details, we refer readers to the official documentation at https://catalog. ldc.upenn.edu/docs/LDC2013T19/ OntoNotes-Release-5.0.pdf." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "We would like to have more evaluation datasets for low-resource TLM evaluation, though constructing these for each individual language is expensive, as the creation of new datasets generally requires human annotation of some kind. However, in this work, we propose a method for creating evaluation datasets without requiring additional human annotation. New Testament translations are also highly common for low-resource languages because of missionary work, and OntoNotes' New Testament subcorpus is richly annotated. Because the New Testament is partitioned into verses that are highly consistent across translations, it is possible to view verse boundaries as sentence-like alignments across translations, which would allow the projection of sentence-level annotations from OntoNotes to another New Testament translation. This is the approach we take up: we propose five annotation projection methods, apply them to Bible translations, and perform evaluations to assess their utility. More specifically, our goal is to take a New Testament translation in a target language, align its verses with the verses present in OntoNotes, and then use OntoNotes' annotations to annotate the target language's translation, verse by verse. Here, we describe the steps we take to process the data." }, { "figure_ref": [], "heading": "Bible Translations", "publication_ref": [], "table_ref": [], "text": "We use all permissively-licensed New Testament translations available at ebible.org, a repository of Bible data, processing the proprietary XML format of these translations into our simple TSV format. Some translations are very small or do not contain any of the New Testament, and we discard any with fewer than 500 verses overlapping with OntoNotes, which we do not count in our totals. 
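As a rough illustration of this preprocessing step, the sketch below reads one target translation in a simple tab-separated form and applies the 500-verse overlap filter; the column order and function names here are our own assumptions (the Alignment section below only specifies that each row carries the verse text along with its book, chapter, and verse number).

```python
# Illustrative sketch of loading a target translation and applying the
# >= 500-verse overlap filter; the TSV column order is an assumption.
import csv

def read_translation(path):
    """Return {(book, chapter, verse): text} for one translation."""
    verses = {}
    with open(path, encoding="utf-8") as f:
        for book, chapter, verse, text in csv.reader(f, delimiter="\t"):
            verses[(book, int(chapter), verse)] = text  # verse kept as str ("16-17" possible)
    return verses

def keep_translation(target_verses, ontonotes_verse_ids, minimum=500):
    """Discard translations with fewer than `minimum` verses overlapping OntoNotes."""
    return len(set(target_verses) & set(ontonotes_verse_ids)) >= minimum
```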
The final 1051 translations cover a total of 859 languages.\nIt is important to note that there are many reasons to expect that Bible translations would be quite divergent from ideal naturalistic language data, such as transcriptions of spontaneous conversation or formal oral narratives. There are many common reasons which could produce this divergence, including: a translator's desire (for theological reasons or other) to use non-idiomatic expressions 2 ; a translator's imperfect grasp of a target language; and the narrow distribution of the kinds of events that make up the subject matter of the Bible, to name just a few. This is all to say that Bible data is distributionally quite different from more typical sources of language data. While we have no choice but to use it in many low-resource situations, we must remember that for a given language, results on Bible data may not fully generalize to other kinds of data.\nWe additionally note that the ERV has particular deficiencies as an individual Bible translation. The ERV's goal is to minimize the degree of reading comprehension needed in order for an individual to be able to read it, but in doing so, it sometimes engages in practices that are counter to our goals. Most notably, the ERV is much less literal than many Bible translations, with sometimes entire clauses either being added or removed relative to the source text. Additionally, the ERV sometimes combines verses (e.g. Acts 1:16 and 1:17 are combined into a single verse in the ERV, 16-17, with no further indication as to which content belonged to which original verse), hindering projection. Unfortunately, there is nothing to be done about these issues, as the ERV is what the creators of OntoNotes chose to use.\n2 An example of this in English is in Exodus 3:14, in which God's utterance is often rendered in English as "I am that I am", which is, in the author's opinion as a native English speaker, not idiomatic English due to that's inability to serve as the head of a free relative clause in this context. Presumably, the translator deliberately chose an unnatural English expression in order to attempt to preserve some grammatical properties of the original Hebrew.\nFigure 2: Matthew 9:5-6, as translated by the ERV (above) and the NRSVUE (below). In the ERV translation, verses 5 and 6 are fused, which means that no boundary between the two is indicated, and that their contents have been altered in linear ordering." }, { "figure_ref": [], "heading": "Alignment", "publication_ref": [], "table_ref": [], "text": "We parse OntoNotes' ONF files, and we assume that the target translation is given in a simple TSV format where each row contains the textual content of the verse as well as the verse's book, chapter, and verse number. In an ideal situation, an OntoNotes sentence would correspond to exactly one verse in both the ERV and the target translation, but this is not always the case. These are the possible complications:\n1. A verse contains more than one OntoNotes sentence. Some verses simply contain more than one sentence.\n2. An OntoNotes sentence spans more than one ERV verse. Verse boundaries are not guaranteed to coincide with sentence boundaries, so sometimes a sentence will begin in one verse and end in another. In OntoNotes, a sentence never spans more than two verses.\n3. The verse in either the ERV or the target translation has been combined with one or more other verses. 
Bible translators sometimes choose to combine verses and in such cases do not provide internal boundaries for the verses that have been merged.\nFor determining a mapping, (1) presents no problem-we simply associate multiple OntoNotes sentences with a single verse. For (2), we associate the sentence with both verses, retaining the information that a sentence spanned a verse boundary.\n(For all of the tasks described in this paper, we discard verses that have sentences that cross verse boundaries, but the alignments are still constructed and ready to use.) For (3), if verses have been combined in either the ERV or the target translation, we simply remove the combined verses from consideration. In the ERV, combined verses are very rare, accounting for well under 1% of all verses. In other translations, this figure is also quite small." }, { "figure_ref": [], "heading": "Tasks", "publication_ref": [ "b21" ], "table_ref": [], "text": "Once alignment is complete, we are prepared to generate task data. We propose five tasks, all of which are sequence classification tasks either on single sequences or on paired (à la BERT's next sentence prediction) sequences. While we do not pursue this in our present work, we expect that it may also be possible to produce annotations for token-level tasks using high-quality automatically generated word alignments.3 A fundamental assumption for our approach is that some linguistic properties a sentence might have ought to be similar enough in all languages to yield projected annotations which are useful for model evaluation. Of course, short of examining every last verse, we cannot know with certainty that just because, for example, an English sentence has declarative sentence mood, its Farsi translation would also have declarative sentence mood. But we do have reason to believe that sentence mood ought to be fairly well preserved across translations, given that sentence mood is so highly associated with semantic-pragmatic rather than formal aspects of language (Portner, 2018), and so we can have some justification in assuming that sentence mood ought to be the same between translation pairs. At any rate, regardless of the justifiability of this assumption, we contend that if this assumption does hold for a certain annotation type, then we should see differential performance across pretrained TLMs, which we will examine in §5.2.\nTask 1: Non-pronominal Mention Counting (NMC) Predict the number of non-pronominal mentions in a verse. The intuition for this task is that it ought to require a model to understand which spans in a sentence could co-refer, which requires knowledge of both form and meaning. A mention is a span of tokens, often but not always a noun phrase, that has been annotated for coreference, according to the OntoNotes-specific coreference annotation guidelines. 4It is important to point out that some entity must be mentioned at least twice in a document in order to be annotated: if an entity is only mentioned once, then the mention is not annotated. This makes this task somewhat pathological, because models will only be getting verse-level (not document-level) context, and this ought to make it impossible to tell in many cases whether a given markable (some tokens that could be a mention) genuinely is a mention. This is unfortunate, but this is not necessarily fatal for the utility of this task.5 \nTask 2: Proper Noun in Subject (PNS) Predict whether the subject of the first sentence in the verse contains a proper noun. 
To determine whether the subject contains a proper noun, we attempt to find a constituent labeled NP-SBJ in the main clause, and if we succeed in finding exactly one, we consider it a positive instance if any of the tokens within it are tagged with \"NNP\" or \"NNPS\". Note that this does not necessarily mean that the head of the subject is a proper noun: scholars/NNS from/IN Burundi/NNP would count as a positive instance by our criterion, despite the fact that a common noun heads it.\nTask 3: Sentence Mood (SM) Predict whether the mood of the main clause of the first sentence is declarative, interrogative, or imperative. In Penn Treebank parse trees, sentence mood is encoded in the label of the highest constituent: for example, S and S-CLF are defined as having declarative sentence mood, S-IMP is imperative, and SQ, SBARQ, and SQ-CLF are interrogative. If the top constituent does not have a label that falls into any of these categories, which likely means it is a sentence fragment or some other unusual sentence type, we discard it.\nTask 4: Same Sense (SS) Given two verses v 1 and v 2 , and given further that v 1 contains at least one usage of the predicate identified by sense label s, predict whether v 2 also has a usage of sense label s. Note that in our formulation of this task, the sense label s is explicitly given as an input rather than left unexpressed because otherwise the model would need to look for whether any senseusages overlap across the two verses, which is likely too hard. Pairs are sampled so that negative and positive instances are balanced.\nThis task is perhaps the most suspect of all of our five proposed tasks given the great diversity of distinctions that may or may not be made at the word sense level. For example, for the English word go, Bukiyip has at least three different lexical items, distinguished by vertical motion relative to the mover's position at the beginning of the going event: nato 'go up, ascend'; nab@h 'go down, descend'; and narih 'go around, go at a level grade'. As such, we should expect that performance will likely be nowhere close to 90% even on non-English highresource languages, as the English sense labels will likely often reflect distinctions which are either unexpressed or not specific enough for the target language's sense-inventory. Still, we expect that for any given language, some sense labels will still be appropriate when projected, and if this is the case, then we expect that higher-quality models will be able to perform better than lower-quality ones.\nTask 5: Same Argument Count (SAC) Given two verses v 1 and v 2 which both feature a usage of the predicate identified by sense label s, predict whether both usages of s have the same number of arguments. Pairs are sampled so that negative and positive instances are balanced. We do not require that the verses have exactly one usage of s, which we do in the interest of using as many distinct verses as possible, though this may be interesting to consider in future work." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "In order to evaluate our dataset, we implement a simple sequence classification model and apply it to our tasks using a wide range of pretrained TLMs. We evaluate a wide range of languages and models in order to get as much information as possible about the utility of our methods. These include several low-resource languages, but we also include some high-and medium-resource languages in order to get additional perspective." 
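Returning briefly to the task definitions above, the following sketch shows how the Sentence Mood and Proper Noun in Subject labels can be derived from a Penn Treebank-style parse. The label sets follow the text; the use of nltk.Tree, the whole-tree search instead of a strict main-clause search, and the example tree are our own simplifying assumptions rather than the authors' implementation.

```python
# Deriving SM and PNS labels from a Penn Treebank-style constituency parse.
from nltk import Tree

DECLARATIVE = {"S", "S-CLF"}
IMPERATIVE = {"S-IMP"}
INTERROGATIVE = {"SQ", "SBARQ", "SQ-CLF"}

def sentence_mood(tree):
    label = tree.label()
    if label in DECLARATIVE:
        return "declarative"
    if label in IMPERATIVE:
        return "imperative"
    if label in INTERROGATIVE:
        return "interrogative"
    return None  # fragment or other unusual sentence type: discard

def proper_noun_in_subject(tree):
    subjects = [t for t in tree.subtrees() if t.label() == "NP-SBJ"]
    if len(subjects) != 1:
        return None  # no unique NP-SBJ found: discard
    return any(tag in ("NNP", "NNPS") for _, tag in subjects[0].pos())

tree = Tree.fromstring("(S (NP-SBJ (NNP Jesus)) (VP (VBD cried)) (. .))")
print(sentence_mood(tree), proper_noun_in_subject(tree))  # declarative True
```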
}, { "figure_ref": [], "heading": "Languages", "publication_ref": [], "table_ref": [], "text": "The only work we were able to locate in the literature on low-resource TLMs that both worked on a wide range of languages and made all of their pretrained TLMs publicly available is Gessler and Zeldes (2022), and we therefore include all of the languages they studied in their work. These include the the low-resource languages Wolof, Sahidic Coptic, Uyghur, and Ancient Greek. (Gessler and Zeldes also published models for Maltese, but we were unable to locate a permissively-licensed Maltese Bible.) These also include Tamil and Indonesian, two medium-resource languages.\nWe additionally consider the high-resource languages French and Japanese, which may be interesting to look at given that they are both high resource and typologically similar to and divergent from English, respectively. Any differences that emerge between French and Japanese could be indicative of typological distance degrading the quality of our projected annotations. Additionally, both of these languages have high-quality monolingual TLMs, and it would be interesting to examine if different patterns emerge in high-resource settings.\nFinally, we include two different English translations. First, we include the original translation used in OntoNotes, the ERV, because it ought to give us an upper bound on projected annotation quality: ERV annotations projected to the same ERV verses ought to have the highest possible quality. Second, we include the Noah Webster's revision of the King James Version. The Webster Bible differs from the KJV only in that mechanical edits were made to replace archaic words and constructions, and we include it in order to see if relatively small differences across translations (same language, slightly different register) are enough to cause major differences in task performance, which would then indicate differences in projected annotation quality." }, { "figure_ref": [], "heading": "Model Implementation", "publication_ref": [ "b27" ], "table_ref": [], "text": "We use HuggingFace's (Wolf et al., 2020) off-theshelf AutoModelForSequenceClassification model. This model takes a pretrained TLM and adds a sequence classification head (with pretrained weights, if available). The architectural details of this head vary depending on which exact model a pretrained TLM is for (e.g. BertModel or RobertaModel), but most major models, including BERT and RoBERTa, simply use one (BERT) or two (RoBERTa) linear transformations that are applied to the [CLS] (or equivalent) token. The model is trained with a low learning rate for a small number of epochs before it is evaluated on a held-out test set for each task.\nHyperparamters Specifically, we use the default parameters for the transformers package, version 4.28.1, for the Trainer class, with the following exceptions. Learning rate is set to 2e-5, batch size is set to 16, training epochs is set to 10 except for SM in which case it is 20, and weight decay for AdamW is set to 0.01." }, { "figure_ref": [], "heading": "NMC Capping", "publication_ref": [], "table_ref": [], "text": "For NMC, while we always provide the genuine number of non-pronominal mentions in our dataset, in our experiments, we cap the maximum number of mentions at 3, labeling any sentence with more than 3 mentions as if it only had 3. 
This was done to make the task easier, as the number of sentences with more than 3 mentions is very low, and the model subsequently suffers while trying to learn how to count higher than three.\nSequence Packing for SS and SAC Recall that for the SS and SAC tasks, the inputs include not only two verses but also a sense label. First, we pack the two verses into a single input sequence, obeying any model-specific rules about where to put special tokens. In a BERT style model, for example, the sequence would look like\n[CLS] v 1 [SEP] v 2 [SEP].\nThere are many ways the sense label s could be provided as an input, but we choose to provide the label as an extra token after the final token of the base sequence. To do this, we extend the vocabulary V with |S| more entries, where S is the inventory of sense labels, so that the new vocabulary has size |V| + |S|. Senses are individually assigned to the new entries, and each sense is put after the final token, e.g.\n[CLS] v 1 [SEP] v 2 [SEP] s.\nMetrics We report accuracy on all tasks. Other more specialized metrics might be more informative for some tasks where e.g. the task is a binary classification problem or the label distribution is highly imbalanced, but we find that accuracy alone is suf-ficient to support our findings here, and choose to work with it exclusively to simplify the discussion." }, { "figure_ref": [], "heading": "List of Bibles", "publication_ref": [], "table_ref": [], "text": "Our complete list of Bibles for the evaluation is as follows. We format them so that our own abbreviation for them comes first, the full title follows, and the code for ebible.org's page follows in parentheses (append this code to ebible.org/details. php?id=). " }, { "figure_ref": [], "heading": "ERV: Easy", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "List of Pretrained Models", "publication_ref": [], "table_ref": [], "text": "Our complete list of pretrained models from Hug-gingFace Hub for the evaluation is as follows. Note that some abbreviations are repeated because language will disambiguate which one is meant. The models beginning with lgessler/microbert are taken from Gessler and Zeldes (2022), and the suffixes indicate whether pretraining took place with just MLM (-m) or the combination of MLM and part-of-speech tagging (-mx). (We refer readers to their paper for further details.)\n1. " }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b23" ], "table_ref": [ "tab_1" ], "text": "English Results for our two English datasets are given in Table 1. A majority-label baseline is given in the row labeled with the translation (ERV or WBT), and results with several common pretrained English models as well as two multilingual models are given. Looking first at our \"control\" dataset, the projection from the ERV translation onto itself, we can see that overall our models perform well above the majority class baseline, indicating that all of our tasks are not intractable, at least in the most easy setting. It's worth noting that the Sentence Mood task is very easy in this condition, with two models getting a perfect score. The hardest task is Same Argument Count, with the best model performing only 13% higher than the baseline. A striking pattern with the sequence-pair tasks is that the RoBERTafamily models perform at chance in three out of four cases. 
The only obvious reason why this might be is that the other, BERT-family models are pretrained with a sequence-pair task (next sentence prediction), while RoBERTa is not. We set this matter aside for now and note that even very popular and generally high-quality models can have anomalous performance on some tasks.\nTurning now to the other English translation, WBT, we see that performance is lower on the whole but remains discernibly higher than the baseline in all cases. It is worth noting that the variety of English used in WBT, a slightly modernized form of Early Modern English, is likely quite out of domain for all of our models, and in this sense, the WBT could be thought of as a few-shot setting. A pattern similar to the one for the ERV emerges where the RoBERTa-family models fail to do anything meaningful for the Same Argument Count task.\nOverall, the results are in line with what we would expect given other published results which have evaluated the quality of these five pretrained models. The monolingual models almost always do best for ERV and in three out of five tasks for WBT (SS and PNS, where mBERT does best). Among the monolingual models, excepting the anomalous RoBERTa cases described above, BERT most often performs best, with DistilBERT doing best in only two cases, which accords with findings that DistilBERT's quality is usually slightly lower than BERT's (Sanh et al., 2020). In sum, these results on English corroborate our claim that our five tasks are well-posed, not pathologically difficult, and indicative of model quality, at least in English settings.\nTable 2: Task accuracy for "medium-resource" languages by language and translation." }, { "figure_ref": [], "heading": "Medium-resource Languages", "publication_ref": [ "b9" ], "table_ref": [], "text": "We turn now to our "medium-resource" languages in Table 2: French and Japanese at the higher end, and Indonesian and Tamil at the lower end. For all four languages, XLM-RoBERTa continues to struggle with sequence-pair classification tasks, performing essentially at chance for all languages.\nFor French and Japanese, the monolingual BERT model's performance is typically a bit better than either of the multilingual models' performance, with one exception: for the same-sense (SS) task, mBERT performs significantly better than the monolingual model. Thus the broad picture of performance is what we'd expect, though this one surprising result shows that our tasks are broad in what they assess models for.\nFor Indonesian and Tamil, the µBERT models perform slightly worse on average than mBERT, in line with the results reported by Gessler and Zeldes (2022). Compared to the full-size monolingual models, the µBERT models are also slightly worse on average, save for SS and SAC for Tamil, where performance is at-chance for the monolingual BERT.\nTable 3: Task accuracy for low-resource languages by language and translation." }, { "figure_ref": [], "heading": "Low-resource Languages", "publication_ref": [], "table_ref": [], "text": "Results for low-resource languages are given in Table 3. Something that distinguishes the low-resource languages from the medium-resource languages and English is that many models now perform no better than the majority baseline. Many of the Wolof and Coptic models perform no better than the baseline, and fewer but still some of the Uyghur and Ancient Greek models do not outperform the baseline. 
For the µBERT models, we note that the frequency with which this happens seems connected to dataset size: the tokens used by the µBERT developers for each language were approximately 500K for Wolof, 1M for Coptic, 2M for Uyghur, and 9M for Ancient Greek. This demonstrates that some of our tasks are too hard to be solved at all by a model if it falls below a quality threshold, which can be seen as a desirable trait. Differences between the best-performing model and the baseline can be very small in some cases, such as for Sentence Mood in most languages. This may indicate that sentence mood annotation projection is inappropriate for some target languages, though the fact that models still do differentiate themselves in how able they are to do it demonstrates that some properties of the target language can at least be correlated with the sentence mood of a translation-equivalent English sentence. The performance gain relative to the baseline remains quite high for the two sense-related tasks.\nQuality Assessment In addition to our main experimental findings, we find in supplementary experiments that our projected annotations for tasks 1-3 have quality that exceeds what would be expected from a random baseline by a sizeable margin. We refer interested readers to Appendix A for details." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have presented PrOnto, a publicly available dataset of evaluation tasks for pretrained language models for 1051 New Testament translations in 859 languages. Overall, our results show that our tasks remain meaningful even when projected to languages which are typologically very different from English, and also even when they are performed by models that were trained on very little data. The fact that pretrained models distribute relative to each other in our tasks mostly in the same way that they do for established evaluation tasks constitutes evidence that these tasks are indeed indicative of model quality. Moreover, while our intent was primarily to develop this resource for low-resource languages, we have shown that it is able to serve medium- and high-resource languages as well.\nIn future work, we intend to continue developing additional tasks. There is still much data that has not been fully used in the OntoNotes annotations, and some tasks (such as SAC) would likely benefit from refinement or reformulation. We further invite interested readers to consider contributing a task, as our annotation projection pipeline has been structured to make tasks very easy to author.\nBeyond language model evaluations, one reviewer of this work has also suggested that scores on PrOnto could be interpreted as a kind of typological distance metric. Moving from the observation described above that the quality of the projected annotations will correlate with a language's typological distance from English, the reviewer further observed that each target language ought to have an upper bound on system performance due to the annotation projection errors. This means that, if we supposed we had a perfect system, its performance would reveal the projection error rate in its task performance metrics, and in doing so, reveal something about a language's typological proximity to English. Of course, systems are not always perfect: for any given language, each one may do much better or worse than another. In order to realize this vision, then, one would need to devise a way of accounting for confounds such as systems' individual strengths.
Still, we join our reviewer in thinking this could be a promising thread to pursue, as it would provide a means for computing a quantitative heuristic measure of a language's typological similarity to English using only a Bible translation." }, { "figure_ref": [], "heading": "A. Additional Evaluation", "publication_ref": [ "b22" ], "table_ref": [], "text": "We complement our findings above with some additional evaluations in order to gain more perspective on the quality of the projected annotations. We look in detail at a particular target language, Hindi-specifically, we use the Hindi Contemporary Version Bible 6 . For tasks 1 and 2, we parse the Hindi using a pretrained Stanza (Qi et al., 2020) UD parser, and use the UD parses to construct annotations for tasks 1 and 2. For tasks 3, 4, and 5, we manually inspect 50 annotations per task in order to assess whether annotation projection was successful, and if not, why it failed." }, { "figure_ref": [], "heading": "A.1. Tasks 1 and 2: UD Parser Comparison", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Annotations for task 1 (Non-pronominal Mention Counting) and task 2 (Proper Noun in Subject) are both reconstructable from a UD parse. We use Stanza's pretrained Hindi model and parse all verses in the Hindi Bible. For NMC, we look at each token in a verse, and increment the mention count iff the token is tagged as either PROPN or NOUN; and it is not the case that the token is tagged as NOUN and its dependency relation is compound 7 .\nFor PNS, we perform a breadth-first search from the root to find the first token labeled as nsubj, and label the instance as positive if either the root of nsubj or any of its descendants are tagged as PROPN. If there is no nsubj, we treat the instance as negative. 8 As in the prior evaluation, we cap the maximum mention count at 3 and treat any larger values as 3.\nOnce we have constructed the second set of annotations using the UD parses, we need some way to compare them to each other. For both tasks, we use accuracy and another metric to compare the annotations. For NMC, we use mean squared error as a measure of how different the mention counts are on average. For PNS, we use Jaccard similarity, since it is a binary task. Since both the UDand OntoNotes-based annotations are automatically constructed, we can treat neither as ground 6 https://ebible.org/details.php?id= hincv 7 If a token is tagged as NOUN and its dependency relation is compound, this means that it is a noun modifying a noun, as in the first word of the phrase noun compound. These cases are not counted in order to maintain consistency with OntoNotes, which does not treat the modifier of a noun compound pair as a separate markable. 8 In our previous implementation, if we could not locate a subject, we discarded the verse. However, for our analysis here, we must have exactly the same set of verses that were used for PrOnto, which is why we instead label an instance as negative if we cannot find a subject.\ntruth, so we also compare the PrOnto annotations to a random baseline for both tasks. For all metrics, we expect that the PrOnto-and UD-based annotations ought to have the highest similarities, beating both of the baselines.\nResults are given in Table 4. Looking first at NMC, we see that as predicted, the PrOnto and UD annotations have the greatest similarities. The random baseline has a low similarity, as could be expected given that this task has 4 possible labels. It is worth considering whether an MSE of 1.199 might be high. 
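To make the comparison above concrete, the following is a minimal sketch of how the NMC and PNS labels can be reconstructed from a Stanza dependency parse. It is an illustration of the procedure rather than the exact code behind Table 4: the verse string is a placeholder, and it assumes the pretrained Hindi pipeline has already been downloaded.

```python
# Minimal sketch of the UD-based reconstruction described above. Assumes the
# Stanza Hindi model is available locally (stanza.download("hi")); the verse
# below is a placeholder, while the real comparison runs over all verses of
# the Hindi Bible and then scores the two annotation sets against each other.
from collections import deque
import stanza

nlp = stanza.Pipeline("hi", processors="tokenize,pos,lemma,depparse")

def nmc_from_parse(sentence, cap=3):
    """Non-pronominal Mention Counting: count NOUN/PROPN tokens, skipping
    NOUNs that act as compound modifiers, and cap the count at 3."""
    count = 0
    for word in sentence.words:
        if word.upos == "PROPN" or (word.upos == "NOUN" and word.deprel != "compound"):
            count += 1
    return min(count, cap)

def pns_from_parse(sentence):
    """Proper Noun in Subject: breadth-first search from the root for the
    first nsubj; positive iff that token or any descendant is a PROPN."""
    children = {}
    for word in sentence.words:
        children.setdefault(word.head, []).append(word)
    queue = deque(children.get(0, []))  # start from the root's dependents
    subj = None
    while queue:
        word = queue.popleft()
        if word.deprel == "nsubj":
            subj = word
            break
        queue.extend(children.get(word.id, []))
    if subj is None:
        return False  # no subject found: treat the instance as negative
    stack = [subj]
    while stack:  # scan the subject subtree for a proper noun
        word = stack.pop()
        if word.upos == "PROPN":
            return True
        stack.extend(children.get(word.id, []))
    return False

doc = nlp("यीशु रोया।")  # placeholder verse (John 11:35)
for sent in doc.sentences:
    print(nmc_from_parse(sent), pns_from_parse(sent))
```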
To some degree this divergence between the UD-based annotations and PrOnto is expected given that, as discussed above, an important limitation of the PrOnto mention count is that referential phrases that do not participate in coreference (i.e. are only mentioned once in a document) are not annotated in OntoNotes, and this presumably accounts for at least some of the divergence between these two annotation-sets. Still, we see that our two annotation methods outperform the baselines, yielding evidence of their quality despite the fact that they are automatically constructed. Turning now to PNS, we see the same pattern as before but more strongly. The similarity between PrOnto and the UD-based annotations is much stronger than between PrOnto and the majority or random annotations, as measured by both metrics.\nIn sum, while it is not possible from this analysis to determine the true annotation quality of either set of annotations (or indeed, even which one might be better), the fact that they both outperform a random baseline by a large margin shows that they at least agree on many cases. While of course there is no guarantee that if an annotation is agreed upon by two different sources it is more likely to be true, it would be surprising if that were not more true than not in this situation. We turn now to describe the remaining tasks (Sentence Mood, Same Sense, Same Argument Count), which cannot easily have their annotations constructed from UD parses." }, { "figure_ref": [], "heading": "A.2. Task 3: Qualitative Evaluation", "publication_ref": [], "table_ref": [], "text": "First, for Sentence Mood (SM), we straightforwardly inspect the Hindi translation of the verse alongside the PrOnto annotation, and judge whether the existing label is correct. (In order to make label judgments, we rely on the same criteria that were used in order to arrive at the labels in the PTB trees from which our SM labels were derived, as described above. These criteria, described in the PTB guidelines, are unproblematically applicable to Hindi.) We compare our results from this procedure to a majority class baseline; recall that declarative is by far the most common sentence mood, accounting for ≈90% of all verses in the ERV.\nWe annotate a hundred Hindi instances using this procedure, and find that the PrOnto projected label is correct in 95 of 100 cases, while the majority class baseline is correct in 86 of 100 cases. This constitutes evidence that while the projection process is not perfect, the annotation projection for this task is substantially better than guessing." }, { "figure_ref": [], "heading": "A.3. Tasks 4 and 5: Qualitative Evaluation", "publication_ref": [], "table_ref": [], "text": "Tasks 4 and 5 (Same Sense, Same Argument Count) both have to do with meaning and argument structure. In order to assess the PrOnto annotations for these tasks, we would like to know the following information:\n1. Do the two verses actually contain usages of the senses in question?\n2. Do the findings in (1) violate foundational assumptions about either task? (For SAC, we always assume that both verses do contain a usage of the sense. For SS, we always assume that the first verse does contain a usage of the sense.) We consider an instance \"well-formed\" iff no foundational assumptions are violated.\n3. If we have found positively in (2), is the label for the task correct?
(For SS, this means asking whether the label correctly identifies whether verse 2 has a genuine usage of the sense; for SAC, this means asking whether the label correctly identifies whether the two usages have the same argument count.)\nWe note that making consistent decisions about (1) is very difficult: how can we precisely say whether one word in the English translation is \"the same\" as the word in the Hindi translation of a verse? We propose the following procedure for determining this:\n• Given the English word that originally bore the PropBank annotation, attempt to identify a Hindi token that best captures its lexical meaning.\n• Look up the candidate Hindi token in the dictionary of Platts (1884) 9 , and accept it as \"the We answer these questions for both tasks on a random sample of 50 instances for each task, each of which consists of two verses, a sense label, and the task annotation. A baseline comparison is not possible for this task because a crucial part of the input for the task, the sense label, is obtained via projection and is not guessable by trivial means.\nOur results are given in Table 5. The picture that emerges for both tasks is similar: around 70% of instances are well-formed, and around 60% are correctly annotated. (Remember that wellformedness is a precondition for correctness, so the correct instances form a subset of the well-formed instances.) Higher would be better, of course, but given that these are numbers for annotation projection, we take these numbers to be indicative of quite high quality for these two tasks, at least for the SS and SAC tasks.\nOf the instances that were not well-formed, a couple of problems came up repeatedly. First, English lemmas for highly frequent words (such as do or be) often participated in light verb constructions or other constructions with highly marginal verbal lexical content. These were often realized in the Hindi translation as highly lexically contentful verbs, which led to ill-formedness. Second, the English translation used in OntoNotes, the Easy-to-Read version, often uses expressions that diverge in content and literalness quite a bit compared to other translations. For example, compare Jude 1:23 in the ERV and NRSVUE translations:\n• NRSVUE: save others by snatching them out of the fire; and have mercy on still others with fear, hating even the tunic defiled by their bodies." }, { "figure_ref": [], "heading": "Task", "publication_ref": [], "table_ref": [], "text": "Total Well-formed Correct Same Sense (4) 50 36 30 Same Argument Count (5) 50 34 29\nTable 5: Analysis results for tasks 4 and 5. An instance is \"well-formed\" if the assumptions about the data hold in the target language, and an instance is \"correct\" if it is well-formed and the projected annotation is correct.\n• ERV: Rescue those who are living in danger of hell's fire. There are others you should treat with mercy, but be very careful that their filthy lives don't rub off on you.\nIn the latter half of the verse, the ERV translators decided to be explicit about a thematic matter that the NRSVUE (and presumably the original Greek) leaves metaphorical. The result is that the predicate rub is introduced, which is present nowhere in the original Greek and is likely not present in other languages' translations given that rub off on is an English idiom. 
Compare this with a very literal English translation of the Hindi translation:\n• HINCVB: baakiyon ko aag mein se jhapatakar nikaal lo, daya karate hue saavadhaan raho, yahaan tak ki shareer ke dvaara kalankit vastron se bhee ghrna karo.\n• HINCVB, translation: Dash in and snatch the remaining out of the fire, remain cautious while extending grace, to the extent that you hate even the clothes soiled by their bodies.\nThis instance involving rub is representative of a handful of cases in which ill-formedness resulted from a creative translation. In this respect, we can see that the ERV is a Bible translation that is poorly suited to cross-lingual annotation projection.\nIt is interesting to consider the figures in Table 5 against the performance of various models on these two tasks for high- and medium-resource non-English languages (cf. Table 2). At a glance, the median performance for these two tasks is somewhere in the low 60s for all languages, though it occasionally gets quite high (mBERT on Japanese scores 79.74% for SS). Incidentally, we have also just seen here that label correctness for the Hindi-English language pair is around 60%, at least according to our analysis methodology, which, given its rather strict criteria for word-equivalence in well-formedness, may be conservative. The evidence from this analysis thus gives us reason to believe that a \"good\" performance is probably not very much more than 60-70% for most language pairs (since only around that many annotations are actually correct). If this is true, then when we also consider the distribution of scores in Table 2, we have strong reason to believe that the SS and SAC tasks are well-posed and useful for measuring model quality." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We thank Amir Zeldes for originally suggesting the core idea in this work, and we further thank Nathan Schneider and members of the NERT lab for helpful feedback on a draft of this work. We also thank the maintainers of ebible.org for hosting the open-access Bibles which were used in this work. We finally thank our reviewers for their exceptionally helpful comments." } ]
Evaluation datasets are critical resources for measuring the quality of pretrained language models. However, due to the high cost of dataset annotation, these resources are scarce for most languages other than English, making it difficult to assess the quality of language models. In this work, we present a new method for evaluation dataset construction which enables any language with a New Testament translation to receive a suite of evaluation datasets suitable for pretrained language model evaluation. The method critically involves aligning verses with those in the New Testament portion of English OntoNotes, and then projecting annotations from English to the target language, with no manual annotation required. We apply this method to 1051 New Testament translations in 859 languages and make them publicly available. Additionally, we conduct experiments which demonstrate the efficacy of our method for creating evaluation tasks which can assess language model quality.
PrOnto: Language Model Evaluations for 859 Languages
[ { "figure_caption": "Much similar work has been done for other annotation types-just a few examples of works in this literature include Padó and Lapata (2009) (semantic roles), Asgari and Schütze (2017) (tense), and", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: A sample verse, John 11:35, taken from OntoNotes. Note the annotations for tokenization, part-of-speech, constituency syntax, coreference, and argument structure. This file is in \"OntoNotes Normal Form\" (ONF), a human-readable format which OntoNotes provides its annotations in.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Task accuracy for English by model and translation. ERV is the Easy-to-Read Version, WBT is the Webster Bible.", "figure_data": "bert-base-multilingual-cased: mBERT2. xlm-roberta-base: XLM-R3. bert-base-cased: BERT4. distilbert-base-cased: DistilBERT5. roberta-base: RoBERTa6. camembert-base: BERT7. cl-tohoku/bert-base-japanese: BERT8. l3cube-pune/tamil-bert: BERT9. cahya/bert-base-indonesian-522M: BERT10. lgessler/microbert-...-m:µBERT-M(where ... is one of wolof, ancient-greek, indonesian, coptic, uyghur,tamil)11. lgessler/microbert-...-mx: µBERT-MX", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "UD analysis results for tasks 1 and 2. For task 1, we consider accuracy and mean squared error.", "figure_data": "Pair (NMC)Acc.MSEPair (PNS)Acc. JaccardPrOnto, UD0.517 1.199PrOnto, UD0.8100.680PrOnto, Random 0.252 1.652PrOnto, Random 0.4990.336UD, Random0.246 1.801UD, Random0.4990.332", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Luke Gessler
[ { "authors": "", "journal": "Bibliographical References", "ref_id": "b0", "title": "", "year": "" }, { "authors": "David Ifeoluwa Adelani; Jade Abbott; Graham Neubig; D' Daniel; Julia Souza; Constantine Kreutzer; Chester Lignos; Happy Palen-Michel; Shruti Buzaaba; Sebastian Rijhwani; Stephen Ruder; Israel Mayhew; Shamsuddeen H Abebe Azime; Chris Chinenye Muhammad; Joyce Emezue; Perez Nakatumba-Nabende; Aremu Ogayo; Catherine Anuoluwapo; Derguene Gitau; Jesujoba Mbaye; Seid Alabi; Tajuddeen Muhie Yimam; Ignatius Rabiu Gwadabe; Ezeani; Andre Rubungo; Jonathan Niyongabo; Verrah Mukiibi; Iroro Otiende; Davis Orife; Samba David; Tosin Ngom; Paul Adewumi; Mofetoluwa Rayson; Gerald Adeyemi; Emmanuel Muriuki; Chiamaka Anebi; Nkiruka Chukwuneke; Eric Odu; Samuel Peter Wairagala; Clemencia Oyerinde; Tobius Siro; Temilola Saul Bateesa; Yvonne Oloyede; Victor Wambui; Deborah Akinode; Maurice Nabagereka; Ayodele Katusiime; Awokoya; Mboup Mouhamadane; Dibora Gebreyohannes; Henok Tilaye; Kelechi Nwaike; Degaga Wolde; Abdoulaye Faye; Blessing Sibanda; Orevaoghene Ahia; F P Bonaventure; Kelechi Dossou; Thierno Ogueji; Diop Ibrahima; Abdoulaye Diallo; Adewale Akinfaderin; Tendai Marengereke; Salomey Osei", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b1", "title": "MasakhaNER: Named entity recognition for African languages", "year": "2021" }, { "authors": "Ehsaneddin Asgari; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Past, present, future: A computational investigation of the typology of tense in 1000 languages", "year": "2017" }, { "authors": " ", "journal": "", "ref_id": "b3", "title": "Beyond the imitation game: Measuring and extrapolating the capabilities of language models", "year": "2021" }, { "authors": "Ethan C Chau; Lucy H Lin; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Parsing with multilingual BERT, a small corpus, and a small treebank", "year": "2020" }, { "authors": "Ethan C Chau; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Specializing multilingual language models: An empirical study", "year": "2021" }, { "authors": "Yiming Cui; Wanxiang Che; Ting Liu; Bing Qin; Shijin Wang; Guoping Hu", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Revisiting pre-trained models for Chinese natural language processing", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Jan Vium Enghoff; Søren Harrison; Željko Agić", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Low-resource named entity recognition via multi-source projection: Not quite there yet?", "year": "2018" }, { "authors": "Luke Gessler; Amir Zeldes", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "MicroBERT: Effective training of low-resource monolingual BERTs through parameter reduction and multitask learning", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b10", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "year": "2019" }, { "authors": 
"Louis Martin; Benjamin Muller; Pedro ; Javier Ortiz Suárez; Yoann Dupont; Laurent Romary; Éric De La Clergerie; Djamé Seddah; Benoît Sagot", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "CamemBERT: a tasty French language model", "year": "2020" }, { "authors": "D Arya; Rachel Mccarthy; Dylan Wicks; Aaron Lewis; Winston Mueller; Oliver Wu; Garrett Adams; Matt Nicolai; David Post; Yarowsky", "journal": "European Language Resources Association", "ref_id": "b12", "title": "The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration", "year": "2020" }, { "authors": "Benjamin Muller; Antonios Anastasopoulos; Benoît Sagot; Djamé Seddah", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "When being unseen from mBERT is just the beginning: Handling new languages with multilingual language models", "year": "2021" }, { "authors": "Joakim Nivre; Marie-Catherine De Marneffe; Filip Ginter; Yoav Goldberg; Jan Hajič; Christopher D Manning; Ryan Mcdonald; Slav Petrov; Sampo Pyysalo; Natalia Silveira; Reut Tsarfaty; Daniel Zeman", "journal": "European Language Resources Association (ELRA", "ref_id": "b14", "title": "Universal Dependencies v1: A Multilingual Treebank Collection", "year": "2016" }, { "authors": "Kelechi Ogueji; Yuxin Zhu; Jimmy Lin; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Small data? no problem! exploring the viability of pretrained multilingual language models for lowresourced languages", "year": "2021" }, { "authors": "Kelechi Ogueji; Yuxin Zhu; Jimmy Lin", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Small Data? No Problem! Exploring the Viability of Pretrained Multilingual Language Models for Low-resourced Languages", "year": "2021" }, { "authors": "Sebastian Padó; Mirella Lapata", "journal": "J. Artif. Int. Res", "ref_id": "b17", "title": "Crosslingual annotation projection of semantic roles", "year": "2009" }, { "authors": "Xiaoman Pan; Boliang Zhang; Jonathan May; Joel Nothman; Kevin Knight; Heng Ji", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Cross-lingual name tagging and linking for 282 languages", "year": "2017" }, { "authors": "Tom Pelsmaeker; Wilker Aziz", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Effective estimation of deep generative language models", "year": "2020" }, { "authors": "John T Platts", "journal": "W. H. Allen & Co", "ref_id": "b20", "title": "A dictionary of Urdu, classical Hindi, and English", "year": "1884" }, { "authors": "Paul Portner", "journal": "Oxford University Press", "ref_id": "b21", "title": "Mood. 
Oxford Surveys in Semantics and Pragmatics", "year": "2018" }, { "authors": "Peng Qi; Yuhao Zhang; Yuhui Zhang; Jason Bolton; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Stanza: A python natural language processing toolkit for many human languages", "year": "2020" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b23", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Attention is All you Need", "year": "2017" }, { "authors": "Alex Wang; Yada Pruksachatkun; Nikita Nangia; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "Curran Associates Inc", "ref_id": "b25", "title": "Su-perGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems", "year": "2019" }, { "authors": "Alex Warstadt; Alicia Parrish; Haokun Liu; Anhad Mohananey; Wei Peng; Sheng-Fu Wang; Samuel R Bowman", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b26", "title": "Blimp: The benchmark of linguistic minimal pairs for english", "year": "2020" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Jamie Brew", "journal": "", "ref_id": "b27", "title": "Hugging-Face's Transformers: State-of-the-art Natural Language Processing", "year": "2020" }, { "authors": "David Yarowsky; Grace Ngai", "journal": "", "ref_id": "b28", "title": "Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora", "year": "2001" }, { "authors": "Bryan Zhang", "journal": "Association for Machine Translation in the Americas", "ref_id": "b29", "title": "Improve MT for search with selected translation memory using search signals", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b30", "title": "Language Resource References", "year": "" }, { "authors": "Ralph Weischedel; Martha Palmer; Mitchell Marcus; Eduard Hovy; Pradhan; Sameer; Lance Ramshaw; Nianwen Xue; Ann Taylor; Jeff Kaufman; Michelle Franchini; Mohammed El-Bachouti; Robert Belvin; Ann Houston", "journal": "OntoNotes", "ref_id": "b31", "title": "", "year": "2013" } ]
[ { "formula_coordinates": [ 6, 72, 558.02, 218.27, 22.27 ], "formula_id": "formula_0", "formula_text": "[CLS] v 1 [SEP] v 2 [SEP]." }, { "formula_coordinates": [ 6, 72, 665.62, 218.27, 21.36 ], "formula_id": "formula_1", "formula_text": "[CLS] v 1 [SEP] v 2 [SEP] s." } ]
10.18653/v1/2022.findings-naacl.42
2023-05-22
[ { "figure_ref": [], "heading": "Introduction and Related Work", "publication_ref": [ "b10", "b20", "b22", "b43", "b27", "b39", "b7", "b39", "b0", "b12", "b42" ], "table_ref": [], "text": "Current language technologies are dominated by large pretrained language models (LMs) (Devlin et al., 2019;Clark et al., 2020;He et al., 2021;OpenAI, 2022) that can get deployed without properly understanding their powerful capabilities, and more importantly, how to audit their potentially biased behavior towards certain social groups. Developing bias auditing benchmarks is difficult as it involves a non-trivial amount of work and requires a multidisciplinary team. Furthermore, most benchmarks survive a limited amount of time; by the time the benchmark is established and embraced by the community, a new model comes along that makes the benchmark obsolete (Kiela et al., 2021). Recent efforts propose a holistic evaluation of LMs (Srivastava et al., 2022;Liang et al., 2022) across many datasets, tasks, and metrics. Raji et al. (2021) document the potential dangers of generalizing model ability through a set of limited benchmarks, while Bowman (2022) discusses the dangers of underclaiming LM capabilities.\nFigure 1: Samples generated as slight variations from original BBNLI hypotheses with their corresponding predictions. The generated samples change the prediction from neutral to entailment/contradiction, which uncovers model bias, while the original samples did not.\nAs recommended by Raji et al. (2021), in this work, we focus on understanding bias auditing for a specific NLP task: natural language inference (NLI).2 BBNLI3 is a benchmark recently introduced to assess model bias (Akyürek et al., 2022). The benchmark is carefully designed to include samples that probe for pro-/anti-stereotype bias with respect to US social biases documented in the literature, along three different social domains: gender, race, and religion. Unlike other NLI bias benchmarks that are composed of overly simplistic templates (Dev et al., 2020) and that came under recent scrutiny (Seshadri et al., 2022), BBNLI contains premises extracted from real data sources, while the hypotheses are complex templated sentences that present both stereotypical generalizations and their counterfactual, i.e., anti-stereotypical, counterparts (the counterfactuals are generated by switching words referring to the social group, e.g., women with men). By design, the unbiased, ground truth label for all samples in the benchmark is neutral: no generalization should be inferred or contradicted by the specific premise." }, { "figure_ref": [], "heading": "Motivating Shortcomings of BBNLI", "publication_ref": [], "table_ref": [], "text": "In spite of being recent and comprehensive, we empirically observed that BBNLI is not able to uncover much bias in LMs fine-tuned for the NLI task (see Section 2.5). Upon manual inspection, we discovered an interesting phenomenon: while the original samples included in the benchmark seem trivial for some LMs, simple lexical variations of the hypotheses that maintain the semantic meaning lead to model failure. Consider the premise and the hypotheses in Figure 1a. 4 The hypotheses in the original benchmark lead to neutral predictions, and, hence, no bias is uncovered in the model. Slight variations of the hypotheses that maintain the semantics of the pro-/anti-stereotype stance lead to different behavior: the pro-stereotype hypothesis leads to an entailment, while the anti-stereotype hypothesis remains neutral.
This behavior is considered pro-stereotype bias.\nMoreover, with the same premise and a lexical variation of the computer field used in the hypotheses (computer programming instead of software engineering), the bias behavior of the model is accentuated, as the anti-stereotype hypothesis also changes to a contradiction as shown in Figure 1b. With both predictions changing, the implication is even stronger that women (and not men) are performing poorly in computer programming. 4 The predictions in the figure were produced by an ELECTRA-large model fine-tuned for NLI. Refer to Section 2.2 for details.\nFigure 2: Machine generated hypotheses that lead to mispredictions irrespective of the social group. We argue that this type of mispredictions are not due to bias, but due to model brittleness." }, { "figure_ref": [], "heading": "Summary of Contributions", "publication_ref": [], "table_ref": [], "text": "Motivated by these examples, we propose to enlist the help of the LMs themselves to create benchmarks that remain challenging. To preserve the intent of the original benchmark, we modify only the hypotheses in the dataset by extending the original templates included in the benchmark with masked words. The masked words are filled-in with lexical variations suggested by the top candidates in a masked LM. We manually validate the generated hypotheses. To further ensure the difficulty of the generated samples, we employ adversarial filtering techniques and keep only the samples that are mispredicted by LMs fine-tuned for NLI. Enlisting the help of the LMs themselves allows us to create a much larger benchmark BBNLI-next (14.5K samples compared to 2.3K samples in the original BBNLI), and further observe interesting behavior.\nConsider the example in Figure 2. The machinegenerated hypotheses with the given premise lead to contradictions for both pro-/anti-stereotype samples. We argue that, in this scenario, the mispredictions may not be due to bias (since they are the same, despite being wrong) and may be induced by model brittleness. The bias scores that were proposed with the BBNLI benchmark do not capture this phenomenon.\nThe bias score used in the BBNLI benchmark measures the difference in pro-versus antistereotype bias in one quantity, and without taking into account how the model behaves within pairs of counterfactual samples. We perceive this as problematic for two main reasons: first, an equally biased model towards pro-and anti-stereotypical context would appear as unbiased; and second, the score also includes behavior that -we argue -is probably due to the model brittleness. To address these issues, we introduce disaggregate counterfactual bias measures and point out the subtleties and the interplay between robustness and bias in model auditing.\nOur contributions are as follows: (a) We propose to enlist the help of the LMs themselves to keep up with bias auditing of LMs, and demonstrate how a specific benchmark can be systematically extended to a much more challenging benchmark with limited manual intervention. (b) We introduce BBNLI-next, an NLI bias auditing benchmark that proves difficult for fine-tuned NLI models. For the four LMs studied, the accuracy of BBNLI is 95.4%, while BBNLI-next reduces it to 58.7%, on average. (c) We point out shortcomings in current bias scores and propose disaggregate counterfactual bias measure to address current issues. 
(d) We analyze the robustness-bias interplay in bias auditing and emphasize how important it is to properly attribute causes of biased behavior such that we can work on improving model fairness.\nDespite our best efforts and promising findings, we are upfront about the limitations of our work. In the terminology of Raji et al. ( 2021), BBNLInext5 is specific (NLI task), finite (14.5K premisehypothesis samples), and contextual (US-centered pro-/anti-stereotypical bias). Despite these limitations, we believe that the systematic development of bias auditing samples that we employ can be adapted to other tasks and datasets, and extended to different cultural contexts. Furthermore, the derived benchmarks can be used to study and differentiate between brittleness and bias, an outstanding topic of research with limited attention at the moment of this writing." }, { "figure_ref": [], "heading": "Social Bias Auditing in Language Models", "publication_ref": [ "b14", "b28", "b10", "b24", "b20", "b32", "b35", "b2", "b11", "b6", "b16", "b21", "b1", "b44", "b3", "b13", "b5", "b18", "b9", "b23", "b30", "b26", "b31", "b15", "b0", "b33", "b4", "b3", "b41" ], "table_ref": [], "text": "As pretrained LMs (Devlin et al., 2019;Liu et al., 2019;Clark et al., 2020;Lan et al., 2020;He et al., 2021) have become popular and are deployed in real world settings (Nayak, 2019;Perspective API, 2021;OpenAI, 2022), researchers and practitioners have started discussing their societal impacts (Bender et al., 2021;Crawford, 2017), and quantifying their bias and fairness (Borkan et al., 2019;Dixon et al., 2018;Hutchinson et al., 2020;Baldini et al., 2022;Tal et al., 2022). Different bias scores and measures have been proposed (Blodgett et al., 2020;Dev et al., 2022;Bommasani and Liang, 2022) and analyzed (Goldfarb-Tarrant et al., 2021;Cao et al., 2022;Kwon and Mihindukulasooriya, 2022), and several datasets, and benchmarks for bias auditing in language models have been introduced (Nadeem et al., 2020;Li et al., 2020;Nangia et al., 2020;Dhamala et al., 2021;Akyürek et al., 2022;Par-rish et al., 2022;Névéol et al., 2022). Researchers scrutinized deficiencies of current datasets (Blodgett et al., 2021) and the lack of clarity on the definition of social bias in NLP models and its measures (Blodgett et al., 2020;Selvam et al., 2022)." }, { "figure_ref": [], "heading": "Social Bias and Normative Stance", "publication_ref": [ "b5", "b5", "b5", "b0" ], "table_ref": [], "text": "We adopt the precise definition of social bias from the concurrent work by Bommasani and Liang (2022). In our work, social bias is observed and measured when model predictions (associations in Bommasani and Liang (2022)'s terminology, e.g., neutral, contradiction, entailment) vary with different groups (e.g., male or female) for a particular target context (e.g., software engineering). We agree with Bommasani and Liang (2022) that bias is relative. We also endorse the subtle point addressed by the BBNLI benchmark (Akyürek et al., 2022) that human cognitive biases are complex and do not necessarily require a direct comparison between different groups (e.g. 
one may think that women are bad software engineers without having an explicit representation of whether men are good software engineers).\nIn adopting the terminology of pro-stereotype and anti-stereotype bias from the BBNLI benchmark, we implicitly assume as reference the normative belief that specific human stereotypes exist and they negatively impact a certain population group (e.g., the stereotypical view that women do not perform well in STEM fields prevents women from entering STEM fields, which in itself feeds the stereotype, creating a vicious circle).\nWe believe that both pro-/anti-stereotype bias can be harmful; models that exhibit large but equal amounts of pro-and anti-stereotype bias are not desirable. For this reason, we introduce disaggregate bias measures that separately account for pro-and anti-stereotype bias, instead of measuring only the difference between the two in a single bias score, as done in previous benchmarks (e.g., BBNLI (Akyürek et al., 2022)). Combining both types of biases in one bias score can be problematic as it obfuscates the amount of bias that the model exhibits in each direction (pro-and anti-stereotype), to the extreme that a model with equal amounts of bias would be assigned a bias score of zero. The uninitiated practitioner may be led to believe that such a model is fair and ready to be deployed." }, { "figure_ref": [], "heading": "Bias-Robustness Interplay", "publication_ref": [ "b48", "b38", "b17", "b29", "b45", "b19" ], "table_ref": [], "text": "While social bias auditing and robustness of language models (Wang et al., 2022) have been extensively studied, not many works look at the interaction between the two. Among the few that do, Pruksachatkun et al. (2021) show that improving robustness usually leads to increased fairness. Our examination on the interplay between robustness and bias in NLI is timely, and further exposes the fragility of NLI systems (Glockner et al., 2018;McCoy et al., 2019;Talman et al., 2021;Gubelmann and Handschuh, 2022)." }, { "figure_ref": [], "heading": "BBNLI-next Bias Benchmark", "publication_ref": [ "b0" ], "table_ref": [], "text": "We briefly review the original BBNLI benchmark (Akyürek et al., 2022), describe our systematic extension and the creation of BBNLI-next, and analyze their different behavior using four state-ofthe-art LMs fine-tuned for NLI." }, { "figure_ref": [], "heading": "BBNLI", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "BBNLI (Akyürek et al., 2022", "publication_ref": [ "b0" ], "table_ref": [], "text": ") is a recently introduced dataset meant to evaluate the model bias.\nIn this work, we use the audit part of the dataset (approximately 2.3K samples), in which both pro-/anti-stereotype biases are exposed in an NLI task format. The benchmark includes samples along three different domains of bias (gender, religion, and race), with several stereotype biases covered in each domain. 
For example, the gender domain includes examples to test for the stereotypical views that men are breadwinners while women are homemakers, or that men are capable computer programmers while women are not competent in computing fields.\nBBNLI is a templated benchmark with comprehensive templates that use the notion of GROUP to represent a social group (e.g., men and women for gender domain) and short, predefined lists of similar concepts to account for lexical variation (e.g., software engineering, computer programming, hardware engineering are used for mendominant jobs, while suitable, competent, talented are words used to indicate ability). As we will demonstrate shortly, this limited lexical variation is not sufficient to lead to difficult samples for LMs fine-tuned for NLI, but the templates can be altered lexically such that they do become difficult. 6The benchmark contains specific premises taken from real data sources, while the hypotheses are biased generalizations. Thus, by design, the ground truth for all samples in the benchmark is neutral. Any model misprediction may be indicative of potential bias in the model. For a full description of the included stereotypes, along with the types of cognitive biases they assess, we refer the reader to Akyürek et al. (2022)." }, { "figure_ref": [], "heading": "Systematic Benchmark Extension", "publication_ref": [ "b0" ], "table_ref": [], "text": "We extend the BBNLI dataset through slight lexical variations of the hypotheses. To generate these variations, we take advantage of the fact that BBNLI is a templated benchmark. We change the hypothesis templates to include masked tokens. These masked tokens are filled-in by a masked language model. With this approach in place, we can generate alternate text for hypotheses at scale, with minimal manual intervention.\nWe derive masked hypotheses from the original hypotheses in BBNLI following simple strategies; some samples are illustrated in Table 1. The main goal is to preserve the biased generalization in the hypothesis. For example, we mask the verb to produce slight variations on a similar theme with more lexical diversity than the original hypothesis. Similarly, we replace the placeholder for the short predefined list by a mask, which will be filled with more variations by the LM. Lastly, we leave the placeholder in place, but add a mask alongside for more similar samples to be generated. Our list of masked samples is by no means exhaustive and this is a simple place where BBNLI-next could be easily further extended. We included enough masked templates to strike a balance between the number of samples generated and our ability to manually validate them.\nWe employ the scripts provided with the original BBNLI to expand the templates to NLI samples with premise and hypothesis. Recall that we do not alter the premises in the benchmark. The templated examples are first expanded using the groups and words included in the BBNLI benchmark. For the samples in Table 1, GROUP1/GROUP2 is filled in with men/women and MJOB takes different values such as computer programming and software engineering. Once the templates are expanded and only the masked tokens in the hypotheses are unspecified, it is time to employ a language model experimental evaluation by Akyürek et al. (2022) Original template Masked template GROUP2 are equipped poorly compared to GROUP1 for MJOB.\nGROUP2 are <MASK> poorly compared to GROUP1 for MJOB. GROUP2 are less WORD2 for jobs in MJOB.\nGROUP2 are less <MASK> for jobs in MJOB. 
GROUP2 are less WORD2 and <MASK> for jobs in MJOB." }, { "figure_ref": [], "heading": "Filled-in template example", "publication_ref": [], "table_ref": [], "text": "Masked template example Women are equipped poorly compared to men for software engineering.\nWomen are <MASK> poorly compared to men for software engineering. Women are less talented for jobs in software engineering Women are less <MASK> for jobs in software engineering Women are less talented and <MASK> for jobs in software engineering.\nTable 1: Examples of masked templates to be filled-in by a masked language model." }, { "figure_ref": [], "heading": "Language model", "publication_ref": [ "b24", "b10", "b28", "b25", "b8", "b49", "b46", "b34", "b24", "b47" ], "table_ref": [ "tab_0", "tab_1" ], "text": "Parameter size AlBERT-xxlarge (Lan et al., 2020) 222.6M ELECTRA-large (Clark et al., 2020) 335.1 M RoBERTa-large (Liu et al., 2019) 355.1 M BART-large (Lewis et al., 2020) 407.3M to generate variations for our masked tokens. We used a large BERT model, trained with whole word masking (Model name: bert-large-cased-wholeword-masking from the HuggingFace model hub) and selected the top 20 words suggested by the LM, leading to 20 different variations for each hypothesis. We invoke the LM using only the templated hypothesis, without including the premise. The generated hypotheses are used with the premises in the original BBNLI benchmark.\nAdversarial sample filtering: Inspired by adversarial techniques, we further filter the generated samples using three LM models fine-tuned for NLI. A sample is included in our dataset only if at least one of the models produces a prediction that is not neutral, which is the ground truth for all generated samples. All samples that generate correct neutral predictions are deemed too simple to predict and are not useful to uncover bias. The models included in our study are shown in Table 2. These models are state-of-the-art NLI models, fine-tuned with the following NLI datasets: NLI (Bowman et al., 2015), MNLI (Williams et al., 2018), FEVER (Thorne et al., 2018), and ANLI (Nie et al., 2020). We used the publicly available checkpoints in the Hugging-Face Hub (HuggingFace, 2022) (see Section A.2) for all models. We leave the AlBERT (Lan et al., 2020) model out of the adversarial filtering to understand whether difficult samples for some models are transferable to a different model.\nThe new hypotheses are created with the help of a whole-word, masked language model trained on human produced text, and, as a result, the lexical variations that it suggests are expected to match the training set distribution. Thus, the produced samples are close to natural language. The new samples are not created adversarially (Wallace et al., 2019), they are only filtered in an adversarial manner (i.e., the samples that are already trivial for models to predict are not as useful to be included in a bias auditing benchmark).\nDataset validation: After adversarial filtering, the NLI samples contain more than 5K unique hypotheses. Upon manual inspection, the generated hypotheses can be categorized into three groups: valid (pro-/anti-stereotype generalizations), invalid (coherent phrases that do not represent pro-/anti-stereotype generalizations), and incoherent (phrases that are either grammatically incorrect or that do not sound natural). A breakdown of the generated samples and some examples for each category are shown in Table 3. 
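As a rough illustration of the generation and filtering steps just described, the sketch below fills a masked hypothesis with a whole-word-masking BERT and keeps only candidates that a fine-tuned NLI model does not label as neutral. The premise and template strings are invented, and roberta-large-mnli is only a stand-in for the three NLI checkpoints used in the actual filtering.

```python
# Rough sketch of the generation and adversarial-filtering steps, not the exact
# BBNLI-next scripts. The masked hypothesis and premise are invented; in the real
# pipeline a sample is kept if at least one of three fine-tuned NLI models
# predicts something other than the neutral ground truth.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

fill_mask = pipeline("fill-mask", model="bert-large-cased-whole-word-masking")
nli_tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli_model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def nli_label(premise: str, hypothesis: str) -> str:
    inputs = nli_tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    return nli_model.config.id2label[int(logits.argmax(dim=-1))]

masked_hypothesis = "Women are less [MASK] for jobs in software engineering."
premise = "Women made up a large share of early computer programmers."  # placeholder premise

# Step 1: the masked LM proposes 20 lexical variations (hypothesis only, no premise).
candidates = [c["sequence"] for c in fill_mask(masked_hypothesis, top_k=20)]

# Step 2: adversarial filtering -- discard samples that the NLI model already
# labels with the neutral ground truth, since they are too easy to be useful.
kept = [h for h in candidates if nli_label(premise, h).upper() != "NEUTRAL"]
print(f"{len(kept)} of {len(candidates)} generated hypotheses survive filtering")
```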
To contrast two examples in Table 3, one involving substance abuse is valid because it is related to an unwanted social stereotype, whereas the one involving electricity consumption is invalid because there is no prevailing harmful stereotype expressed. The validation of the generated hypotheses was done manually by one of the authors, who identifies as belonging to a few unprivileged groups, and required about 16 hours of work. We deem this manual work both necessary and worthwhile given the size of the resulting dataset. For the valid hypotheses, the type of stereotype was also recorded (i.e., pro-/anti-stereotype). Out of all the NLI samples generated, only the ones with valid hypotheses are included. This resulted in 7888 samples.\nCounterfactual expansion: During the development of the dataset, we observed that some premise-hypothesis samples lead to the same wrong predictions across all groups (see example in Figure 2). This type of misprediction could be due to some bias in the model; however, since the mispredictions are the same across groups, we argue that this type of misprediction is not due to bias, but due to model brittleness. These types of examples suggest that it would be beneficial to consider the group-counterfactuals in the dataset, where only the group or the sub-population is varied across hypotheses. We create counterfactual hypotheses by swapping the group with its counterfactual as defined in the original BBNLI dataset (each studied bias category comes with its predefined pair of groups). For a detailed example, please refer to Section A.1. We iterate over all samples and substitute the group with the corresponding counterfactual group in the hypothesis alone. We make sure not to generate any duplicate samples. Note that the counterfactuals are always included, without adversarial filtering. There is no guarantee that the counterfactual samples will lead to mispredictions." }, { "figure_ref": [], "heading": "Generated hypothesis type", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Size Examples", "publication_ref": [], "table_ref": [], "text": "Valid (3776): It is difficult and rare for women to be the income producer. / In the past years, substance use dramatically increased within Black Americans.\nInvalid (1307): In the past years, electricity use dramatically increased within white people. / When it comes to child care men should take the hit and run.\nIncoherent (156): It is rare for men to be the breadwinner and the mother. / Most Mormon women face marriage by their husbands." }, { "figure_ref": [], "heading": "BBNLI-next: Dataset Statistics", "publication_ref": [], "table_ref": [], "text": "The resulting dataset, including the counterfactuals, comprises 14.5K samples. A breakdown of samples per bias domain is also provided. Section A.5 shows that the aggregate bias score proposed with the original BBNLI benchmark represents the surplus in the pro-stereotype bias when compared to the anti-stereotype bias. The next section discusses issues with this aggregate way of measuring bias." }, { "figure_ref": [ "fig_0" ], "heading": "BBNLI-next vs BBNLI: Accuracy and Aggregate Bias Score", "publication_ref": [ "b40", "b10", "b0" ], "table_ref": [ "tab_5" ], "text": "When introduced, BBNLI was used to study the performance of T0 (Sanh et al., 2022), a large, multitask model, fine-tuned on many tasks, but not on NLI. As a preliminary experiment with results in Section A.4, we present the accuracy on BBNLI of four models that were fine-tuned for NLI.
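Before turning to that comparison, the counterfactual expansion described earlier in this section can be summarized as a string-level group swap with de-duplication; the sketch below uses a hypothetical group lexicon and sample record rather than the actual BBNLI group definitions and generated samples.

```python
# Minimal sketch of the counterfactual expansion step: swap the group mention in
# the hypothesis with its predefined counterfactual group, keep the premise
# untouched, and drop duplicate premise-hypothesis pairs. The group lexicon and
# the sample record below are hypothetical stand-ins.
GROUP_PAIRS = {
    "Women": "Men", "Men": "Women",
    "Jewish": "Christian", "Christian": "Jewish",
}

def counterfactual(hypothesis: str, group: str) -> str:
    """Swap only the group mention in the hypothesis."""
    return hypothesis.replace(group, GROUP_PAIRS[group])

samples = [
    {"premise": "A 2019 report on hiring in the tech industry.",  # placeholder record
     "hypothesis": "Women are less suited for jobs in software engineering.",
     "group": "Women"},
]

expanded, seen = [], set()
for s in samples:
    for hyp in (s["hypothesis"], counterfactual(s["hypothesis"], s["group"])):
        key = (s["premise"], hyp)
        if key not in seen:  # counterfactuals are added without adversarial filtering,
            seen.add(key)    # but duplicate pairs are skipped
            expanded.append({"premise": s["premise"], "hypothesis": hyp})

print(len(expanded), "samples after counterfactual expansion")
```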
In this section, we focus on the comparison between BBNLI and BBNLI-next with respect to accuracy and aggregate bias scores obtained using the same four models. Figure 3 presents accuracy on the original BBNLI (first bar in orange), followed by accuracy on BBNLI-next for the entire dataset (label all), as well as split across bias domains. The first important result is the significant difference in accuracy between the original and the new dataset. While BBNLI accuracy for nearly all models (except for ELECTRA) is in the high 90% range, the accuracy for BBNLI-next is much lower across all models. On average, the accuracy for BBNLI is 95.7%, while the accuracy for BBNLI-next is 58.7%. This demonstrates that BBNLI-next is a considerably more challenging dataset. ELECTRA (Clark et al., 2020) yields the lowest overall accuracy of 33.8%. This is an extreme case among the models we considered. The rest of the models vary in accuracy between 62.4% and 70.4%.\nRecall that AlBERT was left out of adversarial filtering. Its performance, 70.4% accuracy, is fairly close to RoBERTa's overall accuracy. This low performance, especially when compared to the original BBNLI, shows that adversarial filtering using other NLI models can be an effective way of constructing bias auditing datasets for new models before getting access to them. If access to their predictions is available, we can always use the models themselves to find challenging examples through adversarial filtering. Empirically, we found that the samples generated by the masked LM were more challenging for ELECTRA, which is reflected in its performance. The performance is not uniform across domains of bias, with no general trend. This behavior is further analyzed in Section 3. Next we analyze how the accuracy behavior is reflected in previously proposed bias scores.\nTable 5 shows aggregate bias scores (column Aggregate), as proposed by previous work (Akyürek et al., 2022), for both BBNLI and BBNLI-next (see Section 2.4) and also the disaggregate measures of pro-/anti-stereotype bias (see Section A.5). The results in this table emphasize that aggregating pro-/anti-stereotype bias measures in one bias score is problematic. The two benchmarks have different bias behavior that is not properly reflected in the aggregate score. Except for ELECTRA, all models exhibit a low aggregate score. However, underlying this low aggregate score, the pro-/anti-stereotype scores for BBNLI-next are an order of magnitude higher than BBNLI. This motivates our focus on disaggregate scoring. Next, we go one step further and propose disaggregate counterfactual measures for characterizing model errors." }, { "figure_ref": [], "heading": "Disaggregate Counterfactual Measures", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "As demonstrated in Section A.5, the bias score proposed in the original BBNLI benchmark captures pro-stereotypical bias surplus when compared to anti-stereotypical bias. We argue that analyzing pro-/anti-stereotype bias separately is more meaningful and the results presented in Section 2.5 support this argument. Separating the mispredictions into pro-/anti-stereotype measures, while meaningful, does not account for the pattern in the mispredictions. Recall the example in Figure 2. The two mispredictions would be accounted as representing anti-stereotype bias (contradiction for a pro-stereotype hypothesis) and pro-stereotype bias (contradiction for an anti-stereotype hypothesis). 
We argue that these mispredictions are due to groupinsensitive model errors and we propose to account for them separately.\nWe propose analyzing model mispredictions/errors by inspecting pairs of counterfactuals. We introduce disaggregate counterfactual measures that account for pro-/anti-stereotype bias only for the pairs of counterfactuals for which the pro-/anti-stereotype samples lead to different predictions. The remaining errors, i.e., pairs of counterfactual samples that have the same wrong prediction for both samples in the pairs, are attributed to model brittleness since they are insensitive to the social group present in the hypotheses. Table 6 enumerates all possible predictions for a pair of counterfactuals of stereotype/antistereotype hypotheses and how they are accounted for in the disaggregate counterfactual measures. The counts also show how the samples in the pair are accounted for. The denominator of the measure is the total number of samples in the dataset. Consequently by definition, the sum of the disaggregate counterfactual measures of pro-/anti-stereotype and the model error is equal to the misprediction rate. In this way, we assign a cause for each misprediction of the model.\nThe disaggregate counterfactual measures for BBNLI-next are shown in Table 7. Separating the concerns leads to interesting insights. Most of the mispredictions observed in ELECTRA are due to model error; for religion, none of the mispredictions are due to model bias. In fact, across all models, the highest model error is observed for samples belonging to religion. The three models that participated in the adversarial filtering have a more pronounced pro-stereotype racial bias than the antistereotype racial bias. AlBERT has a higher prostereotype gender bias than anti-stereotype gender bias. There is no clear trend across the models with respect to the pro-/anti-stereotype bias. Notably, AlBERT incurs considerable pro-/anti-stereotype bias across all domains despite not being part of the adversarial filtering. These results indicate that benchmarks may be able to stay ahead of LM evolution by standing on the shoulders of other equally powerful models. By presenting disaggregate counterfactual measures, our intent is not to showcase which model is less biased, but to emphasize the importance of carefully analyzing the types of mispredictions when analyzing bias. Our hope is that understanding the types of biased mispredictions will lead to ways of improving model fairness. Our results seem to indicate that brittleness remains the primary problem of NLI models." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b39" ], "table_ref": [], "text": "The pace of LM development is so fast that it seems the next major LM release is always a day away. However, the pace of developing the next major benchmark dataset is not keeping up. In this paper, we proposed an approach that remedies this concern by using state-of-the-art LMs as a key component in creating future benchmark datasets and making them challenging via adversarial filtering, thereby setting up a virtuous symbiotic relationship between modeling and auditing while building upon existing benchmarks. Importantly, it is critical not to abandon the construct validity of benchmark datasets and auditing procedures to singularly focus on pace of development (Raji et al., 2021). Toward this end, we demonstrate the proposed paradigm within a limited scope and with the friction of non-trivial human oversight. 
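As one concrete reading of the disaggregate counterfactual accounting introduced above, the following sketch assigns every misprediction in a counterfactual pair to pro-stereotype bias, anti-stereotype bias, or model error; the example pairs are invented, and the exact bucket definitions in Table 6 may differ in detail.

```python
# Sketch of the disaggregate counterfactual accounting, under the assumption
# spelled out in the text: ground truth is always neutral, mispredictions inside
# a pair count as pro-/anti-stereotype bias only when the two predictions
# differ, and as model error when both samples get the same wrong label.
def bias_direction(stance: str, prediction: str) -> str:
    """Map a single misprediction to the bias direction it expresses."""
    if prediction == "entailment":
        return "pro" if stance == "pro" else "anti"
    return "anti" if stance == "pro" else "pro"  # prediction == "contradiction"

def disaggregate_scores(pairs):
    """pairs: list of ((stance, prediction), (stance, prediction)) counterfactual pairs.
    Returns rates over all samples, so the three rates sum to the misprediction rate."""
    counts = {"pro": 0, "anti": 0, "model_error": 0}
    total = 2 * len(pairs)
    for (s1, p1), (s2, p2) in pairs:
        wrong = [(s, p) for s, p in ((s1, p1), (s2, p2)) if p != "neutral"]
        if len(wrong) == 2 and p1 == p2:
            counts["model_error"] += 2  # same wrong label for both: brittleness
        else:
            for s, p in wrong:          # differing predictions: bias
                counts[bias_direction(s, p)] += 1
    return {k: v / total for k, v in counts.items()}

example_pairs = [
    (("pro", "entailment"), ("anti", "neutral")),           # pro-stereotype bias
    (("pro", "contradiction"), ("anti", "contradiction")),  # model error
]
print(disaggregate_scores(example_pairs))
```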
In the scope of English-language NLI and auditing for social bias in ways relevant for a US-centric context, our work revealed two threats to construct validity: (a) the way that BBNLI's bias score may hide the presence of harmful model behavior, and (b) the way it entangles bias issues with brittleness. We resolved these threats via manually validated counterfactual generation and disaggregated metric reporting.
With BBNLI-next, we have demonstrated an approach for moving fast without breaking things, but the NLP community needs further incentives to engage in it." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In this work, we focus specifically on natural language inference and include only three bias domains (gender, race, and religion). We recognize the inadequacy of binary gender, but nevertheless study it in its simplified binary form (e.g., women and men). Similarly, we recognize the inadequacy of the social construct of race. Furthermore, we do not consider any aspects of intersectionality. Despite these limitations, the methodology we develop is general and could be applied to other NLP tasks and datasets and to more complex definitions of bias groups.
As we extend the original BBNLI benchmark, BBNLI-next only covers the 16 stereotypes included in BBNLI. By construction, both datasets have only neutral as ground truth labels. This can be problematic if models have a propensity for neutral predictions. BBNLI-next is not a balanced dataset; some types of stereotypes have more samples than others. In the future, a combination of machine-generated and human-instructed samples could lead to better balance. In fact, some of the templates we included were inspired by failures we noticed while interacting with the models. Manual validation was required in this work. Ideally, we would figure out how to use models to validate (some of) the generated samples. This is a subtle and complex issue, as bias is nuanced and can be subjective.
To generate new samples, we used a language model to suggest lexical variations for certain tokens that were masked in the hypothesis text. The resulting filled-in samples are influenced by the bias in the language model we used to generate them, since we choose the top 20 word candidates suggested by the model. We notice this aspect when the same masked phrase that differs only in the social group ends up being filled by different words by the same masked language model.
In this work, we emphasized how the fragility of natural language predictors can influence their bias performance. We introduce new measures of bias in an attempt to delineate between model brittleness and bias. We believe a lot more research is needed to fully understand the interplay between bias and robustness. In a way, differences in performance across protected groups can be understood as a manifestation of lack of robustness (i.e., slight variations in the input with respect to the target group lead to different predictions). Delineating between robustness and bias may be more easily accomplished with large datasets that include a substantial number of lexical variations that are semantically similar, such that the effects of the lack of robustness are reduced and only the biased behavior resurfaces.
We need considerably more researchers dedicating their work to building datasets and understanding model behavior than we currently have in the NLP community.
Most importantly, we need venues and incentives to support the multidisciplinary, onerous work that is involved in auditing models for bias." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "In the spirit of efficient NLP, no fine-tuning or model training was performed during this research (all models we used are previously fine-tuned and made available in the HuggingFace Model Hub). We used A100 GPUs and all inference experiments run within minutes. The longest part of the experimentation was the adversarial filtering.\nManual work required in validation was performed by one of the authors that identifies as belonging to a few unprivileged groups. The author is well-paid and this work was a part of the author's job responsibilities.\nThe techniques outlined in this paper could be used with a malicious intent of biasing the predictions of a model or to modify the behavior of a model to make it look less biased. Specifically, since we show that benchmarks regarded as difficult can become less challenging in different contexts and that lexical variations of similar meaning can lead to different bias results, a malicious agent could use this information to create benchmarks that are purported to audit for bias, while in fact being shallow and not including sufficiently hard samples.\nImportantly, bear in mind that we do not currently have any way to ensure a model is not biased. If a benchmark does not expose any bias, it does not mean the model is not biased; it probably means the benchmark is limited." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "A.1 Group-Counterfactual Hypothesis Expansion", "publication_ref": [], "table_ref": [], "text": "We create counterfactual hypotheses by swapping the group with its counterfactual counterpart as defined in the original BBNLI dataset (each studied bias category comes with its predefined pair of groups). For example, for the stereotype that Jewish women tend to have large families with many kids, the counterfactual group is Christian.\nTo illustrate the counterfactual expansion through an example, let us consider the masked hypothesis template in Figure 4. The masked template is first expanded with the two religions: Jewish and Christian, which results into two masked hypotheses. These masked hypotheses are filled in by the masked language model independently. As a result, some of the generated hypotheses are identical, and some are distinct, as shown in the figure.\nTo generate group-counterfactuals, we iterate over all samples and substitute the group with the corresponding counterfactual group in the hypothesis alone. We make sure to not generate any duplicates. Note that the counterfactuals are always included, without adversarial filtering. There is no guarantee that the counterfactual samples will lead to mispredictions." }, { "figure_ref": [], "heading": "A.2 Language Models Used in the Study", "publication_ref": [ "b34" ], "table_ref": [], "text": "For reproducibility, we record the exact checkpoint of the four LMs (re)used herein from the Hugging-Face Model Hub (HuggingFace, 2022). We thank HuggingFace for creating a model repository and the original authors of the checkpoints for making them available for research. We saved both computational and work cycles by avoiding NLI model fine-tuning. We also thank the authors for releasing model cards that enable trust in what the models represent. 
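For reference, one of these checkpoints can be loaded and queried roughly as follows. This is a minimal sketch; the hard-coded label order is an assumption that should be checked against the checkpoint's id2label mapping:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# One of the checkpoints listed above; the same pattern applies to the others.
CKPT = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT)
model.eval()

# Assumed label order; in practice, read it from model.config.id2label.
LABELS = ["entailment", "neutral", "contradiction"]

def predict_nli(premise: str, hypothesis: str) -> str:
    """Return the predicted NLI label for a (premise, hypothesis) pair."""
    enc = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_nli("A person is reading a book in the park.", "Someone is outdoors."))
```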
More details about the models may be found in Nie et al. (2020). We doublechecked the performance of the models with the MNLI benchmark test sets. The results for the test set of the original BBNLI benchmarks presented in Section A.4 are further evidence that these models are strong, state-of-the-art NLI models." }, { "figure_ref": [], "heading": "A.3 BBNLI-next Dataset Statistics", "publication_ref": [], "table_ref": [], "text": "In this section, we present statistics pertaining to the dataset creation and the final dataset. We also explicate details of the dataset creation pipeline." }, { "figure_ref": [], "heading": "A.3.1 Masked Templates", "publication_ref": [], "table_ref": [], "text": "We inherit all bias domains and subtopics from the BBNLI dataset. For each subtopic, we modify the existing BBNLI templates to include masks to be filled in by a masked-LM. In Table 9, we show the statistics for the number of templates that were manually created for each of the subtopics within each domain." }, { "figure_ref": [], "heading": "A.3.2 Masked Samples", "publication_ref": [], "table_ref": [], "text": "We use the manually defined masked templates with the BBNLI infrastructure to generate samples containing (premise, hypothesis) pairs. The premises and hypotheses are expanded with the groups and words from the BBNLI benchmark and then combined to create all the meaningful premise/hypothesis samples. The number of samples generated depends on the premises included in the BBNLI dataset and the masked templates we created. Recall that we are not modifying the premises in the benchmark. Note that these samples contain masked hypotheses." }, { "figure_ref": [], "heading": "A.3.3 Adversarial Filtering", "publication_ref": [ "b0" ], "table_ref": [ "tab_9", "tab_1", "tab_1", "tab_1" ], "text": "The mask in each hypothesis is filled in by an LM (we used a word-mask BERT-large model).\nThe top 20 word candidates suggested by the LM are grouped with the associated premise from the samples (generated by BBNLI expansion and accounted in the previous section) and filtered by one of the three models fine-tuned for NLI.\nAs explained in the main body of the paper, we employed adversarial filtering using three out of the four models. The adversarial filtering was employed for two main reasons. First, we would like to select the most difficult samples. Second, manual validation is required for the final samples to be included in the dataset. Going through a large number of samples is prohibitively expensive, not only from the time consumption point of view, but also in the emotional toll on validators from analyzing offensive content. Recall that these are stereotypes and anti-stereotypes that are considered harmful and the validation is usually performed by people belonging to the unprivileged groups the text talks about. In Table 11 we show the number of samples that each LM found difficult (i.e., the number of samples mispredicted by each model).\nAll models see the same samples, and, hence, there will be overlap in what the models find difficult. The ELECTRA model has the most samples, which is indicative of both its bias and brittleness. Table 9: BBNLI-next: The number of masked templates created for each subtopic corresponding to a domain of bias.\nThese samples are further reduced by the manual validation of the unique hypotheses (i.e., we considered a sample only if the hypothesis it contains is marked as valid), and by the natural overlap between what models find difficult. 
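A compressed sketch of this filtering loop is shown below. The fill-mask checkpoint name is an assumption (the paper specifies a word-mask BERT-large model), the sketch assumes one mask per hypothesis, and nli_predict stands for an NLI classifier such as the predict_nli helper sketched earlier:

```python
from transformers import pipeline

# Assumed checkpoint for the word-mask BERT-large model used to fill hypotheses.
filler = pipeline("fill-mask",
                  model="bert-large-uncased-whole-word-masking",
                  top_k=20)

def adversarial_filter(premise, masked_hypothesis, nli_predict):
    """Fill the mask with the top-20 LM candidates and keep only the
    (premise, hypothesis) pairs that the NLI model mispredicts
    (the gold label is always 'neutral')."""
    text = masked_hypothesis.replace("[MASK]", filler.tokenizer.mask_token)
    hard_samples = []
    for cand in filler(text):                  # one dict per candidate fill
        hypothesis = cand["sequence"]
        if nli_predict(premise, hypothesis) != "neutral":
            hard_samples.append((premise, hypothesis))
    return hard_samples
```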
The number of unique LM-filled hypotheses are included in the main body of the paper, along with statistics on the validation process. Last, but not least, the final dataset contains the counterfactuals for each sample as explained in the paper. The final statistics for the dataset are presented next. but not on NLI. As such, we find it interesting to present the results of the benchmark for the four models we are considering in our study that were fine-tuned for NLI. The results are shown in Table 13. BBNLI has two types of samples: ones that are meant to audit models for bias (\"Audit\" in the table) and samples that are not related to bias, but use the same premises as the bias auditing samples (\"Test\" in the table). The \"test\" samples check the performance of the model using a similar vocabulary as the bias auditing samples. We include the results for Test as they are an indication of how well the models perform for the type of inferences present in the benchmark. The effect of fine-tuning on NLI datasets is evident in Table 13 where the accuracy is considerably higher for the models we study than for T0 in the original BBNLI paper (Akyürek et al., 2022). In particular, the accuracy for the test portion of the Table 13: BBNLI: Accuracies for NLI fine-tuned LMs dataset is higher, which showcases that the models we consider are state of the art for NLI." }, { "figure_ref": [], "heading": "A.5 Aggregate Bias Score Formula", "publication_ref": [ "b0" ], "table_ref": [], "text": "The aggregate bias score used in BBNLI (Akyürek et al., 2022) is a measure of the surplus in prostereo bias compared to the anti-stereo bias, as shown in the following simplification: • n e-S : number of entailments in pro-Stereotype hypotheses\n• n e-A : number of entailments in Antistereotype hypotheses\n• n c-S : number of contradictions in pro-Stereotype hypotheses\n• n c-A : number of contradictions in Antistereotype hypotheses\n• n e : overall number of entailments\n• n c : overall number of contradictions" }, { "figure_ref": [], "heading": "A.3.4 BBNLI-next: Final Dataset Statistics", "publication_ref": [], "table_ref": [], "text": "The sample count in the final dataset (including the counterfactuals) is shown in Table 12 split on each subtopic belonging to a bias domain." }, { "figure_ref": [], "heading": "A.4 BBNLI Benchmark Accuracy", "publication_ref": [ "b40" ], "table_ref": [], "text": "When introduced, the BBNLI benchmark was used to study the performance of T0 (Sanh et al., 2022), a large, multi-task model, fine-tuned on many tasks, " } ]
Auditing unwanted social bias in language models (LMs) is inherently hard due to the multi-disciplinary nature of the work. In addition, the rapid evolution of LMs can make benchmarks irrelevant in no time. Bias auditing is further complicated by LM brittleness: when a presumably biased outcome is observed, is it due to model bias or model brittleness? We propose enlisting the models themselves to help construct bias auditing datasets that remain challenging, and introduce bias measures that distinguish between types of model errors. First, we extend an existing bias benchmark for NLI (BBNLI) using a combination of LM-generated lexical variations, adversarial filtering, and human validation. We demonstrate that the newly created dataset (BBNLI-next) is more challenging than BBNLI: on average, BBNLI-next reduces the accuracy of state-of-the-art NLI models from 95.3%, as observed by BBNLI, to 58.6%. Second, we employ BBNLI-next to showcase the interplay between robustness and bias, and the subtlety in differentiating between the two. Third, we point out shortcomings in current bias scores used in the literature and propose bias measures that take into account pro-/anti-stereotype bias and model brittleness. We will publicly release the BBNLI-next dataset to inspire research on rapidly expanding benchmarks to keep up with model evolution, along with research on the robustness-bias interplay in bias auditing.
Keeping Up with the Language Models: Robustness-Bias Interplay in NLI Data and Models
[ { "figure_caption": "Figure 3 :3Figure 3: BBNLI-next: Accuracy across models and split on bias domains; for comparison, the first column represents original BBNLI accuracy.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: An illustration of the group-counterfactual hypothesis generation. LM Name Model Hub Checkpoint AlBERT ynie/albert-xxlarge-v2-snli_mnli_fever_anli_R1_R2_R3-nli ELECTRA ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli RoBERTa ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli BART ynie/bart-large-snli_mnli_fever_anli_R1_R2_R3-nli", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "2n e-S + n c-A n e + n c -1 * (1 -acc) = 2 (n e-S + n c-A ) -n e -n c n e + n c * n e + n c total samples = 2 (n e-S + n c-A ) -n e -n c total samples = n e-S + n c-A + n e-S + n c-A -n e -n c total samples = n e-S + n c-A + (n e-S -n e ) + (n c-A -n c ) total samples = n e-S + n c-A -n e-A -n c-S total samples = n e-S + n c-A total samples -n e-A + n c-S total samples = pro_stereo_bias -anti_stereo_bias, where:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "LMs and their size in million of parameters.", "figure_data": "", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Validation of generated hypotheses: categories, counts and examples.", "figure_data": "", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Domain Countgender5594race3120religion5790all14504", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "BBNLI-next: Dataset sample count.", "figure_data": "2.4 Aggregate Bias ScoreBy design, all samples in the BBNLI and our ex-tended dataset BBNLI-next have 'neutral' as theground truth label, and thus a 100%-accurate modelwould be unable to uncover any bias. 7 Whenever amisprediction occurs, we would like to understandwhether the misprediction aligns with the biasedlabel. 
The biased label for pro-stereotype samples(aligned with documented stereotypical biases) isentailment, and, conversely, contradiction is thebiased label for anti-stereotype examples (the sam-", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Pro-/Anti-stereotype bias scores and Aggregate bias score across models and bias domains.", "figure_data": "HypothesisCorrectStereotype BiasAnti-stereotype BiasGroup-insensitive ErrorStereotypeNeutralNeutralEntailEntailContra Neutral ContraEntailContraAnti-stereotypeNeutralContra Neutral ContraNeutralEntailEntailEntailContraCounts011211222", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Types of predictions for a pair of counterfactuals with pro-/anti-stereotype hypotheses and their contributions to pro-/anti-stereotype bias measures and group-insensitive model error.", "figure_data": "ModelDomain Mispred Pro Anti ErrorAlBERTall29.78 7.45 7.28 15.04gender26.87 10.17 7.97 8.72race22.12 6.31 6.31 9.49religion36.72 5.44 7.13 24.15ELECTRA all66.24 12.00 4.92 49.32gender55.26 17.29 11.58 26.39race67.21 24.78 2.12 40.32religion76.34 0.00 0.00 76.34RoBERTa all31.56 9.77 8.89 12.91gender33.71 11.46 13.03 9.22race20.83 8.21 3.01 9.62religion35.27 8.98 8.05 18.24BARTall37.63 7.85 7.40 22.38gender33.93 8.40 11.33 14.19race16.89 5.90 1.96 9.04religion52.38 8.36 6.55 37.48", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Disaggregate counterfactual measures of bias:Pro-/Anti-stereo bias measures and model error due to brittleness. All measures sum up to misprediction rate.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "LM checkpoints from the HuggingFace Model Hub", "figure_data": "Domain SubtopicCountgenderman_is_to_breadwinner8man_is_to_programmer10man_is_to_rational9man_is_to_surgeon10woman_is_to_homemaker11Total48raceAsian_men_to_feminine16Black_is_to_criminal23Black_is_to_ghetto29Black_is_to_drugs13white_is_to_clean15Total96religionCatholic_woman_is_to_kids11Jewish_woman_is_to_kids12Mormon_man_to_oppressive8Muslim_man_to_many_wives9Muslim_man_to_oppressive12Muslim_women_to_invisible16Total68allTotal212", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "BBNLI-next: The count of samples mispredicted by each model split on the bias domain.", "figure_data": "Domain SubtopicCountgenderman_is_to_breadwinner2260man_is_to_programmer1308man_is_to_rational110man_is_to_surgeon88woman_is_to_homemaker1828Total5594raceAsian_men_to_feminine66Black_is_to_criminal8Black_is_to_drugs1938Black_is_to_ghetto986white_is_to_clean122Total3120religionCatholic_woman_is_to_kids360Jewish_woman_is_to_kids2072Mormon_man_to_oppressive266Muslim_man_to_many_wives256Muslim_man_to_oppressive886Muslim_women_to_invisible1950Total5790allTotal14504", "figure_id": "tab_9", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "BBNLI-next: Final dataset sample count split for each subtopic corresponding to a domain of bias.", "figure_data": "", "figure_id": "tab_10", "figure_label": "12", "figure_type": "table" } ]
Ioana Baldini; Chhavi Yadav; Payel Das; Kush R Varshney
[ { "authors": "Afra Feyza Akyürek; Sejin Paik; Yusuf Muhammed; Seda Kocyigit; Serife Akbiyik; Derry Leman Runyun; Wijaya", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "On measuring social biases in prompt-based multi-task learning", "year": "2022-07-10" }, { "authors": "Ioana Baldini; Dennis Wei; Karthikeyan Natesan Ramamurthy; Mikhail Yurochkin; Moninder Singh", "journal": "", "ref_id": "b1", "title": "Your fairness may vary: Pretrained language model fairness in toxic text classification", "year": "2022" }, { "authors": "Emily M Bender; Timnit Gebru; Angelina Mcmillan-Major; Shmargaret Shmitchell", "journal": "", "ref_id": "b2", "title": "On the dangers of stochastic parrots: Can language models be too big?", "year": "2021" }, { "authors": "Lin Su; Solon Blodgett; Hal Barocas; Iii Daumé; Hanna Wallach", "journal": "", "ref_id": "b3", "title": "Language (technology) is power: A critical survey of \"bias\" in NLP", "year": "2020" }, { "authors": "Lin Su; Gilsinia Blodgett; Alexandra Lopez; Robert Olteanu; Hanna Sim; Wallach", "journal": "", "ref_id": "b4", "title": "Stereotyping norwegian salmon: An inventory of pitfalls in fairness benchmark datasets", "year": "2021" }, { "authors": "Rishi Bommasani; Percy Liang", "journal": "", "ref_id": "b5", "title": "Trustworthy social bias measurement", "year": "2022" }, { "authors": "Daniel Borkan; Lucas Dixon; Jeffrey Sorensen; Nithum Thain; Lucy Vasserman", "journal": "WWW", "ref_id": "b6", "title": "Nuanced metrics for measuring unintended bias with real data for text classification", "year": "2019" }, { "authors": "Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "The dangers of underclaiming: Reasons for caution when reporting how NLP systems fail", "year": "2022" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "", "ref_id": "b8", "title": "A large annotated corpus for learning natural language inference", "year": "2015" }, { "authors": "Trista Yang; Yada Cao; Kai-Wei Pruksachatkun; Rahul Chang; Varun Gupta; Jwala Kumar; Aram Dhamala; Galstyan", "journal": "", "ref_id": "b9", "title": "On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations", "year": "2022" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b10", "title": "ELECTRA: pretraining text encoders as discriminators rather than generators", "year": "2020-04-26" }, { "authors": "Kate Crawford", "journal": "", "ref_id": "b11", "title": "The trouble with bias", "year": "2017" }, { "authors": "Sunipa Dev; Tao Li; Jeff M Phillips; Vivek Srikumar", "journal": "AAAI Press", "ref_id": "b12", "title": "On measuring and mitigating biased inferences of word embeddings", "year": "2020-02-07" }, { "authors": "Sunipa Dev; Emily Sheng; Jieyu Zhao; Aubrie Amstutz; Jiao Sun; Yu Hou; Mattie Sanseverino; Jiin Kim; Akihiro Nishi; Nanyun Peng; Kai-Wei Chang", "journal": "", "ref_id": "b13", "title": "On measures of biases and harms in NLP", "year": "2022" }, { "authors": "J Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b14", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Jwala Dhamala; Tony Sun; Varun Kumar; Satyapriya Krishna; Yada Pruksachatkun; Kai-Wei Chang; Rahul Gupta", "journal": "", "ref_id": "b15", "title": "BOLD: dataset and metrics for 
measuring biases in open-ended language generation", "year": "2021" }, { "authors": "Lucas Dixon; John Li; Jeffrey Sorensen; Nithum Thain; Lucy Vasserman", "journal": "", "ref_id": "b16", "title": "Measuring and mitigating unintended bias in text classification", "year": "2018" }, { "authors": "Max Glockner; Vered Shwartz; Yoav Goldberg", "journal": "", "ref_id": "b17", "title": "Breaking NLI systems with sentences that require simple lexical inferences", "year": "2018" }, { "authors": "Seraphina Goldfarb-Tarrant; Rebecca Marchant; Ricardo Muñoz Sanchez; Mugdha Pandya; Adam Lopez", "journal": "", "ref_id": "b18", "title": "Intrinsic bias metrics do not correlate with application bias", "year": "2021" }, { "authors": "Reto Gubelmann; Siegfried Handschuh", "journal": "", "ref_id": "b19", "title": "Uncovering more shallow heuristics: Probing the natural language inference capacities of transformerbased pre-trained language models using syllogistic patterns", "year": "2022" }, { "authors": "Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b20", "title": "DeBERTa: decoding-enhanced BERT with disentangled attention", "year": "2021-05-03" }, { "authors": "Ben Hutchinson; Vinodkumar Prabhakaran; Emily Denton; Kellie Webster; Yu Zhong; Stephen Denuyl", "journal": "", "ref_id": "b21", "title": "Social biases in NLP models as barriers for persons with disabilities", "year": "2020" }, { "authors": "Douwe Kiela; Max Bartolo; Yixin Nie; Divyansh Kaushik; Atticus Geiger; Zhengxuan Wu; Bertie Vidgen; Grusha Prasad; Amanpreet Singh; Pratik Ringshia; Zhiyi Ma; Tristan Thrush; Sebastian Riedel; Zeerak Waseem; Pontus Stenetorp; Robin Jia; Mohit Bansal; Christopher Potts; Adina Williams", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Dynabench: Rethinking benchmarking in NLP", "year": "2021" }, { "authors": "Chul Bum; Nandana Kwon; Mihindukulasooriya", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "An empirical study on pseudo-log-likelihood bias measures for masked language models using paraphrased sentences", "year": "2022" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b24", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "year": "2020-04-26" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b25", "title": "BART: denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Tao Li; Daniel Khashabi; Tushar Khot; Ashish Sabharwal; Vivek Srikumar", "journal": "", "ref_id": "b26", "title": "Unqovering stereotypical biases via underspecified questions", "year": "2020" }, { "authors": "Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar; Benjamin Newman; Binhang Yuan; Bobby Yan; Ce Zhang; Christian Cosgrove; D ; Christopher; Diana Christopher Ré; Drew A Acosta-Navas; Eric Hudson; Esin Zelikman; Faisal Durmus; Frieda Ladhak; Hongyu Rong; Huaxiu Ren; Jue Yao; Keshav Wang; Laurel Santhanam; Lucia Orr; Mert Zheng; Mirac Yuksekgonul; Nathan Suzgun; Neel Kim; Niladri Guha; Omar Chatterji; Peter Khattab; Qian Henderson; Ryan Huang; Sang Chi; Shibani Michael Xie; Surya Santurkar; Tatsunori Ganguli; 
Thomas Hashimoto; Tianyi Icard; Vishrav Zhang; William Chaudhary; Xuechen Wang; Yifan Li; Yuhui Mai; Yuta Zhang; Koreeda", "journal": "", "ref_id": "b27", "title": "Holistic evaluation of language models", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b28", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Tom Mccoy; Ellie Pavlick; Tal Linzen", "journal": "", "ref_id": "b29", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "year": "2019" }, { "authors": "Moin Nadeem; Anna Bethke; Siva Reddy", "journal": "", "ref_id": "b30", "title": "Stereoset: Measuring stereotypical bias in pretrained language models", "year": "2020" }, { "authors": "Nikita Nangia; Clara Vania; Rasika Bhalerao; Samuel R Bowman", "journal": "", "ref_id": "b31", "title": "CrowS-pairs: A challenge dataset for measuring social biases in masked language models", "year": "2020" }, { "authors": "Pandu Nayak", "journal": "", "ref_id": "b32", "title": "Understanding searches better than ever before", "year": "2019" }, { "authors": "Aurélie Névéol; Yoann Dupont; Julien Bezançon; Karën Fort", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "French CrowS-pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English", "year": "2022" }, { "authors": "Yixin Nie; Adina Williams; Emily Dinan; Mohit Bansal; Jason Weston; Douwe Kiela", "journal": "", "ref_id": "b34", "title": "Adversarial NLI: A new benchmark for natural language understanding", "year": "2020" }, { "authors": " Openai", "journal": "", "ref_id": "b35", "title": "ChatGPT: Optimizing Language Models for Dialogue", "year": "2022" }, { "authors": "Alicia Parrish; Angelica Chen; Nikita Nangia; Vishakh Padmakumar; Jason Phang; Jana Thompson; Phu Mon Htut; Samuel Bowman", "journal": "", "ref_id": "b36", "title": "BBQ: A hand-built bias benchmark for question answering", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b37", "title": "Using Machine Learning to Reduce Toxicity Online", "year": "2021-07-21" }, { "authors": "Yada Pruksachatkun; Satyapriya Krishna; Jwala Dhamala; Rahul Gupta; Kai-Wei Chang", "journal": "", "ref_id": "b38", "title": "Does robustness improve fairness? 
approaching fairness with word substitution robustness methods for text classification", "year": "2021" }, { "authors": "Deborah Inioluwa; Emily Raji; Emily M Denton; Alex Bender; Amandalynne Hanna; Paullada", "journal": "", "ref_id": "b39", "title": "AI and the everything in the whole wide world benchmark", "year": "2021-12" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b40", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022" }, { "authors": "Sunipa Nikil Roashan Selvam; Daniel Dev; Tushar Khashabi; Kai-Wei Khot; Chang", "journal": "", "ref_id": "b41", "title": "The tail wagging the dog: Dataset construction biases of social bias benchmarks", "year": "2022" }, { "authors": "Preethi Seshadri; Pouya Pezeshkpour; Sameer Singh", "journal": "", "ref_id": "b42", "title": "Quantifying social biases using templates is unreliable", "year": "2022" }, { "authors": "Aarohi Srivastava", "journal": "", "ref_id": "b43", "title": "Beyond the imitation game: Quantifying and extrapolating the capabilities of language models", "year": "2022" }, { "authors": "Yarden Tal; Inbal Magar; Roy Schwartz", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Fewer errors, but more stereotypes? the effect of model size on gender bias", "year": "2022" }, { "authors": "Aarne Talman; Marianna Apidianaki; Stergios Chatzikyriakidis; Jörg Tiedemann", "journal": "", "ref_id": "b45", "title": "NLI data sanity check: Assessing the effect of data corruption on model performance", "year": "2021" }, { "authors": "James Thorne; Andreas Vlachos; Christos Christodoulopoulos; Arpit Mittal", "journal": "NAACL-HLT", "ref_id": "b46", "title": "FEVER: a large-scale dataset for fact extraction and verification", "year": "2018" }, { "authors": "Eric Wallace; Shi Feng; Nikhil Kandpal; Matt Gardner; Sameer Singh", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Universal adversarial triggers for attacking and analyzing NLP", "year": "2019" }, { "authors": "Xuezhi Wang; Haohan Wang; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Measure and improve robustness in NLP models: A survey", "year": "2022" }, { "authors": "Adina Williams; Nikita Nangia; Samuel Bowman", "journal": "", "ref_id": "b49", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "year": "2018" } ]
[]
10.1037/a0030852
2023-05-22
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b6", "b7", "b4", "b10", "b20", "b16", "b12", "b1", "b17", "b0", "b19", "b2", "b5", "b8", "b18", "b8", "b15", "b5", "b11", "b2", "b18" ], "table_ref": [], "text": "Humans have the natural ability to develop strategies in one context that can be generalised and applied in other contexts, providing a useful starting point to form a complete course of action in unfamiliar settings [7,8]. Existing work on transfer learning involves learning low-level information from a source task [5,11,21], which can be transferred to one or many target tasks. However, these approaches face limitations such as being restricted to certain classes of tasks due to learnt knowledge being insufficiently generalisable [17]. For instance, an agent may learn a possible solution to move a block in a video game. However, it cannot autonomously recognise the similarities to apply the solution in the real world. The agent would require external guidance to align the action or state spaces between the two environments. Suppose artificial agents can perform strategy synthesis, allowing them to utilise their existing knowledge better to continuously learn and react to novel situations. This would significantly improve an agent's learning capabilities to handle a wide range of tasks with limited data where it can apply generalised strategies to guide behaviour.\nIn this work, we propose a method of knowledge transfer in AI agents based on the human cognitive ability to develop strategies.\nAs a first step, we seek to obtain transferable knowledge, in the form of behavioural strategies, from an agent's past behaviour in other environments. The idea is that strategies could be used by the agent in contexts when the action spaces, reward functions or environment dynamics are different from those where the strategies originated, where recognising similarities requires a deeper, more abstract-level understanding of situations.\nA strategy is an abstract task structure representing a partial plan to achieve a goal in some environment. We adopt an intuitive definition of a strategy rather than the game theoretical definition, as referred to in multi-agent studies, which implies a complete set of instructions specifying how a player should make decisions [13]. A partial plan refers to the usability of the solution -it may not completely solve a problem, but it provides a base for a complete solution. Our definition encompasses plans with a partial ordering of events as well as plans with potentially unnecessary events. For example, in the game of Pacman, a strategy may be to \"collect a power-up to defeat a ghost\". This could be abstracted to \"collect an item to defeat an enemy\". Defeating enemies is a common objective that appears in various games, making this strategy applicable in many contexts.\nIn this paper, we form strategies by extracting information from trajectories gathered in single-agent video game environments. Games inherently have strategies in order to achieve a range of goals that vary in complexity. More specifically, we consider a family of problems that use symbolic event representation to generate interpretable results. Trajectories are represented by a series of events, where an event is defined as both the result of an agent's action and changes in state. 
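For illustration, such an event-based trajectory could be represented as a simple sequence of symbolic records; the field and event names below are ours and the reward values are illustrative:

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Event:
    """One symbolic step of a trajectory: the named event plus the reward and
    the observed states around it (state encoding is environment-specific)."""
    name: str                    # e.g. "collect power-up", "kill a ghost"
    reward: float = 0.0          # illustrative values below
    state_before: Any = None
    state_after: Any = None

# A Pacman-style trajectory expressed as a sequence of symbolic events.
trajectory: List[Event] = [
    Event("move"),
    Event("collect power-up", reward=50),
    Event("move"),
    Event("kill a ghost", reward=200),
]
```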
Existing benchmark environments, such as those from the Arcade Learning Environment [2], are reimplemented to generate events since the default game trajectories consist only of image-based state representations and low-level actions.\nThe key contribution is our unique approach to strategy extraction by treating the problem as a sequential pattern mining task. The novelty lies in using a sequence alignment technique known as the Smith-Waterman algorithm [18] which is more commonly known for comparing DNA string sequences. We apply a modified Smith-Waterman algorithm combined with observed event frequencies across a dataset of trajectories to find patterns of significance that form a strategy. We consider case studies based on single-player video game environments to demonstrate the approach and develop performance benchmarks. The proposed method has broader applicability in single-agent environments, and games have been chosen for demonstration purposes. Our evaluation serves as a promising first step towards efficient and robust generalisation and how to best utilise the generalised strategies in new tasks and complex domains. Strategy generalisation and transfer are left as future work.\nDefining a reliable method for generalisation and transfer across multiple tasks is an open research problem. Multiple works have proposed transferring learned skills to solve abstract subtasks that are repeated in various games. Asadi and Huber [1] propose a method that transfers policies for achieving common sub-goals between tasks used to construct a model for solving the new task. Their method uses the options framework, a notable hierarchical reinforcement learning framework [20]. Options are learned from extracted sub-goals and stored in the agents' memory in the form of a policy, termination condition and an input set. The restriction of using knowledge transfer via options without abstraction means that transfer learning only works between agents which use the same learning algorithms. Learning abstract options is also challenging and will depend on the state and action space of the target task. In contrast, the form in which we represent our strategies can be flexible, depending on the type of learning agent.\nStrategy extraction is more commonly seen as a data-mining problem, typically demonstrated in real-time strategy (RTS) games which provide a testbed for many different strategy-based tasks [3,6,9,19]. There are many studies in sequence modelling and pattern mining that leverage information gathered from the analysis of state-action trajectories. Such techniques help to discover patterns in trajectories related to specific tasks. This information can inform our strategy construction.\nMost existing work focuses on finding strategies with the purpose of player modelling for applications such as learning an opponent's strategy, understanding a player's behaviour and finding a winning strategy [9,16]. The types of strategies typically modelled are complete solutions such as telling the player exactly how to reach the goal. Several extraction procedures extract strategies based on pattern frequency or support; the percentage of sequences in a dataset that contain the pattern. For example, Chen et al. [6] identify closed frequent patterns whose frequency is above some pre-specified support threshold, Low-Kam et al. [12] detect significant sequential patterns based on the frequency of pattern prefixes and Bosc et al. [3] use a combination of pattern prefix and sequential pattern frequency. 
The types of strategies obtained, although useful, do not provide any information at the event level in relation to each events' significance in achieving the goal. Subsequently, even if the game-or implementation-specific content is abstracted, these complete strategies, or plans, are highly unlikely to be applicable in other games. In the Pacman example, this would be like including individual movements and collection of dots in the strategy to kill a ghost. This in turn makes it difficult to generalise the extracted strategy as it will include many irrelevant events.\nSrovnal et al. [19] use a text-based method, latent semantic analysis, to detect semantic information which they interpret as a game strategy. Inspired by this idea, our proposed approach uses the sequential pattern mining method known as local sequence alignment to find useful causal information which we can use to form strategies. Local sequence alignment is a variant of global sequence alignment. The key difference is that the global variant treats sequences as a whole and does not identify partial sequential matches. A notable method for local sequence alignment is the Smith-Waterman algorithm, first proposed in 1981 by F. Smith and M. S. Waterman.\nWe use this method, more commonly known for its application in bioinformatics to compare strings of nucleic acid or protein sequences, to compare game trajectories." }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "We consider an agent that has been trained to play a game through reinforcement learning. A single-player game environment consists of a set of states 𝑆, available actions 𝐴 and rewards 𝑅. At each timestep, 𝑡, the player performs an action 𝑎 𝑡 from the set 𝐴 and receives some reward 𝑟 𝑡 . A game trajectory, 𝜏 is a finite sequence of consecutive actions, events, rewards and observed states of the environment before and after the action is taken, 𝑠 𝑡 -1 and 𝑠 𝑡 respectively. A subtrajectory is a trajectory containing a subset of the elements from another trajectory, maintaining the same temporal ordering. A subtrajectory may not be equivalent to a subsequence; elements from the original trajectory can be dropped. For example, given a trajectory of the form 𝑎, 𝑏, 𝑐, a valid subtrajectory is 𝑎, 𝑐 as well as 𝑎, 𝑏 and 𝑏, 𝑐." }, { "figure_ref": [], "heading": "Environments", "publication_ref": [ "b3", "b14" ], "table_ref": [], "text": "We demonstrate our method on three custom environments, implemented in OpenAI Gym [4]. All three environments have discrete action spaces. These environments are implemented as text-based versions where all aspects of the environments are represented by ASCII characters. We use Proximal Policy Optimisation (PPO) [15] to learn a policy for each environment. Image-based environments could also be used as long as the agent's behaviour and environment dynamics can be captured in a sequence of symbolic events." }, { "figure_ref": [ "fig_0" ], "heading": "Pacman", "publication_ref": [], "table_ref": [], "text": "We developed a version of Pacman inspired by the original Atari game, as a 2D environment, as shown in Figure 1. The agent's action space contains move actions in four directions: up, down, left and right. The following events can take place in the environment: move, collect power-up/dot and kill a ghost. An observation of the game state is a grid where each cell contains a numeric value representing a game component -walls, ghosts, dots, power-ups and Pacman. 
Pacman kills a ghost by collecting a power-up and then moving over the ghost within the next five movements.\nDuring training, rewards are received when the agent collects items (dots and power-ups). Additionally, the agent is rewarded for killing ghosts, with doubled rewards received when killing ghosts in succession. The map topology, and the starting locations of the ghosts, remain unchanged for every episode of training. Ghosts move around the map during the game." }, { "figure_ref": [ "fig_1" ], "heading": "Dungeon Crawler", "publication_ref": [], "table_ref": [], "text": "Dungeon Crawler is an exploration game where the goal of the agent is to navigate through a maze to collect a key and then escape through a door. The agent must avoid monsters in its search for the key or kill the monsters by collecting a weapon. Points are rewarded for collecting items (weapons and keys), killing monsters and escaping through the door. Similar to the Pacman environment, the available actions are up, down, left and right. The following events can take place in the environment: move, collect weapon (gun), collect weapon (sword), collect key, kill a monster and unlock door.\nThe dungeon map is fixed for every training episode, however the items and monsters are randomly placed (6 monsters and 3 of each weapon type). An example of an initial game grid is shown in Figure 2. An observation of the game state contains the shortestpath distances between the agent's current location to various entities (weapons, monsters, the key and the door).\nThere are multiple events for which we expect to find strategies. The first is the key-door collection strategy. The second is the weapon-monster killing strategy. Multiple weapon types are available (e.g. 'gun' and 'sword'). Two valid strategies in this context may be \"collect the gun to kill a monster\", and \"collect the sword to kill a monster\"." }, { "figure_ref": [ "fig_2" ], "heading": "Bank Heist", "publication_ref": [], "table_ref": [], "text": "Influenced by the Atari version of the same name, we developed a simple Bank Heist game as shown in Figure 3. In this version, the agent must rob five banks without getting caught by police cars or running out of fuel. The agent begins with a full fuel tank which then depletes through movement and dropping dynamite. Collecting a fuel tank refills the tank to 100%. The agent may drop dynamite to destroy police cars however dropped dynamite can also kill the agent. Fixed-value rewards are given when the agent destroys a police car and collects a fuel tank. The reward earned for robbing a bank increases for each bank that is robbed. The environment starts with one bank and a fuel tank. Robbing banks causes more banks to appear in random empty spaces. An observation of the game state is a list of values for each available action containing a numerical representation of how beneficial the action is expected to be if taken in the next step." }, { "figure_ref": [ "fig_3" ], "heading": "Local Sequence Alignment", "publication_ref": [], "table_ref": [], "text": "Suppose we have sequences 𝐴 = 𝑎 0 , 𝑎 1 , ..., 𝑎 𝑚 and 𝐵 = 𝑏 0 , 𝑏 1 , ..., 𝑏 𝑛 where 𝑚 and 𝑛 are the respective sequence lengths. Local sequence alignment can be applied to compare 𝐴 and 𝐵 and find regions where the two sequences are similar to each other. 
The result is one or more subsequences deemed to be most similar according to some metric.
The Smith-Waterman algorithm performs local sequence alignment by first forming a scoring matrix, populated by comparing the 𝑚 elements of 𝐴 with the 𝑛 elements of 𝐵. The matrix dimensions are (𝑚 + 1) × (𝑛 + 1) and the first row and column are filled with zeros. The matrix cell (𝑖, 𝑗) where 𝑖 ∈ [1, 𝑚 + 1] and 𝑗 ∈ [1, 𝑛 + 1] is assigned a score based on the comparison of 𝐴 𝑖 and 𝐵 𝑗 , and the values in adjacent matrix cells. In the original algorithm, these scores are computed as per Equation 1, where 𝑠, 𝑑 and 𝑔 are user-defined match, mismatch and gap scores respectively.
$$H_{AB}(i,j) = \max \begin{cases} 0 \\ H_{AB}(i-1,\, j-1) + s, & \text{if } A_i = B_j \\ H_{AB}(i-1,\, j-1) + d, & \text{if } A_i \neq B_j \\ H_{AB}(i,\, j-1) - g \\ H_{AB}(i-1,\, j) - g \end{cases} \qquad (1)$$
When filling out the values in the aforementioned matrix, Smith-Waterman keeps track of which adjacent cell was used to compute the score for cell (𝑖, 𝑗). This score, 𝐻 𝐴𝐵 (𝑖, 𝑗), will use at most one of 𝐻 𝐴𝐵 (𝑖 -1, 𝑗 -1), 𝐻 𝐴𝐵 (𝑖, 𝑗 -1), and 𝐻 𝐴𝐵 (𝑖 -1, 𝑗). This knowledge is later used, through a traceback procedure, to form the output of the sequence comparison: a similar subsequence.
The traceback process starts from the highest-scoring cell, (𝑖, 𝑗), including either 𝐴 𝑖 or 𝐵 𝑗 in our output subsequence. The method then recalls which adjacent cell was used to calculate the value in (𝑖, 𝑗), and moves to that cell. For each diagonal movement to a cell (𝑘, 𝑙), one of 𝐴 𝑘 and 𝐵 𝑙 is placed at the front of the evolving output subsequence. For vertical or horizontal movements, a gap token is added. This process continues until a cell with a zero is reached.
We adapt Smith-Waterman to perform pairwise comparisons on trajectories and return a subtrajectory. Given our motivation for comparing trajectories is to highlight important elements, we have made several changes to the original scoring function shown in Equation 1.
First, we implement a custom function for element comparison. The implementation depends on the choice of environment and which characteristics are available. For the environments previously described which output event descriptions in the form of strings, this function performs string comparisons. Second, we maintain match and mismatch scoring since we want to maintain elements that exist in both trajectories and de-emphasize all others. Third, we discard penalties (𝑔) for the presence of gaps in the resulting subtrajectory. Smith-Waterman aims to find a similar subsequence when comparing two sequences, explicitly placing gap tokens between consecutive items in the result if they do not occur consecutively in the two sequences being compared. We are interested in identifying the important similarities (events) between two trajectories, and their relative temporal ordering. We do not care whether gaps should be placed in the resulting subtrajectory, or how many, in the context of our strategy extraction objective.
Finally, we include a weighting, W, capturing additional information about the elements being compared, beyond whether they match, to further influence which elements appear in our output subtrajectory. We describe in subsequent sections how we instantiate W when comparing trajectories.
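For concreteness, one way to prototype the modified comparison is sketched below (Equation 2 below formalizes the weighted match/mismatch scores). The sketch keeps the zero floor and the now-unpenalised vertical/horizontal moves of the original recurrence, simplifies the traceback bookkeeping, and uses illustrative match/mismatch scores, so it should be read as an approximation of the procedure rather than the exact implementation:

```python
import numpy as np

def adapted_smith_waterman(traj_a, traj_b, weight, s=3, d=-3):
    """Compare two event trajectories with the modified scoring: weighted
    match/mismatch on the diagonal, no gap penalty, and no gap tokens in the
    output. Returns the aligned events, taken from the shorter trajectory.
    weight(e_a, e_b) supplies W for the pair of events being compared."""
    short, long_ = (traj_a, traj_b) if len(traj_a) <= len(traj_b) else (traj_b, traj_a)
    m, n = len(short), len(long_)
    H = np.zeros((m + 1, n + 1))
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = s if short[i - 1] == long_[j - 1] else d
            diag = H[i - 1, j - 1] + match * weight(short[i - 1], long_[j - 1])
            # The gap penalty g is dropped, so vertical/horizontal moves carry
            # scores forward unchanged.
            H[i, j] = max(0.0, diag, H[i - 1, j], H[i, j - 1])
    # Trace back from the highest-scoring cell; keep events of the shorter
    # trajectory on matching diagonal steps, and skip (no gap token) otherwise.
    i, j = np.unravel_index(np.argmax(H), H.shape)
    aligned = []
    while i > 0 and j > 0 and H[i, j] > 0:
        if short[i - 1] == long_[j - 1]:
            aligned.insert(0, short[i - 1])
            i, j = i - 1, j - 1
        elif H[i - 1, j] >= H[i, j - 1]:
            i -= 1
        else:
            j -= 1
    return aligned

# The reward-based weighting mirrors the Figure 4 illustration; the
# likelihood-based weighting used later plugs in the same way.
rewards = {"collect power-up": 50, "kill a ghost": 200}
W = lambda a, b: max(1, rewards.get(a, 0), rewards.get(b, 0))
A = ["move", "collect dot", "move", "move", "collect power-up", "move", "kill a ghost"]
B = ["move", "collect power-up", "move", "collect dot", "move", "kill a ghost"]
print(adapted_smith_waterman(A, B, W))
```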
These changes are reflected in Equation 2.
$$H_{AB}(i,j) = \begin{cases} H_{AB}(i-1,\, j-1) + s\,W, & \text{if } A_i = B_j \\ H_{AB}(i-1,\, j-1) + d\,W, & \text{if } A_i \neq B_j \end{cases} \qquad (2)$$
Once the scoring matrix is filled, we traceback through the matrix to discover which of its cells will be used to determine the output, following the procedure described earlier. When deciding on whether to include 𝐴 𝑖 or 𝐵 𝑗 in our resulting subtrajectory, after a diagonal movement to cell (𝑖, 𝑗), we choose the event from the shorter trajectory. Figure 4 shows an example of a scoring matrix that has been constructed for two Pacman trajectories, excluding state and reward information for simplicity, 𝐴 = {\"move\", \"collect dot\", \"move\", \"move\", \"collect power-up\", \"move\", \"kill a ghost\"} and 𝐵 = {\"move\", \"collect power-up\", \"move\", \"collect dot\", \"move\", \"kill a ghost\"}. In this example, W is instantiated using received rewards for the purpose of demonstration. Formally this can be written as W = 𝑚𝑎𝑥 (1, 𝑟𝑒𝑤𝑎𝑟𝑑 (𝐴 𝑖 ), 𝑟𝑒𝑤𝑎𝑟𝑑 (𝐵 𝑗 )), where 𝑟𝑒𝑤𝑎𝑟𝑑 (𝐴 𝑘 ) denotes the reward received by the agent after the event 𝐴 𝑘 in the relevant trajectory. The output subtrajectory for this example is {\"move\", \"collect power-up\", \"move\", \"kill a ghost\"}, including events from the shorter of the two trajectories, 𝐵. Since the rewards may not always reflect the importance of an event in a strategy, we require a more robust definition of W. Later, we will define W based on observed event frequencies (or likelihoods).
We use the Smith-Waterman algorithm for three reasons. First, as a local, as opposed to global, sequence alignment method, it allows subsequence similarities to be detected. Second, it supports the addition of gaps in the output subsequences to replace one or more sequence elements. While we do not penalise the presence of gaps, or explicitly add gap tokens to our subtrajectories, gap allowance is important. In a trajectory, the events that are important to a goal may not always occur consecutively. For example, in Pacman, we want to find strategies like \"collect a power-up, kill a ghost\" rather than explicitly list all the moves and dot collections that occur in between. Finally, the function and metrics used for element comparisons and score assignment are flexible allowing for customisation." }, { "figure_ref": [ "fig_4" ], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "In this section, we describe each stage of our proposed strategy extraction approach (Figure 5). Given a policy 𝜋 for playing a game, G, our approach first identifies events of interest. These events occur when playing G, and represent goals or sub-goals that an agent may wish to achieve. For an event of interest, 𝐸, our approach collects two sets of trajectories. The first set contains trajectories in which 𝐸 occurs, denoted positive trajectories, and the second trajectories in which 𝐸 does not occur, denoted negative trajectories. The lengths of the negative trajectories are normalised so that the distribution of trajectory lengths of the positive and negative sets match. The positive and normalised negative trajectories are then used to compute the likelihood of each possible event being part of a strategy for achieving 𝐸. The likelihood of an event is computed by comparing the frequency with which it appears in the positive trajectories relative to the negative.
Events that appear more often in positive trajectories are more likely to be part of a strategy.
Pairs of positive trajectories will ultimately be compared using our Smith-Waterman adaptation. To reduce the complexity of these comparisons, we remove events whose likelihood falls below a threshold from all positive trajectories. To select pairs of trajectories for comparison, we first cluster trajectories on the basis of the events they contain. We sample pairs of trajectories from each cluster, performing pairwise comparisons using the adapted Smith-Waterman algorithm. The result for each cluster is a set of candidate strategies for achieving 𝐸. The purpose of clustering trajectories on the basis of their events is to ensure that we identify different strategies for achieving the same 𝐸, when present.
(Figure 5 illustrates the pipeline from an event of interest 𝐸 to the extracted strategies: (i) collect trajectories, (ii) normalisation, (iii) calculate likelihoods, (iv) clustering, (v) strategy extraction.)" }, { "figure_ref": [], "heading": "Discovering Events of Interest", "publication_ref": [ "b0", "b13" ], "table_ref": [], "text": "We first discover events that occur in the environment that the player could use a strategy to achieve. We use a method of detection based on rewards, however this can be replaced with more sophisticated approaches that do not rely solely on the reward function [1,14].
Using the policy 𝜋, we simulate the PPO agent in the environment for 𝑁 episodes and gather trajectories from which events of interest are identified. The first 𝑁 /2 episode trajectories are used to obtain an estimate of the average episode reward 𝑟 𝑎𝑣𝑔 . The remaining 𝑁 /2 episodes are simulated, searching for events that receive a reward greater than 𝑟 𝑎𝑣𝑔 . These events form our events of interest." }, { "figure_ref": [], "heading": "Example 1. (Pacman)", "publication_ref": [], "table_ref": [], "text": "We collect 𝑁 = 100 trajectories to discover events of interest. Across the first half, 𝑟 𝑎𝑣𝑔 = 11. Across the second half, events that receive a reward greater than 11 include \"collect power-up\" (reward = 50) and \"kill a ghost\" (reward = 200, 400)." }, { "figure_ref": [], "heading": "Collecting Trajectory Data", "publication_ref": [], "table_ref": [], "text": "For a selected 𝐸, trajectories are collected to form a dataset for strategy extraction. When the PPO agent is simulated in the environment, historical trajectories 𝜏 𝐻 are saved. Positive trajectories, 𝜏 𝑃 , are collected when the agent reaches an event of interest mid-simulation. The agent's trajectories from the beginning of the episode to the event of interest are saved. To collect negative trajectories, 𝜏 𝑁 , we simulate a random agent in the environment, one which selects a random (valid) action at each step. From the trajectories obtained from these simulations, a random set of entire-episode trajectories is selected, filtering out the trajectories which contain the event of interest.
Our approach relies on collecting the same number of positive and negative samples. Suppose this becomes too difficult; for example, if the policy 𝜋 is trained sufficiently well with respect to a given event of interest such that all trajectories in 𝜏 𝐻 are positive. Using a random agent, or any other basic heuristic, to guide the agent behaviour when generating negative trajectories, accommodates this case.
Example 2.
(Pacman) When 𝐸 =\"kill a ghost\", one possible trajectory from 𝜏 𝑃 is {\"move\", \"collect dot\", \"move\", \"collect power-up\", \"move\", \"kill a ghost\"}. A trajectory from 𝜏 𝑁 is {\"move\", \"collect dot\", \"move\", \"collect dot\", \"move\"}, at which point Pacman is killed by a ghost." }, { "figure_ref": [ "fig_5" ], "heading": "Normalisation", "publication_ref": [], "table_ref": [], "text": "Depending on the environment and event of interest, there could be significant variation between the length distributions of 𝜏 𝑃 and 𝜏 𝑁 . For the chosen 𝐸, negative trajectories may be longer on average than positive trajectories. In such cases, this obscures the useful information that we wish to gather from comparing the two sets in later stages. Significantly longer negative trajectories influence our calculation of event likelihoods, as these are based on the relative frequency with which specific events occur across the two sets.\nTo solve this problem, 𝜏 𝑁 is normalised to ensure its length distribution matches that of 𝜏 𝑃 . We randomly sample a length from the length distribution of 𝜏 𝑃 , 𝑙 𝜏 𝑃 , and also sample a longer trajectory from 𝜏 𝑁 , 𝑡 𝑁 . A subsequence of the desired length is extracted from 𝑡 𝑁 and added to the new normalised set, 𝜏 𝑁 ′. We repeat this process until 𝜏 𝑁 ′ contains the same number of trajectories as 𝜏 𝑃 . This normalisation step is not applied if the length distributions are equal, in terms of their respective mean and standard deviation, or if the negative trajectories are, on average, shorter than the positive trajectories. Figure 6 shows an example of applying our normalisation method on trajectories obtained from agent simulations in the Pacman environment." }, { "figure_ref": [], "heading": "Example 3. (Pacman)", "publication_ref": [], "table_ref": [], "text": "We expect a strategy for killing a ghost to include \"collect power-up\". This event is in 100% of trajectories in 𝜏 𝑃 . We observe that episodes in which the agent does not kill a ghost involve long event sequences reflecting random exploration, item collection or ghost avoidance. As a result, we observe that \"collect \n𝑡 𝑐 = 𝑆ℎ𝑜𝑟𝑡𝑒𝑠𝑡𝑇𝑟𝑎 𝑗𝑒𝑐𝑡𝑜𝑟𝑦 (𝑐) 4:\nfor 𝑡 𝑗 ∈ 𝑐 \\ {𝑡 𝑐 } do 5:\n𝑟𝑒𝑠𝑢𝑙𝑡 ← 𝑆𝑚𝑖𝑡ℎ𝑊 𝑎𝑡𝑒𝑟𝑚𝑎𝑛(𝑡 𝑐 , 𝑡 𝑗 , L)\nS ← S ∪ {𝑟𝑒𝑠𝑢𝑙𝑡 }" }, { "figure_ref": [], "heading": "7:", "publication_ref": [], "table_ref": [], "text": "end for 8: end for 9: return S power-up\" also appears in 100% of the negative trajectories. After normalisation, fewer of the negative trajectories contain this event." }, { "figure_ref": [], "heading": "Calculating Event Likelihood", "publication_ref": [], "table_ref": [], "text": "We have assumed that there is an equal likelihood for every event to appear in one or more strategies however realistically, this is very unlikely. By analysing groups of trajectories, we gain insights into the distribution of events and recognise those which are more likely to be in a strategy. We derive likelihood values for each possible event and use this information to disregard events from the trajectories; simplifying them for subsequent calculations.\nTo calculate the likelihood of an event, 𝑒, we compare its frequency of occurrence within 𝜏 𝑃 and 𝜏 𝑁 ′. If 𝑒 occurs considerably more often in the positive samples than in the negative samples, its likelihood of appearing in the strategy is very high. Conversely, if it appears more in the negative than the positive, then commonsense reasoning tells us that 𝑎 is unlikely to be part of the strategy. 
This is also true if the event appears approximately the same amount in both sets.\nLikelihood is defined as the difference between the occurrence frequencies of an event in 𝜏 𝑃 and 𝜏 𝑁 ′, denoted 𝑓 𝜏 𝑃 (𝑒) and 𝑓 𝜏 𝑁 ′ (𝑒) respectively. By default, all events have a likelihood of 1. If an event only appears in the negative trajectories and not in the positive trajectories, its likelihood is set to 0. Equation 3 formalises the likelihood calculation for an event 𝑒:\n$$l_e = \begin{cases} \max\left(0,\; f_{\tau_P}(e) - f_{\tau_{N}'}(e)\right) & \text{if } e \text{ in } \tau_P \text{ and } \tau_N \\ 1 & \text{if } e \text{ not in } \tau_N \\ 0 & \text{if } e \text{ not in } \tau_P \end{cases} \tag{3}$$\nLow-likelihood events are then removed from 𝜏 𝑃 using a predefined threshold. We denote these modified trajectories as 𝜏 𝑃 ′, which is used for the remaining stages. A threshold of 0.1 is used to obtain 𝜏 𝑃 ′ across all experiments in this paper.\nExample 4. (Pacman) If the event \"collect power-up\" appears in 100% of positive trajectories and 20% of the negative trajectories, then its likelihood is 𝑙 𝑝𝑜𝑤𝑒𝑟-𝑢𝑝 = (100 - 20) ÷ 100 = 0.8. Computing likelihoods for the remaining events, we obtain the following values: \"move\": 0, \"collect dot\": 0.1, \"collect power-up\": 0.8, \"kill a ghost\": 1." }, { "figure_ref": [], "heading": "Clustering", "publication_ref": [], "table_ref": [], "text": "In this stage, we create clusters from the trajectories in 𝜏 𝑃 ′ in order to group trajectories that are related. A new cluster is created for each event that occurs in a trajectory in 𝜏 𝑃 ′, and all trajectories that contain this event are added. In this way, a trajectory may appear in more than one cluster. We denote the set of generated clusters C.\nThe aim of clustering is to ensure we uncover patterns when there are many different versions of trajectories that achieve the same 𝐸. For example, in the Dungeon Crawler environment for 𝐸 = \"kill a monster\", 𝜏 𝑃 ′ may include trajectories that contain one event 𝑒 1 and not another 𝑒 2 , and vice versa. We can create two clusters: one for trajectories that contain 𝑒 1 and another for 𝑒 2 . In the final stage, each cluster is considered separately to find all possible strategies." }, { "figure_ref": [], "heading": "Strategy Extraction", "publication_ref": [], "table_ref": [], "text": "In the final stage, the Smith-Waterman algorithm is used to perform multiple pairwise comparisons between trajectories in each cluster. Recall the weighting W from Equation 2. When comparing trajectory elements 𝐴 𝑖 and 𝐵 𝑗 , we use the weighting W = 𝑚𝑎𝑥 (𝑙 𝐴 𝑖 , 𝑙 𝐵 𝑗 ). Our choice to use likelihood in this way, rather than relying on rewards, is due to the large variance in the way rewards are assigned in different environments. If our method used rewards, the capability to extract appropriate strategies would become dependent on the reward scheme of the game. We use likelihood values for strategy extraction to ensure the method is game-agnostic.\nAlgorithm 1 outlines our method for finding strategies for a given 𝐸. For each cluster, the shortest trajectory (i.e., the one with the least number of events) is selected. A pairwise comparison between this trajectory and all others in the cluster is performed using the Smith-Waterman algorithm. The output of each pairwise comparison becomes a candidate strategy. " }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [ "tab_0", "tab_1", "tab_2", "tab_3" ], "text": "We evaluate the performance of our proposed method in the Pacman, Dungeon Crawler and Bank Heist games. 
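The likelihood computation of Equation 3, the 0.1 filtering threshold, and the event-based clustering described above can be summarised in a short sketch. This is a minimal illustration rather than the authors' implementation; the function names and the data layout (trajectories as lists of event labels) are assumptions made for readability.

```python
from collections import defaultdict

def event_frequencies(trajectories):
    """Fraction of trajectories in which each event occurs at least once."""
    counts = defaultdict(float)
    for traj in trajectories:
        for event in set(traj):
            counts[event] += 1.0
    n = max(len(trajectories), 1)
    return {e: c / n for e, c in counts.items()}

def event_likelihoods(tau_p, tau_n_norm):
    """Equation 3: likelihood that an event belongs to a strategy."""
    f_p = event_frequencies(tau_p)
    f_n = event_frequencies(tau_n_norm)
    likelihoods = {}
    for e in set(f_p) | set(f_n):
        if e not in f_p:            # only seen in negative trajectories
            likelihoods[e] = 0.0
        elif e not in f_n:          # never seen in negative trajectories
            likelihoods[e] = 1.0
        else:
            likelihoods[e] = max(0.0, f_p[e] - f_n[e])
    return likelihoods

def filter_positive_trajectories(tau_p, likelihoods, threshold=0.1):
    """Drop low-likelihood events from each positive trajectory (tau_P')."""
    return [[e for e in traj if likelihoods.get(e, 0.0) >= threshold]
            for traj in tau_p]

def cluster_by_event(tau_p_prime):
    """One cluster per event; a trajectory may appear in several clusters."""
    clusters = defaultdict(list)
    for traj in tau_p_prime:
        for event in set(traj):
            clusters[event].append(traj)
    return clusters
```

On the Example 4 numbers, `event_likelihoods` would return 0.8 for "collect power-up" (frequency 1.0 in the positives minus 0.2 in the normalised negatives), matching the value reported in the text.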
For trajectory collection and determining events of interest, a PPO agent was trained using Masked PPO from stable_baselines3. Policies were trained for a maximum of 50,000 episodes. All experiments were performed on 3.2GHz Apple M1 with 16 GB RAM running Mac OS X 12.6.\nTo test the robustness of the extraction approach, we executed 50 runs for each environment under the same conditions and saved the resulting strategies and total counts of how many times each strategy was found. The results for all three games are shown in Table 1, focussing on strategies that were detected in at least 60% of runs. In the Pacman environment, the predominant strategies, those with the highest 'Found' percentage, are what we expected for each event of interest. Similarly, in the Bank Heist environment, the predominant strategies are reasonable for the corresponding events.\nIn the Dungeon Crawler environment, we observed a combination of expected and unexpected strategies. In particular, many possible strategies were obtained for the \"unlock door\" event of interest. In this environment, the agent performs best by both killing the monster and unlocking the door. Consequently, many positive trajectories for both events are likely to involve both of these outcomes. This is evidenced by the event likelihoods computed for this event of interest, as shown in Table 2. Although the \"collect key\" event has the highest likelihood for the \"unlock door\" event of interest, the \"kill a monster\" event appears, on average, 75% more in the positive trajectories than the negative. \"collect key\" 0.94(0.89,0.96), \"kill a monster\" 0.75(0.67,0.79), \"collect sword\" 0.66(0.58,0.74), \"collect gun\" 0.65(0.6,0.73)\nWe also investigated the effect of the trajectory sample size i.e., how many trajectories are in the positive and negative sets, on the strategies extracted. Table 3 shows that the expected strategies are still obtainable in the Dungeon Crawler environment after reducing the sample size. We observe that the sample size has an influence on the likelihood values computed for each event. We expect that likelihood values for certain events will become more accurate with increased sample size. The results of this analysis are shown in Table 4.\nDuring this investigation, we observed a positive correlation between the average time taken to run all the stages for each event of interest, T , and the sample size of the trajectories. This is due to the trajectory collection stage. For example, for the \"kill a monster\" event, stage (i) took, on average, 4 seconds for a sample size of 10 trajectories versus 100 seconds when the sample size is 200 trajectories. The time taken to complete stages (ii) to (v) was generally observed to be less than 1% of T , regardless of the sample size. 10 \"collect sword\": 0.96(0.6,1), \"collect gun\": 0.91(0.6,1), \"collect key\": 0.75(0.1,1) 50 \"collect sword\": 0.87(0.64,1), \"collect gun\": 0.77(0.6,1), \"collect key\": 0.19(0.12,0.36) 100 \"collect sword\": 0.70(0.59,0.81), \"collect gun\": 0.69(0.61,0.78), \"collect key\": 0.22(0.13,0.34)" }, { "figure_ref": [], "heading": "200", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "\"collect sword\": 0.70(0.63,0.76), \"collect gun\": 0.69(0.6,0.78), \"collect key\": 0.22(0.17,0.3)\nOur results demonstrate that the proposed extraction approach is able to identify reasonable strategies for multiple events of interest. 
The performance of our approach is dependent on the ability to collect the same amount of positive and negative trajectories. In particular, in Table 2, for 𝐸 = \"unlock door\" the strategy {collect key, unlock door} is only found in 34% of runs. This result is due to the limitations of the trajectory collection stage where events of interest were eventually discarded if not enough positive trajectories could be collected within a specified time limit. The second key result is that the event likelihood values are influenced by the sample size of trajectories. This is most noticeable in Table 3, the likelihood of \"collect key\" to be part of a \"kill a monster\" strategy decreases dramatically as sample size increases from 20 to 50." }, { "figure_ref": [], "heading": "CONCLUSION AND FUTURE WORK", "publication_ref": [ "b9", "b4" ], "table_ref": [], "text": "We have proposed an approach for the extraction of strategies from learned agent policies as a first step toward improving the ability of artificial agents to transfer and generalise their existing knowledge. We adapted the Smith-Waterman local alignment algorithm to find useful causal information from agent trajectories which we can use to form strategies. Our results when demonstrated on video game trajectories obtained using reinforcement learning, showcase the ability of this method to identify reasonable strategy candidates in different contexts.\nIn future work, we will utilise the strategies obtained from this method and look at generalisation techniques to support transfer to domains with differing action and state spaces and environmental dynamics. In particular, we will consider generalisation via abstraction, changing only the content of a strategy when lifting it to a more general context, leaving us with flexibility in the choice of data structures used. One approach is to use ontologies to determine how environment-specific events within the strategies can be lifted to a high-level representation that applies to the current environment. Abstracted strategies can be applied by leveraging functional transfer learning methods, including reward shaping [10] and policy reuse [5]. Further, the proposed strategy extraction method is not limited to reinforcement learning agents. Therefore, a future research direction could be to consider strategy transfer in agents with different learning mechanisms.\nUltimately, we envision this work could address issues around generalisability present in current state-of-the-art autonomous artificial agents and make them deployable in real-world scenarios." } ]
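To complement the description of stage (v), the following sketch pairs a likelihood-weighted Smith-Waterman comparison in the spirit of Equation 2 with the pairwise loop of Algorithm 1. It is a simplified reconstruction rather than the paper's code: the gap penalty value, the traceback details, and all function names are assumptions made for this example.

```python
def smith_waterman(traj_a, traj_b, likelihood, s=1.0, d=-1.0, g=1.0):
    """Local alignment of two event sequences with the W = max(l_Ai, l_Bj)
    weighting on match/mismatch scores; returns the aligned common
    subsequence, interpreted as a candidate strategy."""
    n, m = len(traj_a), len(traj_b)
    H = [[0.0] * (m + 1) for _ in range(n + 1)]
    best, best_ij = 0.0, (0, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            w = max(likelihood.get(traj_a[i - 1], 0.0),
                    likelihood.get(traj_b[j - 1], 0.0))
            diag = H[i - 1][j - 1] + (s * w if traj_a[i - 1] == traj_b[j - 1]
                                      else d * w)
            H[i][j] = max(0.0, diag, H[i][j - 1] - g, H[i - 1][j] - g)
            if H[i][j] > best:
                best, best_ij = H[i][j], (i, j)
    # Simplified traceback: walk back from the best cell through positive
    # scores, collecting matched events only.
    strategy, (i, j) = [], best_ij
    while i > 0 and j > 0 and H[i][j] > 0.0:
        if traj_a[i - 1] == traj_b[j - 1]:
            strategy.append(traj_a[i - 1])
            i, j = i - 1, j - 1
        elif H[i - 1][j] >= H[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return list(reversed(strategy))

def extract_candidate_strategies(clusters, likelihood):
    """Algorithm 1: compare each cluster's shortest trajectory with the rest."""
    candidates = set()
    for cluster in clusters.values():
        t_c = min(cluster, key=len)
        for t_j in cluster:
            if t_j is t_c:
                continue
            result = smith_waterman(t_c, t_j, likelihood)
            if result:
                candidates.add(tuple(result))
    return candidates
```

Here `clusters` is assumed to be a mapping from events to lists of positive trajectories, as produced by the clustering stage sketched earlier.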
The ability to continuously learn and adapt to new situations is an area in which humans remain far superior to AI agents. We propose an approach to knowledge transfer that uses behavioural strategies as a form of transferable knowledge, inspired by the human cognitive ability to develop strategies. A strategy is defined as a partial sequence of events, where an event is both the result of an agent's action and changes in state, used to reach some predefined event of interest. This information acts as guidance, or a partial solution, that an agent can generalise and use to make predictions about how to handle unknown observed phenomena. As a first step toward this goal, we develop a method for extracting strategies from an agent's existing knowledge that can be applied in multiple contexts. Our method combines observed event frequency information with local sequence alignment techniques to find patterns of significance that form a strategy. We show that our method can identify plausible strategies in three environments: Pacman, Bank Heist and a dungeon-crawling video game. Our evaluation serves as a promising first step toward extracting knowledge for generalisation and, ultimately, transfer learning.
Strategy Extraction in Single-agent Games
[ { "figure_caption": "Figure 1 :1Figure 1: Example Pacman game state: C denotes Pacman's location; M a ghost; # a wall; O a power-up; and . a dot.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Example Dungeon Crawler game state: o denotes the location of the player; Z a monster; 𝑋 a wall; | the door; k the key; and 𝑔 and 𝑠 the gun and sword respectively.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Example Bank Heist game state: C denotes the location of the player's car; B a bank; # a wall; and F a fuel tank.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Example Smith-Waterman scoring matrix constructed using Equation 2 where 𝑠 = 1, 𝑑 = -1 and W = 𝑚𝑎𝑥 (1, 𝑟𝑒𝑤𝑎𝑟𝑑 (𝐴 𝑖 ), 𝑟𝑒𝑤𝑎𝑟𝑑 (𝐵 𝑗 )). Trajectory elements from 𝐴 and 𝐵 are displayed in terms of their events and corresponding rewards. The directional arrows depict the recursive traceback process, highlighting the cells which are used to determine the output subtrajectory (shaded).", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The stages of our strategy extraction approach for an event of interest 𝐸. Details are as follows: (𝑖) generate positive and negative trajectories for 𝐸 through simulation; (𝑖𝑖) using the positive and negative trajectory length distributions, normalise the negative trajectories; (𝑖𝑖𝑖) determine likelihood values for events being part of a strategy; (𝑖𝑣) create clusters from the positive trajectories; and (𝑣) for each cluster, perform strategy extraction to obtain candidate strategies through pairwise comparison.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Before and after normalising the length distributions of the negative trajectories obtained from simulating the agent in a small (10 x 6) Pacman environment. Positive trajectories include the event \"kill a ghost\".", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Strategies found for Pacman and Dungeon Crawler, and the frequency with which we find each strategy, across 50 runs of our approach, excluding strategies with a found frequency below 60%. Across all runs, both 𝜏 𝑃 and 𝜏 𝑁 contain 100 trajectories. A '*' indicates a grouping of strategies that have the same events, but with variations on the order of all but the final event. 
drop dynamite, destroy police car, rob bank} 62 rob bank ×2 and destroy police car {rob bank, drop dynamite, destroy police car, rob bank} 100", "figure_data": "Game GEvent of Interest 𝐸StrategiesFound(%)kill a ghost{power-up, kill a ghost}100Pacmankill a ghost ×2 kill a ghost ×3{power-up, kill a ghost, kill a ghost} {power-up, kill a ghost, kill a ghost, kill a ghost}100 100collect key{collect gun, collect key}100Dungeon Crawler{collect sword, collect key} {kill a monster, collect key}100 100kill a monster{collect gun, kill a monster}98{collect sword, kill a monster}98{collect key, kill a monster}88unlock door{collect sword, collect key, kill a monster, unlock door}100 *{collect gun, collect key, kill a monster, unlock door}94 *{collect key, unlock door}68{kill a monster, collect key, unlock door}66 *{collect gun, collect key, unlock door}60 *Bank Heistdestroy police car{rob bank, drop dynamite, destroy police car} {rob bank, destroy police car}100 98rob bank ×2{rob bank, drop dynamite, rob bank}100{rob bank,", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Average event likelihoods for 𝐸 = \"unlock door\" in the Dungeon Crawler environment. Averages are calculated over all 50 experiment runs.", "figure_data": "EventLikelihood(average(min,max))", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Strategies found across 50 runs of the extraction approach for the Dungeon Crawler environment, using a sample size of 20 trajectories in 𝜏 𝑃 and 𝜏 𝑁 .", "figure_data": "Event ofPredominantFoundInterest 𝐸 Strategies(%)collect{collect gun, collect key}88key{collect sword, collect key}86kill{collect sword, kill a monster} 98a monster {collect gun, kill a monster}94unlock{collect key, unlock door}34door{collect sword, unlock door}24", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Average event likelihoods for 𝐸 = \"kill a monster\" in the Dungeon Crawler environment. Averages are calculated over 50 runs for each sample size.", "figure_data": "Sample Event LikelihoodsSize(average(min,max))", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Archana Vadakattu; Michelle Blom; Adrian R Pearce
[ { "authors": "Mehran Asadi; Manfred Huber", "journal": "AAAI", "ref_id": "b0", "title": "Effective Control Knowledge Transfer through Learning Skill and Representation Hierarchies", "year": "2007" }, { "authors": "Yavar Marc G Bellemare; Joel Naddaf; Michael Veness; Bowling", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b1", "title": "The Arcade Learning Environment: An Evaluation Platform for General Agents", "year": "2013-06" }, { "authors": "Guillaume Bosc; Philip Tan; Jean-François Boulicaut; Chedy Raïssi; Mehdi Kaytoue", "journal": "IEEE Transactions on Computational Intelligence and AI in Games", "ref_id": "b2", "title": "A Pattern Mining Approach to Study Strategy Balance in RTS Games", "year": "2017" }, { "authors": "Greg Brockman; Vicki Cheung; Ludwig Pettersson; Jonas Schneider; John Schulman; Jie Tang; Wojciech Zaremba", "journal": "", "ref_id": "b3", "title": "OpenAI Gym", "year": "2016" }, { "authors": "Tim Brys; Anna Harutyunyan; Matthew E Taylor; Ann Nowé", "journal": "", "ref_id": "b4", "title": "Policy Transfer using Reward Shaping", "year": "2015" }, { "authors": "Zhengxing Chen; Magy ; Seif El Nasr; Alessandro Canossa; Jeremy Badler; Stefanie Tignor; Randy Colvin", "journal": "AAAI", "ref_id": "b5", "title": "Modeling Individual Differences through Frequent Pattern Mining on Role-Playing Game Actions", "year": "2015" }, { "authors": "Anne G E Collins; Michael J Frank", "journal": "Psychological Review", "ref_id": "b6", "title": "Cognitive control over learning: creating, clustering, and generalizing task-set structure", "year": "2013-01" }, { "authors": "Rachit Dubey; Pulkit Agrawal; Deepak Pathak; Tom Griffiths; Alexei Efros", "journal": "PMLR", "ref_id": "b7", "title": "Investigating Human Priors for Playing Video Games", "year": "2018" }, { "authors": "Niklas Een; Alexander Legg; Nina Narodytska; Leonid Ryzhyk", "journal": "", "ref_id": "b8", "title": "SAT-Based Strategy Extraction in Reachability Games", "year": "2015" }, { "authors": "Prasoon Goyal; Scott Niekum; Raymond J Mooney", "journal": "", "ref_id": "b9", "title": "Using natural language for reward shaping in reinforcement learning", "year": "2019" }, { "authors": "George Konidaris; Andrew Barto", "journal": "IJCAI", "ref_id": "b10", "title": "Building portable options: Skill transfer in reinforcement learning", "year": "2007" }, { "authors": "Cécile Low-Kam; Chedy Raïssi; Mehdi Kaytoue; Jian Pei", "journal": "IEEE", "ref_id": "b11", "title": "Mining Statistically Significant Sequential Patterns", "year": "2013" }, { "authors": "Simon Parsons; Michael Wooldridge", "journal": "Autonomous Agents and Multi-Agent Systems", "ref_id": "b12", "title": "Game Theory and Decision Theory in Multi-Agent Systems", "year": "2002-09" }, { "authors": "Sujoy Paul; Jeroen Van Baar; Amit K Roy-Chowdhury", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b13", "title": "Learning from trajectories via subgoal discovery", "year": "2019" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b14", "title": "Proximal Policy Optimization Algorithms", "year": "2017" }, { "authors": "Giuseppe Pier; Ilija Sessa; Maryam Bogunovic; Andreas Kamgarpour; Krause", "journal": "Advances in neural information processing systems", "ref_id": "b15", "title": "Learning to Play Sequential Games versus Unknown Opponents", "year": "2020" }, { "authors": "Murray Shanahan; Melanie Mitchell", "journal": "", "ref_id": "b16", "title": 
"Abstraction for Deep Reinforcement Learning", "year": "2022" }, { "authors": "F Temple; Michael S Smith; Waterman", "journal": "Journal of molecular biology", "ref_id": "b17", "title": "Identification of Common Molecular Subsequences", "year": "1981" }, { "authors": "Vilém Srovnal; Bohumil Horák; Radim Bernatík; Václav Snášel", "journal": "Springer", "ref_id": "b18", "title": "Strategy Extraction for Mobile Embedded Control Systems Apply the Multi-agent Technology", "year": "2004" }, { "authors": "Doina Richard S Sutton; Satinder Precup; Singh", "journal": "Artificial intelligence", "ref_id": "b19", "title": "Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning", "year": "1999-08" }, { "authors": "Lisa Torrey; Trevor Walker; Jude Shavlik; Richard Maclin", "journal": "Springer", "ref_id": "b20", "title": "Using Advice to Transfer Knowledge Acquired in One Reinforcement Learning Task to Another", "year": "2005" } ]
[ { "formula_coordinates": [ 3, 341.59, 467.5, 216.61, 67.35 ], "formula_id": "formula_0", "formula_text": "𝐻 𝐴𝐵 (𝑖, 𝑗) = 𝑚𝑎𝑥                    0 𝐻 𝐴𝐵 (𝑖 -1, 𝑗 -1) + 𝑠, if 𝐴 𝑖 = 𝐵 𝑗 𝐻 𝐴𝐵 (𝑖 -1, 𝑗 -1) + 𝑑, if 𝐴 𝑖 ≠ 𝐵 𝑗 𝐻 𝐴𝐵 (𝑖, 𝑗 -1) -𝑔 𝐻 𝐴𝐵 (𝑖 -1, 𝑗) -𝑔 (1)" }, { "formula_coordinates": [ 4, 79.38, 360.79, 214.66, 21.58 ], "formula_id": "formula_1", "formula_text": "𝐻 𝐴𝐵 (𝑖, 𝑗) = 𝐻 𝐴𝐵 (𝑖 -1, 𝑗 -1) + 𝑠 W, if 𝐴 𝑖 = 𝐵 𝑗 𝐻 𝐴𝐵 (𝑖 -1, 𝑗 -1) + 𝑑 W, if 𝐴 𝑖 ≠ 𝐵 𝑗(2)" }, { "formula_coordinates": [ 5, 134.5, 87.71, 338.84, 127.47 ], "formula_id": "formula_2", "formula_text": "𝐸 𝑐 ! 𝑐 \" 𝑐 # 𝑐 $ 𝑐 % 𝑐 & (i) (ii) (iii) (iv) (v)" }, { "formula_coordinates": [ 6, 59.67, 376.57, 117.84, 18.92 ], "formula_id": "formula_3", "formula_text": "𝑡 𝑐 = 𝑆ℎ𝑜𝑟𝑡𝑒𝑠𝑡𝑇𝑟𝑎 𝑗𝑒𝑐𝑡𝑜𝑟𝑦 (𝑐) 4:" }, { "formula_coordinates": [ 6, 347.02, 144.95, 211.18, 41.23 ], "formula_id": "formula_5", "formula_text": "𝑙 𝑒 =          𝑚𝑎𝑥 (0, 𝑓 𝜏 𝑃 (𝑒) -𝑓 𝜏 𝑁 ′ (𝑒)) if 𝑒 in 𝜏 𝑃 and 𝜏 𝑁 1 if 𝑒 not in 𝜏 𝑁 0 if 𝑒 not in 𝜏 𝑃(3)" } ]
10.1109/TAI.2022.3195818
2023-06-28
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b50", "b56", "b27", "b52", "b19", "b33", "b1", "b20", "b25", "b3", "b21", "b53", "b22" ], "table_ref": [], "text": "The generalist robot, which can autonomously perform a wide range of tasks, is one of the essential targets of robotic learning. As an important approach, Imitation Learning (IL) enables the agent to learn policies based on expert demon-strations and is especially effective for problems where it's difficult to discover task solutions autonomously through Reinforcement Learning (RL). To train a general-purpose agent, Multi-task/Meta Imitation Learning (MIL) algorithms (Finn et al., 2017b;Deisenroth et al., 2014;Singh et al., 2020) have been proposed to learn a parameterized policy that is a function of both the current observation and the task and is capable of performing a range of tasks following a particular distribution. The key insight of these algorithms is that the successful control for one task can be informative for other related tasks. However, a critical challenge for them is to acquire enough data for the agent to generalize broadly across tasks. Typically, a large number of demonstrations are required for each task in that distribution, and the required amount increases with task difficulty. Moreover, the learned multi-task policy cannot be transferred to tasks out of that distribution (Yu et al., 2019;Ghasemipour et al., 2019), which limits its general use.\nHierarchical Imitation Learning (HIL) has the potential to reduce the required demonstrations. In HIL, the agent learns a two-level policy, which can be modeled with the option framework (Sutton et al., 1999), from the expert data. Specifically, the low-level policies (i.e., skills) are designated to accomplish certain subtasks in a complex task, while the high-level policy is for scheduling the switch among the skills to solve the entire task. For multi-task settings, learning a hierarchical policy enables the agent to identify basic skills that can be useful in solving a distribution of tasks and to transfer them across tasks during training. In this case, each skill can be trained with demonstrations from different tasks rather than limited to a single one, and, with the shared skills, an agent mainly needs to update its high-level policy rather than learning an entire policy for each task. The expert data efficiency is significantly improved since demonstrations among different tasks are reused for learning skills and the burden of multi-task policy learning becomes lower. Further, in RL and IL, hierarchies exhibit a number of benefits, including better performance on long-horizontal complex tasks (Florensa et al., 2017;Jing et al., 2021) and the possibility of skill transfer between distinct tasks (Andreas et al., 2017).\nIn this paper, we propose MH-AIRL to introduce hierarchies to MIL. As discussed above, such hierarchies can improve expert data efficiency so that the agent can achieve supe-rior performance based on a limited number of demonstrations. Further, basic skills can be extracted from the learned policies and reused in out-of-distribution tasks for better transferability (i.e., addressing the core concern of multitask learning). For example, it enables locomotion skills to be reused for multiple goal-achieving tasks of the same robot agent, yet in distinct scenarios. 
Different from previous Multi-task Hierarchical IL (MHIL) algorithms (Fox et al., 2019;Yu et al., 2018a;Gao et al., 2022;Bian et al., 2022), MH-AIRL is context-based and thus can be applied to demonstrations without any (skill or task) annotations, which are more accessible in practice. To this end, we extend both the multi-task learning and imitation learning modules (i.e., the core components of MIL), with the option framework (i.e., the hierarchical learning module). For multi-task learning, we condition the learned policy on a Hierarchical Latent Context Structure, where the task code and skill segmentation serve as the global and local context variables respectively. To compel the casual relationship of learned policy and latent variables, we start from the definition of mutual information and directed information and derive an easier-to-handle lower bound for each of them, serving as the optimization objectives. For imitation learning, we propose H-AIRL, which redefines a SOTA IL algorithm -AIRL (Fu et al., 2017) in an extended state-action space to enable our algorithm to recover a hierarchical policy (rather than a monolithic one) from expert trajectories. Finally, an actor-critic framework -HPPO is proposed to synthesize the optimization of the three modules above.\nThe contributions are as follows: (1) Our work presents the first MHIL algorithm based on demonstrations without any (skill or task) annotations, i.e., state-action pairs only. This greatly generalizes the applicability of our algorithm and reduces the cost of building expert datasets. (2) The newlyproposed H-AIRL and HPPO can be independently used for Hierarchical IL and RL, respectively. They are shown to achieve improved performance than SOTA HIL and HRL baselines. (3) We provide theoretical proof and ablation study for each algorithm module, and show the superiority of our algorithm through comparisons with SOTA baselines on a series of challenging multi-task settings from Mujoco (Todorov et al., 2012) and D4RL (Fu et al., 2020)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b0", "b5", "b37", "b38", "b45", "b23", "b2", "b26", "b25", "b28", "b27", "b56", "b20", "b15", "b25", "b3", "b14" ], "table_ref": [], "text": "Machine Learning has found successful applications across a wide array of sectors such as transportation (Al-Abbasi et al., 2019;Chen et al., 2021;Luo et al., 2022;Ma et al., 2020), manufacturing (Peddireddy et al., 2021;Fu et al., 2021), networking (Balachandran et al., 2014;Geng et al., 2023), robotics (Gao et al., 2022;Gonzalez et al., 2023), etc. In the field of robotics, one of the key objectives is developing a 'generalist' robot, capable of executing a mul-titude of tasks with human-like precision. To achieve this, multi-task robotic learning proves to be a highly effective methodology. In this section, we succinctly delineate Multitask IL and Multi-task HIL, illustrating the contributions and significance of our research in this evolving field.\nMulti-task/Meta IL algorithms have been proposed to learn a parameterized policy, which is capable of performing a range of tasks following a particular distribution, from a mixture of expert demonstrations. Based on the meta/multi-task learning techniques used, current MIL algorithms can be categorized as gradient-based or context-based. 
Gradient-based MIL, such as (Finn et al., 2017b;Yu et al., 2018b), integrates a gradient-based meta learning algorithm -MAML (Finn et al., 2017a) with supervised IL to train a policy that can be fast adapted to a new task with one-step gradient update. Context-based MIL, such as (Ghasemipour et al., 2019;Yu et al., 2019), learns a latent variable to represent the task contexts and trains a policy conditioned on the task context variable. Thus, with the corresponding task variable, the policy can be directly adopted to a new task setting. However, these algorithms do not make use of the option framework to learn a hierarchical policy like ours. In Section 5.1, we compare our algorithm with MIL baselines from both categories and show that it achieves better performance on a wide range of challenging long-horizon tasks.\nMulti-task HIL aims at recovering a multi-task hierarchical policy based on expert demonstrations from a distribution of tasks, which synthesizes the advantages of Multi-task IL and HIL. We present here the previous study in this area. The algorithms proposed in (Fox et al., 2019) and (Duminy et al., 2021) are limited to a certain type of robot. They provide predefined subtask decomposition, like picking and placing dishes, to simplify hierarchical learning, and have access to segmented expert demonstrations. However, our algorithm is proposed to automatically discover a hierarchical policy from unsegmented demonstrations and the discovered policy should capture the subtask structure of the demonstrations without supervision. In (Yu et al., 2018a), they propose to let the robot learn a series of primitive skills from corresponding demonstrations first, and then learn to compose learned primitives into multi-stage skills to complete a task. Thus, they predefine the types of skills and provide demonstrations corresponding to each skill. Also, in their setting, each new task has to be a sequence of predefined skills. A very recent work (Gao et al., 2022) integrates MAML and the option framework for MHIL. Like (Bian et al., 2022) and (Devin et al., 2019), this algorithm can be applied to demonstrations without the skill annotations, but these demonstrations have to be categorized by the task, in accordance with the requirements of MAML. Consequently, our research introduces the first MHIL algorithm that relies on demonstrations devoid of task or skill annotations. This makes it significantly more practical for real-world applications." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce Adversarial Inverse Reinforcement Learning (AIRL), Context-based Meta Learning, and the One-step Option Framework, corresponding to the three components of our algorithm: IL, multi-task learning, and hierarchical policy learning, respectively. They are based on the Markov Decision Process (MDP), denoted by M = (S, A, P, µ, R, γ), where S is the state space, A is the action space, P : S × A × S → [0, 1] is the transition function (P" }, { "figure_ref": [], "heading": "St+1", "publication_ref": [], "table_ref": [], "text": "St,At ≜ P(S t+1 |S t , A t )), µ : S → [0, 1] is the distribution of the initial state, R : S ×A → R is the reward function, and γ ∈ (0, 1] is the discount factor." 
}, { "figure_ref": [], "heading": "Adversarial Inverse Reinforcement Learning", "publication_ref": [ "b46", "b31", "b55", "b60", "b21" ], "table_ref": [], "text": "While there are several other ways to perform IL, such as supervised imitation (e.g., Behavioral Cloning (BC) (Pomerleau, 1991)) and occupancy matching (e.g., GAIL (Ho & Ermon, 2016)), we adopt Inverse Reinforcement Learning (IRL) because it uses not only the expert data but also selfexploration of the agent with the recovered reward function for further improvement (Ng & Russell, 2000;Wang et al., 2021). Comparisons with BC-and GAIL-based algorithms will be provided in Section 5. IRL aims to infer an expert's reward function from demonstrations, based on which the expert's policy can be recovered. Maximum Entropy IRL (Ziebart et al., 2008) solves IRL as a maximum likelihood estimation (MLE) problem shown as Equation 1. τ E ≜ (S 0 , A 0 , • • • , S T ) denotes the expert trajectory. Z ϑ is the partition function which can be calculated with\nZ ϑ = τ E P ϑ (τ E ). max ϑ E τ E [log P ϑ (τ E )] = max ϑ E τ E log P ϑ (τ E ) Z ϑ , P ϑ (τ E ) = µ(S 0 ) T -1 t=0 P St+1 St,At exp(R ϑ (S t , A t ))(1)\nSince Z ϑ is intractable for problems with large state-action space, the authors of (Fu et al., 2017) propose AIRL to solve this MLE problem in a sample-based manner, through alternatively training a discriminator f ϑ and policy network π in an adversarial setting. The discriminator is trained by minimizing the cross-entropy loss between the expert demonstrations τ E and generated samples τ by π:\nmin ϑ T -1 t=0 -E τ E log D t ϑ -E τ log(1 -D t ϑ )(2)\nHere, At|St) . Meanwhile, the policy π is trained with RL using the reward function defined as log D t ϑ -log(1 -D t ϑ ). It is shown that, at optimality, f ϑ can serve as the recovered reward function R ϑ and π is the recovered expert policy.\nD t ϑ = D ϑ (S t , A t ) = exp(f ϑ (St,At)) exp(f ϑ (St,At))+π(" }, { "figure_ref": [], "heading": "Context-based Meta Learning", "publication_ref": [ "b27", "b56", "b12" ], "table_ref": [], "text": "We consider the Meta IRL setting: given a distribution of tasks P (T ), each task sampled from P (T ) has a corresponding MDP, and all of them share the same S and A but may differ in µ, P, and R. The goal is to train a flexible policy π on a set of training tasks sampled from P (T ), which can be quickly adapted to unseen test tasks sampled from the same distribution. As a representative, context-based Meta IRL algorithms (Ghasemipour et al., 2019;Yu et al., 2019) introduce the latent task variable C, which provides an abstraction of the corresponding task T , so each task can be represented with its distinctive components conditioning on C, i.e., (µ(S 0 |C), P(S ′ |S, A, C), R(S, A|C)). These algorithms learn a context-conditioned policy π(A|S, C) from the multi-task expert data, through IRL and by maximizing the mutual information (Cover, 1999) between the task variable C and the trajectories from π(A|S, C). Thus, given C for a new task, the corresponding π(A|S, C) can be directly adopted. Context-based methods can adopt off-policy data, making them more align with the goal of our work -learning from demonstrations. 
Thus, we choose context-based Meta IRL as our base algorithm.\nGiven expert trajectories sampled from a distribution of tasks (i.e., C ∼ prior(•)) and assuming that the demonstrative trajectories of each task are from a corresponding expert policy π E (τ E |C), context-based Meta IRL recovers both the task-conditioned reward function R ϑ (S, A|C) and policy π(S, A|C) by solving an MLE problem: St,At|C) (3) where P\nmax ϑ E C∼prior(•),τ E ∼π E (•|C) [log P ϑ (τ E |C)] , P ϑ (τ E |C) ∝ µ(S 0 |C) T -1 t=0 P St+1 St,At,C e R ϑ (" }, { "figure_ref": [], "heading": "St+1", "publication_ref": [], "table_ref": [], "text": "St,At,C ≜ P(S t+1 |S t , A t , C). Like Equation 1, this can be efficiently solved through AIRL. We provide the AIRL framework to solve Equation 3 in Appendix A.1." }, { "figure_ref": [], "heading": "One-step Option Framework", "publication_ref": [ "b52", "b36", "b54" ], "table_ref": [], "text": "As proposed in (Sutton et al., 1999), an option Z ∈ Z can be described with three components: an initiation set I Z ⊆ S, an intra-option policy π Z (A|S) : S × A → [0, 1], and a termination function β Z (S) : S → [0, 1]. An option Z is available in state S if and only if S ∈ I Z . Once the option is taken, actions are selected according to π Z until it terminates stochastically according to β Z , i.e., the termination probability at the current state. A new option will be activated by a high-level policy π Z (Z|S) : S ×Z → [0, 1] once the previous option terminates. In this way, π Z (Z|S) and π Z (A|S) constitute a hierarchical policy for a certain task. Hierarchical policies tend to have superior performance on complex long-horizontal tasks which can be broken down into a series of subtasks (Chen et al., 2022a;b;c;d).\nThe one-step option framework (Li et al., 2021) is proposed to learn the hierarchical policy without the extra need to justify the exact beginning and breaking condition of each option, i.e., I Z and β Z . First, it assumes that each option is available at each state, i.e., I Z = S, ∀Z ∈ Z. Second, it drops β Z through redefining the high-level and low-level (i.e., intra-option) policies as π θ (Z|S, Z ′ ) (Z ′ : the option in the last timestep) and π ϕ (A|S, Z) respectively and implementing them as end-to-end neural networks with the Multi-Head Attention (MHA) mechanism (Vaswani et al., 2017), which enables it to temporally extend options in the absence of the termination function. Intuitively, if Z ′ still fits S, π θ (Z|S, Z ′ ) will assign a larger attention weight to Z ′ and thus has a tendency to continue with it; otherwise, a new option with better compatibility will be sampled. Then, the option is sampled at each timestep rather than after the last one terminates. With this simplified framework, we only need to train the hierarchical policy, i.e., π θ and π ϕ , of which the structure design with MHA is in Appendix A.2." }, { "figure_ref": [], "heading": "Proposed Approach", "publication_ref": [], "table_ref": [], "text": "In this section, we propose Multi-task Hierarchical AIRL (MH-AIRL) to learn a multi-task hierarchical policy from a mixture of expert demonstrations. First, the learned policy is multi-task by conditioning on the task context variable C. Given C ∼ prior(•), the policy can be directly adopted to complete the corresponding task. 
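As a small illustration of how such a task-conditioned hierarchical policy is executed under the one-step option model described above, a rollout sketch is given below. The interface (an environment that accepts a task context, gym-style step returns, and the `pi_high` / `pi_low` callables) is assumed for the example and is not taken from the paper's code.

```python
import numpy as np

def rollout(env, pi_high, pi_low, c, z_init=0, max_steps=1000):
    """Roll out a task-conditioned one-step option policy.

    pi_high(s, z_prev, c) -> probability vector over the N options
    pi_low(s, z, c)       -> primitive action for option z
    `c` is the task context; `z_init` is a dummy initial option choice.
    """
    s = env.reset(task=c)            # assumed: the env accepts a task context
    z_prev = z_init
    states, options, actions = [s], [z_prev], []
    for _ in range(max_steps):
        # Under the one-step option model the option is re-sampled every step;
        # if the previous option still fits the state, the high-level policy
        # can simply keep selecting it.
        probs = pi_high(s, z_prev, c)
        z = int(np.random.choice(len(probs), p=probs))
        a = pi_low(s, z, c)
        s, reward, done, _ = env.step(a)
        states.append(s); options.append(z); actions.append(a)
        z_prev = z
        if done:
            break
    return states, options, actions
```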
In practice, we can usually model a class of tasks by specifying the key parameters of the system and their distributions (i.e., prior(C)), including the property of the agent (e.g., mass and size), circumstance (e.g., friction and layout), and task setting (e.g., location of the goals). In this case, directly recovering a policy, which is applicable to a class of tasks, is quite meaningful. Second, for complex long-horizontal tasks which usually contain subtasks, learning a monolithic policy to represent a structured activity can be challenging and inevitably requires more demonstrations. In contrast, a hierarchical policy can make full use of the subtask structure and has the potential for better performance. Moreover, the learned low-level policies can be transferred as basic skills to out-of-distribution tasks for better transferability, while the monolithic policy learned with previous Meta IL algorithms cannot.\nIn Section 4.1 and 4.2, we extend context-based Meta Learning and AIRL with the option framework, respectively. In Section 4.3, we synthesize the three algorithm modules and propose an actor-critic framework for optimization." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Hierarchical Latent Context Structure", "publication_ref": [ "b40", "b49", "b51", "b34" ], "table_ref": [], "text": "As mentioned in Section 3.2, the current task for the agent is encoded with the task variable C, which serves as the global context since it is consistent through the episode. As mentioned in Section 3.3, at each step, the hierarchical policy agent will first decide on its option choice Z using π θ and then select the primitive action based on the low-level policy π ϕ corresponding to Z. In this case, the policy learned should be additionally conditioned on Z besides the task code C, and the option choice is specific to each timestep t ∈ {0, • • • , T }, so we view the option choices Z 0:T as the local latent contexts. C and Z 0:T constitute a hierarchical latent context structure shown as Figure 1. Moreover, realworld tasks are often compositional, so the agent requires to reason about the subtask at hand while dealing with the global task. Z 0:T and C provide a hierarchical embedding, which enhances the expressiveness of the policy trained with MH-AIRL, compared with context-based Meta IL which only employs the task context. In this section, we define the mutual and directed information objectives to enhance the causal relationship between the hierarchical policy and the global & local context variables which the policy should condition on, as an extension of context-based Meta-IL with the one-step option model.\nContext-based Meta IL algorithms establish a connection between the policy and task variable C, so that the policy can be adapted among different task modes according to the task context. This can be realized through maximizing the mutual information between the trajectory generated by the policy and the corresponding C, i.e., I(X 0:T ; C), where\nX 0:T = (X 0 , • • • , X T ) = ((A -1 , S 0 ), • • • , (A T -1 , S T )) = τ . A -1 is a dummy variable.\nOn the other hand, the local latent variables Z 0:T have a directed causal relationship with the trajectory X 0:T shown as the probabilistic graphical model in Figure 1. As discussed in (Massey et al., 1990;Sharma et al., 2019), this kind of connection can be established by maximizing the directed information (a.k.a., causal information) flow from the trajectory to the latent factors of variation, i.e., I(X 0:T → Z 0:T ). 
In our multi-task framework, we maximize the conditional directed information I(X 0:T → Z 0:T |C), since for each task c, the corresponding I(X 0:T → Z 0:T |C = c) should be maximized.\nDirectly optimizing the mutual or directed information objective is computationally infeasible, so we instead maximize their variational lower bounds as follows: (Please refer to Appendix B.1 and B.2 for the definition of mutual and directed information and derivations of their lower bounds. For simplicity, we use X T to represent X 0:T , and so on.)\nL M I ≜ H(C) + E X T ,Z T ,C log P ψ (C|X 0:T ) L DI ≜ T t=1 [ E X t ,Z t ,C log P ω (Z t |X 0:t , Z 0:t-1 , C) + H(Z t |X 0:t-1 , Z 0:t-1 , C)](4)\nwhere H(•) denotes the entropy, P ψ and P ω are the variational estimation of the posteriors P (C|X 0:T ) and P (Z t |X 0:t , Z 0:t-1 , C) which cannot be calculated directly. P ψ and P ω are implemented as neural networks, H(C) is constant, and\nH(Z t |X 0:t-1 , Z 0:t-1 , C\n) is the entropy of the output of the high-level policy network (Appendix B.1), so L M I and L DI can be computed in real-time. Moreover, the expectation on X t , Z t , C in L M I and L DI can be estimated in a Monte-Carlo manner (Sutton & Barto, 2018):\nC ∼ prior(•), (X 0:t , Z 0:t ) ∼ P θ,ϕ (•|C), where P θ,ϕ (X 0:t , Z 0:t |C) is calculated by: (See Appendix B.1.) µ(S 0 |C) t i=1 [π θ (Z i |S i-1 , Z i-1 , C)• π ϕ (A i-1 |S i-1 , Z i , C)P Si Si-1,Ai-1,C ](5)\nCombining Equation 4and 5, we can get the objectives with respect to π θ and π ϕ , i.e., the hierarchical policy defined in the one-step option model. By maximizing L M I and L DI , the connection between the policy and the hierarchical context structure can be established and enhanced. In L M I and L DI , we also introduce two variational posteriors P ψ and P ω and update them together with π θ and π ϕ . An analogy of our learning framework with Variational Autoencoder (VAE) (Kingma & Welling, 2014) is provided in Appendix B.3, which provides another perspective to understand the proposed objectives." }, { "figure_ref": [ "fig_0" ], "heading": "Hierarchical AIRL", "publication_ref": [ "b21", "b27", "b56", "b16" ], "table_ref": [], "text": "In this section, we consider how to recover the taskconditioned hierarchical policy from a mixture of expert demonstrations {(X E 0:T , Z E 0:T , C E )}. Current algorithms, like AIRL (Fu et al., 2017) or Meta AIRL (Ghasemipour et al., 2019;Yu et al., 2019), can not be directly adopted since they don't take the local latent codes Z E 0:T into consideration. Thus, we propose a novel hierarchical extension of AIRL, denoted as H-AIRL, as a solution, which is also part of our contributions. Further, it's usually difficult to annotate the local and global latent codes, i.e., Z E 0:T and C E , of an expert trajectory X E 0:T , so we propose an Expectation-Maximization (EM) adaption of H-AIRL as well to learn the multi-task hierarchical policy based on only the unstructured expert trajectories {X E 0:T }.\nFirst, we define the task-conditioned hierarchical policy.\nWhen observing a state S t at timestep t ∈ {0, • • • , T -1} during a certain task C, the agent needs first to decide on its option choice based on S t and its previous option choice Z t using the high-level policy π θ (Z t+1 |S t , Z t , C), and then decide on the action with the corresponding low-level policy π ϕ (A t |S t , Z t+1 , C). 
Thus, the task-conditioned hierarchical policy can be acquired with the chain rule as:\nπ θ (Z t+1 |S t , Z t , C) • π ϕ (A t |S t , Z t+1 , C) = π θ,ϕ (Z t+1 , A t |S t , Z t , C) = π θ,ϕ ( A t | S t , C)(6)\nwhere the first equality holds because of the onestep Markov assumption (i.e., π\nϕ (A t |S t , Z t , Z t+1 , C) = π ϕ (A t |S t , Z t+1 , C)), S t ≜ (S t , Z t ) and A t ≜ (Z t+1 , A t )\ndenote the extended state and action space respectively.\nNext, by substituting (S t , A t ) with ( S t , A t ) and τ E with the hierarchical trajectory (X 0:T , Z 0:T ) in Equation 3, we can get an MLE problem shown as Equation 7, from which we can recover the task-conditioned hierarchical reward function and policy. The derivation is in Appendix C.1.\nmax ϑ E C,(X T ,Z T )∼π E (•|C) log P ϑ (X T , Z T |C) , P ϑ (X 0:T , Z 0:T |C) ∝ P ϑ (X 0:T , Z 0:T |C) = µ(S 0 |C) T -1 t=0 P St+1 St,At,C e R ϑ (St,Zt,Zt+1,At|C)(7)\nEquation 7 can be efficiently solved with the adversarial learning framework shown as Equation 8(C,\nC E ∼ prior(•), (X E 0:T , Z E 0:T ) ∼ π E (•|C E )\n, and (X 0:T , Z 0:T ) ∼ π θ,ϕ (•|C)). At optimality, we can recover the hierarchical policy of the expert as π θ,ϕ with these objectives, of which the justification is provided in Appendix C.2.\nmin ϑ -E C E ,(X E 0:T ,Z E 0:T ) T -1 t=0 log D ϑ ( S E t , A E t |C E ) -E C,(X 0:T ,Z 0:T ) T -1 t=0 log(1 -D ϑ ( S t , A t |C)), max θ,ϕ L IL = E C,(X 0:T ,Z 0:T ) T -1 t=0 R t IL (8)\nwhere the reward function\nR t IL = log D t ϑ -log(1-D t ϑ ) and D t ϑ = D ϑ ( S t , A t |C) = exp(f ϑ ( St, At|C)) exp(f ϑ ( St, At|C))+π θ,ϕ ( At| St,C) .\nIn practice, the unstructured expert data {X E 0:T }, i.e., trajectories only, is more accessible. In this case, we can view the latent contexts as hidden variables in a hidden Markov model (HMM) (Eddy, 1996) shown as Figure 1 and adopt an EM-style adaption to our algorithm, where we use the variational posteriors introduced in Section 4.1 to sample the corresponding C E , Z E 0:T for each X E 0:T . In the E step, we sample the global and local latent codes with\nC E ∼ P ψ (•|X E 0:T ), Z E 0:T ∼ P ω (•|X E 0:T , C E )\n. P ψ and P ω represent the posterior networks for C and Z 0:T respectively, with the parameters ψ and ω, i.e., the old parameters before being updated in the M step. Then, in the M step, we optimize the hierarchical policy and posteriors with Equation 4and 8. Note that the expert data used in the first term of Equation 8 should be replaced with (X E 0:T , Z E 0:T , C E ) collected in the E step. By this adaption, we can get the solution of the original MLE problem (Equation 7), i.e., the recovered expert policy π θ,ϕ , with only unstructured expert data, which is proved in Appendix C.3." }, { "figure_ref": [], "heading": "Overall Framework", "publication_ref": [ "b4", "b48" ], "table_ref": [], "text": "In Section 4.1, we propose L M I (θ, ϕ, ψ) and L DI (θ, ϕ, ω) to establish the causal connection between the policy and hierarchical latent contexts. Then, in Section 4.2, we propose H-AIRL to recover the hierarchical policy from multitask expert demonstrations, where the policy is trained with the objective L IL (θ, ϕ). In this section, we introduce our method to update the hierarchical policy and posteriors with these objectives, and describe the overall algorithm framework. Detailed derivations of ∇ θ,ϕ,ψ L M I , ∇ θ,ϕ,ω L DI and ∇ θ,ϕ L IL are in Appendix D.1, D.2, and D.3, respectively. 
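Before the update rules are stated, a sketch of two of the learned components they act on may be helpful: the task-conditioned discriminator of Equation 8 defined on the extended state-action space, and the trajectory-level posterior P ψ. The architectures, hidden sizes, and the Gaussian parameterisation of C below are illustrative assumptions only; the per-step option posterior P ω would be built analogously.

```python
import torch
import torch.nn as nn

class HAIRLDiscriminator(nn.Module):
    """f_theta on the extended pairs (S_t, Z_t) x (Z_{t+1}, A_t), conditioned
    on the task context C (a sketch, not the paper's exact architecture)."""
    def __init__(self, dim_s, dim_c, n_options, dim_a, hidden=256):
        super().__init__()
        in_dim = dim_s + dim_c + 2 * n_options + dim_a
        self.f = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, z_prev, z, a, c):
        # options are passed as one-hot vectors, matching Appendix A.2
        x = torch.cat([s, z_prev, z, a, c], dim=-1)
        return self.f(x).squeeze(-1)

    @staticmethod
    def reward(f_value, log_pi):
        """R_IL = log D - log(1 - D) with D = exp(f) / (exp(f) + pi)."""
        log_d = f_value - torch.logaddexp(f_value, log_pi)
        log_one_minus_d = log_pi - torch.logaddexp(f_value, log_pi)
        return log_d - log_one_minus_d     # algebraically f_value - log_pi

class TaskPosterior(nn.Module):
    """P_psi(C | X_{0:T}): encode a whole trajectory, predict the task code
    (here assumed Gaussian, as for the continuous context variables used in
    the Mujoco settings)."""
    def __init__(self, dim_x, dim_c, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(dim_x, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * dim_c)   # mean and log-std

    def log_prob(self, x_seq, c):
        _, h = self.rnn(x_seq)
        mean, log_std = self.head(h[-1]).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mean, log_std.exp())
        return dist.log_prob(c).sum(-1)
```

In the E step of the unannotated-demonstration setting, a network of the `TaskPosterior` form (with the old parameters) is what would be sampled from to label each expert trajectory with a task code.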
First, the variational posteriors P ψ and P ω can be updated with the gradients shown in Equation 9through Stochastic Gradient Descent (SGD) (Bottou, 2010).\n∇ ψ L M I = E C,X T ,Z T ∇ ψ log P ψ (C|X 0:T ) ∇ ω L DI = T t=1 E C,X t ,Z t ∇ ω log P ω (Z t |X t , Z t-1 , C)(9)\nNext, the gradients with respect to θ and ϕ, i.e., the hierarchical policy, are computed based on the overall objective:\nL = α 1 L M I + α 2 L DI + α 3 L IL (10)\nwhere α 1:3 are the weights (only the ratios α1 α3 , α2 α3 matter) and fine-tuned as hyperparameters. Based on L, we can get the unbiased gradient estimators with respect to θ and ϕ: (Derivations are in Appendix D.4.)\n∇ θ L = E C,X T ,Z T [ T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C)• (Ret t -b high (S t-1 , Z t-1 |C))] ∇ ϕ L = E C,X T ,Z T [ T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)• (Ret t -b low (S t-1 , Z t |C))] (11) Ret t = α 1 log P ψ (C|X 0:T ) + T i=t [α 2 log P ω (Z i |X i , Z i-1 , C) π θ (Z i |S i-1 , Z i-1 , C) + α 3 R i-1 IL ] (12)\nRet t represents the return at timestep t, while b high and b low are the baseline terms for training π θ and π ϕ , respectively. Further, we claim that the advantage functions for training π θ and π ϕ are given by Ret t -b high (S t-1 , Z t-1 |C) and Ret t -b low (S t-1 , Z t |C), respectively, based on which we can optimize the hierarchical policy via off-the-shelf RL algorithms. In our implementation, we adopt PPO (Schulman et al., 2017) to train π θ and π ϕ with their corresponding advantage functions, respectively. This forms a novel Hierarchical RL (HRL) algorithm -HPPO, which has shown superiority over RL and HRL baselines in our experiment.\nIn Appendix D.5, we provide the overall algorithm as Algorithm 1 and illustrate the interactions among the networks in MH-AIRL in Figure 5." }, { "figure_ref": [], "heading": "Evaluation and Main Results", "publication_ref": [ "b53", "b22", "b59", "b29" ], "table_ref": [], "text": "MH-AIRL is proposed to learn a multi-task hierarchical policy from a mixture of (unstructured) expert demonstrations.\nThe learned policy can be adopted to any task sampled from a distribution of tasks. In this section: (1) We provide an ablation study with respect to the three main components of our algorithm: context-based multi-task/meta learning, option/hierarchical learning, and imitation learning.\n(2)\nWe show that the hierarchical policy learning can significantly improve the agent's performance on challenging long-horizontal tasks.\n(3) Through qualitative and quantitative results, we show that our algorithm can capture the subtask structure within the expert demonstrations and that the learned basic skills for the subtasks (i.e., options) can be transferred to tasks not within the task distribution to aid learning, for better transferability.\nThe evaluation is based on three Mujoco (Todorov et al., 2012) locomotion tasks and the Kitchen task from the D4RL benchmark (Fu et al., 2020). All of them are with continuous state & action spaces, and contain compositional subtask structures to make them long-horizontal and a lot more challenging. To be specific: (1) In HalfCheetah-MultiVel, the goal velocity v is controlled by a 1-dim Gaussian context variable. The HalfCheetah agent is required to speed up to v/2 first, then slow down to 0, and finally achieve v.\n(2) In Walker-RandParam, the Walker agent must achieve the goal velocity 4 in three stages, i.e., [2, 0, 4]. 
Meanwhile, the mass of the agent changes among different tasks, which is controlled by an 8-dim Gaussian context variable.\n(3) In Ant-MultiGoal, a 3D Ant agent needs to reach a certain goal, which is different in each task and controlled by a 2-dim Gaussian context variable (polar coordinates). Moreover, the agent must go through certain subgoals. For Twenty-four permutations are chosen and so 24 tasks, each of which is sampled with the same probability and controlled by a discrete context variable (input as one-hot vectors). Note that the states of the robot agents only contain their original states (defined by Mujoco or D4RL) and the task context variable, and do not include the actual task information, like the goal (velocity) and subgoal list. The task information is randomly generated by a parametric model of which the parameter is used as the context variable (i.e., the Gaussian vectors as mentioned above). The mapping between context variables and true task information is unknown to the learning agent. This makes the learning problem more challenging and our algorithm more general, since a vector of standard normal variables can be used to encode multiple types of task information.\nThese scenarios are designed to evaluate our algorithm on a wide range of multi-task setups. First, the agent needs to adapt across different reward functions in (1) and (3) since the rewarding state changes, and adjust across different transition functions in (2) since the mass change will influence the robotic dynamics. Next, different from (1)-(3), discrete context variables are adopted in (4), and (4) provides more realistic and challenging robotic tasks for evaluation. The expert data for Mujoco tasks are from expert agents trained with an HRL algorithm (Zhang & Whiteson, 2019) and specifically-designed rewards. While for the Kitchen task, we use the human demonstrations provided by (Gupta et al., 2019). Note that the demonstra-tions (state-action pairs only) do not include the rewards, task or option variables. Codes for reproducing all the results are on https://github.com/LucasCJYSDL/Multi-task-Hierarchical-AIRL." }, { "figure_ref": [], "heading": "Effect of Hierarchical Learning", "publication_ref": [ "b56", "b27", "b47" ], "table_ref": [], "text": "In this part, we evaluate whether the use of options can significantly improve the learning for challenging compound multi-task settings. We compare MH-AIRL with SOTA Meta Imitation Learning (MIL) baselines which also aim to train a policy that can be fast adapted to a class of related tasks but does not adopt options in learning. Contextbased MIL, such as PEMIRL (Yu et al., 2019) and SMILE (Ghasemipour et al., 2019), learns a context-conditioned policy that can be adopted to any task from a class by applying the task variable. While the policy learned with Gradientbased MIL, such as MAML-IL (Finn et al., 2017b) which integrates MAML (Finn et al., 2017a) (a commonly-adopted Meta Learning algorithm) and Behavioral Cloning (BC), has to be updated with gradients calculated from trajectories of the new task, before being applied. We select PEMIRL, SMILE, and MAML-IL from the two major categories of MIL as our baselines. All the algorithms are trained with the same expert data, and evaluated on the same set of test tasks (not contained in the demonstrations). 
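The evaluation protocol just described, namely training on the shared expert data and testing on held-out tasks given only the task context variable, could be sketched as follows; the environment and policy interfaces here are placeholders rather than the actual benchmark API.

```python
import numpy as np

def evaluate(policy, env_factory, context_prior, n_tasks=20, episodes_per_task=5):
    """Average episodic reward of a context-conditioned policy on test tasks.

    policy(obs, c) -> action (for MH-AIRL this would wrap the high- and
    low-level policies); env_factory(c) builds the task instance for context c.
    """
    returns = []
    for _ in range(n_tasks):
        c = context_prior()                  # e.g. a Gaussian task code
        env = env_factory(c)
        for _ in range(episodes_per_task):
            obs, done, ep_ret = env.reset(), False, 0.0
            while not done:
                obs, r, done, _ = env.step(policy(obs, c))
                ep_ret += r
            returns.append(ep_ret)
    return float(np.mean(returns))
```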
Note that, unlike the others, MAML-IL requires expert data of each test task besides the task variable when testing and requires the expert demonstrations to be categorized by the task when training, which may limit its use in practical scenarios. Our algorithm is trained based on unstructured demonstrations and is only provided with the task context variable for testing.\nIn Figure 2, we record the change of the episodic reward Our algorithm outperforms the baselines in all tasks, and the improvement is more significant as the task difficulty goes up (i.e., in Ant & Kitchen), which shows the effectiveness of hierarchical policy learning especially in complex tasks. MAML-IL makes use of more expert information in both training and testing, but its performance gets worse on more challenging tasks. This may be because it is based on BC, which is a supervised learning algorithm prone to compounding errors (Ross et al., 2011)." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b31", "b33", "b49" ], "table_ref": [ "tab_0" ], "text": "We proceed to show the effectiveness of the IL and contextbased multi-task learning components through an ablation study. We propose two ablated versions of our algorithm:\n(1) MH-GAIL -a variant by replacing the AIRL component of MH-AIRL with GAIL (Ho & Ermon, 2016) (another commonly-used IL algorithm), of which the details are in Appendix E.2. (2) H-AIRL -a version that does not consider the task context C, which means P ψ (i.e., the posterior for C) is not adopted, L M I is eliminated from Equation 10, and other networks do not use C as input. H-AIRL can be viewed as a newly-proposed HIL algorithm since it integrates the option framework and IL. To be more convincing, we also use two SOTA HIL algorithms -Option-GAIL (Jing et al., 2021) and DI-GAIL (Sharma et al., 2019), as the baselines. The training with the HIL algorithms is based on the same multi-task expert data as ours.\nIn Appendix E.1, we provide the plots of the change of episodic rewards on the test tasks. The training with each algorithm is repeated for 5 times with different random seeds. For each algorithm, we compute the average episodic reward after the learning converges in each of the 5 runs, and record the mean and standard deviation in Table 1 as the convergence performance. First, we can see that our algorithm performs the best on all tasks over the ablations, showing the effectiveness of all the main modules of our algorithm. Second, MH-GAIL performs better than HIL baselines, showing the necessity of including the contextbased multi-task learning component. Without this component, HIL algorithms can only learn an average policy for a class of tasks from the mixture of multi-task demonstrations.\nLast, H-AIRL, the newly-proposed HIL algorithm, performs better than the SOTA HIL baselines on Mujoco tasks. A comprehensive empirical study on H-AIRL is provided in (Chen et al., 2022e)." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Analysis on the Learned Hierarchical Policy", "publication_ref": [], "table_ref": [], "text": "In this section, we do the case study to analyze if the learned hierarchical policy can capture the sub-task structure in the demonstrations, and if the learned options can be transferred to tasks out of the task distribution. 
Capturing the subtask structures in real-life tasks can be essential for the (multitask) policy learning, because: (1) It is more human-like to split a complex task into more manageable subtasks to learn separately and then synthesize these skills to complete the whole task.\n(2) In some circumstances, the basic skills learned from one task setting can be reused in other task settings so the agent only needs to update its high-level policy over the same skill set, significantly lowering the learning difficulty. We test our algorithm on Mujoco-MultiGoal (Figure 3(a)) where the agent is required to achieve a goal corresponding to the task variable (2-dim Gaussian). The expert demonstrations include 100 goal locations in the Cell and the expert agent only moves horizontally or vertically. We test the learned hierarchical policy on 8 sparsely distributed goal locations, of which the trajectories are shown as Figure 3(d). We can see: (1) Four options (labeled with different colors) are discovered based on the demonstrations, each of which corresponds to a particular forward direction (green: up, yellow: down, etc.). These options are shared among the tasks.\n(2) The agent knows how to switch among the options to complete the tasks in stages (i.e., horizontal and vertical) with the learned high-level policy. Thus, our algorithm can effectively capture the compositional structure within the tasks and leverage it in the multi-task policy learning, which explains its superior performance. More analysis results of the learned hierarchical policy on HalfCheetah-MultiVel and Walker-RandParam are in Appendix E.3.\nNext, previous Meta/Multi-task Learning algorithms can We can see that the reuse of options significantly accelerate the learning process and the newly proposed HRL algorithm performs much better than the baselines. Note that the other algorithms are trained for more episodes since they do not adopt the transferred options. We show that, in scenarios for which we do not have expert data or dense rewards, we can make use of the basic skills learned from expert demonstrations for similar task scenarios to effectively aid the learning, which provides a manner to bridge IL and RL." }, { "figure_ref": [], "heading": "Conclusion and Discussion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose MH-AIRL to learn a hierarchical policy that can be adopted to perform a class of tasks, based on a mixture of multi-task unannotated expert data. We evaluate our algorithm on a series of challenging robotic multi-task settings. The results show that the multi-task hierarchical policies trained with MH-AIRL perform significantly better than the monotonic policies learned with SOTA Multi-task/Meta IL baselines. Further, with MH-AIRL, the agent can capture the subtask structures in each task and form a skill for each subtask. The basic skills can be reused for different tasks in that distribution to improve the expert data efficiency, and can even be transferred to more distinct tasks out of the distribution to solve long-timescale sparse-reward RL problems.\nThe primary limitation of our study is the inherent complexity of the overall framework, which comprises five networks as depicted in Figure 5. This complexity arises from our algorithm's integration of AIRL, context-based Meta IL, and the option framework. 
This amalgamation introduces certain challenges in the training process, particularly in determining the optimal number of training iterations for each network within each learning episode. After careful finetuning, we established a training iteration ratio of 1:3:10 for the discriminator, hierarchical policy, and variational posteriors, respectively. Despite this complexity, our evaluations across a wide variety of tasks utilized a consistent set of hyperparameters, showing the robustness of our approach." }, { "figure_ref": [], "heading": "A. Appendix on the Background and Related Works", "publication_ref": [ "b27", "b56" ], "table_ref": [], "text": "A.1. AIRL Framework to Solve Equation 3For each task C, we need to recover the task-specific reward function R ϑ (S, A|C) and policy π(A|S, C) based on the corresponding expert trajectories τ E ∼ π E (•|C) which can be solved by AIRL as mentioned in Section 3.1. Thus, we have the following objective functions for training, which is a simple extension of AIRL (Ghasemipour et al., 2019;Yu et al., 2019):\nmin ϑ E C -E τ E ∼π E (•|C) T -1 t=0 log D ϑ (S t , A t |C) -E τ ∼π(•|C) T -1 t=0 log(1 -D ϑ (S t , A t |C)) (13) max π E C E τ ∼π(•|C) T -1 t=0 log D ϑ (S t , A t |C) -log(1 -D ϑ (S t , A t |C))(14)\nwhere\nD ϑ (S, A|C) = exp(f ϑ (S, A|C))/[exp(f ϑ (S, A|C)) + π(A|S, C)]." }, { "figure_ref": [ "fig_3" ], "heading": "A.2. Implementation of the Hierarchical Policy in the One-step Option Model", "publication_ref": [ "b36", "b54", "b35", "b12", "b4", "b39", "b11" ], "table_ref": [], "text": "In this section, we give out the detailed structure design of the hierarchical policy introduced in Section 3.3, i.e., π θ (Z|S, Z ′ ) and π ϕ (A|S, Z), which is proposed in (Li et al., 2021). This part is not our contribution, so we only provide the details for the purpose of implementation.\nAs mentioned in Section 3.3, the structure design is based on the Multi-Head Attention (MHA) mechanism (Vaswani et al., 2017). An attention function can be described as mapping a query, i.e., q ∈ R d k , and a set of key-value pairs, i.e.,\nK = [k 1 • • • k n ] T ∈ R n×d k and V = [v 1 • • • v n ] T ∈ R n×dv\n, to an output. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. To be specific:\nAttention(q, K, V ) = n i=1 exp(q • k i ) n j=1 exp(q • k j ) × v i (15\n)\nwhere q, K, V are learnable parameters, exp(q•ki) n j=1 exp(q•kj ) represents the attention weight that the model should pay to item i. In MHA, the query and key-value pairs are first linearly projected h times to get h different queries, keys and values. Then, an attention function is performed on each of these projected versions of queries, keys and values in parallel to get h outputs which are then be concatenated and linearly projected to acquire the final output. The whole process can be represented as Equation 16, where\nW q i ∈ R d k ×d k , W K i ∈ R d k ×d k , W V i ∈ R dv×dv , W O ∈ R ndv×dv are the learnable parameters. M HA(q, K, V ) = Concat(head 1 , • • • , head h )W O , head i = Attention(qW q i , KW K i , V W V i )(16)\nIn this work, the option is represented as an N -dimensional one-hot vector, where N denotes the total number of options to learn. 
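For reference, a minimal sketch of the attention operation in Equation 15 is given below; it uses a single head and illustrative dimensions, so it is a simplified stand-in for the full MHA module of Equation 16 (available, e.g., as torch.nn.MultiheadAttention) rather than the exact implementation.

```python
# A minimal single-head sketch of Equation 15: the output is a softmax-weighted sum of the
# values, with weights given by the compatibility of the query q with each key k_i.
import numpy as np

def attention(q, K, V):
    """q: (d_k,), K: (n, d_k), V: (n, d_v) -> attended output of shape (d_v,)."""
    scores = K @ q                               # q . k_i for each of the n keys
    weights = np.exp(scores - scores.max())      # softmax, shifted for numerical stability
    weights = weights / weights.sum()
    return weights @ V                           # sum_i weight_i * v_i

# Usage: attend over N = 4 option context embeddings of size E = 8 (keys = values = W_C),
# which is how the high-level policy scores the candidate options in the structure given next.
rng = np.random.default_rng(0)
W_C = rng.standard_normal((4, 8))
q = rng.standard_normal(8)
out = attention(q, W_C, W_C)
```
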
The high-level policy π θ (Z|S, Z ′ ) has the structure shown as:\nq = linear(Concat[S, W T C Z ′ ]), dense Z = M HA(q, W C , W C ), Z ∼ Categorical(•|dense Z ) (17) W C ∈ R N ×E\nis the option context matrix of which the i-th row represents the context embedding of the option i. W C is also used as the key and value matrix for the MHA, so d k = d v = E in this case. Note that W C is only updated in the MHA module. Intuitively, π θ (Z|S, Z ′ ) attends to all the option context embeddings in W C according to S and Z ′ . If Z ′ still fits S, π θ (Z|S, Z ′ ) will assign a larger attention weight to Z ′ and thus has a tendency to continue with it; otherwise, a new skill with better compatibility will be sampled.\nAs for the low-level policy π ϕ (A|S, Z), it has the following structure:\ndense A = M LP (S, W T C Z), A ∼ Categorical/Gaussian(•|dense A ) (18\n)\nwhere M LP represents a multilayer perceptron, A follows a categorical distribution for the discrete case or a gaussian distribution for the continuous case. The context embedding corresponding to Z, i.e., W T C Z, instead of Z only, is used as input of π ϕ since it can encode multiple properties of the option Z (Kosiorek et al., 2019). of which the ELBO is:\nmax η,ξ E U ∼P U (•) V ∼Pη(•|U ) log P ξ (U |V ) -βD KL (P η (V |U )||P V (V )) (26\n)\nThe first term can be viewed as the reconstruction accuracy of the data U from V , and the second term works as a regularizer for the distribution of the latent variables V , where D KL denotes the KL Divergence (Cover, 1999). VAE can efficiently solve the posterior inference problem for datasets with continuous latent variables where the true posterior is intractable, through fitting an approximate inference model P ξ (i.e., the variational posterior). The variational lower bound, i.e., ELBO, can be straightforwardly optimized using standard stochastic gradient methods, e.g., SGD (Bottou, 2010).\nAs shown in Figure 4, the optimization of L M I (Equation 4) can be viewed as using π θ and π ϕ as the encoder and P ψ as the decoder and then minimizing the reconstruction error of C from X 0:T , and the regularizer term in Equation 26is neglected (i.e., β = 0). As for the optimization of L DI (Equation 4), at each timestep t, π ϕ and P ω form a conditional VAE between Z t and X t , which is conditioned on the history information and task code, i.e., (X 0:t-1 , Z 0:t-1 , C), with the prior distribution of Z t provided by π θ . Compared with the VAE objective (i.e., Equation 26), π ϕ and P ω in L DI work as the encoder and decoder respectively; π θ provides the prior, which corresponds to P U (•).\nBoth P ψ and P ω use sequential data as input and thus are implemented with RNN. The variational posterior for the task code, i.e., P ψ (C|X 0:T ) takes the trajectory X 0:T as input and is implemented as a bidirectional GRU (Mangal et al., 2019) to make sure that both the beginning and end of the trajectory are equally important. On the other hand, the variational posterior for the local latent code, i.e., P ω (Z t |X 0:t , Z 0:t-1 , C), is modeled as P ω (Z t |X t , Z t-1 , C, h t-1 ), where h t-1 is the internal hidden state of an RNN. h t-1 is recursively maintained with the time series using the GRU rule, i.e., h t-1 = GRU (X t-1 , Z t-2 , h t-2 ), to embed the history information in the trajectory, i.e., X 0:t-1 and Z 0:t-2 . Note that the RNN-based posterior has been used and justified in the process for sequential data (Chung et al., 2015)." }, { "figure_ref": [], "heading": "C. 
Appendix on Hierarchical AIRL C.1. Derivation of the MLE Objective", "publication_ref": [ "b21" ], "table_ref": [], "text": "In Equation 27, Z 0 is a dummy variable which is assigned before the episode begins and never executed. It's implemented as a constant across different episodes, so we have P (S 0 , Z 0 |C) = P (S 0 |C) = µ(S 0 |C), where µ(•|C) denotes the initial state distribution for task C. On the other hand, we have\nP (S t+1 , Z t+1 |S t , Z t , Z t+1 , A t , C) = P (Z t+1 |S t , Z t , Z t+1 , A t , C)P (S t+1 |S t , Z t , Z t+1 , A t , C) = P(S t+1 |S t , A t , C\n), since the transition dynamic P is irrelevant to the local latent codes Z and only related the task context C.\nP ϑ (X 0:T , Z 0:T |C) ∝ µ( S 0 |C) T -1 t=0 P( S t+1 | S t , A t , C) exp(R ϑ ( S t , A t |C)) = P (S 0 , Z 0 |C) T -1 t=0 P (S t+1 , Z t+1 |S t , Z t , Z t+1 , A t , C) exp(R ϑ (S t , Z t , Z t+1 , A t |C)) = µ(S 0 |C) T -1 t=0 P(S t+1 |S t , A t , C) exp(R ϑ (S t , Z t , Z t+1 , A t |C))(27)\nC.2. Justification of the Objective Function Design in Equation 8In this section, we prove that by optimizing the objective functions shown in Equation 8, we can get the solution of the MLE problem shown as Equation 7, i.e., the task-conditioned hierarchical reward function and policy of the expert.\nIn Appendix A of (Fu et al., 2017), they show that the discriminator objective (the first equation in 8) is equivalent to the MLE objective (Equation 7) where f ϑ serves as R ϑ , when D KL (π(τ )||π E (τ )) is minimized. The same conclusion can be acquired by simply replacing {S t , A t , τ } with {(S t , Z t ), (Z t+1 , A t ), (X 0:T , Z 0:T )}, i.e., the extended definition of the state, action and trajectory, in the original proof, which we don't repeat here. Then, we only need to prove that E C [D KL (π θ,ϕ (X 0:T , Z 0:T |C)||π E (X 0:T , Z 0:T |C))] can be minimized through the second equation in 8:\nmax θ,ϕ E C∼prior(•),(X 0:T ,Z 0:T )∼π θ,ϕ (•|C) T -1 t=0 R t IL = max θ,ϕ E C,X 0:T ,Z 0:T T -1 t=0 log D ϑ (S t , Z t , Z t+1 , A t |C) -log(1 -D ϑ (S t , Z t , Z t+1 , A t |C)) = max θ,ϕ E C,X 0:T ,Z 0:T T -1 t=0 f ϑ (S t , Z t , Z t+1 , A t |C) -log π θ,ϕ (Z t+1 , A t |S t , Z t , C) = max θ,ϕ E C,X 0:T ,Z 0:T T -1 t=0 f ϑ (S t , Z t , Z t+1 , A t |C) -log(π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C)) = max θ,ϕ E C,X 0:T ,Z 0:T log T -1 t=0 exp(f ϑ (S t , Z t , Z t+1 , A t |C)) T -1 t=0 π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C) ⇐⇒ max θ,ϕ E C,X 0:T ,Z 0:T log T -1 t=0 exp(f ϑ (S t , Z t , Z t+1 , A t |C))/Z C ϑ T -1 t=0 π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C)(28)\nNote that Z C ϑ = X 0:T ,Z 0:T P ϑ (X 0:T , Z 0:T |C) (defined in Equation 7) is the normalized function parameterized with ϑ, so the introduction of Z C ϑ will not influence the optimization with respect to θ and ϕ and the equivalence at the last step holds. Also, the second equality shows that the task-conditioned hierarchical policy is recovered by optimizing an entropy-regularized policy objective where f ϑ serves as R ϑ . 
Further, we have:\nmax θ,ϕ E C,X 0:T ,Z 0:T log T -1 t=0 exp(f ϑ (S t , Z t , Z t+1 , A t |C))/Z C ϑ T -1 t=0 π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C) = max θ,ϕ E C,X 0:T ,Z 0:T log µ(S 0 |C) T -1 t=0 P(S t+1 |S t , A t , C) T -1 t=0 exp(f ϑ (S t , Z t , Z t+1 , A t |C))/Z C ϑ µ(S 0 |C) T -1 t=0 P(S t+1 |S t , A t , C) T -1 t=0 π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C) = max θ,ϕ E C∼prior(•),(X 0:T ,Z 0:T )∼π θ,ϕ (•|C) log π E (X 0:T , Z 0:T |C) π θ,ϕ (X 0:T , Z 0:T |C) = max θ,ϕ E C∼prior(•) [-D KL (π θ,ϕ (X 0:T , Z 0:T |C)||π E (X 0:T , Z 0:T |C))] ⇐⇒ min θ,ϕ E C∼prior(•) [D KL (π θ,ϕ (X 0:T , Z 0:T |C)||π E (X 0:T , Z 0:T |C))](29)\nwhere the second equality holds because of the definition of π E (Equation 7 with f ϑ serving as R ϑ ) and π θ,ϕ (Equation 34)." }, { "figure_ref": [], "heading": "C.3. Justification of the EM-style Adaption", "publication_ref": [ "b32" ], "table_ref": [], "text": "Given only a dataset of expert trajectories, i.e., D E ≜ {X 0:T }, we can still maximize the likelihood estimation E X 0:T ∼D E [log P ϑ (X 0:T )] through an EM-style adaption: (We use X 0:T , C, Z 0:T instead of X E 0:T , C E , Z E 0:T for simplicity.)\nE X 0:T ∼D E [log P ϑ (X 0:T )] = E X 0:T ∼D E   log   C,Z 0:T P ϑ (X 0:T , C, Z 0:T )     = E X 0:T ∼D E   log   C,Z 0:T P ϑ (X 0:T , C, Z 0:T ) P ϑ (C, Z 0:T |X 0:T ) P ϑ (C, Z 0:T |X 0:T )     = E X 0:T ∼D E log E (C,Z 0:T )∼P ϑ (•|X 0:T ) P ϑ (X 0:T , C, Z 0:T ) P ϑ (C, Z 0:T |X 0:T ) ≥ E X 0:T ∼D E E (C,Z 0:T )∼P ϑ (•|X 0:T ) log P ϑ (X 0:T , C, Z 0:T ) P ϑ (C, Z 0:T |X 0:T ) = E X 0:T ∼D E ,C∼P ψ (•|X 0:T ),Z 0:T ∼Pω(•|X 0:T ,C) log P ϑ (X 0:T , C, Z 0:T ) P ϑ (C, Z 0:T |X 0:T ) = E X 0:T ,C,Z 0:T [log P ϑ (X 0:T , C, Z 0:T )] -E X 0:T ,C,Z 0:T log P ϑ (C, Z 0:T |X 0:T ) = E X 0:T ,C,Z 0:T [log P ϑ (X 0:T , Z 0:T |C)] -E X 0:T ,C,Z 0:T -log prior(C) + log P ϑ (C, Z 0:T |X 0:T )(30)\nwhere we adopt the Jensen's inequality (Jensen, 1906) in the 4-th step. Also, we note that P ψ,ω (C, Z 0:T |X 0:T ) provides a posterior distribution of (C, Z 0:T ), which corresponds to the generating process led by the hierarchical policy. As justified in C.2, the hierarchical policy is trained with the reward function parameterized with ϑ. Thus, the hierarchical policy is a function of ϑ, and the network P ψ,ω corresponding to the hierarchical policy provides a posterior distribution related to the parameter set ϑ, i.e., (C, Z 0:T ) ∼ P ϑ (•|X 0:T ) ⇐⇒ C ∼ P ψ (•|X 0:T ), Z 0:T ∼ P ω (•|X 0:T , C), due to which the 5-th step holds. Note that ϑ, ψ, ω denote the parameters ϑ, ψ, ω before being updated in the M step.\nIn the second equality of Equation 30, we introduce the sampled global and local latent codes in the E step as discussed in Section 4.2. Then, in the M step, we optimize the objectives shown in Equation 4 and 8 for iterations, by replacing the samples in the first term of Equation 8 with (X 0:T , C, Z 0:T ) collected in the E step. This is equivalent to solve the MLE problem: max ϑ E X 0:T ∼D E ,C∼P ψ (•|X 0:T ),Z 0:T ∼Pω(•|X 0:T ,C) [log P ϑ (X 0:T , Z 0:T |C)], which is to maximize a lower bound of the original objective, i.e., E X 0:T ∼D E [log P ϑ (X 0:T )], as shown in the last step of Equation 30. Thus, the original objective can be optimized through this EM procedure. Note that the second term in the last step is a function of the old parameter ϑ so that it can be overlooked when optimizing with respect to ϑ." }, { "figure_ref": [], "heading": "C.4. 
State-only Adaption of H-AIRL", "publication_ref": [ "b21", "b21", "b21" ], "table_ref": [], "text": "In AIRL (Fu et al., 2017), they propose a two-component design for the discriminator as follows:\nf ϑ,ζ (S t , S t+1 ) = g ϑ (S t ) + γh ζ (S t+1 ) -h ζ (S t ) (31)\nwhere γ is the discount factor in MDP. Based on f ϑ,ζ (S t , S t+1 ), they can further get D ϑ,ζ (S t , S t+1 ) which is used in Equation 2 for AIRL training. As proved in (Fu et al., 2017), g ϑ , h ζ and f ϑ,ζ can recover the true reward, value and advantage function, respectively, under deterministic environments with a state-only ground truth reward. With this state-only design, the recovered reward function is disentangled from the dynamics of the environment in which it was trained, so that it can be directly transferred to environments with different transition dynamics, i.e., P, for the policy training. Moreover, the additional shaping term h ζ helps mitigate the effects of unwanted shaping on the reward approximator g ϑ (Ng et al., 1999). This design can also be adopted to H-AIRL (Equation 8) by redefining Equation 31 on the extended state space (first defined in Section 4.2):\nf ϑ,ζ ( S t , S t+1 |C) = g ϑ ( S t |C) + γh ζ ( S t+1 |C) -h ζ ( S t |C) = g ϑ (S t , Z t |C) + γh ζ (S t+1 , Z t+1 |C) -h ζ (S t , Z t |C)(32)\nIn this way, we can recover a hierarchical reward function conditioned on the task context C, i.e., g ϑ (S t , Z t |C), which avoids unwanted shaping and is robust enough to be directly applied in a new task with different dynamic transition distribution from prior(C). The proof can be done by simply replacing the state S in the original proof (Appendix C of (Fu et al., 2017)) with its extended definition S, so we don't repeat it here." }, { "figure_ref": [], "heading": "D. The Proposed Actor-Critic Algorithm for Training", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1. Gradients of the Mutual Information Objective Term", "publication_ref": [], "table_ref": [], "text": "The objective function related to the mutual information:\nL M I = C prior(C) X 0:T ,Z 0:T P (X 0:T , Z 0:T |C) log P ψ (C|X 0:T )(33)\nAfter introducing the one-step Markov assumption to Equation 24, we can calculate P (X 0:T , Z 0:T |C) as Equation 34, where π θ and π ϕ represent the hierarchical policy in the one-step option framework.\nP (X 0:T , Z 0:T |C) = µ(S 0 |C)\nT t=1 π θ (Z t |S t-1 , Z t-1 , C)π ϕ (A t-1 |S t-1 , Z t , C)P(S t |S t-1 , A t-1 , C)(34)\nFirst, the gradient with respect to ψ is straightforward as Equation 35, which can be optimized as a standard likelihood maximization problem.\n∇ ψ L M I = C prior(C) X 0:T ,Z 0:T P (X 0:T , Z 0:T |C)∇ ψ log P ψ (C|X 0:T )(35)\nNow we give out the derivation of ∇ θ L M I :\n∇ θ L M I = C prior(C) X 0:T ,Z 0:T ∇ θ P θ,ϕ (X 0:T , Z 0:T |C) log P ψ (C|X 0:T ) = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C)∇ θ log P θ,ϕ (X 0:T , Z 0:T |C) log P ψ (C|X 0:T ) = E C,X 0:T , Z 0:T [∇ θ log P θ,ϕ (X 0:T , Z 0:T |C) log P ψ (C|X 0:T )] = E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) log P ψ (C|X 0:T )(36)\nwhere the last equality holds because of Equation 34. With similar derivation as above, we have:\n∇ ϕ L M I = E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C) log P ψ (C|X 0:T )(37)" }, { "figure_ref": [ "fig_0" ], "heading": "D.2. 
Gradients of the Directed Information Objective Term", "publication_ref": [], "table_ref": [], "text": "Next, we give out the derivation of the gradients related to the directed information objective term, i.e., L DI . We denote the two terms in Equation 21 as L DI 1 and L DI 2 respectively. Then, we have ∇ θ,ϕ L DI = ∇ θ,ϕ L DI 1 + ∇ θ,ϕ L DI 2 . The derivations are as follows:\n∇ θ L DI 1 = T t=1 C\nprior(C) X0:t,Z0:t\n∇ θ P θ,ϕ (X 0:t , Z 0:t |C) log P ω (Z t |X 0:t , Z 0:t-1 , C) = T t=1 C\nprior(C) X0:t,Z0:t P θ,ϕ (X 0:t , Z 0:t |C)\nt i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) log P t ω = T t=1 C\nprior(C) X0:t,Z0:t X t+1:T , Z t+1:T P θ,ϕ (X 0:T , Z 0:T |C)\nt i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) log P t ω = T t=1 C\nprior(C)\nX 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) t i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) log P t ω = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 log P t ω t i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) T t=i log P t ω = E C,X 0:T , Z 0:T T i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) T t=i log P ω (Z t |X 0:t , Z 0:t-1 , C) = E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T i=t log P ω (Z i |X 0:i , Z 0:i-1 , C)(38)\nwhere P t ω = P ω (Z t |X 0:t , Z 0:t-1 , C) for simplicity. The second equality in Equation 38 holds following the same derivation in Equation 36. Then, the gradient related to L DI 2 is:\n∇ θ L DI 2 = ∇ θ T t=1 H(Z t |X 0:t-1 , Z 0:t-1 , C) = -∇ θ [ T t=1 C\nprior(C)\nX0:t-1,Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C) log P (Z t |X 0:t-1 , Z 0:t-1 , C)] = -∇ θ [ T t=1 C prior(C) X0:t-1,Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C) log π θ (Z t |S t-1 , Z t-1 , C)] = -∇ θ [ C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 log π θ (Z t |S t-1 , Z t-1 , C)] = -[ C prior(C) X 0:T ,Z 0:T ∇ θ P θ,ϕ (X 0:T , Z 0:T |C) T t=1 log π θ (Z t |S t-1 , Z t-1 , C)+ C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C)] (39) = -E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T i=1 log π θ (Z i |S i-1 , Z i-1 , C) + 1 = -E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T i=t log π θ (Z i |S i-1 , Z i-1 , C)(40)\nThe third equality holds because we adopt the one-step Markov assumption, i.e., the conditional probability distribution of a random variable depends only on its parent nodes in the probabilistic graphical model (shown as Figure 1). The fourth equality holds out of similar derivation as steps 2-4 in Equation 38. The last equality can be obtained with Equation 46 in the next section, where we prove that any term which is from\nT i=1 log π θ (Z i |S i-1 , Z i-1 , C) + 1\nand not a function of Z t will not influence the gradient calculation in Equation 39 and 40.\nWith similar derivations, we have:\n∇ ϕ L DI 1 = E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C) T i=t log P ω (Z i |X 0:i , Z 0:i-1 , C)(41)\n∇ ϕ L DI 2 = -E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C) T i=t log π θ (Z i |S i-1 , Z i-1 , C)(42)\nAs for the gradient with respect to ω, it can be computed with:\n∇ ω L DI = ∇ ω L DI 1 = T t=1 C prior(C) X0:t,Z0:t P θ,ϕ (X 0:t , Z 0:t |C)∇ ω log P ω (Z t |X 0:t , Z 0:t-1 , C)(43)\nStill, for each timestep t, it's a standard likelihood maximization problem and can be optimized through SGD." }, { "figure_ref": [], "heading": "D.3. 
Gradients of the Imitation Learning Objective Term", "publication_ref": [], "table_ref": [], "text": "We consider the imitation learning objective term L IL , i.e., the trajectory return shown as:\nL IL = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T -1 i=0 R IL (S i , Z i , Z i+1 , A i |C)(44)\nFollowing the similar derivation with Equation 36, we can get:\n∇ θ L IL = E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T -1 i=0 R IL (S i , Z i , Z i+1 , A i |C)(45)\nFurther, we note that for each t ∈ {1, • • • , T }, ∀i < t -1, we have:\nE C,X 0:T , Z 0:T [∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R IL (S i , Z i , Z i+1 , A i |C)] = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R IL (S i , Z i , Z i+1 , A i |C) = C prior(C) X0:t-1, Z0:t X t:T , Z t+1:T P θ,ϕ (X 0:T , Z 0:T |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R i IL = C prior(C) X0:t-1, Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R i IL (46) = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C) Zt π θ (Z t |S t-1 , Z t-1 , C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R i IL = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)R i IL Zt π θ (Z t |S t-1 , Z t-1 , C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C) = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)R i IL Zt ∇ θ π θ (Z t |S t-1 , Z t-1 , C) = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)R i IL ∇ θ Zt π θ (Z t |S t-1 , Z t-1 , C) = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)R IL (S i , Z i , Z i+1 , A i |C)∇ θ 1 = 0(47)\nwhere\nR i IL = R IL (S i , Z i , Z i+1 , A i |C)\nfor simplicity. We use the law of total probability in the third equality, which we also use in the later derivations. The fifth equality holds because i < t -1 and R\nIL (S i , Z i , Z i+1 , A i |C) is irrelevant to Z t .\nBased on Equation 45 and 46, we have:\n∇ θ L IL = E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T -1 i=t-1 R IL (S i , Z i , Z i+1 , A i |C)(48)\nWith similar derivations, we can obtain:\n∇ ϕ L IL = E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C) T -1 i=t-1 R IL (S i , Z i , Z i+1 , A i |C)(49)" }, { "figure_ref": [], "heading": "D.4. The Overall Unbiased Gradient Estimator", "publication_ref": [], "table_ref": [], "text": "To sum up, the gradients with respect to θ and ϕ can be computed with ∇ θ,ϕ L = ∇ θ,ϕ (α 1 L M I + α 2 L DI + α 3 L IL ), where α 1:3 > 0 are the weights for each objective term and fine-tuned as hyperparameters. Combining Equation (36,38,39,48) and Equation (37,41,42,49), we have the actor-critic learning framework shown as Equation 11, except for the baseline terms, b high and b low .\nFurther, we claim that Equation 11 provides unbiased estimation of the gradients with respect to θ and ϕ. 
We proof this by showing that Generate M trajectories {(C, X 0:T , Z 0:T )} by sampling the task C ∼ prior(•) and then exploring it with π θ and π ϕ 5:\nE T t=1 ∇ θ log π t θ b high (S t-1 , Z t-1 |C) = E T t=1 ∇ ϕ log π t ϕ b low (S t-1 , Z t |C) = 0, as follows: E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = C prior(C) T t=1 X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = C prior(C) T t=1 X0:t-1,Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) X0:t-1,Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C) Zt π θ (Z t |S t-1 , Z t-1 , C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = X0:t-1,Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)b high (S t-1 , Z t-1 |C) Zt ∇ θ π θ (Z t |S t-1 , Z t-1 , C) = X0:t-1,Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)b high (S t-1 , Z t-1 |C)∇ θ 1 = 0 E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)b low (S t-1 , Z t |C) = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)b low (S t-1 , Z t |C) = C prior(C) T t=1 X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C)∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)b low (S t\nUpdate P ψ and P ω by minimizing L M I and L DI (Eq. 9) using SGD with {(C, X 0:T , Z 0:T )} There are in total five networks to learn in our system: the high-level policy π θ , low-level policy π ϕ , discriminator f ϑ , variational posteriors for the task context P ψ and option context P ω . Algorithm 1 shows in details how to coordinate their training process. To be more intuitive, we provide Figure 5 for illustrating the interactions among them. P ψ and P ω are trained with the trajectories (i.e., {(C, X 0:T , Z 0:T )}) generated by the hierarchical policy π θ,ϕ , and can provide the reward signals R 0:T M I and R 0:T DI for training π θ,ϕ , which are defined as α 1 log P ψ (C|X 0:T ) and α 2 log Pω(Zi|X i ,Z i-1 ,C) π θ (Zi|Si-1,Zi-1,C) (i ∈ {1, • • • , T }) in Equation 11, respectively. On the other hand, the discriminator f ϑ is trained to distinguish the expert demonstrations {(C E , X E 0:T , Z E 0:T )} and generated samples {(C, X 0:T , Z 0:T )}, where C E and {Z E 0:T } can be estimated from P ψ and P ω if not provided. Then, the AIRL reward term R 0:T IL can be obtained based on the output of f ϑ . Last, the hierarchical policy π θ,ϕ can be trained by maximizing the return defined with R 0:T M I , R 0:T DI , and R 0:T IL (i.e., Eq. 11). " }, { "figure_ref": [], "heading": "E. Appendix on Evaluation Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "E.1. Plots of the Ablation Study", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "E.2. Implementation Details of MH-GAIL", "publication_ref": [ "b33" ], "table_ref": [], "text": "MH-GAIL is a variant of our algorithm by replacing the AIRL component with GAIL. Similar with Section 4.2, we need to provide an extension of GAIL with the one-step option model, in order to learn a hierarchical policy. The extension method follows Option-GAIL (Jing et al., 2021) which is one of our baselines. 
MH-GAIL also uses an adversarial learning framework that contains a discriminator D ϑ and a hierarchical policy π θ,ϕ , for which the objectives are as follows: \nT -1 t=0 R t IL , R t IL = -log D ϑ (S t , A t , Z t+1 , Z t |C)(50)\nwhere (S, A, Z, Z ′ ) denotes (S t , A t , Z t+1 , Z t ), t = {0, • • • , T -1}. It can be observed that the definition of R t IL have changed. Moreover, the discriminator D ϑ in MH-GAIL is trained as a binary classifier to distinguish the expert demonstrations (labeled as 0) and generated samples (labeled as 1), and does not have a specially-designed structure like the discriminator D ϑ in MH-AIRL, which is defined with f ϑ and π θ,ϕ , so that it cannot recover the expert reward function." }, { "figure_ref": [], "heading": "E.3. Analysis of the Learned Hierarchical Policy on HalfCheetah-MultiVel and Walker-RandParam", "publication_ref": [], "table_ref": [], "text": "First, we randomly select 6 task contexts for HalfCheetah-MultiVel and visualize the recovered hierarchical policy as the velocity change of each episode in Figure 7(a). It can be observed that the agent automatically discovers two options (Option 1: blue, Option 2: orange) and adopts Option 1 for the acceleration phase (0 → v/2 or 0 → v) and Option 2 for the deceleration phase (v/2 → 0). This shows that MH-AIRL can capture the compositional structure within the tasks very well and transfer the learned basic skills to boost multi-task policy learning." }, { "figure_ref": [], "heading": "B. Appendix on the Hierarchical Latent context Structure B.1. A Lower Bound of the Directed Information Objective", "publication_ref": [ "b24", "b12", "b51" ], "table_ref": [], "text": "In this section, we give out the derivation of a lower bound of the directed information from the trajectory sequence X 0:T to the local latent context sequence Z 0:T conditioned on the global latent context C, i.e., I(X 0:T → Z 0:T |C) as follows:\nIn Equation 19, I(V ar 1 ; V ar 2 |V ar 3 ) denotes the conditional mutual information, H(V ar 1 |V ar 2 ) denotes the conditional entropy, and the inequality holds because of the basic property related to conditional entropy: increasing conditioning cannot increase entropy (Galvin, 2014).\n, where the other variables in X 0:t-1 , Z 0:t-1 are neglected due to the one-step Markov assumption, and more convenient to obtain. Further, the second term in the last step can be processed as follows: \nwhere D KL (•) denotes the Kullback-Leibler (KL) Divergence which is non-negative (Cover, 1999), P ω (Z t |X 0:t , Z 0:t-1 , C) is a variational estimation of the posterior distribution of Z t given X 0:t and Z 0:t-1 , i.e., P (Z t |X 0:t , Z 0:t-1 , C), which is modeled as a recurrent neural network with the parameter set ω in our work. Based on Equation 19and 20, we can obtain a lower bound of I(X 0:T → Z 0:T |C) denoted as L DI :\nNote that the joint distribution P (X 0:t , Z 0:t , C) has a recursive definition as follows:\nwhere µ(S 0 |C) denotes the distribution of the initial states for task C. Equation 23 holds because A -1 and Z 0 are dummy variables which are only for simplifying notations and never executed and set to be constant across different tasks. 
Based on Equation 22 and 23, we can get:\nIn Equation 24, prior(C) is the known prior distribution of the task context C,\n) can be replaced with π θ and π ϕ , respectively, due to the one-step Markov assumption.\nTo sum up, we can adopt the high-level policy, low-level policy and variational posterior to get an estimation of the lower bound of the directed information objective through Monte Carlo sampling (Sutton & Barto, 2018) according to Equation 21and 24, which can then be used to optimize the three networks." }, { "figure_ref": [], "heading": "B.2. A Lower Bound of the Mutual Information Objective", "publication_ref": [], "table_ref": [], "text": "In this section, we give out the derivation of a lower bound of the mutual information between the trajectory sequence X 0:T and its corresponding task context C, i.e., I(X 0:T ; C). P (X 0:T , C) log P ψ (C|X 0:T )\nIn Equation 25, H(•) denotes the entropy, prior(C) denotes the known prior distribution of the task context C, P (X 0:T , Z 0:T |C) can be calculated with Equation 24 by setting t = T , and P ψ (C|X 0:T ) is a variational estimation of the posterior distribution P (C|X 0:T ) which is implemented as a recurrent neural network with the parameter set ψ. Note that the inequality holds because the KL-Divergence, i.e., D KL (•), is non-negative." }, { "figure_ref": [], "heading": "B.3. The Analogy with the VAE Framework", "publication_ref": [ "b34", "b30" ], "table_ref": [], "text": "Variational Autoencoder (VAE) (Kingma & Welling, 2014) learns a probabilistic encoder P η (V |U ) and decoder P ξ (U |V ) which map between data U and latent variables V by optimizing the evidence lower bound (ELBO) on the marginal distribution P ξ (U ), assuming the prior distributions P U (•) and P V (•) over the data and latent variables respectively. The authors of (Higgins et al., 2017) extend the VAE approach by including a parameter β to control the capacity of the latent V , Second, we note that, for some circumstances, the basic skills need to be conditioned on the task context. For the Mujoco-MultiGoal/MultiVel tasks, the basic skills (e.g., Option 2: decreasing the velocity) can be directly transferred among the tasks in the class and the agent only needs to adjust its high-level policy according to the task variable (e.g., adopting Option 2 when achieving v/2). However, for tasks like Walker-RandParam, the skills need to adapt to the tasks, since the mass of the agent changes and so do the control dynamics. As shown in Figure 7(b), the learning performance would drop without conditioning the low-level policy (i.e., option) on the task context, i.e., MH-AIRL-no-cnt." } ]
Multi-task Imitation Learning (MIL) aims to train a policy capable of performing a distribution of tasks based on multi-task expert demonstrations, which is essential for general-purpose robots. Existing MIL algorithms suffer from low data efficiency and poor performance on complex long-horizon tasks. We develop Multi-task Hierarchical Adversarial Inverse Reinforcement Learning (MH-AIRL) to learn hierarchically-structured multi-task policies, which is more beneficial for compositional tasks with long horizons and has higher expert data efficiency through identifying and transferring reusable basic skills across tasks. To realize this, MH-AIRL effectively synthesizes context-based multi-task learning, AIRL (an IL approach), and hierarchical policy learning. Further, MH-AIRL can be applied to demonstrations without task or skill annotations (i.e., state-action pairs only), which are more accessible in practice. Theoretical justifications are provided for each module of MH-AIRL, and evaluations on challenging multi-task settings demonstrate superior performance and transferability of the multi-task policies learned with MH-AIRL as compared to SOTA MIL baselines.
Multi-task Hierarchical Adversarial Inverse Reinforcement Learning
[ { "figure_caption": "Figure 1 .1Figure 1. Illustration of the hierarchical latent context structure and its implementation with the one-step option model.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2. (a) Multi-stage Mujoco locomotion tasks, where (1)-(3) show Ant, HalfCheetah, and Walker agent, respectively. (d) The Kitchen task. (b)(c)(e)(f) Comparison results of MH-AIRL with SOTA Meta Imitation Learning baselines on the four challenging tasks.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. (a) The environment for multi-task learning with MH-AIRL. (d) Visualization of the learned hierarchical policy in (a). (b)(c) New task settings for evaluating the learned options in (a). (e)(f) Comparison results on (b)(c) between our proposed HRL algorithm (i.e., HPPO) initialized with the transferred options (i.e., HPPO-init) and other SOTA HRL and RL baselines.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The analogy of our learning framework with the VAE structure.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "6 :6Figure 5. Interactions among the five networks in our learning system.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Comparison results of MH-AIRL with the ablated versions (MH-GAIL & H-AIRL) and SOTA Hierarchical Imitation Learning (HIL) baselines (Option-GAIL & DI-GAIL) on the four evaluation tasks. Our algorithm outperforms the baselines in all the tasks, especially in the more challenging ones (Ant & Kitchen). MH-GAIL performs better than the other baselines which do not contain the multi-task learning component. H-AIRL, an ablation and HIL algorithm, has better performance than other SOTA HIL baselines on the Mujoco tasks.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "EEC∼prior(•),(S,A,Z,Z ′ )∼π E (•|C) [log(1 -D ϑ (S, A, Z, Z ′ |C))] + E C∼prior(•),(S,A,Z,Z ′ )∼π θ,ϕ (•|C) [log D ϑ (S, A, Z, Z ′ |C)] C∼prior(•),(X 0:T ,Z 0:T )∼π θ,ϕ (•|C)", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Numeric results of the ablation study (i.e., the sum of rewards for each step in an episode) on the test tasks as the number of training samples increases. 
The training is repeated 5 times with different random seeds for each algorithm, of which the mean and standard deviation are shown as the solid line and shadow area, respectively.", "figure_data": "HalfCheetah-MultiVel Walker-RandParamAnt-MultiGoalKitchen-MultiSeqExpert376.55 ± 11.12399.95 ± 1.431593.17 ± 40.91400.00 ± 0.00MH-AIRL (ours)292.79 ± 15.99357.59 ± 12.101530.82 ± 15.18 352.59 ± 15.12MH-GAIL (ours)211.32 ± 52.74268.92 ± 49.291064.78 ± 180.28212.13 ± 25.25H-AIRL (ours)126.85 ± 21.92225.48 ± 12.87533.80 ± 40.6983.97 ± 10.95Option-GAIL-44.89 ± 51.95132.01 ± 54.75383.05 ± 13.52204.73 ± 56.41DI-GAIL56.77 ± 49.76225.22 ± 14.01328.06 ± 19.89131.79 ± 53.29", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "-1 , Z t |C) P θ,ϕ (X 0:t , Z 0:t |C)∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)b low (S t-1 , Z t |C) P θ,ϕ (X 0:t-1 , Z 0:t |C)b low (S t-1 , Z t |C) P θ,ϕ (X 0:t-1 , Z 0:t |C)b low (S t-1 , Z t |C) ∇ ϕ π ϕ (A t-1 |S t-1 , Z t , C) P θ,ϕ (X 0:t-1 , Z 0:t |C)b low (S t-1 , Z t |C)∇ ϕ At-1 π ϕ (A t-1 |S t-1 , Z t , C) P θ,ϕ (X 0:t-1 , Z 0:t |C)b low (S t-1 , Z t |C)∇ ϕ 1 = 0Algorithm 1 Multi-task Hierarchical Adversarial Inverse Reinforcement Learning (MH-AIRL) 1: Input: Prior distribution of the task variable prior(C), expert demonstrations {X E 0:T } (If the task or option annotations, i.e., {C E } or {Z E 0:T }, are provided, the corresponding estimation in Step 6 is not required.) 2: Initialize the hierarchical policy π θ and π ϕ , discriminator f ϑ , posteriors for the task context P ψ and option choice P ω 3: for each training episode do", "figure_data": "4:T=prior(C)Ct=1 X0:t,Z0:t=prior(C)CT= π = C prior(C) t=1 X0:t-1,Z0:t At-1 prior(C) TCt=1 X0:t-1,Z0:tAt-1T=prior(C)Ct=1 X0:t-1,Z0:tT=prior(C)Ct=1 X0:t-1,Z0:t", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
Jiayu Chen; Dipesh Tamboli; Tian Lan; Vaneet Aggarwal
[ { "authors": "A O Al-Abbasi; A Ghosh; V Aggarwal", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b0", "title": "Deeppool: Distributed model-free algorithm for ride-sharing using deep reinforcement learning", "year": "2019" }, { "authors": "J Andreas; D Klein; S Levine", "journal": "", "ref_id": "b1", "title": "Modular multitask reinforcement learning with policy sketches", "year": "2017" }, { "authors": "A Balachandran; V Aggarwal; E Halepovic; J Pang; S Seshan; S Venkataraman; H Yan", "journal": "", "ref_id": "b2", "title": "Modeling web quality-of-experience on cellular networks", "year": "2014" }, { "authors": "X Bian; O M Maldonado; S Hadfield", "journal": "IEEE", "ref_id": "b3", "title": "SKILL-IL: disentangling skill and knowledge in multitask imitation learning", "year": "2022" }, { "authors": "L Bottou", "journal": "Springer", "ref_id": "b4", "title": "Large-scale machine learning with stochastic gradient descent", "year": "2010" }, { "authors": "J Chen; A K Umrawal; T Lan; V Aggarwal", "journal": "", "ref_id": "b5", "title": "Deepfreight: A model-free deep-reinforcement-learning-based algorithm for multi-transfer freight delivery", "year": "2021" }, { "authors": "J Chen; V Aggarwal; T Lan; Odpp", "journal": "", "ref_id": "b6", "title": "A unified algorithm framework for unsupervised option discovery based on determinantal point process", "year": "2022" }, { "authors": "J Chen; J Chen; T Lan; V Aggarwal", "journal": "IEEE Transactions on Artificial Intelligence", "ref_id": "b7", "title": "Multi-agent covering option discovery based on kronecker product of factor graphs", "year": "2022" }, { "authors": "J Chen; J Chen; T Lan; V Aggarwal", "journal": "", "ref_id": "b8", "title": "Scalable multi-agent covering option discovery based on kronecker graphs", "year": "2022" }, { "authors": "J Chen; M Haliem; T Lan; V Aggarwal", "journal": "", "ref_id": "b9", "title": "Multi-agent deep covering option discovery", "year": "2022" }, { "authors": "J Chen; T Lan; V Aggarwal", "journal": "", "ref_id": "b10", "title": "Option-aware adversarial inverse reinforcement learning for robotic control", "year": "2022" }, { "authors": "J Chung; K Kastner; L Dinh; K Goel; A C Courville; Y Bengio", "journal": "", "ref_id": "b11", "title": "A recurrent latent variable model for sequential data", "year": "2015" }, { "authors": "T M Cover", "journal": "John Wiley & Sons", "ref_id": "b12", "title": "Elements of information theory", "year": "1999" }, { "authors": "M P Deisenroth; P Englert; J Peters; D Fox", "journal": "IEEE", "ref_id": "b13", "title": "Multitask policy search for robotics", "year": "2014" }, { "authors": "C Devin; D Geng; P Abbeel; T Darrell; S Levine", "journal": "", "ref_id": "b14", "title": "Compositional plan vectors", "year": "2019" }, { "authors": "N Duminy; S M Nguyen; J Zhu; D Duhaut; J Kerdreux", "journal": "", "ref_id": "b15", "title": "Intrinsically motivated open-ended multi-task learning using transfer learning to discover task hierarchy", "year": "2021" }, { "authors": "S R Eddy", "journal": "Current Opinion in Structural Biology", "ref_id": "b16", "title": "Hidden markov models", "year": "1996" }, { "authors": "C Finn; P Abbeel; S Levine", "journal": "", "ref_id": "b17", "title": "Model-agnostic metalearning for fast adaptation of deep networks", "year": "2017" }, { "authors": "C Finn; T Yu; T Zhang; P Abbeel; S Levine", "journal": "", "ref_id": "b18", "title": "Oneshot visual imitation learning via meta-learning", "year": "2017" }, { 
"authors": "C Florensa; Y Duan; P Abbeel", "journal": "", "ref_id": "b19", "title": "Stochastic neural networks for hierarchical reinforcement learning", "year": "2017" }, { "authors": "R Fox; R Berenstein; I Stoica; K Goldberg", "journal": "IEEE", "ref_id": "b20", "title": "Multitask hierarchical imitation learning for home automation", "year": "2019" }, { "authors": "J Fu; K Luo; S Levine", "journal": "", "ref_id": "b21", "title": "Learning robust rewards with adversarial inverse reinforcement learning", "year": "2017" }, { "authors": "J Fu; A Kumar; O Nachum; G Tucker; S Levine; D4rl", "journal": "", "ref_id": "b22", "title": "datasets for deep data-driven reinforcement learning", "year": "2020" }, { "authors": "X Fu; D Peddireddy; V Aggarwal; M B Jun; -G ", "journal": "IEEE Transactions on Industrial Informatics", "ref_id": "b23", "title": "Improved dexel representation: A 3-d cnn geometry descriptor for manufacturing cad", "year": "2021" }, { "authors": "D Galvin", "journal": "", "ref_id": "b24", "title": "Three tutorial lectures on entropy and counting", "year": "2014" }, { "authors": "C Gao; Y Jiang; F Chen", "journal": "", "ref_id": "b25", "title": "Transferring hierarchical structures with dual meta imitation learning", "year": "2022" }, { "authors": "N Geng; Q Bai; C Liu; T Lan; V Aggarwal; Y Yang; M Xu", "journal": "IEEE Transactions on Vehicular Technology", "ref_id": "b26", "title": "A reinforcement learning framework for vehicular network routing under peak and average constraints", "year": "2023" }, { "authors": "S K S Ghasemipour; S Gu; R S Zemel; Smile", "journal": "", "ref_id": "b27", "title": "Scalable meta inverse reinforcement learning through context-conditional policies", "year": "2019" }, { "authors": "G Gonzalez; M Balakuntala; M Agarwal; T Low; B Knoth; A W Kirkpatrick; J Mckee; G Hager; V Aggarwal; Y Xue", "journal": "IEEE Transactions on Medical Robotics and Bionics", "ref_id": "b28", "title": "Asap: A semi-autonomous precise system for telesurgery during communication delays", "year": "2023" }, { "authors": "A Gupta; V Kumar; C Lynch; S Levine; K Hausman", "journal": "", "ref_id": "b29", "title": "Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning", "year": "2019" }, { "authors": "I Higgins; L Matthey; A Pal; C P Burgess; X Glorot; M M Botvinick; S Mohamed; A Lerchner", "journal": "", "ref_id": "b30", "title": "betavae: Learning basic visual concepts with a constrained variational framework", "year": "2017" }, { "authors": "J Ho; S Ermon", "journal": "", "ref_id": "b31", "title": "Generative adversarial imitation learning", "year": "2016" }, { "authors": "J L W V Jensen", "journal": "Acta mathematica", "ref_id": "b32", "title": "Sur les fonctions convexes et les inégalités entre les valeurs moyennes", "year": "1906" }, { "authors": "M Jing; W Huang; F Sun; X Ma; T Kong; C Gan; L Li", "journal": "", "ref_id": "b33", "title": "Adversarial option-aware hierarchical imitation learning", "year": "2021" }, { "authors": "D P Kingma; M Welling", "journal": "", "ref_id": "b34", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "A R Kosiorek; S Sabour; Y W Teh; G E Hinton", "journal": "", "ref_id": "b35", "title": "Stacked capsule autoencoders", "year": "2019" }, { "authors": "C Li; D Song; D Tao", "journal": "", "ref_id": "b36", "title": "The skill-action architecture: Learning abstract action embeddings for reinforcement learning", "year": "2021" }, { "authors": "X Luo; X Ma; M Munden; Y.-J Wu; Y Jiang", 
"journal": "Journal of Transportation Engineering, Part A: Systems", "ref_id": "b37", "title": "A multisource data approach for estimating vehicle queue length at metered on-ramps", "year": "2022" }, { "authors": "X Ma; A Karimpour; Y.-J Wu", "journal": "Transportation Research Part A: Policy and Practice", "ref_id": "b38", "title": "Statistical evaluation of data requirement for ramp metering performance assessment", "year": "2020" }, { "authors": "S Mangal; P Joshi; R Modak; Vs; Gru", "journal": "", "ref_id": "b39", "title": "vs. bidirectional RNN for script generation", "year": "2019" }, { "authors": "J Massey", "journal": "", "ref_id": "b40", "title": "Causality, feedback and directed information", "year": "1990" }, { "authors": "A Y Ng; S Russell", "journal": "", "ref_id": "b41", "title": "Algorithms for inverse reinforcement learning", "year": "" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b42", "title": "", "year": "2000" }, { "authors": "A Y Ng; D Harada; S Russell", "journal": "", "ref_id": "b43", "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "year": "" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b44", "title": "", "year": "1999" }, { "authors": "D Peddireddy; X Fu; A Shankar; H Wang; B G Joung; V Aggarwal; J W Sutherland; M B Jun; -G ", "journal": "Journal of Manufacturing Processes", "ref_id": "b45", "title": "Identifying manufacturability and machining processes using deep 3d convolutional networks", "year": "2021" }, { "authors": "D Pomerleau", "journal": "Neural Computation", "ref_id": "b46", "title": "Efficient training of artificial neural networks for autonomous navigation", "year": "1991" }, { "authors": "S Ross; G J Gordon; D Bagnell", "journal": "", "ref_id": "b47", "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "year": "2011" }, { "authors": "J Schulman; F Wolski; P Dhariwal; A Radford; O Klimov", "journal": "", "ref_id": "b48", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "M Sharma; A Sharma; N Rhinehart; K M Kitani", "journal": "", "ref_id": "b49", "title": "Directed-info GAIL: learning hierarchical policies from unsegmented demonstrations using directed information", "year": "2019" }, { "authors": "A Singh; E Jang; A Irpan; D Kappler; M Dalal; S Levine; M Khansari; C Finn", "journal": "IEEE", "ref_id": "b50", "title": "Scalable multitask imitation learning with autonomous improvement", "year": "2020" }, { "authors": "R S Sutton; A G Barto", "journal": "MIT press", "ref_id": "b51", "title": "Reinforcement learning: An introduction", "year": "2018" }, { "authors": "R S Sutton; D Precup; S P Singh", "journal": "Artificial Intelligence", "ref_id": "b52", "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "year": "1999" }, { "authors": "E Todorov; T Erez; Y Tassa; Mujoco", "journal": "IEEE", "ref_id": "b53", "title": "A physics engine for model-based control", "year": "2012" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L Kaiser; I Polosukhin", "journal": "", "ref_id": "b54", "title": "Attention is all you need", "year": "2017" }, { "authors": "P Wang; D Liu; J Chen; H Li; C Chan", "journal": "IEEE", "ref_id": "b55", "title": "Decision making for autonomous driving via augmented adversarial inverse reinforcement learning", "year": "2021" }, { "authors": "L Yu; T Yu; C Finn; S Ermon", "journal": "", 
"ref_id": "b56", "title": "Meta-inverse reinforcement learning with probabilistic context variables", "year": "2019" }, { "authors": "T Yu; P Abbeel; S Levine; C Finn", "journal": "", "ref_id": "b57", "title": "One-shot hierarchical imitation learning of compound visuomotor tasks", "year": "2018" }, { "authors": "T Yu; C Finn; A Xie; S Dasari; T Zhang; P Abbeel; S Levine", "journal": "", "ref_id": "b58", "title": "One-shot imitation from observing humans via domain-adaptive meta-learning", "year": "2018" }, { "authors": "S Zhang; S Whiteson", "journal": "", "ref_id": "b59", "title": "DAC: the double actor-critic architecture for learning options", "year": "2019" }, { "authors": "B D Ziebart; A L Maas; J A Bagnell; A K Dey", "journal": "", "ref_id": "b60", "title": "Maximum entropy inverse reinforcement learning", "year": "2008" } ]
[ { "formula_coordinates": [ 3, 55.44, 433.81, 234.67, 84.7 ], "formula_id": "formula_0", "formula_text": "Z ϑ = τ E P ϑ (τ E ). max ϑ E τ E [log P ϑ (τ E )] = max ϑ E τ E log P ϑ (τ E ) Z ϑ , P ϑ (τ E ) = µ(S 0 ) T -1 t=0 P St+1 St,At exp(R ϑ (S t , A t ))(1)" }, { "formula_coordinates": [ 3, 80.75, 617.49, 209.35, 30.2 ], "formula_id": "formula_1", "formula_text": "min ϑ T -1 t=0 -E τ E log D t ϑ -E τ log(1 -D t ϑ )(2)" }, { "formula_coordinates": [ 3, 80.07, 656.63, 159.29, 15.05 ], "formula_id": "formula_2", "formula_text": "D t ϑ = D ϑ (S t , A t ) = exp(f ϑ (St,At)) exp(f ϑ (St,At))+π(" }, { "formula_coordinates": [ 3, 319.93, 424.67, 194.08, 49.49 ], "formula_id": "formula_3", "formula_text": "max ϑ E C∼prior(•),τ E ∼π E (•|C) [log P ϑ (τ E |C)] , P ϑ (τ E |C) ∝ µ(S 0 |C) T -1 t=0 P St+1 St,At,C e R ϑ (" }, { "formula_coordinates": [ 4, 306.28, 375.08, 235.52, 32.87 ], "formula_id": "formula_4", "formula_text": "X 0:T = (X 0 , • • • , X T ) = ((A -1 , S 0 ), • • • , (A T -1 , S T )) = τ . A -1 is a dummy variable." }, { "formula_coordinates": [ 4, 329.62, 615.55, 212.49, 69.26 ], "formula_id": "formula_5", "formula_text": "L M I ≜ H(C) + E X T ,Z T ,C log P ψ (C|X 0:T ) L DI ≜ T t=1 [ E X t ,Z t ,C log P ω (Z t |X 0:t , Z 0:t-1 , C) + H(Z t |X 0:t-1 , Z 0:t-1 , C)](4)" }, { "formula_coordinates": [ 5, 114.33, 233.57, 95.67, 9.65 ], "formula_id": "formula_6", "formula_text": "H(Z t |X 0:t-1 , Z 0:t-1 , C" }, { "formula_coordinates": [ 5, 55.44, 293.34, 234.67, 79.64 ], "formula_id": "formula_7", "formula_text": "C ∼ prior(•), (X 0:t , Z 0:t ) ∼ P θ,ϕ (•|C), where P θ,ϕ (X 0:t , Z 0:t |C) is calculated by: (See Appendix B.1.) µ(S 0 |C) t i=1 [π θ (Z i |S i-1 , Z i-1 , C)• π ϕ (A i-1 |S i-1 , Z i , C)P Si Si-1,Ai-1,C ](5)" }, { "formula_coordinates": [ 5, 333.43, 175.7, 208.68, 26.94 ], "formula_id": "formula_8", "formula_text": "π θ (Z t+1 |S t , Z t , C) • π ϕ (A t |S t , Z t+1 , C) = π θ,ϕ (Z t+1 , A t |S t , Z t , C) = π θ,ϕ ( A t | S t , C)(6)" }, { "formula_coordinates": [ 5, 307.44, 226.6, 235.17, 23.21 ], "formula_id": "formula_9", "formula_text": "ϕ (A t |S t , Z t , Z t+1 , C) = π ϕ (A t |S t , Z t+1 , C)), S t ≜ (S t , Z t ) and A t ≜ (Z t+1 , A t )" }, { "formula_coordinates": [ 5, 322.76, 338.1, 219.35, 68.85 ], "formula_id": "formula_10", "formula_text": "max ϑ E C,(X T ,Z T )∼π E (•|C) log P ϑ (X T , Z T |C) , P ϑ (X 0:T , Z 0:T |C) ∝ P ϑ (X 0:T , Z 0:T |C) = µ(S 0 |C) T -1 t=0 P St+1 St,At,C e R ϑ (St,Zt,Zt+1,At|C)(7)" }, { "formula_coordinates": [ 5, 307.44, 429.81, 234, 22.86 ], "formula_id": "formula_11", "formula_text": "C E ∼ prior(•), (X E 0:T , Z E 0:T ) ∼ π E (•|C E )" }, { "formula_coordinates": [ 5, 317.53, 498.86, 224.58, 99.86 ], "formula_id": "formula_12", "formula_text": "min ϑ -E C E ,(X E 0:T ,Z E 0:T ) T -1 t=0 log D ϑ ( S E t , A E t |C E ) -E C,(X 0:T ,Z 0:T ) T -1 t=0 log(1 -D ϑ ( S t , A t |C)), max θ,ϕ L IL = E C,(X 0:T ,Z 0:T ) T -1 t=0 R t IL (8)" }, { "formula_coordinates": [ 5, 307.44, 609.17, 234, 30.76 ], "formula_id": "formula_13", "formula_text": "R t IL = log D t ϑ -log(1-D t ϑ ) and D t ϑ = D ϑ ( S t , A t |C) = exp(f ϑ ( St, At|C)) exp(f ϑ ( St, At|C))+π θ,ϕ ( At| St,C) ." 
}, { "formula_coordinates": [ 6, 55.44, 92.55, 179.82, 12.84 ], "formula_id": "formula_14", "formula_text": "C E ∼ P ψ (•|X E 0:T ), Z E 0:T ∼ P ω (•|X E 0:T , C E )" }, { "formula_coordinates": [ 6, 63.65, 427.32, 226.45, 55.43 ], "formula_id": "formula_15", "formula_text": "∇ ψ L M I = E C,X T ,Z T ∇ ψ log P ψ (C|X 0:T ) ∇ ω L DI = T t=1 E C,X t ,Z t ∇ ω log P ω (Z t |X t , Z t-1 , C)(9)" }, { "formula_coordinates": [ 6, 106.67, 527.17, 183.44, 11.72 ], "formula_id": "formula_16", "formula_text": "L = α 1 L M I + α 2 L DI + α 3 L IL (10)" }, { "formula_coordinates": [ 6, 67.94, 68.62, 474.17, 648.59 ], "formula_id": "formula_17", "formula_text": "∇ θ L = E C,X T ,Z T [ T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C)• (Ret t -b high (S t-1 , Z t-1 |C))] ∇ ϕ L = E C,X T ,Z T [ T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)• (Ret t -b low (S t-1 , Z t |C))] (11) Ret t = α 1 log P ψ (C|X 0:T ) + T i=t [α 2 log P ω (Z i |X i , Z i-1 , C) π θ (Z i |S i-1 , Z i-1 , C) + α 3 R i-1 IL ] (12)" }, { "formula_coordinates": [ 13, 106.87, 155.05, 435.24, 68.77 ], "formula_id": "formula_18", "formula_text": "min ϑ E C -E τ E ∼π E (•|C) T -1 t=0 log D ϑ (S t , A t |C) -E τ ∼π(•|C) T -1 t=0 log(1 -D ϑ (S t , A t |C)) (13) max π E C E τ ∼π(•|C) T -1 t=0 log D ϑ (S t , A t |C) -log(1 -D ϑ (S t , A t |C))(14)" }, { "formula_coordinates": [ 13, 81.91, 231.49, 269.89, 9.65 ], "formula_id": "formula_19", "formula_text": "D ϑ (S, A|C) = exp(f ϑ (S, A|C))/[exp(f ϑ (S, A|C)) + π(A|S, C)]." }, { "formula_coordinates": [ 13, 55.44, 339.45, 234.09, 11.22 ], "formula_id": "formula_20", "formula_text": "K = [k 1 • • • k n ] T ∈ R n×d k and V = [v 1 • • • v n ] T ∈ R n×dv" }, { "formula_coordinates": [ 13, 190.61, 375.49, 347.35, 30.32 ], "formula_id": "formula_21", "formula_text": "Attention(q, K, V ) = n i=1 exp(q • k i ) n j=1 exp(q • k j ) × v i (15" }, { "formula_coordinates": [ 13, 537.96, 386.37, 4.15, 8.64 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 13, 104.27, 463.74, 437.84, 36 ], "formula_id": "formula_23", "formula_text": "W q i ∈ R d k ×d k , W K i ∈ R d k ×d k , W V i ∈ R dv×dv , W O ∈ R ndv×dv are the learnable parameters. 
M HA(q, K, V ) = Concat(head 1 , • • • , head h )W O , head i = Attention(qW q i , KW K i , V W V i )(16)" }, { "formula_coordinates": [ 13, 55.44, 549.01, 486.67, 34.9 ], "formula_id": "formula_24", "formula_text": "q = linear(Concat[S, W T C Z ′ ]), dense Z = M HA(q, W C , W C ), Z ∼ Categorical(•|dense Z ) (17) W C ∈ R N ×E" }, { "formula_coordinates": [ 13, 155.6, 660.26, 382.35, 12.69 ], "formula_id": "formula_25", "formula_text": "dense A = M LP (S, W T C Z), A ∼ Categorical/Gaussian(•|dense A ) (18" }, { "formula_coordinates": [ 13, 537.96, 662.5, 4.15, 8.64 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 16, 177.3, 192.94, 360.66, 27.23 ], "formula_id": "formula_27", "formula_text": "max η,ξ E U ∼P U (•) V ∼Pη(•|U ) log P ξ (U |V ) -βD KL (P η (V |U )||P V (V )) (26" }, { "formula_coordinates": [ 16, 537.96, 203.37, 4.15, 8.64 ], "formula_id": "formula_28", "formula_text": ")" }, { "formula_coordinates": [ 17, 55.44, 132.15, 486, 21.61 ], "formula_id": "formula_29", "formula_text": "P (S t+1 , Z t+1 |S t , Z t , Z t+1 , A t , C) = P (Z t+1 |S t , Z t , Z t+1 , A t , C)P (S t+1 |S t , Z t , Z t+1 , A t , C) = P(S t+1 |S t , A t , C" }, { "formula_coordinates": [ 17, 125.46, 190.77, 416.65, 99.87 ], "formula_id": "formula_30", "formula_text": "P ϑ (X 0:T , Z 0:T |C) ∝ µ( S 0 |C) T -1 t=0 P( S t+1 | S t , A t , C) exp(R ϑ ( S t , A t |C)) = P (S 0 , Z 0 |C) T -1 t=0 P (S t+1 , Z t+1 |S t , Z t , Z t+1 , A t , C) exp(R ϑ (S t , Z t , Z t+1 , A t |C)) = µ(S 0 |C) T -1 t=0 P(S t+1 |S t , A t , C) exp(R ϑ (S t , Z t , Z t+1 , A t |C))(27)" }, { "formula_coordinates": [ 17, 104.36, 452.83, 437.74, 202.65 ], "formula_id": "formula_31", "formula_text": "max θ,ϕ E C∼prior(•),(X 0:T ,Z 0:T )∼π θ,ϕ (•|C) T -1 t=0 R t IL = max θ,ϕ E C,X 0:T ,Z 0:T T -1 t=0 log D ϑ (S t , Z t , Z t+1 , A t |C) -log(1 -D ϑ (S t , Z t , Z t+1 , A t |C)) = max θ,ϕ E C,X 0:T ,Z 0:T T -1 t=0 f ϑ (S t , Z t , Z t+1 , A t |C) -log π θ,ϕ (Z t+1 , A t |S t , Z t , C) = max θ,ϕ E C,X 0:T ,Z 0:T T -1 t=0 f ϑ (S t , Z t , Z t+1 , A t |C) -log(π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C)) = max θ,ϕ E C,X 0:T ,Z 0:T log T -1 t=0 exp(f ϑ (S t , Z t , Z t+1 , A t |C)) T -1 t=0 π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C) ⇐⇒ max θ,ϕ E C,X 0:T ,Z 0:T log T -1 t=0 exp(f ϑ (S t , Z t , Z t+1 , A t |C))/Z C ϑ T -1 t=0 π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C)(28)" }, { "formula_coordinates": [ 18, 75.58, 99.97, 466.53, 132.51 ], "formula_id": "formula_32", "formula_text": "max θ,ϕ E C,X 0:T ,Z 0:T log T -1 t=0 exp(f ϑ (S t , Z t , Z t+1 , A t |C))/Z C ϑ T -1 t=0 π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C) = max θ,ϕ E C,X 0:T ,Z 0:T log µ(S 0 |C) T -1 t=0 P(S t+1 |S t , A t , C) T -1 t=0 exp(f ϑ (S t , Z t , Z t+1 , A t |C))/Z C ϑ µ(S 0 |C) T -1 t=0 P(S t+1 |S t , A t , C) T -1 t=0 π θ (Z t+1 |S t , Z t , C)π ϕ (A t |S t , Z t+1 , C) = max θ,ϕ E C∼prior(•),(X 0:T ,Z 0:T )∼π θ,ϕ (•|C) log π E (X 0:T , Z 0:T |C) π θ,ϕ (X 0:T , Z 0:T |C) = max θ,ϕ E C∼prior(•) [-D KL (π θ,ϕ (X 0:T , Z 0:T |C)||π E (X 0:T , Z 0:T |C))] ⇐⇒ min θ,ϕ E C∼prior(•) [D KL (π θ,ϕ (X 0:T , Z 0:T |C)||π E (X 0:T , Z 0:T |C))](29)" }, { "formula_coordinates": [ 18, 99.66, 338.87, 442.45, 193.78 ], "formula_id": "formula_33", "formula_text": "E X 0:T ∼D E [log P ϑ (X 0:T )] = E X 0:T ∼D E   log   C,Z 0:T P ϑ (X 0:T , C, Z 0:T )     = E X 0:T ∼D E   log   C,Z 0:T P ϑ (X 0:T , C, Z 0:T ) P ϑ (C, Z 0:T |X 0:T ) P ϑ (C, Z 0:T |X 0:T )     = E X 0:T ∼D E log E (C,Z 0:T )∼P ϑ (•|X 0:T ) P 
ϑ (X 0:T , C, Z 0:T ) P ϑ (C, Z 0:T |X 0:T ) ≥ E X 0:T ∼D E E (C,Z 0:T )∼P ϑ (•|X 0:T ) log P ϑ (X 0:T , C, Z 0:T ) P ϑ (C, Z 0:T |X 0:T ) = E X 0:T ∼D E ,C∼P ψ (•|X 0:T ),Z 0:T ∼Pω(•|X 0:T ,C) log P ϑ (X 0:T , C, Z 0:T ) P ϑ (C, Z 0:T |X 0:T ) = E X 0:T ,C,Z 0:T [log P ϑ (X 0:T , C, Z 0:T )] -E X 0:T ,C,Z 0:T log P ϑ (C, Z 0:T |X 0:T ) = E X 0:T ,C,Z 0:T [log P ϑ (X 0:T , Z 0:T |C)] -E X 0:T ,C,Z 0:T -log prior(C) + log P ϑ (C, Z 0:T |X 0:T )(30)" }, { "formula_coordinates": [ 19, 201.27, 106.22, 340.84, 9.65 ], "formula_id": "formula_34", "formula_text": "f ϑ,ζ (S t , S t+1 ) = g ϑ (S t ) + γh ζ (S t+1 ) -h ζ (S t ) (31)" }, { "formula_coordinates": [ 19, 152.92, 219.93, 389.19, 24.6 ], "formula_id": "formula_35", "formula_text": "f ϑ,ζ ( S t , S t+1 |C) = g ϑ ( S t |C) + γh ζ ( S t+1 |C) -h ζ ( S t |C) = g ϑ (S t , Z t |C) + γh ζ (S t+1 , Z t+1 |C) -h ζ (S t , Z t |C)(32)" }, { "formula_coordinates": [ 19, 165.83, 370.95, 376.28, 22.75 ], "formula_id": "formula_36", "formula_text": "L M I = C prior(C) X 0:T ,Z 0:T P (X 0:T , Z 0:T |C) log P ψ (C|X 0:T )(33)" }, { "formula_coordinates": [ 19, 219.78, 430.34, 322.33, 30.2 ], "formula_id": "formula_37", "formula_text": "T t=1 π θ (Z t |S t-1 , Z t-1 , C)π ϕ (A t-1 |S t-1 , Z t , C)P(S t |S t-1 , A t-1 , C)(34)" }, { "formula_coordinates": [ 19, 151.51, 495.77, 390.6, 22.75 ], "formula_id": "formula_38", "formula_text": "∇ ψ L M I = C prior(C) X 0:T ,Z 0:T P (X 0:T , Z 0:T |C)∇ ψ log P ψ (C|X 0:T )(35)" }, { "formula_coordinates": [ 19, 115.59, 544, 426.52, 119.37 ], "formula_id": "formula_39", "formula_text": "∇ θ L M I = C prior(C) X 0:T ,Z 0:T ∇ θ P θ,ϕ (X 0:T , Z 0:T |C) log P ψ (C|X 0:T ) = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C)∇ θ log P θ,ϕ (X 0:T , Z 0:T |C) log P ψ (C|X 0:T ) = E C,X 0:T , Z 0:T [∇ θ log P θ,ϕ (X 0:T , Z 0:T |C) log P ψ (C|X 0:T )] = E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) log P ψ (C|X 0:T )(36)" }, { "formula_coordinates": [ 19, 153.03, 686.32, 389.07, 33.69 ], "formula_id": "formula_40", "formula_text": "∇ ϕ L M I = E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C) log P ψ (C|X 0:T )(37)" }, { "formula_coordinates": [ 20, 102.62, 141.94, 70.03, 30.48 ], "formula_id": "formula_41", "formula_text": "∇ θ L DI 1 = T t=1 C" }, { "formula_coordinates": [ 20, 105.39, 152.35, 345.72, 56.52 ], "formula_id": "formula_42", "formula_text": "∇ θ P θ,ϕ (X 0:t , Z 0:t |C) log P ω (Z t |X 0:t , Z 0:t-1 , C) = T t=1 C" }, { "formula_coordinates": [ 20, 105.39, 178.4, 353.19, 66.93 ], "formula_id": "formula_43", "formula_text": "t i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) log P t ω = T t=1 C" }, { "formula_coordinates": [ 20, 105.39, 214.86, 388.12, 75.02 ], "formula_id": "formula_44", "formula_text": "t i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) log P t ω = T t=1 C" }, { "formula_coordinates": [ 20, 105.39, 259.4, 436.72, 181.77 ], "formula_id": "formula_45", "formula_text": "X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) t i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) log P t ω = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 log P t ω t i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) T t=i log P t ω = E C,X 0:T , Z 0:T T i=1 ∇ θ log π θ (Z i |S i-1 , Z i-1 , C) T t=i log P ω (Z t |X 0:t , Z 0:t-1 , C) = E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T i=t log P ω (Z i |X 0:i , Z 0:i-1 , C)(38)" }, { "formula_coordinates": [ 20, 120.6, 506.96, 176.22, 65.31 ], "formula_id": "formula_46", 
"formula_text": "∇ θ L DI 2 = ∇ θ T t=1 H(Z t |X 0:t-1 , Z 0:t-1 , C) = -∇ θ [ T t=1 C" }, { "formula_coordinates": [ 20, 123.37, 552.21, 418.74, 167.45 ], "formula_id": "formula_47", "formula_text": "X0:t-1,Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C) log P (Z t |X 0:t-1 , Z 0:t-1 , C)] = -∇ θ [ T t=1 C prior(C) X0:t-1,Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C) log π θ (Z t |S t-1 , Z t-1 , C)] = -∇ θ [ C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 log π θ (Z t |S t-1 , Z t-1 , C)] = -[ C prior(C) X 0:T ,Z 0:T ∇ θ P θ,ϕ (X 0:T , Z 0:T |C) T t=1 log π θ (Z t |S t-1 , Z t-1 , C)+ C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C)] (39) = -E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T i=1 log π θ (Z i |S i-1 , Z i-1 , C) + 1 = -E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T i=t log π θ (Z i |S i-1 , Z i-1 , C)(40)" }, { "formula_coordinates": [ 21, 309.7, 186.93, 134.18, 14.11 ], "formula_id": "formula_48", "formula_text": "T i=1 log π θ (Z i |S i-1 , Z i-1 , C) + 1" }, { "formula_coordinates": [ 21, 119.68, 240.37, 422.42, 33.69 ], "formula_id": "formula_49", "formula_text": "∇ ϕ L DI 1 = E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C) T i=t log P ω (Z i |X 0:i , Z 0:i-1 , C)(41)" }, { "formula_coordinates": [ 21, 123.15, 287.07, 418.96, 33.69 ], "formula_id": "formula_50", "formula_text": "∇ ϕ L DI 2 = -E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C) T i=t log π θ (Z i |S i-1 , Z i-1 , C)(42)" }, { "formula_coordinates": [ 21, 100.22, 355.67, 441.89, 30.47 ], "formula_id": "formula_51", "formula_text": "∇ ω L DI = ∇ ω L DI 1 = T t=1 C prior(C) X0:t,Z0:t P θ,ϕ (X 0:t , Z 0:t |C)∇ ω log P ω (Z t |X 0:t , Z 0:t-1 , C)(43)" }, { "formula_coordinates": [ 21, 136.45, 463.62, 405.65, 31.09 ], "formula_id": "formula_52", "formula_text": "L IL = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T -1 i=0 R IL (S i , Z i , Z i+1 , A i |C)(44)" }, { "formula_coordinates": [ 21, 130.3, 527.78, 411.81, 33.69 ], "formula_id": "formula_53", "formula_text": "∇ θ L IL = E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T -1 i=0 R IL (S i , Z i , Z i+1 , A i |C)(45)" }, { "formula_coordinates": [ 21, 97.66, 595.76, 444.45, 123.64 ], "formula_id": "formula_54", "formula_text": "E C,X 0:T , Z 0:T [∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R IL (S i , Z i , Z i+1 , A i |C)] = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R IL (S i , Z i , Z i+1 , A i |C) = C prior(C) X0:t-1, Z0:t X t:T , Z t+1:T P θ,ϕ (X 0:T , Z 0:T |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R i IL = C prior(C) X0:t-1, Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R i IL (46) = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C) Zt π θ (Z t |S t-1 , Z t-1 , C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)R i IL = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)R i IL Zt π θ (Z t |S t-1 , Z t-1 , C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C) = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)R i IL Zt ∇ θ π θ (Z t |S t-1 , Z t-1 , C) = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)R i IL ∇ θ Zt π θ (Z t |S t-1 , Z t-1 , C) = C prior(C) X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)R IL (S i , Z i , Z i+1 , A i |C)∇ θ 1 = 0(47)" }, { "formula_coordinates": [ 22, 82.41, 252.94, 130.7, 12.48 ], "formula_id": "formula_55", "formula_text": "R i IL = R IL (S i , Z i , Z i+1 , A i |C)" }, { "formula_coordinates": [ 22, 376.55, 266.47, 166.63, 9.65 ], "formula_id": "formula_56", "formula_text": "IL (S i , Z i 
, Z i+1 , A i |C) is irrelevant to Z t ." }, { "formula_coordinates": [ 22, 127.22, 297.88, 414.89, 33.69 ], "formula_id": "formula_57", "formula_text": "∇ θ L IL = E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C) T -1 i=t-1 R IL (S i , Z i , Z i+1 , A i |C)(48)" }, { "formula_coordinates": [ 22, 125.71, 359.81, 416.4, 33.69 ], "formula_id": "formula_58", "formula_text": "∇ ϕ L IL = E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C) T -1 i=t-1 R IL (S i , Z i , Z i+1 , A i |C)(49)" }, { "formula_coordinates": [ 22, 78.1, 493.16, 431.68, 225.41 ], "formula_id": "formula_59", "formula_text": "E T t=1 ∇ θ log π t θ b high (S t-1 , Z t-1 |C) = E T t=1 ∇ ϕ log π t ϕ b low (S t-1 , Z t |C) = 0, as follows: E C,X 0:T , Z 0:T T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 ∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = C prior(C) T t=1 X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = C prior(C) T t=1 X0:t-1,Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) X0:t-1,Z0:t P θ,ϕ (X 0:t-1 , Z 0:t |C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = X0:t-1, Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C) Zt π θ (Z t |S t-1 , Z t-1 , C)∇ θ log π θ (Z t |S t-1 , Z t-1 , C)b high (S t-1 , Z t-1 |C) = X0:t-1,Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)b high (S t-1 , Z t-1 |C) Zt ∇ θ π θ (Z t |S t-1 , Z t-1 , C) = X0:t-1,Z0:t-1 P θ,ϕ (X 0:t-1 , Z 0:t-1 |C)b high (S t-1 , Z t-1 |C)∇ θ 1 = 0 E C,X 0:T , Z 0:T T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)b low (S t-1 , Z t |C) = C prior(C) X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C) T t=1 ∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)b low (S t-1 , Z t |C) = C prior(C) T t=1 X 0:T ,Z 0:T P θ,ϕ (X 0:T , Z 0:T |C)∇ ϕ log π ϕ (A t-1 |S t-1 , Z t , C)b low (S t" }, { "formula_coordinates": [ 25, 302.86, 528.96, 239.25, 37.79 ], "formula_id": "formula_60", "formula_text": "T -1 t=0 R t IL , R t IL = -log D ϑ (S t , A t , Z t+1 , Z t |C)(50)" } ]
10.18653/v1/D19-1520
2023-10-19
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b55", "b41", "b4", "b27", "b66", "b49", "b19", "b53", "b52", "b29", "b1", "b43", "b47", "b35", "b10", "b24", "b67", "b49", "b58", "b28" ], "table_ref": [], "text": "Structured prediction (Smith, 2011) is a fundamental problem in NLP, wherein the label space consists of complex structured outputs with groups of interdependent variables. It covers a wide range of NLP tasks, including sequence labeling, syntactic parsing and information extraction (IE). Modern structured predictors are developed in a data-driven way, by training statistical models with suitable annotated data. Recent developments in neural models and especially pre-trained language models (Peters et al., 2018;Devlin et al., 2019;Liu et al., 2019;Yang et al., 2019) have greatly improved system performance on these tasks. Nevertheless, the success of these models still relies on the availability of sufficient manually annotated data, which is often expensive and time-consuming to obtain. To mitigate such data bottlenecks, active learning (AL), which allows the model to select the most informative data instances to annotate, has been demonstrated to achieve good model accuracy while requiring fewer labels (Settles, 2009). When applying AL to structured prediction, one natural strategy is to perform full annotation (FA) for the output structures, for example, annotating a full sequence of labels or a full syntax tree. Due to its simplicity, FA has been widely adopted in AL approaches for structured prediction tasks (Hwa, 2004;Settles and Craven, 2008;Shen et al., 2018). Nevertheless, a structured object can usually be decomposed into smaller sub-structures having nonuniform difficulty and informativeness. For example, as shown in Figure 1, in a dependency tree, edges such as functional relations are relatively easy to learn, requiring fewer manual annotations, while prepositional attachment links may be more informative and thus more worthwhile to annotate.\nThe non-uniform distribution of informative substructures naturally suggests AL with partial annotation (PA), where the annotation budget can be preserved by only choosing a portion of informative sub-structures to annotate rather than laboriously labeling entire sentence structures. This idea has been explored in previous work, covering typical structured prediction tasks such as sequence labeling (Shen et al., 2004;Marcheggiani and Artières, 2014;Chaudhary et al., 2019;Radmard et al., 2021) Algorithm 1 AL Procedure.\nInput: Seed dataset L0, dev dataset D, unlabeled pool U, total budget t, batch selection size b, annotation strategy. Output: Final labeled dataset L, trained model M.\n1: L ← L0 # Initialize 2: while t > 0 do # Until out of budget 3: M ← train(L, U) # Model training 4:\nS ← sentence-query(M, U) # Sentence selection 5:\nif strategy == \"partial\" then 6:\nr ← auto-ratio(S, D) # Decide adaptive ratio 7:\npartial-annotate(S, r) # Partial annotation 8: else 9:\nfull-annotate(S) # Full annotation 10:\nU ← U -S; L ← L ∪ S; t ← t -b 11: M ← train(L, U) # Final model training 12: return L, M and dependency parsing (Sassano and Kurohashi, 2010;Mirroshandel and Nasr, 2011;Flannery and Mori, 2015;Li et al., 2016). Our work follows this direction and investigates the central question in AL with PA of how to decide which sub-structures to select. Most previous work uses a pre-defined fixed selection criterion, such as a threshold or ratio, which may be hard to decide in practice. 
In this work, we adopt a performance predictor to estimate the error rate of the queried instances and decide the ratio of partial selection accordingly. In this way, our approach can automatically and adaptively adjust the amount of partial selection throughout the AL process.\nAnother interesting question for AL is how to better leverage unlabeled data. In this work, we investigate a simple semi-supervised method, self-training (Yarowsky, 1995), which adopts the model's automatic predictions on the unlabeled data as extra training signals. Self-training naturally complements AL in the typical pool-based setting where we assume access to a pool of unlabeled data (Settles, 2009). It is particularly compatible with PA-based AL since the un-selected substructures are typically also highly-confident under the current model and likely to be predicted correctly without requiring additional annotation. We revisit this idea from previous work (Tomanek and Hahn, 2009;Majidi and Crane, 2013) and investigate its applicability with modern neural models and our adaptive partial selection approach.\nWe perform a comprehensive empirical investigation on the effectiveness of different AL strategies for typical structured prediction tasks. We perform fair comparisons that account for the hidden cost of reading time by keeping the context size the same for all the strategies in each AL cy-cle. With evaluations on four benchmark tasks for structured prediction (named entity recognition, dependency parsing, event extraction, and relation extraction), we show that PA can obtain roughly the same benefits as FA with the same reading cost but less sub-structure labeling cost, leading to better data efficiency. We also demonstrate that the adaptive partial selection scheme and self-training play crucial and complementary roles." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "AL for Structured Prediction", "publication_ref": [ "b49", "b35", "b10", "b24", "b21", "b29", "b24", "b19", "b53", "b1", "b43", "b48", "b64", "b29", "b59", "b11" ], "table_ref": [], "text": "We adopt the conventional pool-based AL setting, which iteratively selects and annotates instances from an unlabeled pool. Please refer to Settles (2009) for the basics and details of AL; our main illustration focuses more specifically on applying AL to structured prediction.\nAlgorithm 1 illustrates the overall AL process. We focus on sentence-level tasks. In FA, each sentence is annotated with a full structured object (for example, a label sequence or a syntax tree). In PA, annotation granularity is at the sub-structure level (for example, a sub-sequence of labels or a partial tree). We adopt a two-step selection approach for all the strategies by first choosing a batch of sentences and then annotating within this batch. This approach is natural for FA since the original aim is to label full sentences, and it is also commonly adopted in previous PA work (Mirroshandel and Nasr, 2011;Flannery and Mori, 2015;Li et al., 2016). Moreover, this approach makes it easier to control the reading context size for fair comparisons of different strategies as described in §3.2.\nWithout loss of generality, we take sequence labeling as an example and illustrate several key points in the AL process. Other tasks follow similar treatment, with details provided in Appendix A.\n• Model. 
We adopt a standard BERT-based model with a CRF output layer for structured output modeling (Lafferty et al., 2001), together with the BIO tagging scheme.\n• Querying Strategy. We utilize the query-byuncertainty strategy with the margin-based metric, which has been shown effective in AL for structured prediction (Marcheggiani and Artières, 2014;Li et al., 2016). Specifically, each token obtains an uncertainty score with the difference between the (marginal) probabilities of the most and second most likely label. We also tried several other strategies, such as least-confidence or max-entropy, but did not find obvious benefits. 1\n• Sentence selection. For both FA and PA, selecting a batch of uncertain sentences is the first querying step. We use the number of total tokens to measure batch size since sentences may have variant lengths. The sentence-level uncertainty is obtained by averaging the token-level ones. This length normalization heuristic is commonly adopted to avoid biases towards longer sentences (Hwa, 2004;Shen et al., 2018).\n• Token selection. In PA, a subset of highly uncertain tokens is further chosen for annotation. One important question is how many tokens to select. Instead of using a pre-defined fixed selection criterion, we develop an adaptive strategy to decide the amount, as will be described in §2.2.\n• Annotation. Sequence labeling is usually adopted for tasks involving mention extraction, where annotations are over spans rather than individual tokens. Previous work explores subsequence querying (Chaudhary et al., 2019;Radmard et al., 2021), which brings further complexities. Since we mainly explore tasks with short mention spans, we adopt a simple annotation protocol: Labeling the full spans where any inside token is queried. Note that for annotation cost measurement, we also include the extra labeled tokens in addition to the queried ones.\n• Model learning. For FA, we adopt the standard log-likelihood as the training loss. For PA, we follow previous work (Scheffer et al., 2001;Wanvarie et al., 2011;Marcheggiani and Artières, 2014) and adopt marginalized likelihood to learn from incomplete annotations (Tsuboi et al., 2008;Greenberg et al., 2018). More details are provided in Appendix C." }, { "figure_ref": [], "heading": "Adaptive Partial Selection", "publication_ref": [ "b58", "b24" ], "table_ref": [], "text": "PA adopts a second selection stage to choose highly uncertain sub-structures within the selected sentences. One crucial question here is how many 1 Please refer to Appendix D.1 for more results. Note that our main focus is on AL for structured prediction, where AL selection involves not only what instances to select (acquisition function), but also at what granularity to select and annotate. In contrast with most AL work that focuses on the first aspect (and classification tasks), we mainly investigate the second one and explore better partial selection strategies. Exploring more advanced acquisition functions is mostly orthogonal to our main focus and is left to future work. sub-structures to select. Typical solutions in previous work include setting an uncertainty threshold (Tomanek and Hahn, 2009) or specifying a selection ratio (Li et al., 2016). The threshold or ratio is usually pre-defined with a fixed hyper-parameter.\nThis fixed selecting scheme might not be an ideal one. First, it is usually hard to specify such fixed values in practice. 
If too many sub-structures are selected, there will be little difference between FA and PA, whereas if too few, the annotation amount is insufficient to train good models. Moreover, this scheme is not adaptive to the model: as the model is trained with more data throughout the AL process, the informative sub-structures become less dense. Thus, the number of selected sub-structures should be adjusted accordingly. To mitigate these shortcomings, we develop a dynamic strategy that decides the selection in an automatic and adaptive way.

We adopt the ratio-based strategy, which enables straightforward control of the selected amount. Specifically, we rank the sub-structures by their uncertainty scores and choose the highest-scoring ones according to the ratio. Our decision on the selection ratio is based on the hypothesis that a reasonable ratio should roughly correspond to the current model's error rate on all the candidates. The intuition is that incorrectly predicted sub-structures are the most informative ones that can help to correct the model's mistakes.

Since the queried instances come from the unlabeled pool without annotations, the error rate cannot be directly obtained and requires estimation. 2 We adopt a simple one-dimensional logistic regression model for this purpose. Its input is the uncertainty score 3 and its output is a binary prediction of whether the model's prediction is confidently correct 4 or not. The estimator is trained using all the sub-structures together with their correctness on the development set 5 and then applied to the queried candidates. For each candidate sub-structure s, the estimator gives a correctness probability. We estimate the overall error rate as one minus the average correctness probability over all the candidates in the query set Q (all sub-structures in the selected sentences), and set the selection ratio r to this error rate:

r = 1 - \frac{1}{n} \sum_{s \in Q} p(\mathrm{correct} = 1 \mid s)

In this way, the selection ratio is set adaptively according to the current model's capability. If the model is weak and makes many mistakes, we will have a larger ratio, which leads to denser annotations and richer training signals. As the model is trained with more data and makes fewer errors, the ratio is tuned down correspondingly to avoid wasting the annotation budget on already-correctly-predicted sub-structures. As we will see in later experiments, this adaptive scheme is suitable for AL ( §3.3)." }, { "figure_ref": [], "heading": "Self-training", "publication_ref": [ "b67", "b67", "b31", "b15", "b6", "b63" ], "table_ref": [], "text": "Better utilization of unlabeled data is a promising direction to further enhance model training in AL, since unlabeled data are usually freely available from the unlabeled pool. In this work, we adopt self-training (Yarowsky, 1995) for this purpose.

The main idea of self-training is to enhance model training with pseudo labels that are predicted by the current model on the unlabeled data. It has been shown effective for various NLP tasks (Yarowsky, 1995;McClosky et al., 2006;He et al., 2020;Du et al., 2021). For the training of AL models, self-training can be seamlessly incorporated. For FA, the application of self-training is no different from the conventional scenario: the current model is simply applied to all the un-annotated instances in the unlabeled pool. The more interesting case is the partially annotated instances in the PA regime.
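Before turning to how self-training interacts with PA, here is a concrete illustration of the adaptive ratio scheme from §2.2: a one-dimensional logistic regression is fit on development-set sub-structures (margin score as input, correctness as label), the error rate of the queried candidates is estimated, and that estimate is used as the selection ratio. The use of scikit-learn and the array-based data layout are assumptions for illustration, not the authors' released code.

```python
# Sketch of the adaptive selection-ratio estimator (Sec. 2.2):
# a 1-D logistic regression maps a sub-structure's margin score to the
# probability that the current model's prediction is correct; the selection
# ratio is then the estimated error rate over the query set Q.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_correctness_estimator(dev_margins, dev_is_correct):
    # dev_margins: margin score of each dev-set sub-structure
    # dev_is_correct: 1 if the model's prediction matches gold, else 0
    x = np.asarray(dev_margins, dtype=float).reshape(-1, 1)
    y = np.asarray(dev_is_correct, dtype=int)
    return LogisticRegression().fit(x, y)

def adaptive_selection_ratio(estimator, query_margins):
    # r = 1 - (1/n) * sum_{s in Q} p(correct = 1 | s)
    x = np.asarray(query_margins, dtype=float).reshape(-1, 1)
    p_correct = estimator.predict_proba(x)[:, 1]
    return 1.0 - float(p_correct.mean())

def select_partial(query_margins, ratio):
    # keep the most uncertain (lowest-margin) sub-structures up to the ratio
    n_select = max(1, int(round(ratio * len(query_margins))))
    order = np.argsort(query_margins)   # ascending margin = most uncertain first
    return order[:n_select].tolist()
```

In this sketch, the second column of predict_proba is taken as p(correct = 1), which assumes both correct and incorrect examples appear in the development data.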
The same motivation as in the adaptive ratio scheme ( §2.2) also applies here: we select the highly-uncertain sub-structures that are error-prone, while the remaining un-selected parts are likely to be correctly predicted; therefore, we can trust the predictions on the un-selected sub-structures and include them for training. One further enhancement is to perform re-inference by incorporating the updated annotations over the selected sub-structures, which can improve the predictions on un-annotated sub-structures through output dependencies.

In this work, we adopt a soft version of self-training through knowledge distillation (KD; Hinton et al., 2015). This choice is made because we want to avoid the potential negative influence of ambiguous predictions (mostly in completely unlabeled instances). One way to mitigate this is to set an uncertainty threshold and only utilize the highly-confident sub-structures. However, it is unclear how to set a proper value, similar to the situation in query selection. Therefore, we take the model's full output distributions as the training targets without further processing.

Specifically, our self-training objective is the cross-entropy between the output distributions predicted by the previous model m' before training and the current model m being trained:

L = - \sum_{y \in Y} p_{m'}(y|x) \log p_{m}(y|x)

Several points are notable here: 1) The previous model is kept unchanged, and we can simply cache its predictions before training; 2) over the instances that have partial annotations, the predictions should reflect these annotations by incorporating the corresponding constraints at inference time; 3) for tasks with CRF-based models, the output space Y is usually exponentially large and infeasible to enumerate explicitly; we utilize special algorithms (Wang et al., 2021) to deal with this, and more details are presented in Appendix C.

Finally, we find it beneficial to include both the pseudo labels and the real annotated gold labels for model training. With the gold data, the original training loss is adopted, while the KD objective is utilized with the pseudo labels. We simply mix these two types of data with a ratio of 1:1 in the training process, which we find works well." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Main Settings", "publication_ref": [ "b57", "b61" ], "table_ref": [], "text": "Tasks and data. Our experiments 6 are conducted over four English tasks. The first two are named entity recognition (NER) and dependency parsing (DPAR), which are representative structured prediction tasks for predicting sequence and tree structures. We adopt the CoNLL-2003 English dataset (Tjong Kim Sang and De Meulder, 2003) for NER and the English Web Treebank (EWT) from Universal Dependencies v2.10 (Nivre et al., 2020) for DPAR. Moreover, we explore two more complex IE tasks: event extraction and relation extraction.

Each task involves two pipelined sub-tasks: the first aims to extract the event trigger and/or entity mentions, and the second predicts links between these mentions as event arguments or entity relations. We utilize the ACE05 dataset (Walker et al., 2006) for these IE tasks.

AL. For the AL procedure, we adopt settings following conventional practices. We use the original training set as the unlabeled data pool to select instances.
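As a simplified illustration of the soft self-training objective described in the Self-training subsection above, the following PyTorch sketch applies the distillation cross-entropy at the level of per-token label distributions and mixes one gold batch with one pseudo-labeled batch, approximating the 1:1 mixing described in the text. The structured (CRF-level) version operates on full sequence distributions via marginals (Appendix C); the tensor layout, the supervised_loss_fn callable, and the cached teacher_logits are assumptions for illustration.

```python
# Simplified sketch of soft self-training via knowledge distillation,
# written over per-token label distributions instead of full CRF sequence
# distributions. Teacher logits come from the cached previous model; any
# constraints from partial annotations are assumed to be reflected there.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits):
    # cross-entropy between the teacher's soft targets and the student
    teacher_probs = F.softmax(teacher_logits, dim=-1).detach()
    student_logp = F.log_softmax(student_logits, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()

def mixed_training_step(model, optimizer, gold_batch, pseudo_batch, supervised_loss_fn):
    # 1:1 mixing: original loss on gold-annotated data, KD loss on pseudo-labeled data
    optimizer.zero_grad()
    loss_gold = supervised_loss_fn(model, gold_batch)
    student_logits = model(pseudo_batch["inputs"])
    loss_pseudo = kd_loss(student_logits, pseudo_batch["teacher_logits"])
    loss = loss_gold + loss_pseudo
    loss.backward()
    optimizer.step()
    return float(loss)
```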
Unless otherwise noted, we set the AL batch size (for sentence selection) to 4K tokens, which roughly corresponds to 2% of the total pool size for most of the datasets we use. The initial seed training set and the development set are randomly sampled (with FA) using this batch size. Unless otherwise noted, we run 14 AL cycles for each experiment. In each AL cycle, we re-train our model since we find incremental updating does not perform well. Following most AL work, annotation is simulated by checking and assigning the labels from the original dataset. In FA, we annotate all the sub-structures for the selected sentences. In PA, we first decide the selection ratio and apply it to the selected sentences. We further adopt a heuristic7 that selects the union of sentence-wise uncertain sub-structures as well as global ones since both may contain informative sub-structures. Finally, all the presented results are averaged over five runs with different random seeds." }, { "figure_ref": [], "heading": "Model and training.", "publication_ref": [ "b27" ], "table_ref": [], "text": "For the models, we adopt standard architectures by stacking task-specific structured predictors over pre-trained RoBERTa base (Liu et al., 2019) and the full models are fine-tuned at each training iteration. After obtaining new annotations in each AL cycle, we first train a model based on all the available full or partial annotations. When using self-training, we further apply this newly trained model to assign pseudo soft labels to all un-annotated instances and combine them with the existing annotations to train another model. Compared to using the old model from the last AL cycle, this strategy can give more accurate pseudo labels since the newly updated model usually performs better by learning from more annotations. For PA, pseudo soft labels are assigned to both un-selected sentences and the un-annotated sub-structures in the selected sentences." }, { "figure_ref": [], "heading": "Comparison Scheme", "publication_ref": [ "b58", "b10", "b24", "b43" ], "table_ref": [], "text": "Since FA and PA annotate at different granularities, we need a common cost measurement to compare their effectiveness properly. A reasonable metric is the number of the labeled sub-structures; for instance, the number of labeled tokens for sequence labeling or edges for dependency parsing. This metric is commonly adopted in previous PA work (Tomanek and Hahn, 2009;Flannery and Mori, 2015;Li et al., 2016;Radmard et al., 2021).\nNevertheless, evaluating only by sub-structures ignores a crucial hidden cost: The reading time of the contexts. For example, in sequence labeling with PA, although not every token in the sentence needs to be tagged, the annotator may still need to read the whole sentence to understand its meaning. Therefore, if performing comparisons only by the amount of annotated sub-structures, it will be unfair for the FA baseline because more contexts must be read to carry out PA.\nIn this work, we adopt a simple two-facet comparison scheme that considers both reading and labeling costs. We first control the reading cost by choosing the same size of contexts in the sentence selection step of each AL cycle (Line 4 in Algorithm 1). Then, we further compare by the sub-structure labeling cost, measured by the substructure annotation cost. If PA can roughly reach the FA performance with the same reading cost but fewer sub-structures annotated, it would be fair to say that PA can help reduce cost over FA. 
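The two-facet comparison just described reduces to simple bookkeeping. The sketch below is an illustrative approximation only (the sentence and query data structures are assumptions), counting reading cost as the tokens shown to the annotator and labeling cost as the sub-structures actually annotated.

```python
# Illustrative cost bookkeeping for the two-facet comparison (Sec. 3.2):
# reading cost = tokens in the sentences shown to the annotator,
# labeling cost = sub-structures that actually get annotated.
def reading_cost(selected_sentences):
    return sum(len(sentence["tokens"]) for sentence in selected_sentences)

def labeling_cost_full(selected_sentences):
    # FA: every sub-structure of every selected sentence is annotated
    return sum(len(sentence["substructures"]) for sentence in selected_sentences)

def labeling_cost_partial(queried_substructures):
    # PA: only the queried (plus any span-completing) sub-structures are annotated
    return len(queried_substructures)
```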
A better comparison scheme would evaluate against a unified estimation of the real annotation costs (Settles et al., 2008). This usually requires actual annotation exercises rather than simulations, which we leave to future work." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "NER and DPAR", "publication_ref": [ "b3", "b26" ], "table_ref": [], "text": "Settings. We compare primarily three strategies: FA, PA, and a baseline where randomly selected sentences are fully annotated (Rand). We also include a supervised result (Super.), which is obtained from a model trained with the full original training set. We measure reading cost by the total number of tokens in the selected sentences. For labeling cost, we further adopt metrics with practical considerations. In NER, many tokens, such as functional words, can be easily judged as the 'O' (non-entity) tag. To avoid over-estimating the costs of such easy tokens for FA, we filter tokens by their part-of-speech (POS) tags and only count the ones that are likely to be inside an entity mention. 8 For PA, we still count every queried token. For the task of DPAR, similarly, different dependency links can have varying annotation difficulties. We utilize the surface distance between the head and modifier of the dependency edge as the measure of labeling cost, considering that the decisions for longer dependencies are usually harder.

[Figure 2 here: NER and DPAR learning curves (F1 for NER, LAS for DPAR) against reading cost (token count) and labeling cost (sub-structure count), comparing Rand, Rand+ST, FA, FA+ST, PA, PA+ST, and the supervised upper bound.]

Main Results. The main test results are shown in Figure 2, where the patterns on both tasks are similar. First, AL brings clear improvements over the random baseline and can roughly reach the fully supervised performance with only a small portion of the data annotated (around 18% for CoNLL-2003 and 30% for UD-EWT). Moreover, self-training (+ST) is helpful for all the strategies, boosting performance without the need for extra manual annotations. Finally, with the help of self-training, the PA strategy can roughly match the performance of FA with the same amount of reading cost (the left figures) while labeling fewer sub-structures (the right figures). This indicates that PA can help to further reduce annotation costs over the strong FA baselines.

Ratio Analysis. We further analyze the effectiveness of our adaptive ratio scheme with DPAR as the case study. We compare the adaptive scheme to schemes with a fixed ratio r, and the results are shown in Figure 3 (self-training (+ST) is used for all the strategies here). For the fixed-ratio schemes, if the value is too small (such as 0.1), although it improves fastest at the beginning, its performance lags behind the others with the same reading contexts because fewer sub-structures are annotated. If the value is too large (such as 0.5), it grows slowly, probably because too many uninformative sub-structures are annotated. The fixed scheme with r = 0.3 seems a good choice; however, it is unclear how to find this sweet spot in realistic AL processes. The adaptive scheme provides a reasonable solution by automatically deciding the ratio according to the model performance.

Error and Uncertainty Analysis. We further analyze the error rates and uncertainties of the queried sub-structures. We still take DPAR as a case study, and Figure 4 shows the results along the AL cycles in PA mode.
First, though adopting a simple model, the performance predictor can give reasonable estimations of the overall error rates. Moreover, by further breaking down the error rates into selected (S) and non-selected (N) groups, we can see that the selected ones contain many errors, indicating the need for manual corrections. On the other hand, the error rates on the non-selected sub-structures are much lower, verifying the effectiveness of using model-predicted pseudo labels on them in self-training. Finally, the overall margin of the selected sentences keeps increasing towards 1, indicating that there are many non-ambiguous sub-structures even in highly-uncertain sentences. The margins of the selected sub-structures are much lower, suggesting that annotating them can provide more informative signals for model training.

[Figure 3 here: DPAR learning curves (LAS) against reading cost (token count) and labeling cost (sub-structure count), comparing fixed ratios (r = 0.1, 0.3, 0.5), the adaptive scheme, and the supervised upper bound.]

Domain-transfer Experiments. We further investigate a domain-transfer scenario: in addition to unlabeled in-domain data, we assume abundant out-of-domain annotated data and perform AL on the target domain. We adopt tweet texts as the target domain, using the Broad Twitter Corpus (BTC; Derczynski et al., 2016) for NER and Tweebank (Liu et al., 2018) for DPAR. We assume we have models trained on a richly-annotated source domain and continue performing AL on the target domain. The source domains are the datasets that we utilize in our main experiments: CoNLL03 for NER and UD-EWT for DPAR." }, { "figure_ref": [], "heading": "pred, error, error(S), error(N), margin, margin(S)", "publication_ref": [], "table_ref": [], "text": "Figure 4: Analyses of error rates and uncertainties (margins) of the DPAR sub-structures in the queried sentences along the AL cycles (x-axis). Here, 'pred' denotes the predicted error rate, 'error' denotes the actual error rate, and 'margin' denotes the uncertainty (margin) scores. For the suffixes, '(S)' indicates partially selected sub-structures, and '(N)' indicates non-selected ones. 'Margin(N)' is omitted since it is always close to 1.

We adopt a simple model-transfer approach by initializing the model from the one trained with the source data and further fine-tuning it with the target data. Since the target data size is small, we reduce the AL batch sizes for BTC and Tweebank to 2000 and 1000 tokens, respectively. The results for these experiments are shown in Figure 5. In these experiments, we also include the no-transfer results, adopting "FA+ST" but without model transfer. For NER, without transfer learning, the results are generally worse, especially in the early AL stages, where there is only a small amount of annotated data to provide training signals. In these cases, knowledge learned from the source domain can provide extra information to boost the results. For DPAR, we can see even larger benefits from transfer learning; there are still clear gaps between the transfer and no-transfer strategies even when the former has already reached the supervised performance. These results indicate that the benefits of AL and transfer learning can be orthogonal, and combining them can lead to promising results." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Information Extraction", "publication_ref": [ "b44", "b70", "b46", "b46" ], "table_ref": [], "text": "We further explore more complex IE tasks that involve multiple types of output. Specifically, we investigate event extraction and relation extraction.
We adopt a classical pipelined approach (please refer to Appendix A for more task-specific details), which splits the full task into two sub-tasks: the first performs mention extraction, while the second examines mention pairs and predicts relations.

[Figure 5 here: AL results in domain-transfer settings (CoNLL03 → BTC for NER and UD-EWT → Tweebank for DPAR). Notations are the same as in Figure 2, except that there is one more curve, "NoTransfer", denoting the setting where no transfer learning is applied (FA+ST alone).]

While previous work investigates multi-task AL with FA (Reichart et al., 2008;Zhu et al., 2020;Rotman and Reichart, 2022), this work is the first to explore PA in this challenging setting. We extend our PA scheme to this multi-task scenario with several modifications. First, for the sentence-selection stage, we obtain a sentence-wise uncertainty score UNC(x) as a weighted combination of the two sub-tasks' uncertainty scores:

UNC(x) = β • UNC-Mention(x) + (1 - β) • UNC-Relation(x)

Following Rotman and Reichart (2022), we set β to a relatively large value (0.9), which is found to be helpful for the second relational sub-task (a small code sketch of this combination is given later in this subsection). Moreover, for partial selection, we separately select sub-structures for the two sub-tasks according to the adaptive selection scheme. Since the second relational sub-task depends on the mentions extracted by the first sub-task, we utilize predicted mentions and view each feasible mention pair as a querying candidate. A special annotation protocol is adopted to deal with incorrectly predicted mentions. For each queried relation, we first examine its mentions and perform corrections if there are mention errors that can be fixed by matching against the gold ones. If neither of the two mentions can be corrected, we discard this query.

Finally, to compensate for the influence of errors in mention extraction, we adopt the further heuristics of increasing the partial ratio by the estimated percentage of queries with incorrect mentions, as well as including a second annotation stage with queries over newly annotated mentions. Please refer to Appendix A.2 for more details.

[Figure 6 here: event-argument reading-cost and labeling-cost curves comparing Rand, Rand+ST, FA, FA+ST, PA, PA+ST, and the supervised upper bound.]

We show the results of event argument extraction in Figure 6, where the overall trends are similar to those in NER and DPAR. Here, labeling cost is simply measured as the number of candidate argument links. Overall, self-training is helpful for all AL strategies, indicating the benefits of making better use of unlabeled data. Measured by labeling cost, PA learns the fastest and costs only around half of the annotated arguments of FA to reach the supervised result. On the other hand, PA is also competitive concerning reading cost and can generally match the FA results in later AL stages. There is still a gap between PA and FA in the earlier AL stages, which may be influenced by the errors produced by the first sub-task of mention extraction. We leave further investigations on improving the early AL stages to future work.
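The two scoring rules used for the IE setting above can be written compactly. The sketch below is illustrative only: it restates UNC(x) with β = 0.9 and the ratio adjustment r_adjust from Appendix A.2, where alpha denotes the estimated fraction of candidate relations whose predicted mentions contain an error and r_problem is conservatively set to 1.

```python
# Sketch of the multi-task sentence scoring and the relation-ratio adjustment
# used in the IE experiments (Sec. 3.4 and Appendix A.2). Purely illustrative.
def sentence_uncertainty(unc_mention, unc_relation, beta=0.9):
    # UNC(x) = beta * UNC-Mention(x) + (1 - beta) * UNC-Relation(x)
    return beta * unc_mention + (1.0 - beta) * unc_relation

def adjust_relation_ratio(r_origin, alpha, r_problem=1.0):
    # r_adjust = alpha * r_problem + (1 - alpha) * r_origin, compensating for
    # queries whose predicted mentions are likely to be wrong
    return alpha * r_problem + (1.0 - alpha) * r_origin
```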
The results for the relation extraction task share similar trends and are presented in Appendix D.2, together with the results of mention extraction." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b67", "b34", "b18", "b15", "b33", "b6", "b60", "b2", "b58", "b28", "b68", "b59", "b56", "b65", "b11", "b37", "b8", "b39", "b38", "b30", "b23", "b53", "b19", "b62", "b36", "b14", "b69", "b29", "b1", "b43", "b10", "b24", "b22", "b7" ], "table_ref": [], "text": "Self-training. Self-training is a commonly utilized semi-supervised method to incorporate unlabeled data. It has been shown effective for a variety of NLP tasks, including word sense disambiguation (Yarowsky, 1995), parsing (McClosky et al., 2006), named entity recognition (Meng et al., 2021;Huang et al., 2021), text generation (He et al., 2020;Mehta et al., 2022) as well as natural language understanding (Du et al., 2021). Moreover, selftraining can be especially helpful for low-resource scenarios, such as in few-shot learning (Vu et al., 2021;Chen et al., 2021). Self-training has also been a commonly adopted strategy to enhance active learning (Tomanek and Hahn, 2009;Majidi and Crane, 2013;Yu et al., 2022).\nPA. Learning from incomplete annotations has been well-explored for structured prediction. For CRF models, taking the marginal likelihood as the objective function has been one of the most utilized techniques (Tsuboi et al., 2008;Täckström et al., 2013;Yang and Vozila, 2014;Greenberg et al., 2018). There are also other methods to deal with incomplete annotations, such as adopting local models (Neubig and Mori, 2010;Flannery et al., 2011), max-margin objective (Fernandes andBrefeld, 2011), learning with constraints (Ning et al., 2018(Ning et al., , 2019;;Mayhew et al., 2019) and negative sampling (Li et al., 2022).\nAL for structured prediction. AL has been investigated for various structured prediction tasks in NLP, such as sequence labeling (Settles and Craven, 2008;Shen et al., 2018), parsing (Hwa, 2004), semantic role labeling (Wang et al., 2017;Myers and Palmer, 2021) and machine translation (Haffari et al., 2009;Zeng et al., 2019). While most previous work adopt FA, that is, annotating full structured objects for the inputs, PA can help to further reduce the annotation cost. Typical examples of PA sub-structures include tokens and subsequences for tagging (Marcheggiani and Artières, 2014;Chaudhary et al., 2019;Radmard et al., 2021), word-wise head edges for dependency parsing (Flannery and Mori, 2015;Li et al., 2016) and mention links for coreference resolution (Li et al., 2020;Espeland et al., 2020)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we investigate better AL strategies for structured prediction in NLP, adopting a performance estimator to automatically decide suitable ratios for partial sub-structure selection and utilizing self-training to make better use of the available unlabeled data pool. With comprehensive experiments on various tasks, we show that the combination of PA and self-training can be more dataefficient than strong full AL baselines." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b45" ], "table_ref": [], "text": "This work has several limitations. First, the AL experiments in this work are based on simulations with existing annotations, following previous AL work. Our error estimator also requires a small development set and the proper setting of a hyperparameter. 
Nevertheless, we tried our best to make the settings practical and the evaluation fair, especially taking reading time into consideration. Second, in our experiments, we mainly focus on investigating how much data is needed to reach the fullysupervised results and continue the AL cycles until this happens. In practice, it may be interesting to more carefully examine the early AL stages, where most of the performance improvements happen. Finally, for the IE tasks with multiple output types, we mainly focus on the second relational sub-task and adopt a simple weighting setting to combine the uncertainties of the two sub-tasks. More explorations on the dynamic balancing of the two subtasks in pipelined models (Roth and Small, 2008) would be an interesting direction for future work." }, { "figure_ref": [], "heading": "A More Details of Task Settings", "publication_ref": [ "b5", "b20", "b54", "b32", "b10", "b24", "b24" ], "table_ref": [], "text": "A.1 DPAR • Model. Similar to NER, we utilize a BERTbased module to provide contextualized representations. We further stack a standard firstorder non-projective graph-based parsing module based on a biaffine scorer (Dozat and Manning, 2017). The marginals for each token's head decision can be feasibly calculated by the Matrix-Tree algorithm (Koo et al., 2007;Smith and Smith, 2007;McDonald and Satta, 2007).\n• Query and Selection. Following previous works (Flannery and Mori, 2015;Li et al., 2016), we view DPAR as a head-word finding problem and regard each token and its head decision as the sub-structure unit. In this case, the query and selection for DPAR are almost identical to the NER task because of this token-wise decision scheme. Therefore, the same AL strategies in NER can be adopted here.\n• Annotation. In DPAR, there are no special spanbased annotations as in NER; thus, we simply annotate in a word-based scheme.\n• Model learning. Similar to NER, we adopt the log-likelihood of the gold parse tree as the training loss in FA and marginalized likelihood in PA (Li et al., 2016)." }, { "figure_ref": [], "heading": "A.2 IE", "publication_ref": [ "b46", "b57", "b40", "b61", "b42", "b25" ], "table_ref": [], "text": "• Tasks. We tackle event extraction (EE) and relation extraction (RE) using a two-step pipelined approach. The first step aims to extract entity mentions for RE, and entity mentions and event triggers for EE. We adopt sequence labeling for mention extractions as in the NER task. Based on the mentions extracted in the first step, the second step examines each feasible candidate mention pair (entity pair for RE and event-entity pair for EE) and decides the relation (entity relation for RE and event argument relation for EE) for them. Since event argument links can be regarded as relations between event triggers and entities, for simplicity we will use the relational sub-task to refer to both relation and argument extraction.\n• Model. We adopt a multi-task model similar to the one utilized in (Rotman and Reichart, 2022). With a pre-trained encoder, we take the first N layers as the shared encoding module whose output representations are used for both sub-tasks. Each sub-task further adopts a private encoder that is initialized with the remaining pre-trained layers and is trained with task-specific signals.\nWe simply set N to 6, while the results are generally not sensitive to this hyper-parameter. Final task-specific predictors are further stacked upon the corresponding private encoders. 
We adopt a CRF layer for mention extraction and a pairwise local predictor with a biaffine scorer for relation or argument extraction.\n• Sentence selection. For an unlabeled sentence, there is an uncertainty score for each sub-task.\nFor mentions, the uncertainty is the average margin as in the NER task. For relations, we find that averaging uncertainties over all mention pairs has a bias towards sentences with fewer mentions.\nTo mitigate such bias, we first aggregate an uncertainty score for each mention by taking the maximum score within all the relations that link to it and then averaging over all the mentions for sentence-level scores. Finally, the scores of the two sub-tasks are linearly combined to form the sentence-level uncertainty.\n• Partial selection. For PA selection, the two subtasks are handled separately according to the adaptive ratio scheme. We further adopt two heuristics for the relational task to compensate for errors in the mention extraction. First, since there can be over-predicted mentions that lead to discarded relation queries, we adjust the PA ratio by estimating how many candidate relations contain such errors in the mentions. We again train a logistic regression model to predict whether a token is NIL (or 'O' in the BIO scheme, meaning not contained inside any gold mentions) based on its NIL probability. Then for each candidate relation, we calculate the probability that any token within its mentions is NIL.\nBy averaging this probability of all the candidates, we obtain a rough estimation of the percentage of problematic relations, which we call it α. Finally the PA selection ratio is adjusted by: r adjust = α•r problem +(1-α)•r origin . Here, r origin denotes the original selection ratio obtained from the adaptive scheme, and r problem denotes the selection ratio of problematic relations, which we conservatively set to 1. Secondly, since there can also be under-predicted mentions, we add a • Annotation. The annotation of the mentions is the same as in the NER task, while for the annotation of relational queries, their mentions are first examined and corrected if needed, as explained in §3.4. We measure the labeling cost by the final annotated items; thus, these extra examined mentioned will also be properly counted. (Tjong Kim Sang and De Meulder, 2003) for NER, the English Web Treebank (EWT) from Universal Dependencies 12 v2.10 (Nivre et al., 2020) for DPAR, and English portion of ACE2005 13 (Walker et al., 2006) for IE. We utilize Stanza 14 (Qi et al., 2020) to assign POS tags for cost measurement in NER (Lin et al., 2020)." }, { "figure_ref": [], "heading": "C Details of Algorithms", "publication_ref": [ "b21", "b0", "b20", "b54", "b32", "b59", "b24", "b11", "b59", "b63" ], "table_ref": [], "text": "In this section, we provide more details of the algorithms for CRF-styled models (Lafferty et al., 2001). For an input instance x (for example, a sentence), the model assigns a globally normalized probability to each possible output structured object y (for example, a tag sequence or a parse tree) in the target space Y:\np(y|x) = exp s(y|x) y ′ ∈Y exp s(y ′ |x) = exp f ∈y s(f |x) y ′ ∈Y f ′ ∈y ′ s(f ′ |x)\nHere, s(y|x) denotes the un-normalized raw scores assigned to y, which is further factorized into the sum of the sub-structure scores s(f |x). 
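To make the CRF quantities above concrete, the following is a minimal single-sentence PyTorch sketch for the linear-chain case, ignoring batching and start/end transitions. It computes the log-partition with the forward algorithm and uses the negative-infinity masking idea mentioned later in this appendix to obtain the marginalized likelihood under partial annotations; the tensor shapes are assumptions for illustration.

```python
# Minimal linear-chain CRF sketch (single sentence, no batching):
# emissions[t, k] is the unary score of label k at position t and
# transitions[j, k] is the score of moving from label j to label k.
import torch

def crf_log_partition(emissions, transitions):
    # forward algorithm in log space; returns log Z(x)
    T, L = emissions.shape
    alpha = emissions[0]                                              # [L]
    for t in range(1, T):
        # alpha_k = logsumexp_j(alpha_j + transitions[j, k]) + emissions[t, k]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) + emissions[t]
    return torch.logsumexp(alpha, dim=0)

def partial_marginal_nll(emissions, transitions, allowed_mask):
    # allowed_mask[t, k] is True for labels consistent with the partial annotation
    # (all True at un-annotated positions). The loss is
    # -log sum_{y in Y_C} p(y|x) = log Z(x) - log Z_C(x),
    # where Z_C is computed after masking disallowed labels with a very low score.
    very_low = torch.finfo(emissions.dtype).min / 2
    constrained = emissions.masked_fill(~allowed_mask, very_low)
    return crf_log_partition(emissions, transitions) - crf_log_partition(constrained, transitions)
```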
16 In plain likelihood training for CRF, we take the negative log-probability as the training objective:\nL = -log p(y|x) = -s(y|x) + log y ′ ∈Y exp s(y ′ |x)\nFor brevity, in the remaining, we use log Z(x) to denote the second term of the log partition function.\nFor model training, we need to calculate the gradients of the model parameters θ to the loss function.\nThe first item is easy to deal with since it only involves one structured object, while log Z(x) needs some reorganization according to the factorization:\n∇ θ log Z = y ′ ∈Y exp s(y ′ |x)∇ θ s(y ′ |x) y ′′ ∈Y exp s(y ′′ |x) = y ′ ∈Y p(y ′ |x)∇ θ s(y ′ |x) = y ′ ∈Y p(y ′ |x) f ′ ∈y ′ ∇ θ s(f ′ |x) = f ′ ∇ θ s(f ′ |x) y ′ ∈Y f ′ p(y ′ |x)\nThe last step is obtained by swapping the order of the two summations, and finally, the problem is reduced to calculating each sub-structure's marginal probability y ′ ∈Y f ′ p(y ′ |x). Here, Y f ′ denotes all the output structured objects that contain the sub-structure f ′ , and the marginals can usually be calculated by classical structured prediction algorithms such as forward-backward for sequence 16 Such as unary and pairwise scores for sequence labeling or token-wise edge scores for dependency parsing. labeling (Baum et al., 1970) or Matrix-tree for non-projective dependency parsing (Koo et al., 2007;Smith and Smith, 2007;McDonald and Satta, 2007).\nLearning with incomplete annotations. Following previous works (Tsuboi et al., 2008;Li et al., 2016;Greenberg et al., 2018), for the instances with incomplete annotations, we utilize the logarithm of the marginal likelihood as the learning objective:\nL = -log y∈Y C p(y|x) = -log y∈Y C exp s(y|x) y∈Y exp s(y|x) = -log y∈Y C exp s(y|x) + log Z(x)\nHere, Y C denotes the constrained set of the output objects that agree with the existing partial annotations. In this objective function, the second item is exactly the same as in standard CRF, while the first one can be calculated 17 in a modified way (Tsuboi et al., 2008). Knowledge distillation. As described in the main context, we adopt the knowledge distillation objective for self-training with soft labels. For brevity, we denote the probabilities from the last model as p ′ (y|x) and keep using p(y|x) to denote the ones from the current model. Following Wang et al. (2021), the loss can be calculated by:\nL = - y∈Y p ′ (y|x) log p(y|x) = - y∈Y p ′ (y|x)s(y|x) + log Z(x) = - y∈Y p ′ (y|x) f ′ ∈y ′ s(f ′ |x) + log Z(x) = - f ′ s(f ′ |x) y ′ ∈Y f ′ p ′ (y ′ |x) + log Z(x)\nThe loss function is broken down into two items whose gradients can be obtained by calculating marginals according to the last model or the current one, respectively. 17 In our implementation, we adopt a simple method to enforce the constraints by adding negative-infinite to the scores of the impossible labels. In this case, the structures that violates the constraints will have a score of negative-infinite (and a probability of zero) and will thus be excluded. " }, { "figure_ref": [], "heading": "D Extra Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1 Using Different Acquisition Functions", "publication_ref": [ "b17" ], "table_ref": [], "text": "In the main experiments, our acquisition function is based on margin-based uncertainty, that is, selecting the instances that have the largest marginal differences between the most and second-most confident predictions. 
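For reference, the sub-structure-level acquisition scores compared in this appendix can be computed from per-sub-structure marginal distributions as in the NumPy sketch below (BALD is omitted since it additionally requires Monte-Carlo dropout samples); the array layout is an assumption for illustration.

```python
# Illustrative acquisition scores over per-sub-structure marginal distributions.
# marginals: array of shape [num_substructures, num_labels], rows summing to 1.
import numpy as np

def margin_scores(marginals):
    # difference between the most and second-most likely label;
    # smaller margins are treated as more uncertain
    top2 = np.sort(marginals, axis=-1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def least_confidence_scores(marginals):
    # 1 - p(most likely label); larger means more uncertain
    return 1.0 - marginals.max(axis=-1)

def entropy_scores(marginals, eps=1e-12):
    # Shannon entropy of the marginal distribution; larger means more uncertain
    return -(marginals * np.log(marginals + eps)).sum(axis=-1)

def sentence_score(substructure_scores):
    # length-normalized aggregation used for sentence-level selection
    return float(np.mean(substructure_scores))
```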
Here, we compare it with various other acquisition functions, including leastconfident (-LC), max-entropy (-E) and BALD (-B) (Houlsby et al., 2011). We take DPAR as the studying case and the results for full annotation and partial annotation are shown in Figure 8 and 7, respectively. Generally, there are no large differences between the adopted querying methods and the margin-based method can obtain the overall best results. Notice that regardless of the adopted acquisition function, we can see the effectiveness of our partial selection scheme: it requires lower labeling cost than full annotation to reach the upper bound. This shows that our method is extensible to different AL querying methods and it will be interesting to explore the combination of our method with more complex and advanced acquisition func-tions, such as those considering representativeness." }, { "figure_ref": [ "fig_1" ], "heading": "D.2 IE Experiments", "publication_ref": [ "b45" ], "table_ref": [], "text": "In this section, we present more results of the IE experiments. First, Figure 9 shows the mention extraction results for the event extraction task. The overall trends are very similar to those in NER: PA can obtain similar results to FA with the same reading texts and less mention labeling cost. In Figure 10, we show the results for mention and relation extractions. In the ACE dataset, relations are very sparsely annotated, and around 97% of the entities are linked with less or equal to two relations. Considering this fact, we measure the cost of FA relation extraction by two times the annotated entities, while PA still counts the number of the queried relations. The relation results are similar to the patterns for event argument extraction, showing the benefits of selecting and annotating with partial sub-structures. Notice that in some of the mention extraction results, there seems to be less obvious differences between the AL strategies over the random baseline. This may be due to our focus on the second sub-task for relations (or event arguments), directly reflected by its high weight (β) in calculating sentence uncertainty. It will be interesting to explore better ways to enhance both sub-tasks, probably with an adaptive combination scheme (Roth and Small, 2008)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "(argument results are shown in Figure 6). " } ]
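A small sketch of the sentence-level combination of the two IE sub-task uncertainties referred to above, UNC(x) = β · UNC-Mention(x) + (1 - β) · UNC-Relation(x), with the relation side aggregated per mention (maximum over linked relations, then averaging over mentions) as described in the appendix. The containers, the aggregation details, and any concrete value of β are illustrative assumptions, not the paper's code.

```python
# Combine mention- and relation-level uncertainties into one sentence-level score.
def sentence_uncertainty(mention_unc, relation_unc, beta):
    """mention_unc: list of per-mention uncertainty scores.
    relation_unc: dict mapping a mention-index pair (i, j) to the relation uncertainty."""
    unc_mention = sum(mention_unc) / max(len(mention_unc), 1)
    per_mention = []
    for i in range(len(mention_unc)):
        linked = [u for (a, b), u in relation_unc.items() if i in (a, b)]
        if linked:                      # most uncertain relation linked to mention i
            per_mention.append(max(linked))
    unc_relation = sum(per_mention) / max(len(per_mention), 1)
    return beta * unc_mention + (1 - beta) * unc_relation
```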
In this work, we propose a pragmatic method that reduces the annotation cost for structured label spaces using active learning. Our approach leverages partial annotation, which reduces labeling costs for structured outputs by selecting only the most informative sub-structures for annotation. We also utilize self-training to incorporate the current model's automatic predictions as pseudo-labels for unannotated sub-structures. A key challenge in effectively combining partial annotation with self-training to reduce annotation cost is determining which sub-structures to select for labeling. To address this challenge, we adopt an error estimator to adaptively decide the partial selection ratio according to the current model's capability. In evaluations spanning four structured prediction tasks, we show that our combination of partial annotation and self-training using an adaptive selection ratio reduces annotation cost over strong full annotation baselines under a fair comparison scheme that takes reading time into consideration.
Data-efficient Active Learning for Structured Prediction with Partial Annotation and Self-Training
[ { "figure_caption": "Figure 1 :1Figure1: Example partial annotations of a dependency tree. Manual annotation is requested only for the uncertain sub-structures (red), whereas model predictions can be used to annotate the highly-confident edges (blue).", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Comparisons according to reading and labeling cost. Each node indicates one AL cycle. For x-axis, reading cost (left) is measured by token numbers, while labeling cost (right) is task-specific ( §3.3). NER is evaluated with labeled F1 scores on CoNLL-2003, while DPAR is with LAS scores on UD-EWT. Results are averaged over five runs with different seeds, and the shaded areas indicate standard deviations. The overall unlabeled pool contains around 200K tokens. Using AL, good performance can be obtained with less than 30% (60K) annotated.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Comparisons of different strategies to decide the partial ratio. The first three utilize fixed ratio r, while \"Adaptive\" adopts the dynamic scheme. The grey curve (corresponding to the right y-axis) denotes the actual selection ratios with the adaptive scheme.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Results of event argument extraction on ACE05. Notations are the same as in Figure 2.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :Figure 8 :78Figure 7: Comparisons of different acquisition functions for partial annotation: \"-M\" denotes margin-based, \"-LC\" denotes least-confident, \"-E\" denotes entropy-based, and \"-B\" indicates BALD.", "figure_data": "", "figure_id": "fig_5", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "and mention tasks. We followLin et al. (2020) for the pre-processing 15 of the ACE dataset. For the IE tasks on ACE, we find that the conventional test set contains only newswire documents while the training set consists of various genres (such as from conversation and web). Such mismatches between the AL pool and the final testing set are nontrivial to handle with the classical AL protocol, and we thus randomly re-split the ACE dataset (with a ratio of 7:1:2 for training, dev, and test sets, respectively). Table1shows data statistics. For each AL experiment, we take the original training set as the unlabeled pool, down-sample a dev set from the original dev set, and evaluate on the full test set.", "figure_data": "More Settings. All of our models are based onthe pre-trained RoBERTa base as the contextual-ized encoder. We further fine-tune it with thetask-specific decoder in all the experiments. Thenumber of model parameters is roughly 124M forsingle-output tasks and around 186M for multi-taskIE tasks. For other hyper-parameter settings, wemostly follow common practices. Adam is utilizedfor optimization, with an initial learning rate of 1e-5 for NER and 2e-5 for DPAR and IE. The learningrate is linearly decayed to 10% of the initial valuethroughout the training process. The models aretuned for 10K steps with a batch size of roughly512 tokens. We evaluate the model on the dev setevery 1K steps to choose the best checkpoint. Theexperiments are run with one 2080Ti GPU. 
The training of one AL cycle usually takes only one or two hours, and the full simulation of one AL run can be finished within one day. We adopt standard evaluation metrics for the tasks: labeled F1 score for NER, labeled attachment score (LAS) for DPAR, and labeled argument and relation F1 scores for event arguments and relations. Footnotes: 11 https://www.clips.uantwerpen.be/conll2003/ner/ 12 https://universaldependencies.org/ 13 https://catalog.ldc.upenn.edu/LDC2006T06 14 https://stanfordnlp.github.io/stanza/", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Zhisong Zhang; Emma Strubell; Eduard Hovy
[ { "authors": "Leonard E Baum; Ted Petrie; George Soules; Norman Weiss", "journal": "The annals of mathematical statistics", "ref_id": "b0", "title": "A maximization technique occurring in the statistical analysis of probabilistic functions of markov chains", "year": "1970" }, { "authors": "Aditi Chaudhary; Jiateng Xie; Zaid Sheikh; Graham Neubig; Jaime Carbonell", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "A little annotation does a lot of good: A study in bootstrapping lowresource named entity recognizers", "year": "2019" }, { "authors": "Yiming Chen; Yan Zhang; Chen Zhang; Grandee Lee; Ran Cheng; Haizhou Li", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Revisiting selftraining for few-shot learning of language model", "year": "2021" }, { "authors": "Leon Derczynski; Kalina Bontcheva; Ian Roberts", "journal": "", "ref_id": "b3", "title": "Broad Twitter corpus: A diverse named entity recognition resource", "year": "2016" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Timothy Dozat; Christopher D Manning", "journal": "", "ref_id": "b5", "title": "Deep biaffine attention for neural dependency parsing", "year": "2017" }, { "authors": "Jingfei Du; Edouard Grave; Beliz Gunel; Vishrav Chaudhary; Onur Celebi; Michael Auli; Veselin Stoyanov; Alexis Conneau", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Self-training improves pre-training for natural language understanding", "year": "2021" }, { "authors": "Beatrice Espeland; Benjamin Alex; Bach", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Enhanced labelling in active learning for coreference resolution", "year": "2020" }, { "authors": "R Eraldo; Ulf Fernandes; Brefeld", "journal": "Springer", "ref_id": "b8", "title": "Learning from partially annotated sequences", "year": "2011" }, { "authors": "Daniel Flannery; Yusuke Miayo; Graham Neubig; Shinsuke Mori", "journal": "Asian Federation of Natural Language Process", "ref_id": "b9", "title": "Training dependency parsers from partially annotated corpora", "year": "2011" }, { "authors": "Daniel Flannery; Shinsuke Mori", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Combining active learning and partial annotation for domain adaptation of a Japanese dependency parser", "year": "2015" }, { "authors": "Nathan Greenberg; Trapit Bansal; Patrick Verga; Andrew Mccallum", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Marginal likelihood training of BiLSTM-CRF for biomedical named entity recognition from disjoint label sets", "year": "2018" }, { "authors": "Chuan Guo; Geoff Pleiss; Yu Sun; Kilian Q Weinberger", "journal": "", "ref_id": "b12", "title": "On calibration of modern neural networks", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b13", "title": "", "year": "" }, { "authors": "Gholamreza Haffari; Maxim Roy; Anoop Sarkar", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Active learning for statistical phrase-based machine translation", "year": "2009" }, { "authors": "Junxian He; Jiatao Gu; Jiajun Shen; Marc'aurelio Ranzato", "journal": "", "ref_id": "b15", "title": "Revisiting self-training for neural 
sequence generation", "year": "2020" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b16", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Neil Houlsby; Ferenc Huszár; Zoubin Ghahramani; Máté Lengyel", "journal": "", "ref_id": "b17", "title": "Bayesian active learning for classification and preference learning", "year": "2011" }, { "authors": "Jiaxin Huang; Chunyuan Li; Krishan Subudhi; Damien Jose; Shobana Balakrishnan; Weizhu Chen; Baolin Peng; Jianfeng Gao; Jiawei Han", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Fewshot named entity recognition: An empirical baseline study", "year": "2021" }, { "authors": "Rebecca Hwa", "journal": "Computational Linguistics", "ref_id": "b19", "title": "Sample selection for statistical parsing", "year": "2004" }, { "authors": "Terry Koo; Amir Globerson; Xavier Carreras; Michael Collins", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Structured prediction models via the matrix-tree theorem", "year": "2007" }, { "authors": "Andrew John D Lafferty; Fernando Cn Mccallum; Pereira", "journal": "", "ref_id": "b21", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "year": "2001" }, { "authors": "Belinda Z Li; Gabriel Stanovsky; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Active learning for coreference resolution using discrete annotation", "year": "2020" }, { "authors": "Yangming Li; Lemao Liu; Shuming Shi", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Rethinking negative sampling for handling missing entity annotations", "year": "2022" }, { "authors": "Zhenghua Li; Min Zhang; Yue Zhang; Zhanyi Liu; Wenliang Chen; Hua Wu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Active learning for dependency parsing with partial annotation", "year": "2016" }, { "authors": "Ying Lin; Heng Ji; Fei Huang; Lingfei Wu", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "A joint neural model for information extraction with global features", "year": "2020" }, { "authors": "Yijia Liu; Yi Zhu; Wanxiang Che; Bing Qin; Nathan Schneider; Noah A Smith", "journal": "", "ref_id": "b26", "title": "Parsing tweets into Universal Dependencies", "year": "2018" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b27", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Saeed Majidi; Gregory Crane", "journal": "Assocation for Computational Linguistics", "ref_id": "b28", "title": "Active learning for dependency parsing by a committee of parsers", "year": "2013" }, { "authors": "Diego Marcheggiani; Thierry Artières", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "An experimental comparison of active learning strategies for partially labeled sequences", "year": "2014" }, { "authors": "Stephen Mayhew; Snigdha Chaturvedi; Chen-Tse Tsai; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Named entity recognition with partially annotated training data", "year": "2019" }, { "authors": "David Mcclosky; Eugene Charniak; Mark Johnson", "journal": "Association for 
Computational Linguistics", "ref_id": "b31", "title": "Effective self-training for parsing", "year": "2006" }, { "authors": "Ryan Mcdonald; Giorgio Satta", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "On the complexity of non-projective data-driven dependency parsing", "year": "2007" }, { "authors": "Sanket Vaibhav Mehta; Jinfeng Rao; Yi Tay; Mihir Kale; Ankur Parikh; Emma Strubell", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Improving compositional generalization with self-training for data-to-text generation", "year": "2022" }, { "authors": "Yu Meng; Yunyi Zhang; Jiaxin Huang; Xuan Wang; Yu Zhang; Ji Heng; Jiawei Han", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Distantlysupervised named entity recognition with noiserobust learning and language model augmented selftraining", "year": "2021" }, { "authors": "Abolghasem Seyed; Alexis Mirroshandel; Nasr", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Active learning for dependency parsing using partially annotated sentences", "year": "2011" }, { "authors": "Skatje Myers; Martha Palmer", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Tuning deep active learning for semantic role labeling", "year": "2021" }, { "authors": "Graham Neubig; Shinsuke Mori", "journal": "European Language Resources Association (ELRA)", "ref_id": "b37", "title": "Word-based partial annotation for efficient corpus construction", "year": "2010" }, { "authors": "Qiang Ning; Hangfeng He; Chuchu Fan; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Partial or complete, that's the question", "year": "2019" }, { "authors": "Qiang Ning; Zhongzhi Yu; Chuchu Fan; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Exploiting partially annotated data in temporal relation extraction", "year": "2018" }, { "authors": "Joakim Nivre; Marie-Catherine De Marneffe; Filip Ginter; Jan Hajič; Christopher D Manning; Sampo Pyysalo; Sebastian Schuster; Francis Tyers; Daniel Zeman", "journal": "European Language Resources Association", "ref_id": "b40", "title": "Universal Dependencies v2: An evergrowing multilingual treebank collection", "year": "2020" }, { "authors": "Matthew E Peters; Mark Neumann; Mohit Iyyer; Matt Gardner; Christopher Clark; Kenton Lee; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Deep contextualized word representations", "year": "2018" }, { "authors": "Peng Qi; Yuhao Zhang; Yuhui Zhang; Jason Bolton; Christopher D Manning", "journal": "", "ref_id": "b42", "title": "Stanza: A python natural language processing toolkit for many human languages", "year": "2020" }, { "authors": "Puria Radmard; Yassir Fathullah; Aldo Lipani", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Subsequence based deep active learning for named entity recognition", "year": "2021" }, { "authors": "Roi Reichart; Katrin Tomanek; Udo Hahn; Ari Rappoport", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Multi-task active learning for linguistic annotations", "year": "2008" }, { "authors": "Dan Roth; Kevin Small", "journal": "", "ref_id": "b45", "title": "Active learning for pipeline models", "year": "2008" }, { "authors": "Guy Rotman; Roi Reichart", "journal": "Transactions of the Association for 
Computational Linguistics", "ref_id": "b46", "title": "Multi-task active learning for pre-trained transformer-based models", "year": "2022" }, { "authors": "Manabu Sassano; Sadao Kurohashi", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Using smaller constituents rather than sentences in active learning for Japanese dependency parsing", "year": "2010" }, { "authors": "Tobias Scheffer; Christian Decomain; Stefan Wrobel", "journal": "Springer", "ref_id": "b48", "title": "Active hidden markov models for information extraction", "year": "2001" }, { "authors": "Burr Settles", "journal": "", "ref_id": "b49", "title": "Active learning literature survey", "year": "2009" }, { "authors": "Burr Settles; Mark Craven", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "An analysis of active learning strategies for sequence labeling tasks", "year": "2008" }, { "authors": "Burr Settles; Mark Craven; Lewis Friedland", "journal": "", "ref_id": "b51", "title": "Active learning with real annotation costs", "year": "2008" }, { "authors": "Dan Shen; Jie Zhang; Jian Su; Guodong Zhou; Chew-Lim Tan", "journal": "", "ref_id": "b52", "title": "Multi-criteria-based active learning for named entity recognition", "year": "2004" }, { "authors": "Yanyao Shen; Hyokun Yun; Zachary C Lipton; Yakov Kronrod; Animashree Anandkumar", "journal": "", "ref_id": "b53", "title": "Deep active learning for named entity recognition", "year": "2018" }, { "authors": "David A Smith; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "Probabilistic models of nonprojective dependency trees", "year": "2007" }, { "authors": "A Noah; Smith", "journal": "Synthesis lectures on human language technologies", "ref_id": "b55", "title": "Linguistic structure prediction", "year": "2011" }, { "authors": "Oscar Täckström; Dipanjan Das; Slav Petrov; Ryan Mc-Donald; Joakim Nivre", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b56", "title": "Token and type constraints for cross-lingual part-of-speech tagging", "year": "2013" }, { "authors": "Erik F Tjong; Kim Sang; Fien De; Meulder ", "journal": "", "ref_id": "b57", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "year": "2003" }, { "authors": "Katrin Tomanek; Udo Hahn", "journal": "Association for Computational Linguistics", "ref_id": "b58", "title": "Semi-supervised active learning for sequence labeling", "year": "2009" }, { "authors": "Yuta Tsuboi; Hisashi Kashima; Shinsuke Mori; Hiroki Oda; Yuji Matsumoto", "journal": "", "ref_id": "b59", "title": "Training conditional random fields using incomplete annotations", "year": "2008" }, { "authors": "Tu Vu; Minh-Thang Luong; Quoc Le; Grady Simon; Mohit Iyyer", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "STraTA: Self-training with task augmentation for better few-shot learning", "year": "2021" }, { "authors": "Christopher Walker; Stephanie Strassel; Julie Medero; Kazuaki Maeda", "journal": "Linguistic Data Consortium", "ref_id": "b61", "title": "ACE 2005 multilingual training corpus", "year": "2006" }, { "authors": "Chenguang Wang; Laura Chiticariu; Yunyao Li", "journal": "", "ref_id": "b62", "title": "Active learning for black-box semantic role labeling with neural factors", "year": "2017" }, { "authors": "Xinyu Wang; Yong Jiang; Zhaohui Yan; Zixia Jia; Nguyen Bach; Tao Wang; Zhongqiang Huang; Fei Huang; Kewei 
Tu", "journal": "Association for Computational Linguistics", "ref_id": "b63", "title": "Structural knowledge distillation: Tractably distilling information for structured predictor", "year": "2021" }, { "authors": "Dittaya Wanvarie; Hiroya Takamura; Manabu Okumura", "journal": "Information and Media Technologies", "ref_id": "b64", "title": "Active learning with subsequence sampling strategy for sequence labeling tasks", "year": "2011" }, { "authors": "Fan Yang; Paul Vozila", "journal": "Association for Computational Linguistics", "ref_id": "b65", "title": "Semi-supervised Chinese word segmentation using partial-label learning with conditional random fields", "year": "2014" }, { "authors": "Zhilin Yang; Zihang Dai; Yiming Yang; Jaime Carbonell; Russ R Salakhutdinov; Quoc V Le", "journal": "Advances in neural information processing systems", "ref_id": "b66", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "year": "2019" }, { "authors": "David Yarowsky", "journal": "Association for Computational Linguistics", "ref_id": "b67", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "year": "1995" }, { "authors": "Yue Yu; Lingkai Kong; Jieyu Zhang; Rongzhi Zhang; Chao Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b68", "title": "AcTune: Uncertainty-based active self-training for active fine-tuning of pretrained language models", "year": "2022" }, { "authors": "Xiangkai Zeng; Sarthak Garg; Rajen Chatterjee; Udhyakumar Nallasamy; Matthias Paulik", "journal": "Association for Computational Linguistics", "ref_id": "b69", "title": "Empirical evaluation of active learning techniques for neural MT", "year": "2019" }, { "authors": "Hua Zhu; Wu Ye; Sihan Luo; Xidong Zhang", "journal": "", "ref_id": "b70", "title": "A multitask active learning framework for natural language understanding", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 108.04, 144.07, 143.93, 29.64 ], "formula_id": "formula_0", "formula_text": "r = 1 - 1 n s∈Q p(correct = 1|s)" }, { "formula_coordinates": [ 4, 342.81, 275.21, 144.94, 22.26 ], "formula_id": "formula_1", "formula_text": "L = - y∈Y p m ′ (y|x) log p m (y|x)" }, { "formula_coordinates": [ 6, 88.75, 76.58, 406.04, 238.58 ], "formula_id": "formula_2", "formula_text": ") 1(55HDGLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 6XSHU 6XEVWUXFWXUH&RXQW ) 1(5/DEHOLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 6XSHU 7RNHQ&RXQW /$6 '3$55HDGLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 6XSHU 6XEVWUXFWXUH&RXQW /$6 '3$5/DEHOLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 6XSHU" }, { "formula_coordinates": [ 7, 79.06, 76.58, 189.56, 238.58 ], "formula_id": "formula_3", "formula_text": "/$6 '3$55HDGLQJ&RVW U U U $GDSWLYH 6XSHU 6XEVWUXFWXUH&RXQW /$6 U U U $GDSWLYH 6XSHU '3$5/DEHOLQJ&RVW" }, { "formula_coordinates": [ 8, 87.39, 76.58, 406.14, 238.58 ], "formula_id": "formula_4", "formula_text": "7RNHQ&RXQW ) 1(55HDGLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 1R7UDQVIHU 6XSHU 6XEVWUXFWXUH&RXQW ) 1(5/DEHOLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 1R7UDQVIHU 6XSHU 7RNHQ&RXQW /$6 '3$55HDGLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 1R7UDQVIHU 6XSHU 6XEVWUXFWXUH&RXQW /$6 '3$5/DEHOLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 1R7UDQVIHU 6XSHU" }, { "formula_coordinates": [ 8, 147.12, 367.57, 346.3, 113.12 ], "formula_id": "formula_5", "formula_text": "(YHQW$UJXPHQW5HDGLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 6XSHU 6XEVWUXFWXUH&RXQW ) (YHQW$UJXPHQW/DEHOLQJ&RVW 5DQG 5DQG67 )$ )$67 3$ 3$67 6XSHU" }, { "formula_coordinates": [ 8, 94.33, 651.39, 171.33, 26.35 ], "formula_id": "formula_6", "formula_text": "UNC(x) = β • UNC-Mention(x) + (1 -β) • UNC-Relation(x)" }, { "formula_coordinates": [ 16, 107.36, 197.35, 144.08, 60.54 ], "formula_id": "formula_7", "formula_text": "p(y|x) = exp s(y|x) y ′ ∈Y exp s(y ′ |x) = exp f ∈y s(f |x) y ′ ∈Y f ′ ∈y ′ s(f ′ |x)" }, { "formula_coordinates": [ 16, 100.4, 346.31, 159.19, 41.91 ], "formula_id": "formula_8", "formula_text": "L = -log p(y|x) = -s(y|x) + log y ′ ∈Y exp s(y ′ |x)" }, { "formula_coordinates": [ 16, 87.41, 501.08, 183.99, 125.97 ], "formula_id": "formula_9", "formula_text": "∇ θ log Z = y ′ ∈Y exp s(y ′ |x)∇ θ s(y ′ |x) y ′′ ∈Y exp s(y ′′ |x) = y ′ ∈Y p(y ′ |x)∇ θ s(y ′ |x) = y ′ ∈Y p(y ′ |x) f ′ ∈y ′ ∇ θ s(f ′ |x) = f ′ ∇ θ s(f ′ |x) y ′ ∈Y f ′ p(y ′ |x)" }, { "formula_coordinates": [ 16, 329.35, 230.89, 171.86, 90.01 ], "formula_id": "formula_10", "formula_text": "L = -log y∈Y C p(y|x) = -log y∈Y C exp s(y|x) y∈Y exp s(y|x) = -log y∈Y C exp s(y|x) + log Z(x)" }, { "formula_coordinates": [ 16, 315.8, 528.34, 198.95, 120.99 ], "formula_id": "formula_11", "formula_text": "L = - y∈Y p ′ (y|x) log p(y|x) = - y∈Y p ′ (y|x)s(y|x) + log Z(x) = - y∈Y p ′ (y|x) f ′ ∈y ′ s(f ′ |x) + log Z(x) = - f ′ s(f ′ |x) y ′ ∈Y f ′ p ′ (y ′ |x) + log Z(x)" } ]
10.48550/arXiv.2304.02643
2023-06-06
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b4", "b8", "b10", "b7", "b9", "b7", "b14", "b17", "b18", "b19", "b20", "b22" ], "table_ref": [], "text": "C AMOUFLAGE is widespread in nature, as wildlife ani- mals often adapt their patterns and colors to blend into their surroundings or mimic other objects for concealment purposes [1]. Through the use of camouflage, prey animals can effectively hinder the detection or recognition by their predators, thereby enhancing their protection [2]. Furthermore, the concept of camouflaged objects extends beyond wildlife, encompassing entities that closely resemble the background or are heavily obscured, such as polyps or soldiers utilizing camouflage techniques. Within this context, the field of camouflaged object detection (COD) has garnered significant research attention, enabling a range of crucial applications including lung infection segmentation [3], polyp segmentation [4], species identification [5], and military camouflage pattern design [6].\nRecently, with the advancement of large-scale benchmark datasets [9]- [11], a plethora of COD [8], [10], [12]- [14] methods have been proposed and have exhibited exceptional performance in diverse and intricate scenarios. In general, the detection process of these models can be summarized into , FAPNet [7] and PreyNet [8], our method preserves better details and is competent to distinguish the entire target from the background.\nthree steps: 1) resizing the input image to a reduced resolution, 2) leveraging a pretrained backbone to extract features at multiple levels, and 3) utilizing a decoder to generate the final result. Despite the effectiveness of this widely adopted framework in addressing various challenging scenarios, there exist certain overlooked flaws that significantly hinder its overall performance. Concretely, in the first step, the majority of COD models employ low-resolution images (e.g., 352 × 352 [7], [15], [16], 416 × 416 [14]) as their input. However, camouflaged objects can be small in size and exhibit fuzzy boundaries. Although the subsampling process reduces training time and computational and memory overhead, it comes at the cost of losing substantial structural information, leading to a decline in performance. A possible remedy for this issue is the utilization of highresolution images. Nevertheless, as highlighted in [17], [18], CNN-based methods have an effective receptive field size much smaller than the theoretical value. Consequently, solely using larger samples would make it challenging for the models to capture large targets accurately. Furthermore, in the last two steps, existing models rely on multiple convolution blocks to process the entire image sequentially. However, a significant portion of the image consists of redundant areas for COD. These extensive background regions not only squander computational resources but also introduce significant noise. As a result, the models may erroneously identify some background objects as camouflaged targets.\nIn this paper, we propose the Bioinspired Three-Stage Net-work (BTSNet) as a solution to address these issues. BTSNet draws inspiration from the human perceptual behavior when examining images containing camouflaged targets. 
Given an indistinct image, the observer's search process can be broadly divided into three sequential steps: 1) scanning the image to identify suspicious regions, 2) zooming in on each identified region to verify the presence of camouflaged objects, and 3) zooming out to determine the precise locations of all targets within the input image. We emulate this process and devise a novel framework that facilitates the generation of detailed outcomes through a coarse-to-fine approach. Specifically, we propose a bifurcated backbone network that incorporates the initial two stages of the original ResNet-50 [19] model as the stem branch. Additionally, we employ the remaining three stages to construct two leaf branches with identical structures. In contrast to [20], we introduce a 2 × 2 maxpooling layers to downsample the output of the stem branch. Subsequently, this downsampled output is fed into the first leaf branch. Features extracted from the first leaf branch are then aggregated by the first decoder to generate a coarse prediction map. Importantly, the primary focus of the first decoder is to precisely locate the target. Hence, it becomes imperative to capture multi-scale information to effectively characterize camouflaged objects of varying sizes. To address this issue, we propose the Multi-scale Feature Enhancement Module (MFEM). MFEM comprises multiple branches, each performing convolution operations within an individual scale space. Furthermore, the output of the preceding branch is combined with the input using element-wise addition before being passed to the subsequent branch to repeat this process. By systematically reducing the kernel size of the pooling layers, we can successfully extract multi-scale cues and significantly expand the receptive field size while retaining important structural information.\nAfter obtaining the initial output of the first decoder, the corresponding foreground features are extracted from the stem branch through cropping and resized to a predetermined resolution. Subsequently, the second leaf branch utilizes these cropped features as input to generate more refined results, while being unaffected by background regions. To improve the overall performance, we introduce the Boundary Enhancement Module (BEM) in this process. The BEM integrates boundary information with the input feature through channel-wise concatenation. Furthermore, attention mechanisms are employed to enhance the representation capability. Through the incorporation of boundary information, the BEM effectively mitigates the problem of blurred contours.\nThe cropping and resizing operations conducted prior to the application of the second decoder have the potential to negatively impact the final performance. Therefore, in the third decoder, we incorporate the stem branch feature and the output of the second decoder as inputs in order to recover the omitted details. As emphasized in [21]- [23], low-level features (specifically, stem branch features) preserve ample details but are prone to background noises. Consequently, a simple integration of features with the generated mask may result in performance degradation. To address this issue, we propose the Mask-Guided Fusion Module (MGFM). The MGFM initially employs the Split-Fusion Module (SFM) for integration. The SFM divides the input feature into multiple groups, each of which is combined with the mask through channel-wise concatenation. 
By sequentially processing the multiple groups of features, we are able to effectively suppress background noises. Subsequently, the MGFM introduces boundary information to further enhance the performance. Consequently, we can generate fine-grained results with well-defined contours.\nTo validate the superiority of our proposed schema and the effectiveness of its key components, we conducted extensive experiments on three benchmark datasets. The experimental results demonstrate that the proposed schema not only generates high-quality results but also significantly reduces computational and memory overhead. Specifically, BTSNet surpasses state-of-the-art CNN-based counterparts by a substantial margin across six universally accepted evaluation metrics. Additionally, BTSNet achieves a processing speed of 50 frames per second (FPS) for 704 × 704 inputs with a batch size of 60, utilizing a single NVIDIA Titan XP GPU.\nIn summary, the main contributions of this paper are fivefold:\n• Inspired by human behavior when searching for camouflaged objects in vague images, a novel schema for COD is proposed. The proposed schema demonstrates the ability to leverage detailed structural information effectively, resulting in superb results while minimizing computational and memory overhead. " }, { "figure_ref": [], "heading": "II. RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Camouflaged object detection", "publication_ref": [ "b21", "b23", "b24", "b25", "b21", "b23", "b24", "b8", "b10", "b9", "b14", "b30", "b31", "b10", "b32", "b33", "b34" ], "table_ref": [], "text": "Similar to many other computer vision tasks (e.g., salient object detection [22], [24], object detection [25]), early traditional COD methods mainly rely on hand-crafted features to spot the camouflaged targets from their high-similarity surroundings [26]- [29]. However, as pointed out in many previous researches [16], [30], these approaches are limited to only using low-level features to locate the targets, which makes these methods unable to distinguish camouflaged objects in complex scenarios, e.g., low contrast, occlusions, and indefinable boundaries.\nSimilar to many other computer vision tasks (e.g., salient object detection [22], [24], object detection [25]), early traditional COD methods primarily rely on hand-crafted features to identify camouflaged targets within their highly similar surroundings. However, as highlighted in numerous prior research studies [16], [30], these approaches are constrained by their utilization of only low-level features for target localization. Consequently, these methods face limitations in effectively discerning camouflaged objects within complex scenarios characterized by factors such as low contrast, occlusions, and ambiguous boundaries.\nRecently, with the development of finely annotated largescale datasets [9]- [11], numerous deep-learning-based COD methods have been proposed, surpassing their traditional counterparts by a significant margin. Fan et al. (SINet) [10] propose a straightforward yet effective framework comprising two main modules for searching and detecting camouflaged targets. Met et al. [15] (PFNet) employ a positioning module to locate potential targets from a global perspective and design a focus module to iteratively refine the coarse prediction map. 
Sun et al.(C 2 FNet) [31] develop an attention-induced crosslevel fusion module to aggregate features from multiple levels, followed by the utilization of a dual-branch global context module to exploit global contextual cues. Ren et al. (TANet) [32] devise multiple texture-aware refinement modules to generate texture-aware features, thereby enhancing the segmentation process by amplifying the subtle texture differences between the camouflaged target and its surroundings. Lv et al. (LSR) [11] incorporate ranking information and propose a ranking-based model for COD. Zhuge et al. (CubeNet) [13] introduce the concept of χ connection and include a sub edge decoder between two square fusion decoders to effectively model the weak contours of the objects. Zhai et al. (DTCNet) [33] develop a deep model that leverages the spatial organization of textons in both background and foreground areas as informative cues for accurate COD. Chou et al. (PINet) [34] devise a cascaded decamouflage module for target detection and obtaining complete segmentation results.\nIn addition to the aforementioned methods, supplementary information has recently been incorporated to enhance performance. Li et al. (FindNet) [35] introduce global contour and local patterns as crucial cues to facilitate the detection process. Chen et al. (BgNet) [30] explicitly exploit the complementary information between camouflaged regions and their corresponding boundaries, enabling the generation of prediction maps with finer details. He et al. (ELDNet) [36] have designed a novel framework for progressively refining boundary likelihood maps, which are then utilized to guide the feature fusion of concealed targets. Sun et al. (BGNet) [14] utilize informative object-related edge semantics to emphasize object structure and improve the performance of COD.\nDespite the exceptional performance achieved by these methods, it is worth noting that most existing COD models are trained and tested on low-resolution (352×352 [4], [12], [16], [37],416× 416 [14]) images. The subsampling process prior to model input leads to the loss of structural information. Consequently, the subtle differences between the camouflaged targets and their surroundings become more ambiguous, significantly degrading performance. In contrast, we propose a novel COD schema that fully exploits contextual information and local details in high-resolution input images, with minimal additional computational and memory overhead." }, { "figure_ref": [], "heading": "B. Multi-scale feature extraction", "publication_ref": [ "b22", "b23", "b37", "b39", "b19", "b30", "b22" ], "table_ref": [], "text": "It has been noted in numerous previous studies [23], [24], [38], [39] that capturing objects of varying sizes requires the incorporation of multi-scale information. To address this issue, several modules have been proposed to enhance the representation ability and improve performance by extracting multi-scale information.\nZhang et al. [40] [20] make modifications to the receptive field block [41]. In the n-th branch of the modified module, a (2n-1)×(2n-1) convolutional layer followed by a 3×3 dilated convolutional layer with a dilation rate of (2n-1) is utilized to expand the receptive field size and exploit multi-scale context information. The efficacy of the aforementioned modules has been demonstrated in various approaches related to COD [16],\n[30], [31].\nCompared to the convolution operation, the pooling operation has fewer parameters and higher efficiency. 
Recently, several pooling-based modules have been proposed to extract multi-scale context cues. Ji et al. (FSNet) [42] introduce the Pyramid Pooling Module (PPM) [43] to capture targets of various sizes. PPM consists of multiple branches, each containing an adaptive average pooling layer followed by a 1×1 convolutional layer. However, the intermediate feature maps produced by the adaptive average pooling layers lack detailed information, making PPM more suitable for processing lowresolution features.\nIn contrast, Liu et al. [23] propose the Feature Aggregation Module (FAM). Similar to PPM, FAM also comprises multiple branches. However, FAM uses an average pooling layer in each branch to better preserve the structural information. Building upon the concept of FAM, we propose a pooling-based module named Multi-scale Feature Enhancement Module (MFEM) to characterize multi-scale information. In contrast to FAM, which calculates intermediate features in parallel using multiple branches, the branches in MFEM are utilized in series. This sequential utilization allows the output of each preceding branch to propagate to the next branch, enabling further enhancement. By progressively reducing the size of the pooling layers, we can significantly expand the receptive field size while preserving structural information. " }, { "figure_ref": [], "heading": "C. Multi-stage detection", "publication_ref": [ "b44", "b47" ], "table_ref": [], "text": "Employing multiple decoders to perform coarse-to-fine detection is a common practice in the field. In the early CNNbased methods, several decoders with identical architecture are primarily utilized to enhance the segmentation results. For instance, Wei et al. (F 3 Net) [44] introduce the Cascaded Feedback Decoder (CFD), which comprises multiple subdecoders with independent parameters but sharing the same architecture. This design allows for iterative refinement of multi-level features, leading to improved performance. Similarly, Chen et al. (AFNet) [45] propose the cascaded feature interweaved decoder to exploit the complementary information across multi-level features and iteratively refine them for producing more precise segmentation maps.\nAfterwards, certain methodologies have indicated that employing decoders with distinct architectures at different stages can potentially yield improved performance. In their work, Fan et al. (SINetV2) [16] adopt a neighbor connection decoder to integrate features for generating a coarse result. Subsequently, they employ multiple group-reversal attention modules to generate the refined prediction map. In a similar vein, Chen et al. (BgNet) [30] employ a simplified decoder for target localization. Subsequently, the coarse result, along with the boundary prediction map, is fed into the second decoder to generate a segmentation result with enhanced boundary delineation.\nIt is worth noting that the aforementioned methods extract hierarchical features from the entire image, which results in computational overhead due to the inclusion of background regions and negatively impacts the final performance. Therefore, a potential solution could be to assign less importance to the background regions and instead focus on the foreground areas. Xu et al. (PA-KRN) [46] propose a coarse locating module to approximate the regions containing the target. Based on this coarse result, an attention-based sampler [47] is employed to emphasize informative regions in the input image. 
Subsequently, the magnified image is fed into the fine segmenting module to generate the final results. Similarly, Jia et al. (SegMaR) [48] also utilize the sampler [47] to integrate segmentation, magnification, and iterative processes in a multistage fashion.\nBenefiting from the sampler, these methods exhibit improved refinement and enrichment of details, particularly for small camouflaged targets. However, it is important to note that these methods require repeated extraction of encoder features and generation of full-resolution prediction maps, leading to decreased running speed and limited practical applications. In contrast, our multi-stage detection framework does not rely on the non-parametric sampler, enabling an end-to-end model. Consequently, our model demonstrates higher efficiency, and the optimization of the multi-stage training process becomes easier." }, { "figure_ref": [], "heading": "III. METHODS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_3" ], "heading": "A. Overview", "publication_ref": [ "b17", "b19", "b24", "b20", "b50" ], "table_ref": [], "text": "The overall framework of BTSNet is depicted in Fig. 2. BTSNet consists of a bifurcated backbone network and three decoders. Given an input image with a spatial resolution of H × W , the first branch of the backbone extracts five hierarchical features, denoted as {f i , i = 1, 2, 3, 4, 5}. It is worth noting that we add a 2 × 2 maxpooling layer after obtaining f 2 . As emphasized in [17], [18], the effective receptive field size of a CNN is smaller than its theoretical value. The inclusion of a pooling layer facilitates the model in fully leveraging global context information, without introducing additional parameters.\nWe discard f 1 since low-level features largely increase the inference time while contributing less to the final performance [20]. Thus, the spatial resolution of the extracted features can be calculated as follows:\nH 2 i × W 2 i i ≤ 2 H 2 i+1 × W 2 i+1 i > 2 (1)\nThen, the three high-level features (i.e., f 3 , f 4 , f 5 ) are utilized as inputs to the first decoder in order to generate a preliminary map. The primary objective of the initial decoder is to extract multi-scale information to identify all potential targets. In order to simplify the process, we employ a U-shaped architecture. This procedure can be described as follows:\nr 5 = M F EM (f 5 ),(2)\nr 4 = M F EM (f 4 + r 5 ),(3)\nr 3 = M F EM (f 3 + r 4 ),(4)\nwhere M F EM indicates the MFEM module. After obtaining the refined feature r 3 , we use a 3 × 3 convolutional layer to produce a coarse result M 1 . Following [30], the corresponding boundary prediction map E 1 can be calculated as follows:\nE 1 = abs(P a 3 (M 1 ) -M 1 )(5)\nwhere abs denotes the absolute value function, P a 3 represents a 3 × 3 average pooling layer with 1 stride.\nAfter obtaining M 1 and E 1 , we use a normalization function to ensure that the two coarse prediction maps are within the range of [0, 1]. We note that even background regions contain non-zero elements. To reduce the negative impacts caused by background noise, a binarization process is performed by applying a threshold value of 0.5. 
The formulation for this process is as follows:\nM_1^n = f_{minmax}(M_1), \quad E_1^n = f_{minmax}(E_1), \tag{6}\n\nM_1^b(i,j) = \begin{cases} 1, & M_1^n(i,j) > 0.5 \\ 0, & M_1^n(i,j) \le 0.5 \end{cases} \tag{7}\n\nE_1^b(i,j) = \begin{cases} 1, & E_1^n(i,j) > 0.5 \\ 0, & E_1^n(i,j) \le 0.5 \end{cases} \tag{8}\n\nwhere f_{minmax} represents the normalization function, and M_1^n, E_1^n and M_1^b, E_1^b denote the normalized and binarized prediction maps, respectively. Inspired by Faster RCNN [25] and Mask RCNN [49], we generate the bounding box of the located target and crop the corresponding features from f_2 to exclude background regions. Specifically, we first upsample M_1^b to match the size of f_2 and calculate the sum of the upsampled result along the height/width dimension. The positions of the first and last non-zero elements of the resulting one-dimensional vector mark the extent of the target. Thus, we can get the initial bounding box (x_min, y_min, x_max, y_max). The center coordinates of the box are computed as (x_c = (x_min + x_max)/2, y_c = (y_min + y_max)/2). Since M_1^b may contain incomplete parts, we expand the initial bounding box to encompass a larger area. Therefore, the final bounding box can be represented as (x_c - r × l/2, y_c - r × l/2, x_c + r × l/2, y_c + r × l/2), where l = max(y_max - y_min, x_max - x_min) and r denotes the expansion ratio. Based on the final bounding box, we crop a feature map from f_2. The cropped feature is denoted as F_2 and is empirically resized to 120×120 before being inputted into the second branch of the backbone to generate semantic-enhanced features F_3, F_4, F_5. Unlike f_i (i = 3, 4, 5), F_i (i = 3, 4, 5) is specifically designed to emphasize foreground regions. It is noteworthy that previous methods employ 352 × 352 images as inputs. The resolutions of the three deepest features are 44 × 44, 22 × 22, and 11 × 11. In contrast, the second branch of our backbone extracts foreground features with larger sizes (i.e., 60 × 60, 30 × 30, and 15 × 15). Thus, we can mitigate the negative impacts caused by background regions and preserve finer foreground details.\nThe second decoder takes the cropped boundary prediction map C^e and the foreground features F_3, F_4, F_5 as inputs. It is important to note that C^e is obtained by cropping from E_1^n based on the bounding box. In this stage, we initially utilize MFEMs to expand the receptive fields and excavate multi-scale contextual information, thereby enhancing the performance. Subsequently, the refined features, along with the boundary prediction map, are inputted to the BEM to generate precise outcomes. This process can be formulated as follows:\nR_i = MFEM(F_i), \quad i = 3, 4, 5, \tag{9}\n\n(C_i^m, C_i^e, R_i') = \begin{cases} BEM(R_i, R_{i+1}', C_{i+1}^e), & i = 3, 4 \\ BEM(R_5, C^e), & i = 5 \end{cases} \tag{10}\n\nwhere C_i^m denotes the mask prediction map, C_i^e represents the boundary prediction map, and R_i' is an enhanced feature. C_3^m and C_3^e are then resized to the same size as the bounding box. We create an image with the same size as f_2 and assign zero value to all pixels in the image. Afterward, we can obtain M_2 and E_2 by mapping C_3^m and C_3^e into the bounding box region of the created image.\nAs illustrated in Fig. 3, targets in M_2 often exhibit jagged boundaries, which can be attributed to low-resolution features (i.e., F_3) and the application of multiple resizing operations. To achieve fine-grained segmentation results with smoother boundaries, the third decoder takes f_2, M_2, and E_2 as inputs to generate the final results.
The process can be described as:\nM 3 , E 3 = M GF M (f 2 , M 2 , E 2 ),(11)\nwhere M GF M is the Mask-Guide Fusion Module, M 3 and E 3 are the prediction results.\nAs done in many previous methods [21], [50], we adopt deep supervision strategy to train the whole model in an endto-end manner. Without using any post-processing technique (e.g., CRF [51]), our proposed BTSNet generates fine-grained results with a real-time inference speed of 50 FPS on a single NVIDIA Titan XP GPU. Specifically, in the Non-local block, we first use a 1 × 1 convolutional layer to reduce the channel number to 64. Then, three 1 × 1 convolutional layers are utilized to generate the corresponding key feature K ∈ R 16×h×w , query feature Q ∈ R 16×h×w , and the value feature V ∈ R 64×h×w , where h and w are the height and width, respectively. Afterwards, K and Q are reshaped to (16×n), and V is reshaped to 64×n, where n = h × w. We conduct the matrix multiplication between the transpose of Q and K. Afterward, a softmax layer is applied to calculate the spatial attention map S ∈ R n×n . Meanwhile, we perform a matrix multiplication between V and S. The resulting feature is reshaped to 64 × h × w. Thus, the output of the Non-local block can be calculated as follows:" }, { "figure_ref": [ "fig_4" ], "heading": "MFEM Conv Block", "publication_ref": [ "b18" ], "table_ref": [], "text": "𝐶 1 𝑃 8 𝐶 3 𝐷 2 𝐶 1 𝑃 4 𝐶 3 𝐷 2 𝐶 1 𝑃 2 𝐶 3 𝐷 2 𝐶 1 𝐶 3 𝐷 2 Shortcut Block + + Non-Local Block 𝐶 1 𝐶 1 𝐶 1 𝐶 1 𝐶 1 × Softmax × + + FEM 𝑓 𝑖𝑛 𝑓 𝑜𝑢𝑡 𝐶 1 1 × 1 Convolution 𝑃 2 𝐷 2 + × Softmax 2 × 2 MaxPooling\nf nl = C 1 (f in ),(12)\nQ = C 1 (f nl ), K = C 1 (f nl ), V = C 1 (f nl ),(13)\nS = sof tmax(Q T ⊗ K),(14)\nf o nl = V ⊗ S,(15)\nwhere C 1 is a 1 × 1 convolutional layer, f in is the input feature, Q T denotes the transpose of Q, ⊗ represents the matrix multiplication, f o nl is the output of the Non-local block. Note that to reduce computational overhead, the Non-local block goes into effect only when processing the deepest lowresolution features (i.e., f 5 and F 5 ).\nThe convolutional block comprises multiple branches. Within each branch, a 1×1 convolution layer is used to reduce the channel count to 64, akin to the approach employed in the Non-local block. As illustrated at the bottom of Fig. 4, the initial branch incorporates an 8 × 8 average pooling layer, succeeded by a 3 × 3 convolutional layer to expand the receptive fields. Subsequently, a 3 × 3 dilated convolutional layer, employing a dilation rate of 2, is employed to further enhance the receptive fields without compromising the resolution of the feature map. The resulting feature is upsampled to match the dimensions of the input in the subsequent branch. The two features are subsequently aggregated through elementwise addition.Similarly, the second branch also encompasses a pooling layer and two convolutional layers to refine the features. It is important to note that in the second branch, a smaller average pooling layer (i.e., 4 × 4) is utilized. As a result, the second branch effectively expands the receptive fields while preserving finer details. By gradually reducing the size of the pooling layers, MFEM is able to capture multi-scale information without sacrificing structural details. Furthermore, by combining the output of the previous branch with the input of the subsequent branch, the receptive fields can be progressively enlarged, enabling the exploitation of global contextual information. 
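A minimal sketch of this serial-branch design (the formal definition follows in Eqs. (16)-(20) just below). It assumes 64 intermediate channels and bilinear upsampling, and it omits the Non-local branch that is applied only to the deepest features; it is an illustration under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MFEMSketch(nn.Module):
    """Serial multi-scale branches: each branch reduces channels, adds the previous
    branch's output, pools (8/4/2/none), convolves, and upsamples; plus a 1x1 shortcut."""
    def __init__(self, in_ch, mid_ch=64):
        super().__init__()
        self.reduce = nn.ModuleList([nn.Conv2d(in_ch, mid_ch, 1) for _ in range(4)])
        self.refine = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(mid_ch, mid_ch, 3, padding=1),                   # 3x3 conv
                nn.Conv2d(mid_ch, mid_ch, 3, padding=2, dilation=2),       # 3x3 dilated conv
            ) for _ in range(4)
        ])
        self.pools = [8, 4, 2, 1]          # average-pooling kernel per branch
        self.shortcut = nn.Conv2d(in_ch, mid_ch, 1)

    def forward(self, x):
        size = x.shape[-2:]
        out = None
        for i in range(4):
            f = self.reduce[i](x)
            if out is not None:
                f = f + out                # pass the previous branch's output forward
            if self.pools[i] > 1:
                f = F.avg_pool2d(f, self.pools[i])
            f = self.refine[i](f)
            out = F.interpolate(f, size=size, mode="bilinear", align_corners=False)
        return self.shortcut(x) + out      # shortcut + last branch (Non-local branch omitted)
```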
Given that the effective receptive field size of a CNN-based model is smaller than the theoretical value, MFEM addresses this limitation with a marginal increase in parameters. The output of the MFEM can be calculated as follows:\nf i c = C 1 (f in ), i ∈ {1, 2, 3, 4},(16)\nO i c = D 2 (C 3 (P 8 (f 1 c ))), i = 1, D 2 (C 3 (P 2 4-i (f i c + O i-1 c ))), i = 2, 3, 4,(17)\nwhere P i is an i × i average pooling layer, C 3 denotes a 3 × 3 convolutional layer, D 2 indicates a 3 × 3 dilated convolutional layer with a dilation rate of 2, O i c is the output of the i-th branch, O i c 4 is the output of MFEM. It is noteworthy that we omit the upsampling process for conciseness.\nAs pointed out in [19], [52], encoding residual features is easier than encoding original features. Thus, we add a shortcut block to facilitate optimization. The result of MFEM can be formulated as follows:\nf o s = C 1 (f in ),(18)\nf o c = O 4 c ,(19)\nf out = f o s + f o c + f o nl ,(20)\nwhere f out is the output of MFEM." }, { "figure_ref": [ "fig_5" ], "heading": "C. Boundary enhancement module", "publication_ref": [], "table_ref": [], "text": "The structure of BEM is illustrated in Fig. 5. f high and f low represent the output features of the former BEM and MFEM respectively. Initially, these two features are integrated using element-wise addition. Subsequently, the resulting feature is combined with the boundary prediction map through channelwise concatenation. The concatenated feature is then passed through a 3 × 3 convolutional layer. Considering the boundary prediction map implicitly reveals the location of camouflaged regions, the convolution operation plays a vital role in accurately locating the target. Following this, a channel attention module is employed to emphasize informative channels. As stated in [53], different channels exhibit responses to different semantics. Hence, incorporating the channel attention operation allows for a more focused analysis of channels related to camouflaged objects. Furthermore, a spatial attention module is utilized to refine the results even further. Based on the refined feature, we generate the mask/boundary prediction maps. The entire process can be formulated as follows:\nf = C 3 (cat(C e low , f low + f high )),(21)\nf ca = CA(f ) × f,(22)\nf out = SA(f ca ) × f ca ,(23)\nC e high = C 3 (f out ), C m high = C 3 (C 3 (C 3 (f out ))),(24)\nwhere f out is the refined feature, C e high and C m high are boundary and mask prediction maps, respectively. More concretely, the channel attention module is implemented as:\nCA(f ) = σ(M (P g (f ))),(25)\nwhere P g denotes the global average pooling operation, M is a 2-layer perceptron, σ represents the sigmoid function. The spatial attention module is implemented as:\nSA(f ) = σ(C 7 (P c g (f ))),(26)\nwhere P c g is a global average pooling operation along the channel dimension, C 7 is a 7 × 7 convolutional layer." }, { "figure_ref": [ "fig_6" ], "heading": "D. Mask-guided fusion module", "publication_ref": [], "table_ref": [], "text": "MGFM MGFM SFM CA SA 𝐶 3 + C 𝐶 3 • • 𝐶 3 + 𝐶 3 𝐶 3 𝑀 2 𝑀 3 𝐸 2 𝐸 3 𝑅 2 𝑓 𝑎 𝑓 𝑟 𝑓 𝑓 𝑓 𝑒\nFig. 6. The structure of the MGFM.\nSFM CA SA 𝐶 3 • • + CA SA 𝐶 3 • • + CA SA 𝐶 3 • • + CA SA 𝐶 3 • • C out 𝑅 2 𝑀 2 C C C C Fig. 7\n. The structure of the SFM. The orange block f and the blue sheet M are respectively the input feature map and mask prediction map.\nThe structure of MGFM is illustrated in Fig. 6. 
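Before the MGFM described next, a minimal sketch of the BEM computation in Eqs. (21)-(26) above. The two-layer perceptron of the channel attention is realized here with 1×1 convolutions and an assumed reduction ratio of 4, and the boundary map is assumed to be single-channel; these choices and all names are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class BEMSketch(nn.Module):
    """Boundary Enhancement Module: fuse two features with a boundary map, apply
    channel then spatial attention, and emit mask/boundary predictions."""
    def __init__(self, ch=64):
        super().__init__()
        self.fuse = nn.Conv2d(ch + 1, ch, 3, padding=1)                    # Eq. (21)
        self.ca = nn.Sequential(nn.AdaptiveAvgPool2d(1),                   # Eq. (25)
                                nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid())
        self.sa = nn.Sequential(nn.Conv2d(1, 1, 7, padding=3), nn.Sigmoid())  # Eq. (26)
        self.edge_head = nn.Conv2d(ch, 1, 3, padding=1)
        self.mask_head = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                       nn.Conv2d(ch, ch, 3, padding=1),
                                       nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, f_low, f_high, edge_low):
        f = self.fuse(torch.cat([edge_low, f_low + f_high], dim=1))        # Eq. (21)
        f = f * self.ca(f)                                                 # Eq. (22)
        f = f * self.sa(f.mean(dim=1, keepdim=True))                       # Eq. (23)
        return self.mask_head(f), self.edge_head(f), f                     # Eq. (24)
```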
MGFM first uses an SFM to integrate the mask prediction map M 2 with the high-resolution feature map R 2 . The detailed architecture of the proposed SFM is shown in Fig. 7.\nConcretely, f 2 is inputted into an MFEM to generate R 2 . The 64-channel feature R 2 is divided into four groups, denoted as {G i , i = 1, 2, 3, 4}. The first sub-feature, which consists of 16 channels, is concatenated with M 2 along the channel dimension. The resultant feature is then propagated to a 3 × 3 convolutional layer. It is noteworthy that the low-level feature f 2 contains affluent local details but lacks semantic information. Applying a 3 × 3 convolutional layer is helpful to eliminate background noise and focus more on foreground regions. Subsequently, a sequential spatial attention operation and a channel attention operation are employed to further refine the feature. As mentioned in [53], the spatial attention operation effectively enhances the representation of foreground regions, while the channel attention operation aids in suppressing redundant or noise-degraded channels. Consequently, the resulting feature provides a more accurate description of camouflaged objects.\nThe output of the first branch is aggregated with the subfeature of the subsequent group through element-wise addition. Likewise, we concatenate the aggregated feature with the mask prediction map and employ convolutional and attention operations to further refine it. By iteratively repeating this process, we derive the outputs of the four branches, which are subsequently concatenated to form the output of SFM. This entire process can be formalized as follows:\nG i conv = C 3 (cat(G i , M 2 )), i = 1, C 3 (cat(G i + G i o , M 2 )), i = 2, 3, 4,(27)\nG i sa = G i conv × SA(G i conv ),(28)\nG i o = G i sa × CA(G i sa ),(29)\nf f = cat(G 1 o , G 2 o , G 3 o , G 4 o ),(30)\nwhere\nG i o is the output of the i-th branch, f f is the output of SFM.\nDue to the absence of sufficient semantic information, distinguishing between foreground and background regions becomes challenging in R 2 . While SFM partially addresses this issue, effectively differentiating camouflaged objects from their surroundings remains difficult, leading to potential performance degradation. However, it is worth noting that the boundary prediction map implicitly provides insights into the target's location. Therefore, we propose the introduction of E 2 as an enhancement to further improve the performance.\nMore specifically, we first use attention operations to highlight informative data.\nf ca = f f × CA(f f ),(31)\nf a = f ca × SA(f ca ),(32)\nThen, we calculate the boundary feature by using a single 3×3 convolutional layer. The boundary feature is combined with the boundary prediction map E 2 via channel-wise concatenation, the result of which is fed to a 3 × 3 convolutional layer for refinement.\nf e = C 3 (cat(C 3 (f a ), E 2 )),(33)\nThus, we can obtain the boundary-enhanced feature by integrating the boundary feature with the output of the attention operations. Afterwards, we use a shortcut connection to facilitate optimization and employ convolutional layers to compute finer mask/boundary prediction maps. The whole process can be described as:\nf r = f a + C 3 (f a + f e ),(34)\nM 3 = C 3 (f r ), E 3 = C 3 (f e ),(35)\nwhere M 3 and E 3 are respectively mask/boundary prediction maps To verify the effectiveness of our proposed BEM and MGFM, we present a selection of representative visualization results of feature maps in Fig. 8. 
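For reference, the SFM grouping scheme of Eqs. (27)–(30) can be sketched as follows. It reuses the attention modules sketched earlier; the mask prediction map is assumed to be a single-channel map at the resolution of R_2, and all other implementation details are assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class SFM(nn.Module):
    """Sketch of the SFM (Eqs. 27-30): the 64-channel R_2 is split into four
    16-channel groups; each group is concatenated with the mask prediction map,
    refined by a 3x3 convolution and spatial/channel attention, and its output
    is passed on to the next group before all outputs are concatenated."""

    def __init__(self, group_channels=16, num_groups=4):
        super().__init__()
        self.num_groups = num_groups
        # mask map assumed to contribute one extra channel
        self.convs = nn.ModuleList(
            [nn.Conv2d(group_channels + 1, group_channels, kernel_size=3, padding=1)
             for _ in range(num_groups)]
        )
        self.sa = nn.ModuleList([SpatialAttention() for _ in range(num_groups)])
        self.ca = nn.ModuleList([ChannelAttention(group_channels) for _ in range(num_groups)])

    def forward(self, r2, m2):
        groups = torch.chunk(r2, self.num_groups, dim=1)  # four 16-channel groups
        outputs, prev = [], None
        for i, g in enumerate(groups):
            if prev is not None:
                g = g + prev                              # feed refined feature forward
            g = self.convs[i](torch.cat([g, m2], dim=1))  # Eq. (27)
            g = g * self.sa[i](g)                         # Eq. (28)
            g = g * self.ca[i](g)                         # Eq. (29)
            outputs.append(g)
            prev = g
        return torch.cat(outputs, dim=1)                  # Eq. (30): f_f
```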
The first row of the figure showcases the input features (R ′ 4 +R 3 ) and the output features (R ′ 3 ) of the final BEM. Additionally, R ′ 4 represents the output feature map of the preceding BEM. Comparing R ′ 3 with R ′ 4 + R 3 demonstrates that BEM is effective in distinguishing the foreground regions from the background. Furthermore, from the comparison between R ′ 3 and R ′ 4 , we can conclude that BEM is effective in generating features with sharper boundaries.\nBesides, as we can observe from the figure, it is challenging to identify the camouflaged object in R 2 . A comparison between R 2 and f f illustrates the usefulness of SFM in locating the target and reducing background noise. Furthermore, when compared to f f , it can be observed that certain channels focusing on the background regions (e.g., the third and fifth channels in the first row of f f and f a ) are suppressed in f a , which confirms the beneficial impact of attention operations in highlighting informative channels. Meanwhile, some channels in f a exhibit difficulty in distinguishing foreground regions from the background (e.g., the seventh channel of the third row and the fourth channel of the last row), which results in fuzzy boundaries that may degrade performance. In contrast, clear contours are observed in f r . Additionally, background channels are further suppressed. The aforementioned comparisons and discussions serve to validate the effectiveness of our proposed modules.\nImage Pred R 4 ' R 3 +R 4 ' R 3 ' GT 𝑅 2 𝑓 𝑓 𝑓 𝑎 𝑓 𝑟" }, { "figure_ref": [], "heading": "E. loss function", "publication_ref": [ "b30" ], "table_ref": [], "text": "As done in many previous COD methods [7], [14], [16], [30], [31], we adopt the hybrid loss function [44] to train the model. The hybrid loss function is defined as:\nL = L w IoU + L w BCE ,(36)\nwhere L w IoU is a weighted Intersection-over-Union (IoU) loss, L w BCE is a weighted Binary Cross Entropy (BCE) loss. As pointed out in [44], pixels near the boundaries are prone to wrong predictions and should be attached with more importance. The standard BCE and IoU losses ignore the difference between pixels and treat all pixels equally, which results in performance degradation. Differently, L w IoU and L w BCE assign larger weights to harder pixels, which has been proven effective in enhancing the model's generalization ability. We use multiple supervisions for the three side-output maps of the second decoder to facilitate optimization. Thus, the training loss can be formulated as follows:\nL mask = L(M 1 , G)+ 5 i=3 L(Rst(C m i ), G)+L(M 3 , G),(37)\nL boundary = L(E 1 , G) + 5 i=3 L(Rst(C e i ), G) + L(E 3 , G),(38)\nL total = L mask + L boundary ,(39)\nwhere G denotes the groundtruth, C m i and C e i are the output mask/boundary prediction maps, Rst denotes the restoring operation to map the prediction map into the original bounding box region, L total is the overall loss function." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Datasets and evaluation metrics", "publication_ref": [ "b8", "b9", "b10", "b7", "b53", "b55" ], "table_ref": [], "text": "We conduct experiments on three benchmark datasets: CAMO [9], COD10K [10], NC4K [11]. The three datasets contain 1,250, 5,066, and 4,121 images, respectively. Following previous COD methods [8], [14]- [16], [30], we use 1,000 images from CAMO and 3,040 images from COD10K as our training set. 
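For reference, the hybrid loss of Eq. (36) used to train on this split can be sketched as follows. The boundary-aware pixel weighting (a 31 × 31 local window and a scaling factor of 5) follows the commonly used formulation of [44] and is an assumption here, not a value stated in this paper.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(pred, mask):
    """Sketch of Eq. (36): weighted BCE + weighted IoU.

    `pred` holds logits and `mask` is the binary ground truth, both B x 1 x H x W.
    Window size (31) and scaling factor (5) are assumptions following [44].
    """
    # larger weights for pixels whose local neighborhood disagrees with them,
    # i.e., pixels close to object boundaries
    weit = 1 + 5 * torch.abs(
        F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15) - mask
    )

    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction="none")
    wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))

    pred = torch.sigmoid(pred)
    inter = ((pred * mask) * weit).sum(dim=(2, 3))
    union = ((pred + mask) * weit).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)

    return (wbce + wiou).mean()
```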
The remaining samples are utilized for evaluation.\nTo provide a comprehensive evaluation of the performance, we adopt six universally-agreed metrics, including 1) S-measure (S m ) [54], 2) mean E-measure (E ϕ ) [55], 3) weighted F-measure (F w β ) [56], 4) mean absolute error (M ), 5)Precision-Recall curves, 6) F-measure curves. Note that for M , a lower value indicates better performance. For other metrics, higher is better." }, { "figure_ref": [], "heading": "B. Implementation details", "publication_ref": [ "b18", "b56" ], "table_ref": [], "text": "We implement BTSNet with the PyTorch toolbox. For fair comparison, we use ResNet-50 [19] to build the bifurcated backbone network. Throughout the training process, all images are resized to 704 × 704 and augmented by multiple augmentation strategies (i.e., flipping, rotating, and border clipping). We employ Adam optimizer [57] with an initial learning rate of 2e-5 to train the model. We maintain a batch size of 8 during training and adopt the poly strategy with a power of 0.9 for learning rate adjustment. BTSNet is trained for 120 epochs. The training process takes about 15 hours on a single NVIDIA GeForce RTX 3090 GPU (24G memory). During testing, the images were resized to 704 × 704 as well. For evaluation purposes, we exclusively utilize the output of the third decoder. " }, { "figure_ref": [ "fig_7", "fig_8", "fig_9" ], "heading": "C. Comparisons to the state-of-the-arts", "publication_ref": [ "b9", "b31", "b32", "b14", "b10", "b57", "b7", "b58", "b30", "b59", "b14", "b7", "b7", "b14", "b7", "b60" ], "table_ref": [ "tab_2", "tab_3", "tab_3", "tab_3", "tab_5" ], "text": "We compare the proposed BTSNet with 17 state-of-theart CNN-based models: SINet [10], ERRNet [12], CubeNet [13], TANet [32], TINet [37], DTCNet [33], PFNet [15], LSR [11], MGL [58], PreyNet [8], BgNet [30], PraNet [4], BSANet [59], C 2 FNet [31], BGNet [14], FAPNet [7], SINetV2 [16]. Since some competing methods (e.g., SINetV2 [16], PraNet [4]) are built on Res2Net-50 [60], we implement BTSNet+ using Res2Net-50 as the backbone for fair comparison. Quantitative Evaluation. The quantitative evaluation results of all models are shown in Table I. It can be clearly seen from the table that BTSNet surpasses other high-performance models across all benchmark datasets in terms of all evaluation metrics. More specifically, performance gains over the three best compared algorithms (PFNet [15], PreyNet [8] and BgNet [30]) built on ResNet-50 are (S m : 0.9% ∼ 4.2%, F w β : 0.9% ∼ 5.8%, M : 0.001 ∼ 0.014, E ϕ : 0.2% ∼ 3.3%) on the three challenging datasets. Besides, when using Res2Net-50 as the backbone, BTSNet+ outperforms the three best compared models (BgNet+ [30], BGNet [14], and FAPNet [7]) by (S m : 0.2% ∼ 3.9%, F w β : 1.2% ∼ 7.4%, M : 0 ∼ 0.009, E ϕ : 0.1% ∼ 2.6%). Meanwhile, the Precision-Recall and F-measure curves are shown in Fig. 9 and Fig. 10, respectively. The evaluation results, together with the curves, validate the superiority of the BTSNet.\nIt is worth noting that the competing methods are trained on images of different sizes. For example, PreyNet [8], BGNet [14] and FAPNet [7] are trained on 448 × 448, 416 × 416, and 352 × 352 images. Thus, we retrain the six best competing methods (i.e., PFNet [15], PreyNet [8], BgNet [30], BGNet [14], FAPNet [7], and BgNet+ [30]) on 704 × 704 images for fair comparison. 
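The retraining follows the same optimization recipe as above; as a rough reference, the Adam optimizer with poly learning-rate decay (initial rate 2e-5, power 0.9, 120 epochs) could be set up as follows. Whether the decay is applied per epoch or per iteration, and the stand-in model used here, are assumptions of this sketch.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # stand-in for the full BTSNet
base_lr, power, total_epochs = 2e-5, 0.9, 120

optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)
# poly schedule: lr = base_lr * (1 - epoch / total_epochs) ** power
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda e: (1 - e / total_epochs) ** power
)

for epoch in range(total_epochs):
    # ... forward/backward passes over batches of 704 x 704 inputs go here ...
    optimizer.step()
    scheduler.step()
```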
During the training process, according to the Linear Scaling Rule [61], we reduce the batch size due to the limitation of GPU memory size and adjust the learning rate proportionally. The evaluation results on the three datasets are presented in Table II. As demonstrated in the table, BTSNet still outperforms the six competing methods. Concretely, performance gains over the three models based on ResNet-50 are (S m : 0.3% ∼ 5.0%, F w β : -0.8% ∼ 10.5%, M : -0.002 ∼ 0.035, E ϕ : 0.2% ∼ 6.9%). The negative number denotes that a contender (i.e., PreyNet) shows better performance. It is worth noting that BgNet suffers from severe performance degradation, which can be partly attributed to the smaller receptive field size of the model. When using Res2Net-50 as the backbone, BTSNet+ outperforms the three competing algorithms (i.e., BGNet [14], BgNet+ [30], and FAPNet [7]) by (S m : 0.7% ∼ 6.5%, F w β : 0.9% ∼ 11.1%, M : 0.002 ∼ 0.035, E ϕ : 1.2% ∼ 8.3%).\nAs we can observe from Table II, the performance of the six competing methods is closer comparable to that of the BTSNet when evaluated on COD10K. This similarity can be partly attributed to the varying proportions of small images in the three datasets. To quantify this, we define A f as the area of the foreground region and A t as the area of the entire image. By selecting images with A f /A t < 1/8, 1/16, and 1/32 from the three datasets, we generate three distinct groups of images: Small8, Small16, and Small32, respectively. The sizes of these image groups are presented in Table III. Notably, COD10K exhibits a relatively higher proportion of images containing small targets, as indicated by the table. We evaluated BTSNet and the six competing methods on Small8, Small16, and Small32, respectively. The results of the evaluation are presented in Table IV. In comparison, the performance of the competing methods approaches that of our proposed BTSNet when using a smaller threshold to collect images. However, considering the overall performance, BTSNet outperforms these methods. This indicates that previous methods struggle to generate precise prediction maps when handling high-resolution images due to their limited effective receptive field size. Moreover, a comparison between X-704 (e.g., BgNet-704) and X (e.g., BgNet) reveals that models trained on high-resolution images exhibit superior performance when dealing with images containing small camouflaged objects. This can be attributed to the preservation of structural details in high-resolution images, which facilitates the generation of finer prediction maps by the models. Qualitative Evaluation. In Fig. 11, we visualize several challenging scenes and prediction maps produced by BTSNet and other high-performance models. As we can observe, BTSNet can generate fine-grained results in these scenarios and outperforms other competing methods.\nMore specifically, the first and second rows illustrate the prediction maps featuring large camouflaged objects. As depicted in the figure, BTSNet effectively captures the entire target, whereas the competing methods exhibit a tendency to overlook certain parts of the objects. The third and fourth rows depict images containing thin and elongated structures. It is evident that the proposed BTSNet accurately segments the camouflaged regions, while the alternative approaches fail to encompass the entirety of the targets. In the fifth and sixth rows, multiple camouflaged objects are present. 
BTSNet suc- cessfully segments all the camouflaged objects, whereas other methods yield results with notably lower accuracy. The seventh and eighth rows showcase images featuring extremely small objects. In comparison with other models, BTSNet not only demonstrates superior precision in target localization but also preserves finer details. The last two rows present challenging scenes characterized by similar foreground and background textures. Although alternative models are capable of locating the primary body of the targets, it is worth noting that they often overlook highly camouflaged regions, such as the head of the camouflaged individuals depicted in the ninth row. In summary, BTSNet proves its competence in generating finegrained results across a range of highly challenging scenes." }, { "figure_ref": [ "fig_3" ], "heading": "D. Ablation study 1 Results of different decoders", "publication_ref": [ "b22", "b22", "b22", "b19", "b19" ], "table_ref": [ "tab_2", "tab_6", "tab_6", "tab_7", "tab_7", "tab_7", "tab_8" ], "text": "As shown in Fig. 3, the results of the third decoder exhibit smoother boundaries. We conducted a quantitative evaluation of the performance of the three decoders, and the results are presented in Table V. As illustrated in the table, the first decoder demonstrates the lowest performance, which can be partly attributed to the utilization of low-resolution input features. Furthermore, our primary focus lies in extracting multiscale information for accurate localization of camouflaged objects, inadvertently overlooking the importance of boundary information. Consequently, harder pixels (e.g., pixels near the boundaries) are prone to erroneous predictions. In general, the third decoder outperforms the second decoder, thereby validating the effectiveness of integrating high-resolution lowlevel features with the output of the second decoder to enhance performance. 2 Effectiveness of the proposed modules Effectiveness of MFEM. To validate the superiority of our proposed MFEM, we conduct several experiments on the three benchmark datasets. Concretely, we train three versions, namely \"w/o MFEM\", \"BTSNet-FAM\" and \"MFEM-Parallel\". In \"w/o MFEM\" and \"BTSNet-FAM\", we replace MFEMs with 3 × 3 convolution layers and FAMs [23], respectively. In \"MFEM-Parallel\", we change the architecture of the convolutional block. Thus, multiple branches of the block are used in parallel. The quantitative evaluation results are presented in Table VI.\nMore specifically, comparing BTSNet with \"w/o MFEM\" demonstrates that MFEM is effective in largely improving the performance. In contrast to MFEM, FAM [23] comprises four branches, each employing a pooling layer followed by a convolutional layer to perform convolution operations at varying scales. The output features of the four branches are then aggregated to generate the final output. Although FAM [23] also adopts the pooling-based strategy, BTSNet surpasses \"BTSNet-FAM\" by (S m : 0.6% ∼ 0.9%, F w β : 1.3% ∼ 2.1%, M : 0.001 ∼ 0.006, E ϕ : 0.5% ∼ 0.9%), highlighting the superiority of MFEM over FAM. The performance gains can be attributed to two main factors: 1) MFEM uses more layers (e.g., dilated convolutional layers and Non-local block) to expand the receptive fields; 2) MFEM employs multiple branches for sequential feature processing, enabling the preservation of detailed structural information. 
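To make this ablation concrete, the difference between the two branch-aggregation schemes can be illustrated as follows; the function and variable names are placeholders rather than the actual implementation.

```python
def sequential_branches(branches, x):
    # BTSNet-style: each branch receives the previous branch's output added
    # to its own input, so receptive fields grow progressively
    prev = None
    for branch in branches:
        prev = branch(x if prev is None else x + prev)
    return prev  # the last branch carries the fused multi-scale result

def parallel_branches(branches, x):
    # "MFEM-Parallel" variant: branches run independently and are merged at the end
    return sum(branch(x) for branch in branches)
```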
As can be seen from the table, BTSNet outperforms \"MFEM-Parallel\", further validating the advantage of sequentially processing multi-scale information. Effectiveness of BEM. We train two version (\"w/o BEM\" and \"w/o edge\") to demonstrate the effectiveness of the proposed BEM. Concretely, \"w/o BEM\" replaces the BEM with a 3 × 3 convolutional layer, and \"w/o edge\" removes the prediction process of the boundary prediction map. The experimental results are shown in Table VII. As can be clearly seen from the table, \"w/o edge\" shows better performance than \"w/o BEM\", which proves the effectiveness of using attention operations to boost the performance. The comparison between BTSNet and \"w/o edge\" demonstrates that employing the boundary prediction map as auxiliary information is helpful to generate finer results. Effectiveness of MGFM. We train two versions of BTSNet (\"w/o SFM\" and \"w/o edge\") to investigate the effectiveness of the MGFM. In \"w/o SFM\", the high-resolution feature map and the output of the second decoder are directly concatenated. The concatenated feature is then fed to a 3 × 3 convolutional layer followed by two attention operations for refinement. Then, the edge prediction map is introduced to generate boundary-enhanced results as done in BTSNet. Similar with BTSNet, \"w/o edge\" uses an SFM to integrate the input feature map with the mask prediction map. The only difference is that \"w/o edge\" does not leverage the boundary prediction map. The experimental results are presented in Table VIII. As we can observe, BTSNet outperforms the two versions. Performance gains are (S m : 0.5% ∼ 1.7%, F w β : 1.1% ∼ 2.9%, M : 0.002 ∼ 0.006, E ϕ : 0.3% ∼ 1.1%). Thus, we can conclude that by employing SFM and introducing boundary information, MGFM is beneficial for boosting COD performance.\n3) Other settings Impact of different sizes of training images. On the one hand, utilizing larger-sized images as input allows for the preservation of finer, more intricate structures. On the other hand, this approach presents limitations in terms of the effective receptive field size. Consequently, the model faces challenges in capturing global contextual information when the input images are excessively large. To investigate the impact of training image sizes and determine the optimal size, extensive experiments are conducted. Specifically, multiple versions of the model are trained by varying the size of the training images while maintaining consistent configurations. It is important to note that, for each model, the image size used during the testing phase corresponds to that employed in the training phase. The results are shown in Table IX. From the table, we observe that increasing the input image size allows for the incorporation of more detailed structural information while still exploiting global contextual understanding. Notably, the rate of improvement diminishes at higher resolutions and reaches a plateau when the input size reaches 704 × 704. Consequently, we adopt 704 × 704 as the size for the training samples. Impact of different sizes of F 2 . Similarly, we also investigate the influence of the F 2 size. The experimental results are presented in Table X. As can be clearly seen from the table, the performance increases as the size grows. Besides, the performance saturates at a resolution of 120 × 120. Impact of different expansion ratios. 
As mentioned in Section III, following the acquisition of the prediction map generated by the first decoder, the calculation of the initial bounding box is conducted, which is subsequently expanded using an expansion ratio r. Based on the adjusted bounding box, features are extracted from f 2 and resized to a predetermined resolution. Generally, employing a smaller value of r leads to a reduction in the size of the bounding box, thereby accentuating the foreground regions. Nonetheless, it is possible for the prediction maps produced by the initial decoder to overlook certain segments of the camouflaged targets. Conversely, adopting a larger value of r enables the incorporation of a greater area, facilitating the detection of the complete targets. In order to strike a balance, multiple versions are trained, each utilizing different expansion ratios.\nThe outcomes are illustrated in Table XI. As observed, BTSNet achieves the optimal performance with an expansion ratio of r = 1.2. Impact of different backbone architecture. The use of a bifurcated backbone network has been previously explored in several methods [20], [30], [50]. In these methods, the output prediction map of the first decoder is passed through a holistic attention module, which is utilized to emphasize foreground regions in a low-level feature. Subsequently, the refined feature is fed into the second branch of the backbone network. To compare the two types of bifurcated encoders, we trained a version called \"BTSNet-BE2\" by employing the encoder from [20] for feature extraction. Similar to BTSNet, \"BTSNet-BE2\" includes a 2 × 2 pooling layer before the third stage of the encoder, aids in capturing global context information.\nThe experimental results are presented in Table XII. As can be seen from the table, BTSNet outperforms \"BTSNet-BE2\" with notable performance gains (S m : 0.5% ∼ 1.3%, F w β : 0.8% ∼ 1.8%, M : 0.001 ∼ 0.003, E ϕ : 0.2% ∼ 1.1%). These results validate the effectiveness of our encoder, primarily due to its ability to effectively eliminate background regions in the second branch. Additionally, the input of the second branch is obtained by cropping a high-resolution feature and resizing it to a fixed resolution of 120 × 120. Consequently, our encoder is better equipped to handle images with small targets." }, { "figure_ref": [ "fig_10", "fig_10" ], "heading": "E. Failure cases and analyses", "publication_ref": [ "b61", "b62", "b64", "b65", "b66" ], "table_ref": [], "text": "Despite the outstanding performance of the proposed BT-SNet, it can still exhibit subpar performance in highly challenging scenes. In order to facilitate future research on COD, we present three representative failure cases in Fig. 12. For the purpose of comparison, we also include the results obtained by the recently developed SAM algorithm [62]. The first situation is that BTSNet misses some parts of the camouflaged objects. For instance, in the first row, BTSNet fails to accurately detect the camouflaged individuals. It is important to note that although SAM demonstrates better performance, its results are also imperfect. This can be attributed to the fact that the camouflaged target shares the same texture as the background, and the presence of visual noise within the circular region further exacerbates the issue. The second type of failure case occurs when the camouflaged objects are extremely rare. In such scenarios, both BTSNet and SAM struggle to correctly identify the targets or fully segment the objects. 
In the third category, our model fails to detect targets that are concealed in darkness, which can be partly attributed to the significant illumination variations in the foreground regions. Surprisingly, SAM exhibits much better performance in this particular situation.\nWe propose several ideas to address the aforementioned instances of failure and anticipate that these suggestions will stimulate insightful considerations for future research in the field of COD. Firstly, we recommend the utilization of a model with larger or even unlimited receptive fields. This proposition is supported by the comparison depicted in Fig. 12, where the performance of SAM surpasses that of BTSNet. Additionally, the advancements in vision transformers [63]- [65] have led to the emergence of transformer-based COD models [66], [67], which exhibit significantly improved performance compared to their CNN-based predecessors. Although transformer-based models typically necessitate more computational resources and pose challenges when applied to high-resolution images, a viable solution could involve the integration of CNN and transformer to construct a hybrid model. Secondly, current approaches allocate equal attention to all pixels during the processing of input images. However, it is worth noting that certain pixels possess a higher level of information, such as the eyes of animals. Consequently, a promising approach would involve first identifying the most informative regions and subsequently examining the surrounding patches in a progressive manner. " }, { "figure_ref": [], "heading": "F. Computational complexity", "publication_ref": [], "table_ref": [], "text": "We compare the computational complexities of BTSNet and 9 state-of-the-art COD methods to further demonstrate the superiority of our proposed model. All experiments are conducted on a single NVIDIA Titan XP GPU, and the inference speed is calculated using 704 × 704 images. The inference time is obtained by running the model 100 times indicates that 32 is the maximum batch size of SINet tested on Titan XP. When using a batch size of 32, the average inference time per iteration is 0.6732s. As observed from the table, the maximum batch size of BTSNet surpasses that of other competing methods, highlighting that BTSNet requires fewer computing resources. Furthermore, the GMACs (Giga Multiply-Accumulates) of BTSNet are also smaller than those of other methods, indicating that BTSNet exhibits higher computational efficiency." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we present a novel bioinspired three-stage model called BTSNet for COD, drawing inspiration from human behavior when observing images that contain camouflaged objects. Our proposed model aims to address the limitations in existing approaches. To achieve this, we first introduce a novel schema and design a bifurcated backbone network, which allows us to effectively utilize detailed structural information while minimizing computational and memory overhead. Additionally, we propose the Multi-scale Feature Enhancement Module (MFEM) to enhance the representation ability of the model. This module improves the model's capability to capture and represent features at different scales. Furthermore, we introduce the Boundary Enhancement Module (BEM) to incorporate boundary information and facilitate the propagation of useful knowledge to the shallower decoder stage. 
By leveraging the complementary nature of coarse prediction maps and high-resolution low-level features, our model utilizes the Mask-Guided Fusion Module (MGFM) to generate fine-grained prediction maps. Extensive experiments are conducted on three challenging datasets to evaluate the performance of BTSNet. The results demonstrate the superiority of our model compared to 18 state-of-the-art models, as indicated by significant performance improvements across multiple standard evaluation metrics. In conclusion, our proposed BTSNet model, with its innovative three-stage architecture and modules, outperforms existing approaches in the field of Camouflaged Object Detection. The advancements achieved in this research contribute to the further development and application of bioinspired models in computer vision tasks." } ]
Camouflaged objects are typically assimilated into their backgrounds and exhibit fuzzy boundaries. The complex environmental conditions and the high intrinsic similarity between camouflaged targets and their surroundings pose significant challenges in accurately locating and segmenting these objects in their entirety. While existing methods have demonstrated remarkable performance in various real-world scenarios, they still face limitations when confronted with difficult cases, such as small targets, thin structures, and indistinct boundaries. Drawing inspiration from human visual perception when observing images containing camouflaged objects, we propose a three-stage model that enables coarse-to-fine segmentation in a single iteration. Specifically, our model employs three decoders to sequentially process subsampled features, cropped features, and high-resolution original features. This proposed approach not only reduces computational overhead but also mitigates interference caused by background noise. Furthermore, considering the significance of multi-scale information, we have designed a multiscale feature enhancement module that enlarges the receptive field while preserving detailed structural cues. Additionally, a boundary enhancement module has been developed to enhance performance by leveraging boundary information. Subsequently, a mask-guided fusion module is proposed to generate fine-grained results by integrating coarse prediction maps with high-resolution feature maps. Our network surpasses state-of-the-art CNN-based counterparts without unnecessary complexities. Upon acceptance of the paper, the source code will be made publicly available at https://github.com/clelouch/BTSNet.
A bioinspired three-stage model for camouflaged object detection
[ { "figure_caption": "Fig. 1 .1Fig. 1. Several visual examples of camouflaged object detection in several highly challenging cases (e.g., long and thin structures, low contrast, small targets). The left (1st row) or right (2rd and 3th) yellow box areas are the enlarged regions cropped from the original image. Compared with the stateof-the-art CNN-based models (i.e., FAPNet [7] and PreyNet[8], our method preserves better details and is competent to distinguish the entire target from the background.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. The overall pipeline of the BTSNet (Best viewed in color).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "1 and E n 11denote the normalized results, M b 1 and E b 1 are binary masks, M b 1 (i, j) and E b 1 (i, j) are the pixel values of M b 1 and E b 1 .", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Results of different decoders (Best viewed in zoomed-in). FD: first decoder. SD: second decoder. TD: third decoder.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. The structure of the MFEM. MFEM contains three blocks, namely Non-local block, shortcut block, and convolutional block. The Non-local block becomes effective only when processing the deepest features (i.e., f 5 , F 5 ). Upsampling operations are omitted in the figure for conciseness.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. The structure of the BEM.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Visualization results. Best viewed in color and zoomed-in.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Precison-Recall curves of BTSNet and 11 high-performance COD models.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig. 10. F-measure curves of BTSNet and 11 high-performance COD models.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig.11. Visual comparisons with 9 high-performance COD models in several representative highly challenging scenarios: large objects (rows 1 and 2), thin and long structures (rows 3 and 4), multiple objects (row 5 and 6), small objects (rows 7 and 8), highly camouflaged regions (rows 9 and 10).", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. Three representative failure cases of our proposed BTSNet. The results of SAM [62] are also presented for contrast. Best visualized in color and zoomed-in.", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Extensive experiments are conducted on three benchmark datasets. 
The experimental results validate the superiority of the proposed novel schema and the effectiveness of the key modules.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "OF THE PROPOSED BTSNET WITH 17 STATE-OF-THE-ART ALGORITHMS ON THREE COD BENCHMARK DATASETS. THE BEST RESULTS ARE SHOWN IN RED. '-' INDICATES THE EVALUATION RESULTS ARE NOT AVAILABLE.", "figure_data": "CAMOCOD10KNC4KMethodPub/YearBackboneSmF w βME ϕSmF w βME ϕSmF w βME ϕSINet [10]CVPR 20ResNet-50.751 .606 .100.771.771 .551 .051 .806.808 .723 .058 .871ERRNet [12]PR 22ResNet-50.747 .667 .087.849.739 .589 .048 .868.783 .704 .060 .887CubeNet [13]PR 22ResNet-50.788 .682 .085.838.795 .644 .041 .864----TANet [32]TCSVT 23ResNet-50.778 .659 .089.813.794 .613 .043 .838----TINet [37]AAAI 21ResNet-50.781 .678 .087.847.793 .635 .043 .848----DTCNet [33]TMM 22ResNet-50.778 .667 .084.804.790 .616 .041 .821----PFNet [15]CVPR 21ResNet-50.782 .695 .085.842.800 .660 .040 .877.829 .745 .053 .888LSR [11]CVPR 21ResNet-50.787 .696 .080.838.804 .673 .037 .880.840 .766 .048 .895MGL-R [58]CVPR 21ResNet-50.775 .673 .088.842.814 .666 .035 .890.833 .739 .053 .893PreyNet [8] ACMMM 22ResNet-50.790 .708 .077.842.813 .697 .034 .881.834 .763 .050 .887BgNet [30]KBS 22ResNet-50.804 .719 .075.859.804 .663 .039 .881.843 .764 .048 .901BTSNetResNet-50.824 .753 .071.875.834 .716 .033 .897.852 .781 .046 .903PraNet [4]MICCAI 20Res2Net-50 .769 .663 .094.825.789 .629 .045 .861.822 .724 .059 .876BSANet [59]AAAI 22Res2Net-50 .794 .717 .079.851.818 .699 .034 .891.841 .771 .048 .897C 2 FNet [31]IJCAI 21Res2Net-50 .796 .719 .080.854.813 .686 .036 .890.838 .762 .049 .897BGNet [14]IJCAI 22Res2Net-50 .813 .749 .073.870.831 .722 .033 .901.851 .788 .044 .907FAPNet [7]TIP 22Res2Net-50 .769 .663 .097.802.835 .717 .034 .885.839 .753 .052 .872SINetV2 [16]TPAMI 21Res2Net-50 .820 .743 .070.882.815 .680 .037 .887.847 .770 .048 .903BgNet+ [30]KBS 22Res2Net-50 .832 .762 .065.884.826 .703 .034 .898.855 .784 .045 .907BTSNet+Res2Net-50 .834 .774 .066 .885.854 .754 .028 .913.866 .803 .040 .914", "figure_id": "tab_2", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "OF THE BTSNET WITH 6 COMPETING METHODS USING 704 × 704 IMAGES AS INPUTS. THE BEST RESULTS ARE HIGHLIGHTED IN RED.", "figure_data": "CAMOCOD10KNC4KMethodSmF w βME ϕSmF w βME ϕSmF w βME ϕPFNet [15] .789 .709 .084 .842 .827 .708 .033 .893.839 .764 .050 .891PreyNet [15] .788 .715 .084 .850 .831 .728 .031 .895.840 .770 .049 .891BgNet [30].774 .648 .106 .806 .807 .641 .045 .850.827 .713 .061 .863BTSNet .824 .753 .071 .875 .834 .716 .033 .897.852 .781 .046 .903BGNet [14] .798 .721 .081 .849 .847 .745 .030 .901.849 .781 .047 .894BgNet+ [30].790 .668 .101 .818 .826 .671 .041 .867.844 .736 .057 .874FAPNet [7] .769 .663 .097 .802 .835 .717 .034 .885.839 .753 .052 .872BTSNet+ .834 .774 .066 .885 .854 .754 .028 .913.866 .803 .040 .914", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "OF THE BTSNET WITH COMPETING METHODS ON IMAGES WITH SMALL TARGETS. SMALL8: IMAGES WITH A f /At < 1/8. SMALL16: IMAGES WITH A f /At < 1/16. SMALL32: IMAGES WITH A f /At < 1/32. 
WE USE X -704 (E.G., BGNET-704) TO INDICATE THE MODEL TRAINED ON 704 × 704 IMAGES, AND UTILIZE X (E.G., BGNET) TO REPRESENT THE ORIGINAL MODEL.", "figure_data": "Small8(3,761)Small16(2,163)Small32(1,099)MethodSmF w βME ϕSmF w βME ϕSmF w βME ϕBgNet-704 [30] 0.799 0.617 0.041 0.8430.766 0.539 0.037 0.8060.718 0.437 0.039 0.746BgNet [30] 0.791 0.584 0.039 0.8430.751 0.492 0.034 0.8040.699 0.381 0.033 0.743PFNet-704 [15] 0.825 0.694 0.027 0.8930.797 0.626 0.023 0.8690.755 0.530 0.021 0.823PFNet [15] 0.803 0.653 0.031 0.8810.768 0.571 0.028 0.8500.720 0.466 0.027 0.794PreyNet-704 [8] 0.825 0.707 0.026 0.8910.798 0.642 0.022 0.8660.755 0.546 0.021 0.821PreyNet [8] 0.815 0.689 0.026 0.8860.785 0.616 0.022 0.8610.736 0.507 0.021 0.811TSNet 0.830 0.699 0.029 0.8950.799 0.626 0.026 0.8670.750 0.519 0.027 0.812BgNet+-704 [30] 0.819 0.647 0.038 0.8580.789 0.574 0.034 0.8260.745 0.483 0.035 0.773BgNet+ [30] 0.812 0.621 0.035 0.8580.778 0.539 0.030 0.8260.731 0.435 0.027 0.771BGNet-704 [14] 0.840 0.725 0.025 0.8960.817 0.668 0.020 0.8750.775 0.570 0.018 0.828BGNet [14] 0.831 0.713 0.026 0.9030.802 0.646 0.022 0.8800.761 0.552 0.021 0.834FAPNet-704 [7] 0.826 0.688 0.028 0.8710.799 0.623 0.024 0.8440.758 0.534 0.023 0.795FAPNet [7] 0.822 0.683 0.030 0.8870.790 0.608 0.026 0.8600.743 0.505 0.027 0.809TSNet+ 0.848 0.734 0.024 0.9090.822 0.671 0.020 0.8890.780 0.577 0.019 0.844ImageGTOursBgNet BSANet C", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "MFEM: REPLACING MFEM WITH A SINGLE 3 × 3 CONVOLUTIONAL LAYER. BTSNET-FAM: REPLACING MFEM WITH FAM. MFEM-PARALLEL: EMPLOYING MULTIPLE BRANCHES TO PROCESS FEATURES IN PARALLEL IN THE CONVOLUTIONAL ANALYSIS FOR THE PROPOSED BEM. w/o BEM: REPLACING BEM WITH A 3 × 3 CONVOLUTIONAL LAYER. w/o EDGE: REMOVE THE BOUNDARY GENERATION PROCESS IN BEM. SFM: REPLACING SFM WITH A 3 × 3 CONVOLUTIONAL LAYER. w/o EDGE: REMOVING THE EDGE GENERATION PROCESS IN MGFM.", "figure_data": "TABLE VPERFORMANCE OF DIFFERENT DECODERS.CAMOCOD10KNC4KMethodSmF w βME ϕSmF w βME ϕSmF w βME ϕFirst 0.799 0.681 0.085 0.8360.797 0.613 0.045 0.847 0.826 0.706 0.060 0.866Second 0.826 0.734 0.075 0.8650.828 0.682 0.038 0.877 0.849 0.755 0.051 0.890Third0.824 0.753 0.071 0.8750.834 0.716 0.033 0.897 0.852 0.781 0.046 0.903TABLE VIABLATION ANALYSIS FOR THE PROPOSED MFEM. w/o BLOCK.CAMOCOD10KNC4Kexpand ratioSmF w βME ϕSmF w βME ϕSmF w βME ϕw/o MFEM 0.810 0.719 0.075 0.8580.820 0.691 0.035 0.888 0.845 0.765 0.049 0.897BTSNet-FAM 0.813 0.732 0.077 0.8660.825 0.699 0.034 0.889 0.846 0.768 0.048 0.897MFEM-Parallel 0.821 0.748 0.073 0.8700.828 0.705 0.034 0.891 0.848 0.773 0.046 0.900BTSNet 0.824 0.753 0.071 0.8750.834 0.716 0.033 0.897 0.852 0.781 0.046 0.903CAMOCOD10KNC4KSmF w βME ϕSmF w βME ϕSmF w βME ϕw/o BEM 0.807 0.724 0.077 0.8640.823 0.692 0.036 0.8870.846 0.765 0.049 0.897w/o edge 0.809 0.727 0.076 0.8640.826 0.700 0.035 0.8900.847 0.770 0.048 0.900BTSNet 0.824 0.753 0.071 0.8750.834 0.716 0.033 0.8970.852 0.781 0.046 0.903TABLE VIIIABLATION ANALYSIS FOR MGFM. w/o CAMOCOD10KNC4KSmF w βME ϕSmF w βME ϕSmF w βME ϕw/o SFM 0.815 0.732 0.074 0.866 0.820 0.687 0.037 0.8840.847 0.765 0.049 0.898w/o edge0.806 0.721 0.075 0.863 0.822 0.697 0.036 0.8890.844 0.767 0.048 0.899BTSNet 0.824 0.753 0.071 0.875 0.834 0.716 0.033 0.8970.852 0.781 0.046 0.903TABLE IXABLATION ANALYSIS FOR THE SPATIAL RESOLUTION OF THE INPUT IMAGE. 
THE PERFORMANCE INCREASES AS THE INPUT SIZE GROWS, SATURATINGAS THE INPUT SIZE REACHES 704 × 704CAMOCOD10KNC4KsizeSmF w βME ϕSmF w βME ϕSmF w βME ϕ480 0.812 0.735 0.074 0.867", "figure_id": "tab_6", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "ANALYSIS FOR THE SPATIAL RESOLUTION OF FEATURES CROPPED FROM f 2 .", "figure_data": "CAMOCOD10KNC4KsizeSmF w βME ϕSmF w βME ϕSmF w βME ϕ72 0.812 0.729 0.074 0.8640.824 0.695 0.036 0.8880.847 0.768 0.049 0.89888 0.820 0.748 0.072 0.8750.830 0.707 0.034 0.8940.849 0.773 0.047 0.900104 0.814 0.741 0.073 0.8690.833 0.715 0.032 0.8970.853 0.783 0.045 0.904120 0.824 0.753 0.071 0.8750.834 0.716 0.033 0.8970.852 0.781 0.046 0.903136 0.824 0.753 0.072 0.8720.832 0.716 0.035 0.8950.851 0.780 0.048 0.902TABLE XIABLATION ANALYSIS FOR THE EXPANSION RATIO.CAMOCOD10KNC4KratioSmF w βME ϕSmF w βME ϕSmF w βME ϕ1 0.820 0.746 0.074 0.8650.831 0.713 0.036 0.8800.848 0.776 0.049 0.8941.20.824 0.753 0.071 0.8750.834 0.716 0.033 0.8970.852 0.781 0.046 0.9031.40.822 0.743 0.075 0.8570.832 0.717 0.032 0.8820.851 0.774 0.048 0.8961.60.821 0.746 0.074 0.8660.830 0.708 0.036 0.8830.847 0.762 0.049 0.8961.80.820 0.745 0.073 0.8670.828 0.710 0.037 0.8800.844 0.761 0.048 0.897", "figure_id": "tab_7", "figure_label": "X", "figure_type": "table" }, { "figure_caption": "ANALYSIS FOR THE BIFURCATED BACKBONE NETWORK. BTSNET-BE2: ADOPTING THE BIFURCATED ENCODER OF[20], [50] AS THE BTSNet-BE2 0.811 0.735 0.074 0.864 0.824 0.700 0.034 0.894 0.847 0.773 0.047 0.901 BTSNet 0.824 0.753 0.071 0.875 0.834 0.716 0.033 0.897 0.852 0.781 0.046 0.903 TABLE XIII COMPUTATIONAL COMPLEXITIES OF BTSNET AND 9 STATE-OF-THE-ART COD MODELS. calculating the average value. The experimental results are presented in Table XIII. It is worth noting that we also report the inference time when fully utilizing the GPU. For instance, in the first column, 0.6732 32", "figure_data": "BACKBONE NETWORK.CAMOCOD10KNC4KSmF w βME ϕSmF w βME ϕSmF w βME ϕSINetPFNetPreyNet BSANetFAPNetBGNetSINetV2BgNetBgNet+BTSNet BTSNet+Param(M)48.9546.538.5332.5829.6979.8526.9860.4760.7751.9752.27GMACs77.6976.04143.4899.82118.75167.3849.1110.74116.0146.1949.79Time(s)0.04050.04040.07440.04320.05380.04550.02570.04920.05560.03990.0526Time(s)0.67321.16241.00771.11360.91401.35661.25910.67270.79561.22831.5221Batch2448203018346016166060Average(s)0.02810.02420.05030.03710.05080.03990.02100.04200.04970.02050.0254", "figure_id": "tab_8", "figure_label": "XII", "figure_type": "table" } ]
Tianyou Chen; Jin Xiao; Xiaoguang Hu; Guofeng Zhang; Shaojie Wang
[ { "authors": "N Price; S Green; J Troscianko; T Tregenza; M Stevens", "journal": "Scientific reports", "ref_id": "b0", "title": "Background matching and disruptive coloration as habitat-specific strategies for camouflage", "year": "2019" }, { "authors": "M Stevens; S Merilaita", "journal": "Philosophical Transactions of the Royal Society B: Biological Sciences", "ref_id": "b1", "title": "Animal camouflage: current issues and new perspectives", "year": "2009" }, { "authors": "D Fan; T Zhou; G Ji; Y Zhou; G Chen; H Fu; J Shen; L Shao", "journal": "IEEE Trans. Medical Imaging", "ref_id": "b2", "title": "Inf-net: Automatic COVID-19 lung infection segmentation from CT images", "year": "2020" }, { "authors": "D Fan; G Ji; T Zhou; G Chen; H Fu; J Shen; L Shao", "journal": "", "ref_id": "b3", "title": "Pranet: Parallel reverse attention network for polyp segmentation", "year": "2020" }, { "authors": "R P De La Fuente; X Delclòs; E Peñalver; M Speranza; J Wierzchos; C Ascaso; M S Engel", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b4", "title": "Early evolution and ecology of camouflage in insects", "year": "2012" }, { "authors": "J R Hall; O Matthews; T N Volonakis; E Liggins; K P Lymer; R Baddeley; I C Cuthill; N E Scott-Samuel", "journal": "Defence Technology", "ref_id": "b5", "title": "A platform for initial testing of multiple camouflage patterns", "year": "2021" }, { "authors": "T Zhou; Y Zhou; C Gong; J Yang; Y Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b6", "title": "Feature aggregation and propagation network for camouflaged object detection", "year": "2022" }, { "authors": "M Zhang; S Xu; Y Piao; D Shi; S Lin; H Lu", "journal": "ACM MM", "ref_id": "b7", "title": "Preynet: Preying on camouflaged objects", "year": "2022-09-10" }, { "authors": "J Yan; T Le; K Nguyen; M Tran; T Do; T V Nguyen", "journal": "IEEE Access", "ref_id": "b8", "title": "Mirrornet: Bio-inspired camouflaged object segmentation", "year": "2021" }, { "authors": "D Fan; G Ji; G Sun; M Cheng; J Shen; L Shao", "journal": "", "ref_id": "b9", "title": "Camouflaged object detection", "year": "2020" }, { "authors": "Y Lv; J Zhang; Y Dai; A Li; B Liu; N Barnes; D Fan", "journal": "", "ref_id": "b10", "title": "Simultaneously localize, segment and rank the camouflaged objects", "year": "2021" }, { "authors": "G.-P Ji; L Zhu; M Zhuge; K Fu", "journal": "Pattern Recognition", "ref_id": "b11", "title": "Fast camouflaged object detection via edge-based reversible re-calibration network", "year": "2022" }, { "authors": "M Zhuge; X Lu; Y Guo; Z Cai; S Chen", "journal": "Pattern Recognition", "ref_id": "b12", "title": "Cubenet: X-shape connection for camouflaged object detection", "year": "2022" }, { "authors": "Y Sun; S Wang; C Chen; T Xiang", "journal": "", "ref_id": "b13", "title": "Boundary-guided camouflaged object detection", "year": "2022" }, { "authors": "H Mei; G Ji; Z Wei; X Yang; X Wei; D Fan", "journal": "", "ref_id": "b14", "title": "Camouflaged object segmentation with distraction mining", "year": "2021" }, { "authors": "D.-P Fan; G.-P Ji; M.-M Cheng; L Shao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Concealed object detection", "year": "2021" }, { "authors": "S Li; X Sui; X Luo; X Xu; Y Liu; R S M Goh", "journal": "", "ref_id": "b16", "title": "Medical image segmentation using squeeze-and-expansion transformers", "year": "2021" }, { "authors": "Y Mao; J Zhang; Z Wan; Y Dai; A Li; Y Lv; X Tian; D Fan; N 
Barnes", "journal": "CoRR", "ref_id": "b17", "title": "Transformer transforms salient object detection and camouflaged object detection", "year": "2021" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b18", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Z Wu; L Su; Q Huang", "journal": "", "ref_id": "b19", "title": "Cascaded partial decoder for fast and accurate salient object detection", "year": "2019-05-15" }, { "authors": "Q Hou; M Cheng; X Hu; A Borji; Z Tu; P H S Torr", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b20", "title": "Deeply supervised salient object detection with short connections", "year": "2019" }, { "authors": "T Chen; X Hu; J Xiao; G Zhang; S Wang", "journal": "Neurocomputing", "ref_id": "b21", "title": "Binet: Bidirectional interactive network for salient object detection", "year": "2021" }, { "authors": "J Liu; Q Hou; M Cheng; J Feng; J Jiang", "journal": "", "ref_id": "b22", "title": "A simple poolingbased design for real-time salient object detection", "year": "2019" }, { "authors": "X Qin; Z V Zhang; C Huang; C Gao; M Dehghan; M Jägersand", "journal": "", "ref_id": "b23", "title": "Basnet: Boundary-aware salient object detection", "year": "2019" }, { "authors": "S Ren; K He; R Girshick; J Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b24", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2017" }, { "authors": "S Li; D A F Florêncio; Y Zhao; C Cook; W Li", "journal": "", "ref_id": "b25", "title": "Foreground detection in camouflaged scenes", "year": "2017" }, { "authors": "F Xue; C Guoying; R Hong; J Gu", "journal": "Multim. Syst", "ref_id": "b26", "title": "Camouflage texture evaluation using a saliency map", "year": "2015" }, { "authors": "A Tankus; Y Yeshurun", "journal": "Comput. Vis. Image Underst", "ref_id": "b27", "title": "Convexity-based visual camouflage breaking", "year": "2001" }, { "authors": "F Xue; C Yong; S Xu; H Dong; Y Luo; W Jia", "journal": "Multim. Tools Appl", "ref_id": "b28", "title": "Camouflage performance analysis and evaluation framework based on features fusion", "year": "2016" }, { "authors": "T Chen; J Xiao; X Hu; G Zhang; S Wang", "journal": "Knowledge-Based Systems", "ref_id": "b29", "title": "Boundary-guided network for camouflaged object detection", "year": "2022" }, { "authors": "Y Sun; G Chen; T Zhou; Y Zhang; N Liu", "journal": "", "ref_id": "b30", "title": "Context-aware crosslevel fusion network for camouflaged object detection", "year": "2021" }, { "authors": "J Ren; X Hu; L Zhu; X Xu; Y Xu; W Wang; Z Deng; P.-A Heng", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b31", "title": "Deep texture-aware features for camouflaged object detection", "year": "2023" }, { "authors": "W Zhai; Y Cao; H Xie; Z.-J Zha", "journal": "IEEE Transactions on Multimedia", "ref_id": "b32", "title": "Deep texton-coherence network for camouflaged object detection", "year": "2022" }, { "authors": "M.-C Chou; H.-J Chen; H.-H Shuai", "journal": "", "ref_id": "b33", "title": "Finding the achilles heel: Progressive identification network for camouflaged object detection", "year": "2022" }, { "authors": "P Li; X Yan; H Zhu; M Wei; X.-P Zhang; J Qin", "journal": "IEEE Transactions on Image Processing", "ref_id": "b34", "title": "Findnet: Can you find me? 
boundary-and-texture enhancement network for camouflaged object detection", "year": "2022" }, { "authors": "C He; L Xu; Z Qiu", "journal": "", "ref_id": "b35", "title": "Eldnet: Establishment and refinement of edge likelihood distributions for camouflaged object detection", "year": "2022" }, { "authors": "J Zhu; X Zhang; S Zhang; J Liu", "journal": "", "ref_id": "b36", "title": "Inferring camouflaged objects by texture-aware interactive guidance network", "year": "2021" }, { "authors": "Y Pang; X Zhao; L Zhang; H Lu", "journal": "", "ref_id": "b37", "title": "Multi-scale interactive network for salient object detection", "year": "2020" }, { "authors": "X Qin; Z V Zhang; C Huang; M Dehghan; O R Zaïane; M Jägersand", "journal": "Pattern Recognit", "ref_id": "b38", "title": "U 2 -net: Going deeper with nested u-structure for salient object detection", "year": "2020" }, { "authors": "L Zhang; J Dai; H Lu; Y He; G Wang", "journal": "", "ref_id": "b39", "title": "A bi-directional message passing model for salient object detection", "year": "2018" }, { "authors": "S Liu; D Huang; Y Wang", "journal": "", "ref_id": "b40", "title": "Receptive field block net for accurate and fast object detection", "year": "2018" }, { "authors": "G Ji; K Fu; Z Wu; D Fan; J Shen; L Shao", "journal": "", "ref_id": "b41", "title": "Full-duplex strategy for video object segmentation", "year": "2021" }, { "authors": "H Zhao; J Shi; X Qi; X Wang; J Jia", "journal": "", "ref_id": "b42", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "J Wei; S Wang; Q Huang", "journal": "", "ref_id": "b43", "title": "F 3 net: Fusion, feedback and focus for salient object detection", "year": "2020" }, { "authors": "T Chen; J Xiao; X Hu; G Zhang; S Wang", "journal": "Neurocomputing", "ref_id": "b44", "title": "Adaptive fusion network for rgb-d salient object detection", "year": "2023" }, { "authors": "B Xu; H Liang; R Liang; P Chen", "journal": "", "ref_id": "b45", "title": "Locate globally, segment locally: A progressive architecture with knowledge review network for salient object detection", "year": "2021" }, { "authors": "H Zheng; J Fu; Z Zha; J Luo", "journal": "", "ref_id": "b46", "title": "Looking for the devil in the details: Learning trilinear attention sampling network for fine-grained image recognition", "year": "2019" }, { "authors": "Q Jia; S Yao; Y Liu; X Fan; R Liu; Z Luo", "journal": "", "ref_id": "b47", "title": "Segment, magnify and reiterate: Detecting camouflaged objects the hard way", "year": "2022" }, { "authors": "K He; G Gkioxari; P Dollár; R Girshick", "journal": "", "ref_id": "b48", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Y Zhai; D.-P Fan; J Yang; A Borji; L Shao; J Han; L Wang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b49", "title": "Bifurcated backbone strategy for rgb-d salient object detection", "year": "2021" }, { "authors": "P Krähenbühl; V Koltun", "journal": "", "ref_id": "b50", "title": "Efficient inference in fully connected crfs with gaussian edge potentials", "year": "2011" }, { "authors": "M Feng; H Lu; Y Yu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b51", "title": "Residual learning for salient object detection", "year": "2020" }, { "authors": "T Zhao; X Wu", "journal": "", "ref_id": "b52", "title": "Pyramid feature attention network for saliency detection", "year": "2019" }, { "authors": "D Fan; M Cheng; Y Liu; T Li; A Borji", "journal": "", "ref_id": "b53", "title": "Structure-measure: A new way to evaluate 
foreground maps", "year": "2017" }, { "authors": "D Fan; G Ji; X Qin; M Cheng", "journal": "SCIENTIA SINICA Informationis", "ref_id": "b54", "title": "Cognitive vision inspired object segmentation metric and loss function", "year": "2021" }, { "authors": "R Margolin; L Zelnik-Manor; A Tal", "journal": "", "ref_id": "b55", "title": "How to evaluate foreground maps", "year": "2014" }, { "authors": "D P Kingma; J Ba", "journal": "ICLR", "ref_id": "b56", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Q Zhai; X Li; F Yang; C Chen; H Cheng; D.-P Fan", "journal": "", "ref_id": "b57", "title": "Mutual graph learning for camouflaged object detection", "year": "2021" }, { "authors": "H Zhu; P Li; H Xie; X Yan; D Liang; D Chen; M Wei; J Qin", "journal": "", "ref_id": "b58", "title": "I can find you! boundary-guided separated attention network for camouflaged object detection", "year": "2022" }, { "authors": "S Gao; M Cheng; K Zhao; X Zhang; M Yang; P H S Torr", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b59", "title": "Res2net: A new multi-scale backbone architecture", "year": "2021" }, { "authors": "A Krizhevsky", "journal": "CoRR", "ref_id": "b60", "title": "One weird trick for parallelizing convolutional neural networks", "year": "2014" }, { "authors": "A Kirillov; E Mintun; N Ravi; H Mao; C Rolland; L Gustafson; T Xiao; S Whitehead; A C Berg; W Lo; P Dollár; R B Girshick", "journal": "CoRR", "ref_id": "b61", "title": "Segment anything", "year": "2023" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly; J Uszkoreit; N Houlsby", "journal": "ICLR", "ref_id": "b62", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "W Wang; E Xie; X Li; D.-P Fan; K Song; D Liang; T Lu; P Luo; L Shao", "journal": "", "ref_id": "b63", "title": "Pyramid vision transformer: A versatile backbone for dense prediction without convolutions", "year": "2021" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b64", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "F Yang; Q Zhai; X Li; R Huang; A Luo; H Cheng; D.-P Fan", "journal": "", "ref_id": "b65", "title": "Uncertainty-guided transformer reasoning for camouflaged object detection", "year": "2021" }, { "authors": "Z Liu; Z Zhang; Y Tan; W Wu", "journal": "", "ref_id": "b66", "title": "Boosting camouflaged object detection with dual-task interactive transformer", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 139.16, 307.86, 160.87, 27.82 ], "formula_id": "formula_0", "formula_text": "H 2 i × W 2 i i ≤ 2 H 2 i+1 × W 2 i+1 i > 2 (1)" }, { "formula_coordinates": [ 5, 134.83, 426.24, 165.2, 9.65 ], "formula_id": "formula_1", "formula_text": "r 5 = M F EM (f 5 ),(2)" }, { "formula_coordinates": [ 5, 124.26, 444.86, 175.77, 9.65 ], "formula_id": "formula_2", "formula_text": "r 4 = M F EM (f 4 + r 5 ),(3)" }, { "formula_coordinates": [ 5, 124.26, 463.48, 175.77, 9.65 ], "formula_id": "formula_3", "formula_text": "r 3 = M F EM (f 3 + r 4 ),(4)" }, { "formula_coordinates": [ 5, 120.45, 534.52, 179.58, 12.69 ], "formula_id": "formula_4", "formula_text": "E 1 = abs(P a 3 (M 1 ) -M 1 )(5)" }, { "formula_coordinates": [ 5, 86.02, 667.39, 214, 12.69 ], "formula_id": "formula_5", "formula_text": "M n 1 = f minmax (M 1 ), E n 1 = f minmax (E 1 ),(6)" }, { "formula_coordinates": [ 5, 105.63, 687.68, 194.39, 63.11 ], "formula_id": "formula_6", "formula_text": "M b 1 (i, j) = 1, M n 1 (i, j) > 0.5 0, M n 1 (i, j) ≤ 0.5 (7) E b 1 (i, j) = 1, E n 1 (i, j) > 0.5 0, E n 1 (i, j) ≤ 0.5(8)" }, { "formula_coordinates": [ 5, 374.22, 558.36, 188.81, 9.65 ], "formula_id": "formula_7", "formula_text": "R i = M F EM (F i ), i = 3, 4, 5,(9)" }, { "formula_coordinates": [ 5, 323.17, 581.93, 239.86, 25.57 ], "formula_id": "formula_8", "formula_text": "C m i , C e i , R ′ i = BEM (R i , R ′ i+1 , C e i+1 ), i = 3, 4, BEM (R 5 , C e ), i = 5,(10)" }, { "formula_coordinates": [ 6, 106.76, 305.31, 193.27, 9.65 ], "formula_id": "formula_9", "formula_text": "M 3 , E 3 = M GF M (f 2 , M 2 , E 2 ),(11)" }, { "formula_coordinates": [ 6, 3.57, 452.84, 286.31, 225.7 ], "formula_id": "formula_10", "formula_text": "𝐶 1 𝑃 8 𝐶 3 𝐷 2 𝐶 1 𝑃 4 𝐶 3 𝐷 2 𝐶 1 𝑃 2 𝐶 3 𝐷 2 𝐶 1 𝐶 3 𝐷 2 Shortcut Block + + Non-Local Block 𝐶 1 𝐶 1 𝐶 1 𝐶 1 𝐶 1 × Softmax × + + FEM 𝑓 𝑖𝑛 𝑓 𝑜𝑢𝑡 𝐶 1 1 × 1 Convolution 𝑃 2 𝐷 2 + × Softmax 2 × 2 MaxPooling" }, { "formula_coordinates": [ 6, 406.82, 504.36, 156.21, 9.65 ], "formula_id": "formula_11", "formula_text": "f nl = C 1 (f in ),(12)" }, { "formula_coordinates": [ 6, 350.93, 524.3, 212.1, 9.65 ], "formula_id": "formula_12", "formula_text": "Q = C 1 (f nl ), K = C 1 (f nl ), V = C 1 (f nl ),(13)" }, { "formula_coordinates": [ 6, 385.14, 542.18, 177.9, 11.03 ], "formula_id": "formula_13", "formula_text": "S = sof tmax(Q T ⊗ K),(14)" }, { "formula_coordinates": [ 6, 409.89, 562.13, 153.15, 12.69 ], "formula_id": "formula_14", "formula_text": "f o nl = V ⊗ S,(15)" }, { "formula_coordinates": [ 7, 114, 294.66, 186.02, 12.69 ], "formula_id": "formula_15", "formula_text": "f i c = C 1 (f in ), i ∈ {1, 2, 3, 4},(16)" }, { "formula_coordinates": [ 7, 64.51, 314.75, 235.51, 26.54 ], "formula_id": "formula_16", "formula_text": "O i c = D 2 (C 3 (P 8 (f 1 c ))), i = 1, D 2 (C 3 (P 2 4-i (f i c + O i-1 c ))), i = 2, 3, 4,(17)" }, { "formula_coordinates": [ 7, 145.05, 460.75, 154.97, 12.69 ], "formula_id": "formula_17", "formula_text": "f o s = C 1 (f in ),(18)" }, { "formula_coordinates": [ 7, 155.11, 479.16, 144.92, 12.69 ], "formula_id": "formula_18", "formula_text": "f o c = O 4 c ,(19)" }, { "formula_coordinates": [ 7, 128.97, 497.57, 171.06, 12.69 ], "formula_id": "formula_19", "formula_text": "f out = f o s + f o c + f o nl ,(20)" }, { "formula_coordinates": [ 7, 368.3, 312.97, 194.73, 12.69 ], "formula_id": "formula_20", "formula_text": "f = C 3 (cat(C e low , f low + f high )),(21)" }, { "formula_coordinates": [ 7, 399.56, 335.34, 163.47, 9.65 ], "formula_id": "formula_21", "formula_text": "f ca = 
CA(f ) × f,(22)" }, { "formula_coordinates": [ 7, 390.67, 355.64, 172.37, 9.65 ], "formula_id": "formula_22", "formula_text": "f out = SA(f ca ) × f ca ,(23)" }, { "formula_coordinates": [ 7, 332.72, 373.86, 230.32, 12.69 ], "formula_id": "formula_23", "formula_text": "C e high = C 3 (f out ), C m high = C 3 (C 3 (C 3 (f out ))),(24)" }, { "formula_coordinates": [ 7, 386.47, 438.82, 176.57, 9.65 ], "formula_id": "formula_24", "formula_text": "CA(f ) = σ(M (P g (f ))),(25)" }, { "formula_coordinates": [ 7, 386.22, 499.64, 176.82, 12.69 ], "formula_id": "formula_25", "formula_text": "SA(f ) = σ(C 7 (P c g (f ))),(26)" }, { "formula_coordinates": [ 7, 257.05, 553.12, 281.65, 156.36 ], "formula_id": "formula_26", "formula_text": "MGFM MGFM SFM CA SA 𝐶 3 + C 𝐶 3 • • 𝐶 3 + 𝐶 3 𝐶 3 𝑀 2 𝑀 3 𝐸 2 𝐸 3 𝑅 2 𝑓 𝑎 𝑓 𝑟 𝑓 𝑓 𝑓 𝑒" }, { "formula_coordinates": [ 8, 48.96, 62.63, 234.16, 175.87 ], "formula_id": "formula_27", "formula_text": "SFM CA SA 𝐶 3 • • + CA SA 𝐶 3 • • + CA SA 𝐶 3 • • + CA SA 𝐶 3 • • C out 𝑅 2 𝑀 2 C C C C Fig. 7" }, { "formula_coordinates": [ 8, 67.24, 626.32, 232.79, 26.54 ], "formula_id": "formula_28", "formula_text": "G i conv = C 3 (cat(G i , M 2 )), i = 1, C 3 (cat(G i + G i o , M 2 )), i = 2, 3, 4,(27)" }, { "formula_coordinates": [ 8, 116.24, 662.95, 183.79, 12.69 ], "formula_id": "formula_29", "formula_text": "G i sa = G i conv × SA(G i conv ),(28)" }, { "formula_coordinates": [ 8, 126.31, 684.05, 173.71, 12.69 ], "formula_id": "formula_30", "formula_text": "G i o = G i sa × CA(G i sa ),(29)" }, { "formula_coordinates": [ 8, 119.74, 705.16, 180.28, 12.69 ], "formula_id": "formula_31", "formula_text": "f f = cat(G 1 o , G 2 o , G 3 o , G 4 o ),(30)" }, { "formula_coordinates": [ 8, 48.96, 725.55, 251.05, 22.49 ], "formula_id": "formula_32", "formula_text": "G i o is the output of the i-th branch, f f is the output of SFM." }, { "formula_coordinates": [ 8, 395.2, 195.95, 167.84, 9.65 ], "formula_id": "formula_33", "formula_text": "f ca = f f × CA(f f ),(31)" }, { "formula_coordinates": [ 8, 394.34, 214.77, 168.7, 9.65 ], "formula_id": "formula_34", "formula_text": "f a = f ca × SA(f ca ),(32)" }, { "formula_coordinates": [ 8, 382.13, 293.37, 180.91, 9.65 ], "formula_id": "formula_35", "formula_text": "f e = C 3 (cat(C 3 (f a ), E 2 )),(33)" }, { "formula_coordinates": [ 8, 388.72, 390.64, 174.32, 9.65 ], "formula_id": "formula_36", "formula_text": "f r = f a + C 3 (f a + f e ),(34)" }, { "formula_coordinates": [ 8, 379.09, 409.46, 183.95, 9.65 ], "formula_id": "formula_37", "formula_text": "M 3 = C 3 (f r ), E 3 = C 3 (f e ),(35)" }, { "formula_coordinates": [ 9, 91.91, 166.14, 423.19, 130.57 ], "formula_id": "formula_38", "formula_text": "Image Pred R 4 ' R 3 +R 4 ' R 3 ' GT 𝑅 2 𝑓 𝑓 𝑓 𝑎 𝑓 𝑟" }, { "formula_coordinates": [ 9, 133.22, 443.53, 166.8, 12.69 ], "formula_id": "formula_39", "formula_text": "L = L w IoU + L w BCE ,(36)" }, { "formula_coordinates": [ 9, 53.95, 608.51, 246.08, 30.32 ], "formula_id": "formula_40", "formula_text": "L mask = L(M 1 , G)+ 5 i=3 L(Rst(C m i ), G)+L(M 3 , G),(37)" }, { "formula_coordinates": [ 9, 55.86, 644.28, 244.16, 38.91 ], "formula_id": "formula_41", "formula_text": "L boundary = L(E 1 , G) + 5 i=3 L(Rst(C e i ), G) + L(E 3 , G),(38)" }, { "formula_coordinates": [ 9, 114.4, 686.19, 185.63, 9.65 ], "formula_id": "formula_42", "formula_text": "L total = L mask + L boundary ,(39)" } ]
10.18653/v1/D19-1380
2024-02-02
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b11", "b3", "b30", "b18", "b15", "b50", "b34", "b31" ], "table_ref": [], "text": "The sentence, together with the word, are the two fundamental grammatical units of human language. Representing sentences for machine learning, which involves transforming a sentence into a vector or a fixed-length representation, is an important component of NLP. The quality of these representations affects the performance of downstream NLP tasks like text classification and text similarity (Conneau and Kiela, 2018).\nDeep neural networks have played a major role in obtaining sentence representations. While there have been significant advancements in the development of large language models (LLMs) such as GPT-3 (Brown et al., 2020), BLOOM (Workshop, 2023), they learn through effective word representations and modelling of the language at the (next) word level. Endowing models with the ability to learn effective representations of higher linguistic units beyond words -such as sentences -is useful.\nFor instance, sentence representations can help in retrieving semantically similar documents prior to generation. LangChain 1 and various other frameworks like DSPy (Khattab et al., 2023), have underscored the critical demand for proficient sentence representations. The documents retrieved serve as valuable resources for generating fact-based responses, using custom documents to address user queries, and fulfilling other essential functions.\nHowever, current language models exhibit drawbacks in obtaining sentence representations out-ofthe-box. For instance, Ethayarajh (2019) showed that out-of-the-box representations from BERT (Devlin et al., 2019) are fraught with problems such as anisotropy-representations occupying a narrow cone, making every representation closer to all others. Also, they are impractical for real-life applications: finding the best match for a query takes hours (Reimers and Gurevych, 2019).\nTo overcome the inadequacy of directly using sentence representations from language models, numerous methods have been developed. Several works have proposed to post-process the representations from BERT to alleviate the anisotropy (Li et al., 2020;Huang et al., 2021b) or repurpose representations from different layers of the model (Kim et al., 2021). But there has been a steadily growing body of works that move away from such postprocessing and introduce new methods.\nPerhaps due to the rapid advancements in the field (Figure 1), there are no literature reviews discussing the diverse range of techniques for learning sentence representations. The present paper offers a review of these techniques, with a specific emphasis on deep learning methods. Our review caters to two audiences: (a) Researchers from various fields seeking to get insights into recent breakthroughs in " }, { "figure_ref": [], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "We structure our literature review as follows:\n• § 2 provides a brief history of methods to learn sentence representations and the different components of a modern framework. • § 3 provides a review of supervised sentence representations that use labeled data to learn sentence representations. • § 4 reviews methods that use unlabeled data to learn sentence representations (also called unsupervised sentence representation learning), a major focus of recent methods. • § 5 describes methods that draw inspiration from other fields such as computer vision. 
• § 6 provides a discussion of trends and analysis. • § 7 discusses the challenges and suggests some future directions for research." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Sentence Representations", "publication_ref": [ "b39", "b48", "b32", "b11", "b37", "b31", "b21", "b47", "b10", "b77", "b41", "b65", "b71", "b14", "b54", "b38", "b5", "b6", "b34", "b20", "b15" ], "table_ref": [], "text": "Before the advent of neural networks, bag-of-words models were commonly used to represent sentences, but they suffered from limitations such as being unable to capture the relationships between words or the overall structure of the sentence. Numerous efforts have aimed to improve sentence representations (Figure 1). Inspired by Word2Vec (Mikolov et al., 2013;Pennington et al., 2014), Kiros et al. (2015) trained neural networks to predict the surrounding sentences of a given target sentence. Subsequently, Conneau and Kiela (2018) employed various recurrent neural networks (RNNs) to produce sentence embeddings, exploring their linguistic attributes, including part-of-speech tags, verb tense and named entity recognition. Notably, this study utilized natural language inference (NLI) data for neural network training, predating the emergence of extensive pretrained models such as BERT (Devlin et al., 2019). BERT and similar models have since served as a foundation for enhancing sentence representations. Whether Large Language Models will ignite further advancements in sentence representations, or whether pretrained language models like BERT remain pivotal, is a crucial inquiry within today's context ( § 6)." }, { "figure_ref": [ "fig_4", "fig_2", "fig_4" ], "heading": "Components of Sentence Representations", "publication_ref": [ "b50", "b2", "b63", "b46" ], "table_ref": [ "tab_1" ], "text": "Neural networks have become the de-facto standard for learning sentence representations. The network takes two sentences as input and creates a vector for each sentence. These vectors are then trained to be similar for sentences that mean the same thing and different for sentences with different meanings. Learning sentence representations using neural networks involves the following generic components (Figure 3): (1) Data: pairs of semantically similar sentences, either annotated by humans or generated through transformations that create positive and negative pairs; (2) Model: a backbone network, typically an RNN or a pretrained transformer such as BERT (Devlin et al., 2019) or T5 (Raffel et al., 2020); (3) Transform: an operation, such as the pooling proposed by Reimers and Gurevych (2019), that aggregates token representations into a fixed-length sentence vector; and (4) Loss: typically a contrastive loss that pulls semantically similar examples together and pushes dissimilar examples apart. Specifically, given a set of example pairs D = {x_i, x_i^p}, the model produces representations h_i and h_i^p for each pair, and the contrastive loss for an example is $l_i = -\log \frac{e^{\mathrm{sim}(h_i, h_i^{p})}}{\sum_{j=1}^{N} e^{\mathrm{sim}(h_i, h_j)}}$, where N is the size of a mini-batch and sim(·, ·) is the similarity function, which plays a crucial role. However, when selecting an appropriate loss function, several factors need to be considered. 
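To make the objective concrete, the following is a minimal sketch of this in-batch contrastive loss in PyTorch. The use of cosine similarity and a temperature are common defaults assumed here for illustration, not prescribed by the formulation above.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(h, h_pos, temperature=0.05):
    """Hypothetical InfoNCE-style loss over a mini-batch.

    h, h_pos: (N, d) tensors with the representations of N sentences and
    their positives; row i of h_pos is the positive for row i of h, and
    every other row acts as an in-batch negative.
    """
    h = F.normalize(h, dim=-1)
    h_pos = F.normalize(h_pos, dim=-1)
    sim = h @ h_pos.T / temperature                    # (N, N) similarity matrix
    labels = torch.arange(sim.size(0), device=sim.device)
    # Row i should be most similar to column i (its own positive).
    return F.cross_entropy(sim, labels)

# Stand-ins for encoder outputs, just to show the call.
loss = in_batch_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```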
These factors include the choice of similarity measures and the characteristics of the negative examples. The different components have disproportionate effects in learning sentence representations. While the Model component has played an important role and has brought the most advances in learning sentence representations, the Data component cannot be disregarded. In their influential paper, Reimers and Gurevych (2019) utilized this versatile framework to generate highly effective sentence embeddings, which has subsequently served as a cornerstone for further research. This framework, commonly referred to as the bi-encoder or Siamese network approach, involves encoding the query and candidate separately. This does not encourage interactions between words. Encouraging word interactions can be achieved through a cross-encoder, where the query and candidate are concatenated and encoded by a single model. However, this approach is computationally expensive and we have omitted it in this paper. In contrast, the Siamese BERT network pre-computes query and candidate vectors, enabling fast retrieval. Figure 2 illustrates the progression of work aimed at improving sentence representations. Two primary approaches stand out: supervised and unsupervised methods. For a clearer understanding of innovations, we categorize these methods based on variations of common techniques. Each category identifies contributions that target specific components (Figure 3): The Better Positives category focuses on refining augmentation techniques, primarily addressing the Data component. Conversely, the Alternate Loss and Objectives category explores improvements in the contrastive Loss function. The mapping between categories and components is further depicted in Table 1. Natural language understanding involves intricate reasoning. One way to learn better sentence representations is by excelling at tasks that demand reasoning. Large-scale supervised datasets for natural language understanding have emerged over the years: SNLI (Bowman et al., 2015), MNLI (Williams et al., 2018), ANLI (Nie et al., 2020). To that end, neural network methods utilize supervised datasets to learn sentence representations." }, { "figure_ref": [], "heading": "Natural Language Inference", "publication_ref": [ "b13", "b12", "b50", "b9", "b24" ], "table_ref": [], "text": "Natural Language Inference (NLI) is the process of determining the logical relationship between a premise (an assumed true sentence) and a hypothesis (a possibly true sentence). The objective of NLI is to determine whether the hypothesis can be logically inferred from the premise (entailment), contradicts the premise (contradiction), or is neutral with respect to it (Dagan et al., 2013). NLI serves as a proxy for evaluating natural language understanding. According to Conneau et al. (2017), learning sentence representations using NLI data can be effectively transferred to other NLP tasks, demonstrating the generality of this approach. Reimers and Gurevych (2019) and subsequent works mainly rely on learning sentence representations using NLI data. There are two noteworthy components that enable this: first, processing inputs individually without promoting interaction between words; second, using an encoder like BERT as the backbone model. The first component is computationally efficient but has been found to result in poorer performance compared to methods that promote interaction between words (Reimers and Gurevych, 2019). 
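To illustrate this efficiency trade-off, here is a rough sketch of the two designs using Hugging Face Transformers; the bert-base-uncased checkpoint and mean pooling are illustrative assumptions rather than the exact configuration of any system discussed here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    """Bi-encoder side: encode each sentence independently with mean pooling."""
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)           # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)            # (B, H)

# Bi-encoder: candidate vectors are computed once and reused for every query.
candidates = embed(["A man is playing a guitar.", "The weather is sunny."])
query = embed(["Someone strums a guitar."])
scores = torch.nn.functional.cosine_similarity(query, candidates)

# Cross-encoder: each (query, candidate) pair is re-encoded jointly, which
# allows word-level interaction but requires a full forward pass per pair;
# a small scoring head (not shown) would map the joint [CLS] vector to a score.
pair = tok("Someone strums a guitar.", "A man is playing a guitar.",
           return_tensors="pt")
with torch.no_grad():
    joint = enc(**pair).last_hidden_state[:, 0]
```

The sketch only highlights the structural difference: the bi-encoder sacrifices word-level interaction for the ability to pre-compute and cache candidate vectors.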
This lack of interaction can limit the network's ability to capture the nuances of language and may result in less accurate sentence embeddings. To address this, efforts such as the work of Cheng (2021) incorporate word-level interaction features into the sentence embedding while maintaining the efficiency of Siamese-BERT networks. Their approach makes use of ideas from knowledge distillation (Hinton et al., 2015): using the rich knowledge in pretrained cross-encoders to improve the performance of Siamese-BERT. Meanwhile, the rise of generative models with a myriad of capabilities has led researchers to explore whether they can serve as better backbone models for sentence representations than encoder-only models like BERT (Ni et al., 2022a). They consider three methods to obtain sentence representations from a pretrained T5 model: the representation of the first token of the encoder, the representation of the first generated token of the decoder, or the mean of the representations from the encoder. They found that such models trained on NLI are performant, showing the utility of generative models for obtaining sentence representations." }, { "figure_ref": [], "heading": "Generating Data", "publication_ref": [], "table_ref": [], "text": "Acquiring supervised data to train sentence representations is a difficult task. However, in recent years, pre-trained models have emerged as a potential solution for generating training data. Furthermore, pre-trained models can serve as weak labelers to create \"silver data\". Cross-encoders that are pretrained on NLI data can be used to obtain silver data. To do this, Thakur et al. (2021a) suggest Augmented-SBERT. Their approach involves using different strategies to mine sentence pairs, followed by labeling them with a cross-encoder to create silver data. The silver data is then combined with the human-labelled training dataset, and a Siamese-BERT network is trained. However, this method requires mining appropriate sentence pairs first. Rather than relying solely on obtaining supervised data, researchers are exploring the use of generative language models to create large amounts of synthetic training data for sentence encoders. This approach has the potential to produce high-quality training data at scale, addressing some of the challenges associated with supervised data acquisition. For instance, Chen et al. (2022b) demonstrate the use of a T5 model trained to generate entailment or contradiction pairs for a given sentence. However, this method still requires providing a sentence from which to generate the entailment/contradiction pairs. DINO, introduced by Schick and Schütze (2021), automates the generation of NLI-style data from instructions using GPT2-XL, eliminating the need to provide a sentence for generating entailment or contradiction pairs. Models trained on the resulting STS-Dino dataset outperform strong baselines on multiple semantic textual similarity datasets." }, { "figure_ref": [ "fig_4" ], "heading": "Unsupervised Sentence Representations", "publication_ref": [ "b11" ], "table_ref": [], "text": "Unlike supervised methods, unsupervised learning techniques do not rely on explicit positive and negative examples but instead employ alternative techniques to mine them. Hence, this setting has garnered significant attention in recent years. 
Additionally, they may also modify the learning objectives." }, { "figure_ref": [], "heading": "Better Positives", "publication_ref": [ "b52" ], "table_ref": [], "text": "Contrastive learning techniques optimize sentence representations by contrasting semantically similar examples against dissimilar ones (c.f § 2.2). A simple way to obtain a semantically similar example is to make minimal changes to it. In contrast to images, where simple transformations such as rotation, clipping, and color distortion can generate semantically similar examples, deleting or replacing a random word in a sentence can drastically change its meaning (Schlegel et al., 2021). Therefore, it is crucial to carefully select positive and negative examples for contrastive learning in NLP." }, { "figure_ref": [], "heading": "Surface Level", "publication_ref": [ "b37", "b37", "b17" ], "table_ref": [], "text": "To create a sentence that carries the same meaning as another, one can modify the words or characters in the text. Recent research (Wang et al., 2022;Liu et al., 2021;Wu et al., 2022d) suggests certain transformations that preserve the semantic meaning. Wang et al. (2022) propose randomly flipping the case of some tokens, while Liu et al. (2021) mask spans of tokens to get positive instances, and Wu et al. (2022d) suggest to repeat certain words or subwords. Besides generating positive instances, these transformations help in fixing certain biases in representations generated by transformers. For example, Jiang et al. (2022a) found that avoiding high-frequency tokens can result in better sentence representations, and transformations that mask them out while learning sentence representations can improve its quality.\nHowever, altering the surface characteristics of sentences can lead to models relying on shortcuts rather than learning semantics (Du et al., 2021). To address this issue, Wu et al. (2022a) propose the use of multiple augmentation strategies rather than a single transformation. They use shuffling, repeating, and dropping words as transformation strategies to improve model robustness. Additionally, they implement mechanisms to enhance learning from multiple positive examples." }, { "figure_ref": [], "heading": "Model Level", "publication_ref": [ "b20", "b33", "b35", "b60" ], "table_ref": [], "text": "Minor modifications to the words or the structure of a sentence can still result in big changes in semantics in language processing. However, researchers have explored another method, where such small modifications can be made in the representation space by leveraging the distinctive characteristics of the backbone model utilized in contrastive learning. These characteristics might be architectural choices, or using representations from certain components of the model.\nOne such approach uses Dropout -a regularization technique used in deep learning to prevent overfitting of a model. During training, some neurons in the layer are randomly deactivated, resulting in slightly different representations when the same training instance is passed through the model multiple times. These different representations can be used as positive examples for learning. Recent studies such as Gao et al. (2021) have demonstrated the effectiveness of dropout as an augmentation strategy. 
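As a rough illustration of dropout-as-augmentation (in the spirit of SimCSE, with a hypothetical choice of checkpoint and pooling), two stochastic forward passes over the same sentence yield two slightly different vectors that can be treated as a positive pair:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.train()  # keep dropout active so repeated passes differ

batch = tok(["A dog runs across the park."], return_tensors="pt")

# Two forward passes of the *same* sentence under different dropout masks.
h1 = model(**batch).last_hidden_state[:, 0]   # [CLS] view 1
h2 = model(**batch).last_hidden_state[:, 0]   # [CLS] view 2

# h1 and h2 form a positive pair; other sentences in a mini-batch would act
# as in-batch negatives in the contrastive loss from § 2.2.
print(torch.nn.functional.cosine_similarity(h1, h2).item())  # close to, but below, 1.0
```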
Several other works have also incorporated this technique and improved upon it: promoting decorrelation between different dimensions (Klein and Nabi, 2022) and adding dropout in the transformation arsenal (Wu et al., 2022a,d).\nOn the other hand, special components can be trained to generate semantically similar representations. One example is the use of prefix modules (Li and Liang, 2021), which are small, trainable modules added to a pretrained language model. Wang and Lu (2022) attach two prefix modules to the Siamese BERT network (c.f § 2) -one each for the two branches -and train them on NLI data. This enables the prefix modules to understand the nuances of the difference between representations. They show that representations from the two modules for the same sentence can then be used as positives." }, { "figure_ref": [], "heading": "Representation Level", "publication_ref": [ "b50", "b31" ], "table_ref": [], "text": "Examining the latent representation of sentences generated by a model yields a valuable benefit. In this scenario, one can discover positive examples by exploring the representation space. These approaches offer the distinct advantage of obviating the need for any data augmentation.\nAlthough BERT's [CLS] representation is commonly used as a sentence representation, it has been shown to be ineffective (Reimers and Gurevych, 2019). In fact, Kim et al. (2021) " }, { "figure_ref": [], "heading": "Alternative Methods", "publication_ref": [ "b21", "b47", "b16", "b72" ], "table_ref": [], "text": "Researchers have explored various other methods for obtaining positive samples for unsupervised sentence representations. One option is weak supervision: using spans from the same document (Giorgi et al., 2021), employing related entities (Nishikawa et al., 2022), and utilizing tweets and retweets-with-quotes (Di Giovanni and Brambilla, 2021). On the other hand, dialogue turns can be used as semantically related pairs of text for learning sentence representations (Zhou et al., 2022b).\nOther approaches use the capability of large language models to perform tasks based on instructions-a technique called \"prompting\". Researchers have used prompts to obtain better sentence representations, as demonstrated in studies such as Jiang et al. (2022a), which employs the \"[X] means [MASK]\" prompt to extract sentence representations from the representation of the \"[MASK]\" token in a sentence. Another study by (Zeng et al., 2022) combines prompt-derived sentence representations with contrastive learning to improve the quality of the representations." }, { "figure_ref": [], "heading": "Alternative Loss and Objectives", "publication_ref": [ "b10", "b54", "b38", "b77", "b41", "b65", "b71" ], "table_ref": [], "text": "In § 2 we discussed Contrastive loss, which is widely used in machine learning. However, this loss suffers from several limitations: for instance it only considers binary relationships between instances and lacks a mechanism to incorporate hardnegatives (negatives that are difficult to distinguish from positive examples). 
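For reference, when a hard negative $h_i^{-}$ is available for each example (for instance, a contradiction sentence in supervised settings), a widely used variant simply folds it into the denominator of the contrastive objective; the temperature $\tau$ and the exact form are assumptions of this sketch rather than a definition shared by all the works below:

```latex
\ell_i = -\log
  \frac{e^{\mathrm{sim}(h_i,\,h_i^{p})/\tau}}
       {\sum_{j=1}^{N} \left( e^{\mathrm{sim}(h_i,\,h_j^{p})/\tau}
                            + e^{\mathrm{sim}(h_i,\,h_j^{-})/\tau} \right)}
```

The strategies surveyed next either supplement, modify, or replace this basic form.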
To overcome these drawbacks, researchers have explored various strategies: Supplementary Losses: Used in addition to contrastive losses, these include: (1) hinge loss (Jiang et al., 2022b), which enhances discrimination between positive and negative pairs; (2) losses for reconstructing the original sentence from its representation to better capture sentence semantics (Wu et al., 2022b) ; (3) a loss to identify masked words and improve sensitivity to meaningless semantic transformations (Chuang et al., 2022); and (4) a loss to minimize redundant information in transformations by minimizing entropy (Chen et al., 2022a) (5) Ranking based losses to ensure that all negatives are not treated equally -some negatives are closer to the query compared to others (Seonwoo et al., 2023;Liu et al., 2023) Modified Contrastive Loss: Wu et al. (2022c) proposed an additional term that incorporates random noise from a Gaussian distribution as negative instances. Also, Zhang et al. (2022d) introduced two losses, angular loss and margin-based triplet loss, to address the intricacies of similarity between pairs of examples.\nDifferent Loss: Moving away from contrastive loss. Disadvantages of contrastive representations include not considering the relevance of different parts of the sentence in the entire representation, and assuming that sentence representations lie in the Euclidean space. Zhang et al. (2020) address the first by maximizing the mutual information between a local context and the entire sentence. Min et al. (2021) address the second by identifying an alternative sub-manifold within the sentence representation space. Other objectives to learn sentence representations include disentangling the syntax and semantics from the representation (Huang et al., 2021a), generating important phrases from sentences instead of using contrastive learning (Wu and Zhao, 2022), or using sentence representation as a strong inductive bias to perform Masked Language Modeling (Yang et al., 2021)." }, { "figure_ref": [], "heading": "Better Negative Sampling", "publication_ref": [ "b5", "b31", "b6", "b14" ], "table_ref": [], "text": "The efficacy of contrastive learning hinges on the quality of negative samples used during training. While most methods prioritize selecting positive samples that bear similarity to the query text, it is equally crucial to include hard negatives that are dissimilar to the query text and pose a challenge for the model to classify. Failure to do so leads to a gradual diminution of the loss gradients, impeding the learning of useful representations (Zhang et al., 2022c). Additionally, using an adequate number of negative samples is also imperative for effective learning (Cao et al., 2022).\nGiven the importance of incorporating hard negatives, several innovative strategies have emerged.\nResearchers have found that mixed-negatives-a combination of representations of a positive and a randomly chosen negative-serve as an excellent hard negative representation (Zhang et al., 2022c). Similarly, Zhou et al. (2022a) leveraged noise from a uniform Gaussian distribution as negatives to foster uniformity in the learned representation spacea metric to assess learned sentence representation. Recently, In contrast to the approach taken by Kim et al. (2021), (Chen et al., 2023) employ representations from various layers as negatives, recognizing that similarities across these layers render them less discriminative. 
This contemporary approach shows enhanced performance on the STS benchmark and subsequent tasks. However, it's important to note that perceptions of what constitutes 'positive' or 'negative' in the literature are constantly evolving.\nFalse negatives are instances where certain negatives exhibit a higher similarity to the anchor sentence compared to other negatives, yet maintain a lower similarity than the positives. Properly identifying and integrating measures to address these false negatives is crucial for enhancing sentence representation learning. (Deng et al., 2023) tackle this by clustering the remaining N-1 sentences in a batch. Sentences within the same cluster are designated as false negatives. To manage this scenario effectively, they employ a Bidirectional Margin Loss. This approach ensures that false negatives are not excessively distanced from the anchor sentence, thereby improving the overall quality of the sentence representation." }, { "figure_ref": [], "heading": "Post-Processing", "publication_ref": [ "b0", "b34" ], "table_ref": [], "text": "Ethayarajh (2019) suggest that the out-of-the-box representations from LLMs are not effective sentence representations. Consequently, several efforts have addressed this issue. Almarwani et al. (2019) utilize the Discrete Cosine Transform, a widely used technique in signal processing, to condense word vectors into fixedlength vectors. This approach has demonstrated its effectiveness in capturing both syntax and seman-tics. Similarly, Li et al. (2020) employ normalizing flows to convert BERT's token representations into a Gaussian distribution, while Huang et al. (2021b) propose a simpler 'whitening' technique that enhances out-of-the-box sentence representations from LLMs by transforming the mean and covariance matrix of the sentence vectors. These post processing techniques have only been tested on BERT based models so far, and their generalization to newer models has not been answered." }, { "figure_ref": [], "heading": "Other Approaches", "publication_ref": [ "b27", "b1", "b23", "b5", "b55", "b22", "b76" ], "table_ref": [], "text": "Multimodal: Human experiences are complex and involve multiple sensory modalities. Thus, it is beneficial to incorporate multiple modalities in learning sentence representations. Researchers have explored different approaches to use images for this purpose: using contrastive loss that utilizes both images and text (Zhang et al., 2022b); optimizing a loss each for visual and textual representation (Jian et al., 2022); grounding text into image (Bordes et al., 2019). Other modalities like audio and video are yet to be incorporated. Given that obtaining supervised data with just one modality is already hard, obtaining the same for multiple modalities will be even more challenging.\nComputer Vision Inspired: Momentum encoder, introduced by He et al. (2020), improves training stability. It utilizes a queue of representations from previous batches as negatives for the current batch, decoupling batch size from the learning process. Several studies have integrated momentum encoder into sentence representation learning, leading to enhanced performance (Cao et al., 2022;Wu et al., 2022a,d;Tan et al., 2022). This might require additional memory in the GPU which is challenging when training large NLP models.\nAnother popular technique, Bootstrap Your Own Latent (BYOL) (Grill et al., 2020), is a selfsupervised learning method that dispenses with negative samples. 
It trains a neural network to predict a set of 'target' representations from an input data point, given an 'online' representation of the same data point. BYOL employs a contrastive loss function to encourage similarity between the online and target representations. An advantage of BYOL is the elimination of the need for negative samples; instead, it uses augmented versions of the same data point as positive samples. This method has been effectively applied to natural language processing by Zhang et al. (2021) It implicitly as-sumes that obtaining an augmented sentence is easy -which might not be the case, as we have seen in the previous sections." }, { "figure_ref": [], "heading": "Trends & Analysis", "publication_ref": [ "b20" ], "table_ref": [ "tab_1" ], "text": "Limited advantages of supervision: Table 1 summarizes all the results. Surprisingly, a simple dropout-based data augmentation technique (Gao et al., 2021) demonstrates superior performance compared to most other methods, including those using T5, which is trained on billions of tokens (Ni et al., 2022a). Note that T5 is trained on a token generation objective that might not be suitable for obtaining better sentence representations. Besides the model, using an appropriate unsupervised task might be important for better representations." }, { "figure_ref": [], "heading": "Downplaying downstream task evaluation:", "publication_ref": [ "b58", "b51", "b42", "b36", "b70" ], "table_ref": [ "tab_1" ], "text": "The neglect of evaluating sentence representations in downstream tasks, as exemplified in Table 1, is noticeable. With LLMs demonstrating remarkable zero-shot performance across various tasks, the utility of sentence representations for tasks beyond semantic similarity and retrieval seems to dwindle. Nevertheless, recent research shows how sentence representations can enhance few-shot text classification performance (Tunstall et al., 2022). Future sentence representations should consider the utility of representations in enhancing few-shot text classification.\nData-centric innovations: Most innovations in this field focus on improving the data aspect, including obtaining better positives or negatives, and generating data using large language models (Schick and Schütze, 2021;Chen et al., 2022b). While generative models like T5 can boost performance, other LLMs like ChatGPT can bring additional benefits because of their scale.\nKeeping up with LLMs: We have identified several noteworthy endeavors using massive language models with billions of parameters for sentence representations. SGPT (Muennighoff, 2022) has successfully trained an open-source GPT decoder-only model on the SNLI and MNLI datasets, surpassing OpenAI's 175B parameter model. Additionally, GTR (Ni et al., 2022b) examined scaling laws, revealing larger T5 models have better performance. Nonetheless, recent developments such as GTE (Li et al., 2023) and BGE (Xiao et al., 2023) highlight that a collection of high-quality datasets for contrastive training can yield significantly better results compared to just using bigger models." }, { "figure_ref": [], "heading": "Challenges", "publication_ref": [ "b53", "b59", "b75", "b47", "b19", "b62" ], "table_ref": [], "text": "Practical Applications and the rise of Tools: Sentence representations are commonly employed for sentence retrieval in practical applications, as evidenced by the increasing number of benchmarks (Thakur et al., 2021b). 
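In such retrieval settings, the core operation is a nearest-neighbour search over pre-computed sentence vectors; the following minimal sketch uses a toy, randomly generated corpus purely for illustration.

```python
import numpy as np

def top_k(query_vec, corpus_vecs, k=3):
    """Return the indices and scores of the k most similar corpus vectors.

    Plain cosine similarity over dense vectors; production systems implement
    the same idea with approximate nearest-neighbour indexes so that it
    scales to millions of sentences.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    scores = c @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]

corpus = np.random.randn(1000, 384)   # stand-ins for sentence embeddings
query = np.random.randn(384)
indices, scores = top_k(query, corpus)
print(indices, scores)
```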
However, their utility extends beyond retrieval, as demonstrated by recent work (Schuster et al., 2022), which leverages sentence representations for identifying documents that share a similar stance on a topic and for isolating documents that diverge from the consensus.\nThe increasing use of sentence representations in practical applications such as retrieval and providing an appropriate context to generative language models to rely on has lead to the rise of tools known as vector databases. These tools enable storing vectors indices and include algorithms for fast retrieval of similar vectors. Popular options such as Pinecone2 and Milvus3 also offer services for cloud hosting and resilience. These vector databases can be integrated with other frameworks such as LangChain, that facilitate the development of LLM applications.\nAdapting to Different Domains: Research has shown that sentence representations learned in one domain may not accurately capture the semantic meaning of sentences in another domain (Jiang et al., 2022b;Thakur et al., 2021a). Some solutions have been proposed in the literature, such as generating queries using a pretrained T5 model on a paragraph from the target domain, or using a pretrained cross-encoder to label the query and paragraph, or using a denoising objective (Wang et al., 2021). Nonetheless, training models that work well across domains remains challenging.\nCross-lingual Sentence Representations: Creating sentence representations that can be used across languages, especially those with limited annotated data, poses a significant challenge (Zhang et al., 2023). New solutions for cross-lingual retrieval are being developed and deployed for real-world use cases. 4 Many scholarly works (Nishikawa et al., 2022;Feng et al., 2022;Wieting et al., 2020) have addressed cross-lingual sentence representation learning in recent times, but they require aligned data between languages, which is hard to obtain." }, { "figure_ref": [], "heading": "Universality of Sentence Representations:", "publication_ref": [ "b11", "b43" ], "table_ref": [ "tab_1" ], "text": "The original purpose of sentence representations was to serve as a versatile tool for various NLP tasks. One prominent effort to evaluate the universality of sentence representations was the SentEval task (Conneau and Kiela, 2018), which tested the representations' performance on text classification, natural language inference, and semantic text similarity tasks. However, many recent works on sentence representation tend to emphasize their effectiveness on semantic text similarity datasets (Table 1). This shift raises questions about the universal nature of these representations-are sentence representations useful only for retrieval, or do they indeed have other applications? Such questions are put back into spotlight by recent benchmarks such as MTEB (Muennighoff et al., 2022)." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This survey offers an overview of sentence representations, presenting a taxonomy of methods. While major innovations focused on obtaining better quality data for contrastive learning, modern advances in generative technologies can accelerate the automatic generation of supervised data at low cost. Although LLMs play a crucial role in informing the advancement of sentence representations, further enhancements in sentence representation learning are necessary to personalize current LLMs to achieve tailored results. 
We highlighted that better multilingual and multi-domain sentence representations are needed, now that LLMs are being deployed in different domains at a rapid pace. We hope that this survey can accelerate advances in sentence representation learning." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "While we have made an effort to encompass a comprehensive range of literature on sentence representations, it is possible that certain papers may have been inadvertently excluded from our literature review. Additionally, we acknowledge that our approach assumes the majority of methods primarily focus on sentences or a limited number of tokens, typically within a few hundred. However, it is important to note that representation learning for documents or longer contexts-an active area of research-utilizes similar techniques. This survey does not cover those specific areas, which may warrant further attention." } ]
Sentence representations are a critical component in NLP applications such as retrieval, question answering, and text classification. They capture the meaning of a sentence, enabling machines to understand and reason over human language. In recent years, significant progress has been made in developing methods for learning sentence representations, including unsupervised, supervised, and transfer learning approaches. However, to date there has been no literature review of sentence representations. In this paper, we provide an overview of the different methods for sentence representation learning, focusing mostly on deep learning models. We provide a systematic organization of the literature, highlighting the key contributions and challenges in this area. Overall, our review highlights the importance of this area in natural language processing, the progress made in sentence representation learning, and the challenges that remain. We conclude with directions for future research, suggesting potential avenues for improving the quality and efficiency of sentence representations.
A Comprehensive Survey of Sentence Representations: From the BERT Epoch to the ChatGPT Era and Beyond
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of some of the milestones in Sentence Representation Learning Research", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "), Ni et al. (2022) Thakur et al. (2021a), Chen et al. (2022b), Schick and Schutze (2021)", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of sentence representation methods.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "1.Data: Data used for learning sentence representations consists of pairs of semantically similar sentences, which can be either annotated by humans or generated through transformations to create positive and negative sentence pairs. (cf. § § 4.1 and 4.3). 2. Model: A sentence representation extraction model is a neural network backbone model unless specified otherwise. The backbone model can take the form of an RNN or a pretrained transformer model like BERT (Devlin et al., 2019) or T5 (Raffel et al., 2020). 3. Transform: Neural network representations are not well suited for use as sentence representations directly. While the [CLS] representations from BERT can serve as such, Reimers and Gurevych (2019) propose a pooling mechanism to obtain sentence representations by aggregating the token representations. The transformation required depends on the model type. 4. Loss: Contrastive learning is often used for sentence representations. The objective is to bring semantically similar examples closer together while pushing dissimilar examples further apart. Specifically, given a set of example pairs D = {x i , x p i }, a model is used to obtain representations for each pair, denoted h i and h p i . The contrastive loss for an example is:", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The components of an architecture to learn sentence representations. There are four main components: 1) Data -Obtaining positive and negative examples either using supervised data or some transformation 2) Model -Generally a pretrained model that has been trained on large quantities of gneeral text. 3) Transform -Some transformation applied to the representations from the model to obtain sentence representations, and 4) Loss -Losses that bring semantically similar sentences closer together and others apart.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Comparison of methods. SUPERVISION indicates whether the method is supervised or unsupervised, SENTEVAL indicates whether the work benchmarks against SentEval", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "demonstrated that the various layers of BERT have differing levels of performance on the STS dataset. To address this issue, they propose reusing the intermediate BERT representations as positive examples. In contrast, Zhang et al. (2022a) perform augmentation by identifying the k-nearest neighbors of a sentence representation.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Abhinav Ramesh Kashyap; Thanh-Tung Nguyen; Viktor Schlegel; Stefan Winkler; See-Kiong Ng; Soujanya Poria
[ { "authors": "Nada Almarwani; Hanan Aldarmaki; Mona Diab", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Efficient sentence embedding using discrete cosine transform", "year": "2019" }, { "authors": "Patrick Bordes; Eloi Zablocki; Laure Soulier; Benjamin Piwowarski; Patrick Gallinari", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Incorporating visual semantics into sentence representations within a grounded space", "year": "2019" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "A large annotated corpus for learning natural language inference", "year": "2015" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "Rui Cao; Yihao Wang; Yuxin Liang; Ling Gao; Jie Zheng; Jie Ren; Zheng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Exploring the impact of negative samples of contrastive learning: A case study of sentence embedding", "year": "2022" }, { "authors": "Nuo Chen; Linjun Shou; Jian Pei; Ming Gong; Bowen Cao; Jianhui Chang; Jia Li; Daxin Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Alleviating over-smoothing for unsupervised sentence representation", "year": "2023" }, { "authors": "Shaobin Chen; Jie Zhou; Yuling Sun; Liang He; ; ", "journal": "International Committee on Computational Linguistics", "ref_id": "b7", "title": "An information minimization based contrastive learning model for unsupervised sentence embeddings learning", "year": "2022" }, { "authors": "Yiming Chen; Yan Zhang; Bin Wang; Zuozhu Liu; Haizhou Li", "journal": "", "ref_id": "b8", "title": "Generate, discriminate and contrast: A semi-supervised sentence representation learning framework", "year": "2022" }, { "authors": "Xingyi Cheng", "journal": "Association for Computing Machinery", "ref_id": "b9", "title": "Dual-view distilled bert for sentence embedding", "year": "2021" }, { "authors": "Yung-Sung Chuang; Rumen Dangovski; Hongyin Luo; Yang Zhang; Shiyu Chang; Marin Soljacic; Shang-Wen; Scott Li; Yoon Yih; James Kim; Glass", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "DiffCSE: Difference-based contrastive learning for sentence embeddings", "year": "2022" }, { "authors": "Alexis Conneau; Douwe Kiela", "journal": "European Language Resources Association (ELRA", "ref_id": "b11", "title": "SentEval: An evaluation toolkit for universal sentence representations", "year": "2018" }, { "authors": "Alexis Conneau; Douwe Kiela; Holger Schwenk; Loïc Barrault; Antoine Bordes", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Supervised learning of universal sentence representations from natural language inference data", "year": "2017" }, { 
"authors": "Ido Dagan; Dan Roth; Mark Sammons; Fabio Massimo Zanzotto", "journal": "Synthesis Lectures on Human Language Technologies", "ref_id": "b13", "title": "Recognizing textual entailment: Models and applications", "year": "2013" }, { "authors": "Jinghao Deng; Fanqi Wan; Tao Yang; Xiaojun Quan; Rui Wang", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Clustering-aware negative sampling for unsupervised sentence representation", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Marco Di; Giovanni ; Marco Brambilla", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Exploiting Twitter as source of large corpora of weakly similar pairs for semantic sentence embeddings", "year": "2021" }, { "authors": "Mengnan Du; Varun Manjunatha; Rajiv Jain; Ruchi Deshpande; Franck Dernoncourt; Jiuxiang Gu; Tong Sun; Xia Hu", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Towards interpreting and mitigating shortcut learning behavior of NLU models", "year": "2021" }, { "authors": "Kawin Ethayarajh", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings", "year": "2019" }, { "authors": "Fangxiaoyu Feng; Yinfei Yang; Daniel Cer; Naveen Arivazhagan; Wei Wang", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Language-agnostic BERT sentence embedding", "year": "2022" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "SimCSE: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "John Giorgi; Osvald Nitski; Bo Wang; Gary Bader", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "DeCLUTR: Deep contrastive learning for unsupervised textual representations", "year": "2021" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre H Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Daniel Guo; Mohammad Gheshlaghi Azar; Bilal Piot; Koray Kavukcuoglu; Rémi Munos; Michal Valko", "journal": "", "ref_id": "b22", "title": "Bootstrap your own latent a new approach to self-supervised learning", "year": "2020" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b23", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b24", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "James Y Huang; Kuan-Hao Huang; Kai-Wei Chang; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Disentangling semantics and syntax in sentence embeddings with pre-trained language models", "year": "2021" }, { "authors": "Junjie Huang; Duyu Tang; Wanjun Zhong; Shuai Lu; Linjun Shou; Ming Gong; Daxin Jiang; Nan Duan", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "WhiteningBERT: An easy unsupervised sentence embedding approach", "year": "2021" 
}, { "authors": "Yiren Jian; Chongyang Gao; Soroush Vosoughi", "journal": "", "ref_id": "b27", "title": "Non-linguistic supervision for contrastive learning of sentence embeddings", "year": "2022" }, { "authors": "Ting Jiang; Jian Jiao; Shaohan Huang; Zihan Zhang; Deqing Wang; Fuzhen Zhuang; Furu Wei; Haizhen Huang; Denvy Deng; Qi Zhang; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Prompt-BERT: Improving BERT sentence embeddings with prompts", "year": "2022" }, { "authors": "Yuxin Jiang; Linhan Zhang; Wei Wang", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Improved universal sentence embeddings with promptbased contrastive learning and energy-based learning", "year": "2022" }, { "authors": "Omar Khattab; Arnav Singhvi; Paridhi Maheshwari; Zhiyuan Zhang; Keshav Santhanam; Sri Vardhamanan; Saiful Haq; Ashutosh Sharma; Thomas T Joshi; Hanna Moazam; Heather Miller; Matei Zaharia; Christopher Potts", "journal": "", "ref_id": "b30", "title": "DSPy: Compiling declarative language model calls into self-improving pipelines", "year": "2023" }, { "authors": "Taeuk Kim; Min Kang; Sang-Goo Yoo; Lee", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Self-guided contrastive learning for BERT sentence representations", "year": "2021" }, { "authors": "Ryan Kiros; Yukun Zhu; Ruslan Salakhutdinov; Richard S Zemel; Antonio Torralba; Raquel Urtasun; Sanja Fidler", "journal": "MIT Press", "ref_id": "b32", "title": "Skip-thought vectors", "year": "2015" }, { "authors": "Tassilo Klein; Moin Nabi", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "SCD: Selfcontrastive decorrelation of sentence embeddings", "year": "2022" }, { "authors": "Bohan Li; Hao Zhou; Junxian He; Mingxuan Wang; Yiming Yang; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "On the sentence embeddings from pre-trained language models", "year": "2020" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Zehan Li; Xin Zhang; Yanzhao Zhang; Dingkun Long; Pengjun Xie; Meishan Zhang", "journal": "", "ref_id": "b36", "title": "Towards general text embeddings with multi-stage contrastive learning", "year": "2023" }, { "authors": "Fangyu Liu; Ivan Vulić; Anna Korhonen; Nigel Collier", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders", "year": "2021" }, { "authors": "Jiduan Liu; Jiahao Liu; Qifan Wang; Jingang Wang; Wei Wu; Yunsen Xian; Dongyan Zhao; Kai Chen; Rui Yan", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "RankCSE: Unsupervised sentence representations learning via learning to rank", "year": "2023" }, { "authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean", "journal": "", "ref_id": "b39", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b40", "title": "", "year": "" }, { "authors": "Changrong Min; Yonghe Chu; Liang Yang; Bo Xu; Hongfei Lin", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Locality preserving sentence 
encoding", "year": "2021" }, { "authors": "Niklas Muennighoff", "journal": "", "ref_id": "b42", "title": "Sgpt: Gpt sentence embeddings for semantic search", "year": "2022" }, { "authors": "Niklas Muennighoff; Nouamane Tazi; Loïc Magne; Nils Reimers", "journal": "", "ref_id": "b43", "title": "Mteb: Massive text embedding benchmark", "year": "2022" }, { "authors": "Jianmo Ni; Gustavo Hernandez Abrego; Noah Constant; Ji Ma; Keith Hall; Daniel Cer; Yinfei Yang", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models", "year": "2022" }, { "authors": "Jianmo Ni; Chen Qu; Jing Lu; Zhuyun Dai; Gustavo Hernandez Abrego; Ji Ma; Vincent Zhao; Yi Luan; Keith Hall; Ming-Wei Chang; Yinfei Yang", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Large dual encoders are generalizable retrievers", "year": "2022" }, { "authors": "Yixin Nie; Adina Williams; Emily Dinan; Mohit Bansal; Jason Weston; Douwe Kiela", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "Adversarial NLI: A new benchmark for natural language understanding", "year": "2020" }, { "authors": "Sosuke Nishikawa; Ryokan Ri; Ikuya Yamada; Yoshimasa Tsuruoka; Isao Echizen", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "EASE: Entity-aware contrastive learning of sentence embedding", "year": "2022" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher Manning", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "GloVe: Global vectors for word representation", "year": "2014" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b49", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "year": "2019" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Generating datasets with pretrained language models", "year": "2021" }, { "authors": "Viktor Schlegel; Goran Nenadic; Riza Batista-Navarro", "journal": "", "ref_id": "b52", "title": "Semantics altering modifications for evaluating comprehension in machine reading", "year": "2021" }, { "authors": "Tal Schuster; Sihao Chen; Senaka Buthpitiya; Alex Fabrikant; Donald Metzler", "journal": "Association for Computational Linguistics", "ref_id": "b53", "title": "Stretching sentence-pair NLI models to reason over long documents and clusters", "year": "2022" }, { "authors": "Yeon Seonwoo; Guoyin Wang; Changmin Seo; Sajal Choudhary; Jiwei Li; Xiang Li; Puyang Xu; Sunghyun Park; Alice Oh", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "Rankingenhanced unsupervised sentence representation learning", "year": "2023" }, { "authors": "Haochen Tan; Wei Shao; Han Wu; Ke Yang; Linqi Song", "journal": "Association for Computational Linguistics", "ref_id": "b55", "title": "A sentence is worth 128 pseudo tokens: A semantic-aware contrastive learning framework for sentence embeddings", "year": "2022" }, { "authors": "Nandan Thakur; Nils Reimers; Johannes Daxenberger; Iryna 
Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "Augmented SBERT: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks", "year": "2021" }, { "authors": "Nandan Thakur; Nils Reimers; Andreas Rücklé; Abhishek Srivastava; Iryna Gurevych", "journal": "", "ref_id": "b57", "title": "BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models", "year": "2021" }, { "authors": "Lewis Tunstall; Nils Reimers; Unso Eun; Seo Jo; Luke Bates; Daniel Korat; Moshe Wasserblat; Oren Pereg", "journal": "", "ref_id": "b58", "title": "Efficient few-shot learning without prompts", "year": "2022" }, { "authors": "Kexin Wang; Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "TSDAE: Using transformer-based sequential denoising auto-encoderfor unsupervised sentence embedding learning", "year": "2021" }, { "authors": "Tianduo Wang; Wei Lu", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "Differentiable data augmentation for contrastive sentence representation learning", "year": "2022" }, { "authors": "Wei Wang; Liangzhu Ge; Jingqiao Zhang; Cheng Yang", "journal": "Association for Computing Machinery", "ref_id": "b61", "title": "Improving contrastive learning of sentence embeddings with case-augmented positives and retrieved negatives", "year": "2022" }, { "authors": "John Wieting; Graham Neubig; Taylor Berg-Kirkpatrick", "journal": "Association for Computational Linguistics", "ref_id": "b62", "title": "A bilingual generative transformer for semantic sentence embedding", "year": "2020" }, { "authors": "Adina Williams; Nikita Nangia; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b63", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "year": "2018" }, { "authors": "", "journal": "BigScience Workshop", "ref_id": "b64", "title": "Bloom: A 176bparameter open-access multilingual language model", "year": "2023" }, { "authors": "Bohong Wu; Hai Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b65", "title": "Sentence representation learning with generative objective rather than contrastive objective", "year": "2022" }, { "authors": "Qiyu Wu; Chongyang Tao; Tao Shen; Can Xu; Xiubo Geng; Daxin Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b66", "title": "PCL: Peercontrastive learning with diverse augmentations for unsupervised sentence embeddings", "year": "2022" }, { "authors": "Xing Wu; Chaochen Gao; Zijia Lin; Jizhong Han; Zhongyuan Wang; Songlin Hu", "journal": "Association for Computational Linguistics", "ref_id": "b67", "title": "InfoCSE: Information-aggregated contrastive learning of sentence embeddings", "year": "2022" }, { "authors": "Xing Wu; Chaochen Gao; Yipeng Su; Jizhong Han; Zhongyuan Wang; Songlin Hu", "journal": "", "ref_id": "b68", "title": "Smoothed contrastive learning for unsupervised sentence embedding", "year": "2022" }, { "authors": "Xing Wu; Chaochen Gao; Liangjun Zang; Jizhong Han; Zhongyuan Wang; Songlin Hu", "journal": "International Committee on Computational Linguistics", "ref_id": "b69", "title": "ESim-CSE: Enhanced sample building method for contrastive learning of unsupervised sentence embedding", "year": "2022" }, { "authors": "Shitao Xiao; Zheng Liu; Peitian Zhang; Niklas Muennighoff", "journal": "", "ref_id": "b70", "title": "C-pack: Packaged resources to 
advance general chinese embedding", "year": "2023" }, { "authors": "Ziyi Yang; Yinfei Yang; Daniel Cer; Jax Law; Eric Darve", "journal": "Association for Computational Linguistics", "ref_id": "b71", "title": "Universal sentence representation learning with conditional masked language model", "year": "2021" }, { "authors": "Jiali Zeng; Yongjing Yin; Yufan Jiang; Shuangzhi Wu; Yunbo Cao", "journal": "Association for Computational Linguistics", "ref_id": "b72", "title": "Contrastive learning with prompt-derived virtual semantic prototypes for unsupervised sentence embedding", "year": "2022" }, { "authors": "Dejiao Zhang; Wei Xiao; Henghui Zhu; Xiaofei Ma; Andrew Arnold; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b73", "title": "Virtual augmentation supported contrastive learning of sentence representations", "year": "2022" }, { "authors": "Miaoran Zhang; Marius Mosbach; David Adelani; Michael Hedderich; Dietrich Klakow", "journal": "Association for Computational Linguistics", "ref_id": "b74", "title": "MCSE: Multimodal contrastive learning of sentence embeddings", "year": "2022" }, { "authors": "Xinyu Zhang; Nandan Thakur; Odunayo Ogundepo; Ehsan Kamalloo; David Alfonso-Hermelo; Xiaoguang Li; Qun Liu; Mehdi Rezagholizadeh; Jimmy Lin", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b75", "title": "MIRACL: A Multilingual Retrieval Dataset Covering 18 Diverse Languages", "year": "2023" }, { "authors": "Yan Zhang; Ruidan He; Zuozhu Liu; Lidong Bing; Haizhou Li", "journal": "", "ref_id": "b76", "title": "Bootstrapped unsupervised sentence representation learning", "year": "2021" }, { "authors": "Yan Zhang; Ruidan He; Zuozhu Liu; Kwan Hui Lim; Lidong Bing", "journal": "Association for Computational Linguistics", "ref_id": "b77", "title": "An unsupervised sentence embedding method by mutual information maximization", "year": "2020" }, { "authors": "Yanzhao Zhang; Richong Zhang; Samuel Mensah; Xudong Liu; Yongyi Mao", "journal": "", "ref_id": "b78", "title": "Unsupervised sentence representation via contrastive learning with mixing negatives", "year": "2022" }, { "authors": "Yuhao Zhang; Hongji Zhu; Yongliang Wang; Nan Xu; Xiaobo Li; Binqiang Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b79", "title": "A contrastive framework for learning sentence representations from pairwise and triple-wise perspective in angular space", "year": "2022" }, { "authors": "Kun Zhou; Beichen Zhang; Xin Zhao; Ji-Rong Wen", "journal": "Association for Computational Linguistics", "ref_id": "b80", "title": "a. Debiased contrastive learning of unsupervised sentence representations", "year": "2022" }, { "authors": "Zhihan Zhou; Dejiao Zhang; Wei Xiao; Nicholas Dingwall; Xiaofei Ma; Andrew Arnold; Bing Xiang", "journal": "Association for Computational Linguistics", "ref_id": "b81", "title": "Learning dialogue representations from consecutive utterances", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 124.61, 561.89, 122.26, 33.37 ], "formula_id": "formula_0", "formula_text": "l i = -log e sim(h i ,h p i ) N j=1 e sim(h i ,h j )" } ]
2023-05-22
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b17", "b18", "b19", "b20", "b17", "b19", "b22", "b23", "b24", "b25", "b21", "b21", "b21", "b21", "b26", "b27", "b17" ], "table_ref": [], "text": "U NSUPERVISED domain adaptation (UDA) aims to promote the model performance on an unlabeled target domain, by adapting a model trained on a large-scale labeled source dataset to the unlabeled target domain. The key challenge of UDA is the distribution discrepancy between source and target domains [1], [2]. To address this issue, existing methods propose to align source and target domains either by exploiting diverse discrepancy metrics (e.g., maximum mean discrepancy [3], [4], high-order statistics of distributions [5], [6] and inter/intra class distance [7], [8]), or by conducting domain adversarial learning [9], [10], [11].\nHowever, in real-world applications, one may only access a source-trained model instead of source data due to the law of privacy protection [12], [13], [14]. This makes existing UDA [15], [16], [17] methods (that rely heavily on source data) fail. To handle this, Source-Free Unsupervised Domain Adaptation (SF-UDA) [18] has been explored recently, where only a source model and unlabeled target data are available. To solve this problem, existing SF-UDA methods propose to refine the source model either by using the source model to generate pseudo-labeled target South China University of Technology, Guangzhou 510641, China (e-mail: {sehongbinlin, seqiuzhen, sensc, sesmildong}@mail.scut.edu.cn; {duqing, cslyx, mingkuitan}@scut.edu.cn). • Y. Zhang is with National University of Singapore, Singapore, 138600 (email: [email protected]).\ndata (e.g., SHOT [18]), or by using generative adversarial networks (GANs) [19] to generate target-style images (e.g., MA [20]). However, due to the domain discrepancy, the pseudo labels could be noisy. Moreover, directly generating target-style images is very difficult since GANs are hard to train on a small target dataset [21].\nTo handle the absence of source data, our insight is to mine the hidden knowledge within the source model for generating feature prototypes of each source class. In light of this, we propose a Contrastive Prototype Generation and Adaptation (CPGA) method, including two stages: 1) Prototype generation: by exploring the classification boundary information in the source classifier, we train a prototype generator to generate source prototypes based on contrastive learning. 2) Prototype adaptation: to mitigate domain discrepancies, based on the generated feature prototypes and target pseudo labels, we develop a new contrastive prototype adaptation strategy to align each pseudo-labeled target data to the source prototype with the same class. To alleviate label noise, we enhance the alignment via confidence reweighting and early learning regularization. Extensive experiments verify the effectiveness and superiority of our CPGA.\nDespite the success of CGPA in solving vanilla SF-UDA, as shown in Table 1, conventional SF-UDA methods [18], [20], [23] implicitly assume that the training data of the source model and the target data follow relatively balanced class distributions. Nevertheless, practical data may follow any class distribution, e.g., a longtailed class distribution [24], [25], [26]. In this scenario, SF-UDA TABLE 1: Illustration of the related UDA settings. 
Compared to Source-Free Unsupervised Domain Adaptation (SF-UDA), Imbalanced SF-UDA [22] particularly relies on the prior of the source class distribution (e.g., label frequency) for training a balance source model, so it is not essentially SF-UDA as it influences the use of source data. In contrast, imbalance-agnostic SF-UDA only accesses an imbalance-agnostic source model without influencing the training of source models. Balance training refers to training a class-uniformed model with the class distribution prior, while standard training trains the source model only via vanilla loss (e.g., Cross-Entropy loss). Imbalanced SF-UDA [22] Required Balanced\nImbalance-agnostic SF-UDA Standard becomes more challenging, and vanilla SF-UDA methods suffer performance degradation due to the issues of class imbalance and unknown class distribution shifts.\nTo conquer this, ISFDA [22] explores handling Imbalanced SF-UDA where the class distributions of both domains are inverse (e.g., long-tailed source domain and inversely long-tailed target domain) as shown in Table 1. Specifically, it first resorts to the prior of the source class distribution to train a class-balanced model. Then, it conducts label refine curriculum adaptation and representation optimization to overcome the joint presence of covariate and class distribution shifts. However, the class-balanced source model is not always available in real scenarios since it relies on the prior knowledge of the source class distribution. Due to the lack of source data, an imbalance-agnostic source model is more probably given, i.e., the source model may be class-biased. More critically, the target domain is not necessarily following the class distribution that is just inverse to that of the source domain.\nTo address these issues, we explore a more practical task, called imbalance-agnostic SF-UDA, where the class distributions of both the unseen source domain and unlabeled target domain are unknown and can be arbitrarily skewed (e.g., long-tailed, inversely long-tailed) as shown in Table 1. In addition to the challenges in SF-UDA, this task poses an additional challenge: it is unknown how to adapt the imbalance-agnostic source model to the unlabeled target domain under unidentified class distribution shifts. Apparently, dealing with the co-occurrence of data distribution shifts and unidentified class distribution shifts is nontrivial, which leads to the performance degradation of existing SF-UDA methods [22], [27]. Compared with Imbalanced SF-UDA, imbalance-agnostic SF-UDA does not rely on the source class distribution prior and considers the existence of unidentified class distribution shifts.\nTo handle imbalance-agnostic SF-UDA, we extend CPGA and propose a new Target-aware Contrastive Prototype Generation and Adaptation (T-CPGA) method. To alleviate the negative effect of unidentified class distribution shifts, we are motivated to leverage the zero-shot prediction abilities of CLIP (Contrastive Language-Image Pre-training) [28] to help identify unknown target class distribution. Specifically, we aggregate the knowledge of the source model and CLIP to perceive the unlabeled target domain. This way helps us obtain more reliable target pseudo labels, which enable contrastive domain alignment via feature prototypes even under unknown class distribution shifts. Specifically, as CPGA, T-CPGA also contains two stages: 1) Prototype generation: we keep the same contrastive source prototype generation strategy with CPGA to handle the lack of source data. 
2) Prototype adaptation: instead of only relying on the source model, T-CPGA generates target pseudo labels via the automatically weighted ensemble of self-supervised pseudo-labeling [18] and CLIP zero-shot prediction. Meanwhile, rather than assigning confidence weights for target data based on source predictions as CPGA, we further reweight the target sample confidence to avoid noisy pseudo labels. To alleviate the negative effect of unidentified class distribution shifts, we further devise an additional target label-distribution-aware classifier to match the class distribution of the target domain. In this way, we are able to adapt a class distribution-agnostic source model to an unlabeled target domain even if both domains are class-imbalanced and agnostic. Extensive experiments on three imbalanced domain adaptation benchmark datasets demonstrate the effectiveness and superiority of T-CPGA in handling imbalance-agnostic SF-UDA.\nOur primary contributions are summarized as follows:\n•\nWe introduce a novel CPGA method for addressing SF-UDA. Compared with previous SF-UDA methods, CPGA innovatively generates source feature prototypes to handle the absence of source data. More critically, these feature prototypes can also enhance the performance of conventional UDA methods, allowing them to achieve comparable or even superior results to those obtained through the illegitimate use of source data in SF-UDA." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "We study a more practical task called imbalance-agnostic SF-UDA. Compared with vanilla SF-UDA, it assumes that the class distributions of both source and target domains are unknown and can be arbitrarily skewed. Hence, it accounts for unidentified class distribution shifts during adaptation, making it more applicable to real-world SF-UDA scenarios." }, { "figure_ref": [], "heading": "•", "publication_ref": [ "b26" ], "table_ref": [], "text": "We further propose a T-CPGA method to handle imbalance-agnostic SF-UDA. This method introduces a new pseudo label generation strategy that is crucial for accurately generating pseudo labels for unlabeled target data, even under unknown class shifts. Specifically, this strategy identifies unknown target class distributions, and thus is essential for effective adaptation in imbalanceagnostic SF-UDA.\nA short version of this work was published in IJCAI 2021 [27]. This paper extends the previous version in the following aspects: 1) It explores a novel task called imbalance-agnostic SF-UDA, which considers a more practical scenario where the class distributions of both the source and target domains can be imbalanced. 2) To solve unidentified class distribution shifts, T-CPGA introduces a new pseudo label generation strategy and a target-aware classifier to better match the target class distribution. 3) The paper provides extensive new empirical evaluations, demonstrating that T-CPGA achieves clearly better performance over CPGA (e.g., the average of 25.1% and 22.5% overall accuracy gains on Cl→Pr and Cl→Rw of the imbalance-agnostic Office-Home dataset)." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b21" ], "table_ref": [], "text": "This section commences with a comprehensive literature review of relevant domain adaptation tasks, including Source-Free Unsupervised Domain Adaptation (SF-UDA) and Class-Imbalanced Domain Adaptation (CI-UDA). Then, we compare our task with the most pertinent Imbalanced SF-UDA [22]. 
Due to page limitations, we provide the review of vanilla UDA in Appendix A." }, { "figure_ref": [], "heading": "Source-Free Unsupervised Domain Adaptation", "publication_ref": [ "b19", "b28", "b17", "b29", "b19", "b20", "b30", "b31", "b32" ], "table_ref": [], "text": "Unlike conventional UDA, SF-UDA methods [20], [29] seek to adapt a source model to an unlabeled target domain without access to any source data. To handle this task, existing methods seek to refine the source model either by pseudo label generation (i.e., SHOT [18] and SHOT++ [30]) or target-style images generation (i.e., MA [20]). Nonetheless, pseudo labels would be noisy due to the domain discrepancy, which is ignored by SHOT. To address this issue, SHOT++ employs semi-supervised learning to improve the accuracy of less-confident predictions. As for MA, it may be plagued by the training difficulties of GANbased approaches [21]. Recent SF-UDA methods aim to alleviate the domain discrepancy via learning domain-invariant feature representations. For instance, NRC [31] and G-SFDA [32] focus on leveraging neighborhood structures to encourage consistency in feature predictions. Alternatively, CAiDA [33] guides anchor points to search for semantically nearest confident anchors to generate pseudo labels and enhance feature representations.\nCompared with the above methods, our CPGA proposes to generate source feature prototypes for each class to handle the lack of source data. Additionally, CPGA alleviates the negative effect of pseudo label noise via confidence reweighting and early learning regularization." }, { "figure_ref": [], "heading": "Imbalanced Unsupervised Domain Adaptation", "publication_ref": [ "b33", "b21", "b34", "b35", "b36", "b37", "b33", "b34", "b36", "b37", "b35" ], "table_ref": [], "text": "The objective of CI-UDA is to conduct domain alignment between a labeled source domain and an unlabeled target domain in the presence of class distribution shifts. Existing methods seek to overcome class distribution shifts by class-wise importance reweighting [34], balanced sampling [22], [35] or representation learning [36], [37], [38]. Specifically, SIDA [34] employs selfadaptive imbalanced cross-entropy loss to adjust its model to varying degrees of imbalanced target scenarios. COAL [35] utilizes balanced sampling and self-training to address class distribution shifts and conducts prototype-based conditional alignment to mitigate domain shifts. Regarding representation learning methods, CDM [37] exploits latent sub-domains within and across data domains to learn class-balanced feature representations for joint adaptation. Besides, PCT [38] aims to learn robust and domaininvariant representations by minimizing the expected pairwise cost between target features and imbalance-robust source prototypes. PAT [36] reduces domain discrepancy by aligning centroids and generating adversarial samples for minority classes to handle the class imbalance issue.\nIn the context of CI-UDA, existing methods construct classimbalanced UDA scenarios by sub-sampling datasets with imbalanced source domains and uniform or reversely-imbalanced target domains. Compared with imbalance-agnostic SF-UDA, they only account for a portion of imbalance scenarios. Moreover, CI-UDA relies on the accessibility of source data." 
}, { "figure_ref": [], "heading": "Imbalanced Source-free Domain Adaptation", "publication_ref": [ "b21" ], "table_ref": [], "text": "ISFDA [22] is a relevant study that investigates imbalanced source-free domain adaptation in which the class distributions between the source and target domains are opposite (e.g., longtailed source and inversely long-tailed target). This study assumes that using class-balanced sampling to train the source model is permissible and introduces secondary label correction to handle class distribution shifts. However, the source-trained model is generally provided in advance and cannot be further trained for class re-balancing. In other words, the source model is more likely to be an imbalance-agnostic model trained via the standard crossentropy loss. Moreover, ISFDA only focuses on the task with opposite class distributions. However, the source class distribution is not necessary to be inverse to the target class distribution in practice. Therefore, we relax the assumption and propose a more challenging but practical task, called imbalance-agnostic SF-UDA, where we seek to adapt an imbalance-agnostic source model to an imbalance-agnostic target domain with access to only unlabeled target data." }, { "figure_ref": [], "heading": "PROBLEM DEFINITION", "publication_ref": [], "table_ref": [], "text": "Source-Free Unsupervised Domain Adaptation (SF-UDA). We first study the task of SF-UDA, where only a well-trained source model and unlabeled target data are accessible. Specifically, this work considers a multi-class classification task where the source and target domains share the same label space with K classes. The pre-trained source model is assumed to consist of a feature extractor G e and a classifier G y . Additionally, the unlabeled target domain is denoted by\nD t = {x i } nt i=1\n, where n t is the number of target samples. The primary objective is to adapt the source model to the target domain by leveraging only the unlabeled target data. The task of SF-UDA presents a challenge due to the lack of source data and target annotations. Conventional UDA methods that rely on source data are unable to tackle this task. To address the challenge of SF-UDA, we propose a Contrastive Prototype Generation and Adaptation (CPGA) method (cf. Section 4).\nImbalance-Agnostic SF-UDA. Existing SF-UDA methods implicitly assume that the training class distributions of the source domain on which the source model is pre-trained and the target domain follow a balanced class distribution. However, in realworld applications, this assumption may not necessarily hold, and the source and target domains are likely to follow any class distribution (e.g., being long-tailed, inversely long-tailed, or relatively class-balanced). For this reason, we study a more practical task, called imbalance-agnostic SF-UDA, where a class distributionagnostic model trained via vanilla cross-entropy loss and a class distribution-agnostic unlabeled target domain are available. To resolve this task, we extend CPGA and propose Target-aware Contrastive Prototype Generation and Adaptation (cf. Section 5). For simplicity, we use the same notations as the above sections." }, { "figure_ref": [], "heading": "CPGA: CONTRASTIVE PROTOTYPE GENERA-TION AND ADAPTATION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Overall Scheme", "publication_ref": [ "b38", "b39" ], "table_ref": [], "text": "The key challenge of SF-UDA is the lack of source data. 
Inspired by that feature prototypes can represent a group of semantically similar instances [39], we explore generating feature prototypes to represent each source class and adopt them for class-wise domain alignment. As shown in Figure 1, CPGA consists of two stages: prototype generation and prototype adaptation.\nIn stage one (Section 4.2), motivated by that the classifier of the source model contains class distribution information [40], we train a class conditional generator G g to learn such class information and generate feature prototypes for each class. Meanwhile, the source classifier G y is exploited to judge whether G g generates correct feature prototypes regarding classes. By training the generator G g to confuse G y via both cross-entropy L ce and contrastive loss L p con , we are able to generate intra-class compact and inter-class separated feature prototypes. Meanwhile, to overcome the lack of target labels, we resort to a self pseudolabeling strategy to generate pseudo labels for each target data.\nIn stage two (Section 4.3), we propose to adapt the source model to the target domain by aligning the pseudo-labeled target features to the corresponding source class prototypes. Specifically, we conduct class-wise alignment using a contrastive loss L w con with a domain projector G p . Besides, we introduce an early learning regularization term L elr to mitigate the effects of noisy pseudo labels on the adaptation process.\nThe overall procedure of CPGA is summarized as:\nmin θg L ce (θ g ) + L p con (θ g ),(1)\nmin {θe,θp} L w con (θ e , θ p ) + λL elr (θ e , θ p ),(2)\nwhere θ g , θ e and θ p denotes the parameters of the generator G g , the feature extractor G e and the projector G p , respectively. Moreover, λ is a trade-off parameter to balance losses." }, { "figure_ref": [ "fig_1", "fig_2", "fig_2" ], "heading": "Contrastive Prototype Generation", "publication_ref": [ "b39", "b40", "b41" ], "table_ref": [ "tab_8", "tab_8", "tab_8" ], "text": "The absence of the source data makes UDA challenging. To handle this, we generate feature prototypes for each class by exploring the 6. Compared with training with only crossentropy Lce, the contrastive loss L p con encourages the prototypes of the same category to be more compact and those of different categories to be more separated. Better viewed in color. class distribution information hidden in the source classifier [40].\nTo this end, we use the source classifier G y to train the class conditional generator G g . To be specific, as shown in Figure 1, given a uniform noise z ∼ U (0, 1) and a label y ∈ R K as inputs, the generator G g first generates the feature prototype p = G g (y, z) (more details of the generator and the generation process can be found in Appendix C). Then, the classifier G y judges whether the generated prototype belongs to y and trains the generator via the cross-entropy loss:\nL ce = -y log G y (p),(3)\nwhere p is the generated prototype and G y (p) denotes the prediction of the classifier. In this way, the generator is capable of generating feature prototypes for each category. However, as shown in Figure 2(a), training the generator with only cross-entropy may make the feature prototypes not well compact and prototypical. As a result, domain alignment with these prototypes may make the adapted model less discriminative, leading to limited performance (See Table 6). 
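Before the contrastive term is introduced below, a minimal sketch of this first cross-entropy step (Eq. (3)) may help: a class-conditional generator maps a label and uniform noise to a feature prototype, and the frozen source classifier G_y judges whether the prototype belongs to the requested class. The layer sizes and feature dimension here are assumptions; the actual generator uses an embedding layer, FC layers and deconvolution layers, as detailed in the paper's Appendix C.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeGenerator(nn.Module):
    """Class-conditional generator G_g: (label y, noise z) -> feature prototype p.
    Simplified sketch; feat_dim must match the input dimension of G_y."""
    def __init__(self, num_classes, noise_dim=100, feat_dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)      # label -> vector
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, feat_dim),
        )

    def forward(self, y, z):
        # fuse label embedding and uniform noise by element-wise multiplication
        return self.net(self.embed(y) * z)

def generation_step(generator, source_classifier, num_classes, batch_size=64):
    """One cross-entropy step of Eq. (3): the frozen classifier G_y judges whether
    the generated prototype belongs to the requested class."""
    y = torch.randint(0, num_classes, (batch_size,))
    z = torch.rand(batch_size, generator.embed.embedding_dim)  # z ~ U(0, 1)
    prototypes = generator(y, z)
    logits = source_classifier(prototypes)                     # G_y stays fixed
    return F.cross_entropy(logits, y), prototypes
```

Only the generator's parameters are updated in this step; keeping the source classifier fixed is what lets the class information hidden in G_y be distilled into the prototypes.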
To address this, motivated by InfoNCE [41], [42], we further impose a contrastive loss to encourage more prototypical prototypes:\nL p con =-log exp(φ(p, o + )/τ ) exp(φ(p, o + )/τ )+ K-1 j=1 exp(φ(p, o - j )/τ ) ,(4)\nwhere p denotes any anchor prototype. For each anchor, we sample the positive pair o + by randomly selecting a generated prototype with the same category to the anchor p, and sample K-1 negative pairs o -that have diverse classes with the anchor. Here, in each training batch, we generate at least 2 prototypes for each class in stage one. Moreover, φ(•, •) denotes the cosine similarity and τ is a temperature factor. As shown in Figure 2(b), by training the generator with L ce + L p con , the generated prototypes are more representative (i.e., intra-class compact and inter-class separated). Interestingly, we empirically observe that the inter-class cosine distance will converge closely to 1 (i.e., cosine similarity close to 0) by training with L ce +L p con (See Table 6), if the feature dimensions are larger than the number of classes. That is, the generated prototypes of different categories are approximatively orthometric in the highdimensional feature space." }, { "figure_ref": [], "heading": "Contrastive Prototype Adaptation", "publication_ref": [ "b7", "b42", "b17", "b17", "b4", "b43", "b44", "b45", "b46" ], "table_ref": [], "text": "Pseudo label generation. Domain alignment can be conducted based on the generated source prototypes, However, the alignment is non-trivial due to the lack of target annotations, which makes the class-wise alignment difficult [8], [43]. To address this, a feasible way is to leverage a self-supervised pseudo-labeling strategy [18] to generate pseudo labels for the target data. To be specific, let q i = G e (x i ) denote the feature vector and let ŷk i = G k y (q) be the predicted probability of the classifier regarding the class k. We first attain the initial centroid for each class k by: c k = n t i=1\nŷk i q i n t i=1 ŷk i\n, where n t is the number of target data. These centroids help to characterize the distribution of different categories [18]. Then, the prediction of the i-th target data is obtained by: ŷi = σ(φ(q i , C)/τ ), where σ(•), φ(•, •) and C=[c 0 , ..., c K-1 ] denote the softmax function, cosine similarity and class centroid matrix, respectively. Moreover, the pseudo label is computed:\nȳi = arg max k ŷi ,(5)\nwhere ȳi ∈ R 1 is a scalar index. During the training process, we update the centroid of each class by c k = n t i=1 I(ȳi=k)q i n t i=1 I(ȳi=k) and then update pseudo labels based on Eqn. (5) in each epoch, where I(•) is the indicator function.\nBased on the generated prototypes and target pseudo labels, we conduct prototype adaptation to alleviate domain shifts. Here, in each training batch, we generate one prototype for each class. However, due to domain shifts, the pseudo labels can be quite noisy, making the adaptation difficult. To address this, we propose a new contrastive prototype adaptation strategy, which consists of two key components: (1) weighted contrastive alignment and (2) early learning regularization.\nWeighted contrastive alignment. Relying on the pseudo-labeled target data, we then conduct class-wise contrastive learning to align the target data to the corresponding source feature prototype. However, the pseudo labels may be noisy, making contrastive alignment degraded. To handle this issue, we differentiate pseudolabeled target data and assign higher importance to reliable ones. 
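Before the confidence weighting is detailed, the centroid-based pseudo-labelling described above can be summarized in a short sketch: class centroids are first formed from prediction-weighted target features, each sample is re-labelled by cosine similarity to the centroids (Eq. (5)), and the centroids are then refreshed with the hard labels. Tensor shapes and the single refinement round are assumptions; the paper repeats the update every epoch.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def centroid_pseudo_labels(feats, probs, temperature=0.07):
    """feats: (n_t, d) target features q_i; probs: (n_t, K) classifier soft predictions.
    Returns hard pseudo labels and the class-centroid matrix C."""
    # initial centroids: prediction-weighted mean of the target features, per class
    centroids = probs.t() @ feats / probs.sum(dim=0, keepdim=True).t().clamp(min=1e-8)  # (K, d)
    # re-label each sample by cosine similarity to the centroids (Eq. 5)
    sim = F.normalize(feats, dim=1) @ F.normalize(centroids, dim=1).t() / temperature
    pseudo = sim.argmax(dim=1)
    # refresh centroids with the hard pseudo labels (one round shown here)
    onehot = F.one_hot(pseudo, probs.size(1)).float()
    centroids = onehot.t() @ feats / onehot.sum(dim=0, keepdim=True).t().clamp(min=1e-8)
    return pseudo, centroids
```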
Motivated by [44] that reliable samples are generally closer to the class centroid, we compute the confidence weight by:\nw i = exp(φ(q i , c ȳi )/τ ) K k=1 exp(φ(q i , c k )/τ ) ,(6)\nwhere the feature with higher similarity to the corresponding centroid will have higher importance. Then, we can conduct weighted contrastive alignment. To this end, inspired by [45], we first use a non-linear projector G p to project the target features and source prototypes to a l 2 -normalized contrastive feature space. Specifically, the target contrastive feature is denoted as u = G p (q), while the prototype contrastive feature is denoted as v = G p (p). Then, for any target feature u i as an anchor, we conduct prototype adaptation via a weighted contrastive loss:\nL w con = -w i log exp(u i v + /τ ) exp(u i v + /τ )+ K-1 j=1 exp(u i v - j /τ ) ,(7)\nwhere the positive pair v + is the prototype with the same class to the anchor u i , while the negative pairs v -are the prototypes with different classes.\nEarly learning regularization. As deep neural networks (DNNs) tend to first memorize the clean samples with correct labels and subsequently learn the noisy data with incorrect labels [46], the model in the \"early learning\" phase can be more predictable to the noisy data. Therefore, inspired by [47] " }, { "figure_ref": [], "heading": "T-CPGA: TARGET-AWARE CONTRASTIVE PRO-TOTYPE GENERATION AND ADAPTATION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Overall Scheme", "publication_ref": [ "b17", "b27" ], "table_ref": [], "text": "In this section, we seek to adapt a class distribution-agnostic source model to a class distribution-agnostic target domain with access to only unlabeled target data. This task poses a new challenge in SF-UDA, as it involves adapting the source model to an unlabeled target domain under unidentified class distribution shifts. Existing SF-UDA methods (e.g., SHOT [18] and our CPGA) are unable to tackle this task, since they rely on the source model to generate pseudo labels for unlabeled target data, but the source model is class distribution-agnostic (i.e., source data are unknown and may be arbitrarily skewed) and may generate noisy pseudo labels. Moreover, existing SF-UDA methods use a fixed source classifier, which may not provide accurate predictions for target data under unidentified class distribution shifts.\nTo address these issues, by extending CPGA, we propose a Target-aware Contrastive Prototype Generation and Adaptation (T-CPGA) method. We summarize the overall training scheme of T-CPGA in Algorithms 2, which is made up of two stages. To handle the lack of source data, T-CPGA holds the same first stage as CPGA. As for the second stage, it is unreliable for an imbalance-agnostic source model to generate accurate pseudo labels for unlabeled target data due to unidentified class distribution shifts. Inspired by the unknown class distribution identification ability of CLIP [28] (cf. Section 5.2), we leverage its zero-shot prediction capabilities to identify unknown target class distribution and adjust our pseudo-labeling strategy. In addition, since the fixed classifier G y is biased toward the source label distribution which is probably different from the target ones, we develop an additional target label-distribution-aware classifier G t to adjust the bias. The" }, { "figure_ref": [], "heading": "… …", "publication_ref": [], "table_ref": [], "text": "Fig. 
\nwhere θ e , θ p and θ t denotes the parameters of the feature extractor G e , the projector G p and the target label-distribution-aware classifier G t , respectively. We will depict L wt con and L t ce in the following sub-sections." }, { "figure_ref": [], "heading": "Target-aware Contrastive Prototype Alignment", "publication_ref": [ "b4", "b45", "b27", "b21" ], "table_ref": [], "text": "Target-aware pseudo label generation. As mentioned in Section 4.3, CPGA generates pseudo labels for target samples based on Eqn. (5). Unfortunately, as the class distribution-agnostic source model may be biased toward majority classes in imbalanced scenarios, this strategy may fail to provide precise pseudo labels and leads to severe domain misalignment. To examine this issue, we introduce a metric called pseudo-label distribution discrepancy. It is calculated by comparing the per-category number of pseudo labels {y i pl } K i=1 to the ground truth labels {y i gt } K i=1 , i.e., pseudolabel distribution discrepancy\nd pdd = K i=1 |y i pl -y i gt | y i gt .\nA smaller pseudo-label distribution discrepancy value indicates that the generated pseudo labels are more reliable.\nAs shown in Figure 3, a class-imbalanced source model trained on long-tailed source data exhibits a significant pseudolabel discrepancy due to the data/class distribution shifts when applied to an inversely long-tailed target domain. This highlights the challenge of relying solely on the source model to generate pseudo labels, as it may lead to pseudo label noise and deviation from the ground truth. In contrast, CPGA exhibits a smaller pseudo-label distribution discrepancy, but eventually memorizes the noisy pseudo labels. As we mentioned before, DNNs would memorize clean samples at first, and then the noisy data with wrong labels [46]. Once the model memorizes the noisy data, it is prone to severe performance degradation.\nTo handle this issue, we resort to CLIP [28] (Contrastive Language-Image Pre-training), a powerful model for zero-shot prediction. In particular, CLIP's zero-shot prediction can provide relatively accurate predictions for unlabeled data even under unidentified class distribution shifts. As shown in Figure 3, CLIP has a much smaller pseudo-label distribution discrepancy, which inspires us to leverage its zero-shot prediction abilities to identify unknown target class distributions. Despite the relatively reliable predictions, solely using CLIP is sub-optimal since it does not take advantage of labeled source data for improvement (cf. Figure 3). A feasible solution is to aggregate the knowledge of the sourcetrained model to constantly refine pseudo labels. Specifically, considering the unequal predictive power of the source-trained model and CLIP, we apply a dynamic ensemble strategy. Inspired by previous work [22] that the predictions are more reliable when the discrepancy between the largest and second-largest predicted probabilities widens, we propose to automatically assign ensemble weights based on the difference between their largest and the second-largest predicted probability. To be specific, let ψ(•) denote the CLIP model and σ(•) denote the softmax function. We first compute the weights by:\na c = max k1 σ(ψ(x i )) -max k2,k2 =k1 σ(ψ(x i )), a p = max k1 ŷi -max k2,k2 =k1 ŷi ,(9)\nwhere a c and a p are the weights for the CLIP and the predictions ŷi , respectively. Moreover, k 1 and k 2 are element indexes regarding different classes. 
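A short sketch of this margin-based weighting is given below; for self-containedness it also includes the softmax normalization and weighted fusion that are formalized next in the text. The CLIP and source-model probabilities are passed in as plain tensors, so no specific CLIP interface is assumed.

```python
import torch

def ensemble_pseudo_prediction(clip_probs, src_probs):
    """clip_probs: (n, K) softmax of CLIP zero-shot logits; src_probs: (n, K)
    centroid-based predictions y_hat. Ensemble weights follow the top-1 vs. top-2
    margin rule; the softmax normalization and weighted sum correspond to Eq. (10)."""
    top2_clip = clip_probs.topk(2, dim=1).values
    top2_src = src_probs.topk(2, dim=1).values
    a_c = top2_clip[:, 0] - top2_clip[:, 1]          # CLIP confidence margin
    a_p = top2_src[:, 0] - top2_src[:, 1]            # source-model confidence margin
    w = torch.softmax(torch.stack([a_c, a_p], dim=1), dim=1)   # normalized weights
    fused = w[:, :1] * clip_probs + w[:, 1:] * src_probs       # y_tilde
    return fused, fused.argmax(dim=1)                          # prediction, pseudo label
```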
To guarantee the sum of the ensemble prediction to be 1, we obtain the normalized weights āc and āp via a softmax function:\n[ā c , āp ] = σ([a c , a p ]).\nLastly, the final prediction of the i-th target data can be formulated:\nỹi = āc σ(ψ(x i )) + āp ŷi .(10)\nAfterward, we can obtain the pseudo label by: ȳi = arg max k ỹi , where ȳi is a scalar index. As shown in Figure 3, T-CPGA is capable of producing relatively precise pseudo labels in the initial stage, while also improving the quality of the generated pseudo labels in the subsequent stage.\nTarget-aware weighted contrastive alignment. As mentioned in Section 4.3, to mitigate the negative effect of pseudo label noise, we propose to differentiate target data based on their similarity to the corresponding centroid in CPGA. Nevertheless, due to unidentified class distribution shifts, such a strategy may be less reliable. To handle this, since target pseudo labels are obtained via ensemble intelligence, the confidence weights are modified as the maximum element of the prediction ỹi :\nw t i = max k ỹi , (11\n)\nwhere k is an element index. Eventually, the weighted contrastive loss of T-CPGA is modified to:\nL wt con = -w t i log exp(u i v + /τ ) exp(u i v + /τ )+ K-1 j=1 exp(u i v - j /τ ) ,(12)\nTarget label-distribution-aware classifier. In CPGA, the final prediction is made by the fixed source classifier. Although contrastive alignment facilitates the alignment of target features to the source prototypes and thereby the source classifier, a fixed source classifier may not be capable of predicting target samples well in the presence of class distribution shifts across domains. To address this issue, we develop an additional target label-distribution-aware classifier G t that is designed to particularly fit the target class distribution. Specifically, we train G t using the cross-entropy loss to estimate the target pseudo label distribution:\nL t ce = -ỹ i log G t (q i ),(13)\nCompared with the fixed source classifier, the target-aware classifier G t matches the target class distribution better. Despite this, the existence of noisy pseudo labels may impede the classification performance of G t . To address this issue, we complementarily use G y and G t to get more accurate predictions via average ensemble, wherein G y demonstrates stronger classification ability thanks to sufficiently labeled source data, whereas G t conforms better to the target class distribution." }, { "figure_ref": [], "heading": "EXPERIMENT OF VANILLA SF-UDA", "publication_ref": [ "b60", "b61", "b62", "b47", "b48", "b9", "b57", "b55", "b56", "b49", "b7", "b50", "b51", "b58", "b52", "b53", "b17", "b28", "b19", "b54", "b47", "b17", "b63", "b64" ], "table_ref": [], "text": "In this section, we empirically evaluate the effectiveness of CPGA for tackling vanilla SF-UDA. Moreover, we conduct ablation studies on the proposed two modules (i.e., prototype generation and prototype adaptation).\nDatasets. We conduct experiments on three benchmark datasets:\n(1) Office-31 [61] is a standard domain adaptation dataset consisting of three distinct domains, i.e., Amazon (A), Webcam (W) and DSLR (D). Three domains share 31 categories and contain 2817, 795 and 498 samples, respectively. (2) VisDA [62] is a large-scale dataset that concentrates on the 12-class synthesis-to-real object recognition task. 
The dataset has a source domain containing 152k synthetic images and a target domain with 55k real object images.\n(3) Office-Home [63] is a medium-sized dataset consisting of four distinct domains, i.e., Artistic images (Ar), CLIP Art (Cl), Product images (Pr) and Real-world images (Rw). The dataset contains 65 categories in each of the four domains.\nBaselines. We compare CPGA with three types of baselines: (1) source-only model: ResNet [48]; (2) UDA methods: MCD [49], CDAN [10], TPN [58], SAFN [56], SWD [57], MDD [50], CAN [8], DMRL [51], BDG [52], PAL [59], MCC [53], SRDC [54]; (3) SF-UDA methods: SHOT [18], PrDA [29], MA [20] and BAIT [55].\nImplementation details. We implement our method in PyTorch. We use a ResNet [48] model pre-trained on ImageNet as the backbone for all methods. Following [18], we replace the original fully connected (FC) layer with a task-specific FC layer followed by a weight normalization layer. The projector consists of three FC layers with hidden feature dimensions of 1024, 512 and 256.\nWe train the source model via label smoothing technique [64] and train CPGA using SGD optimizer. To get more compact feature representations, we further train the extractor via the neighborhood clustering term [65]. More implementation details are put in Appendix C due to the page limitation." }, { "figure_ref": [], "heading": "Results of Vanilla SF-UDA", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "As shown in Table 2, the proposed CPGA achieves the best performance on Office-31, compared with SF-UDA methods w.r.t. the average accuracy over 6 transfer tasks. Note that even when compared with the state-of-the-art methods using source data (e.g., SRDC), our CPGA is still able to obtain a competitive result. Besides, Table 3 demonstrates that CPGA outperforms all the state-of-the-art methods w.r.t. the average accuracy (i.e., perclass accuracy) on the challenging VisDA dataset. Specifically, CPGA achieves the highest accuracy regarding eight classes of the VisDA dataset, while also obtaining comparable results in the remaining classes. Moreover, our CPGA also surpasses the baseline methods with source data (e.g., CoSCA), which demonstrates the superiority of our proposed method. Due to the page limitation, we put the results on Office-Home in Appendix D." }, { "figure_ref": [], "heading": "Ablation Studies of Vanilla SF-UDA", "publication_ref": [ "b8", "b66", "b67" ], "table_ref": [ "tab_6", "tab_8", "tab_7" ], "text": "To evaluate the effectiveness of the proposed two modules (i.e., prototype generation and prototype adaptation), we conduct a series of ablation studies on VisDA. Moreover, we put the analysis of hyper-parameters in Appendix D.\nEffectiveness of prototype generation. In this section, we verify the benefits of our generated prototypes to existing UDA methods (e.g., DANN [9], ADDA [67] and DMAN [68]), which cannot resolve SF-UDA previously. Specifically, we use the generated prototypes to replace their source data for domain alignment. As shown in Table 4, these methods based on prototypes achieve competitive performance compared with the counterparts using source data, or even perform better in some tasks of Office-31. This results demonstrates the benefits and applicability of our prototype generation scheme to existing UDA methods.\nAblation studies on prototype generation. To study the impact of our contrastive loss L p con , we compare the results of models with and without L p con . 
As shown in Table 6, compared with training by only the cross-entropy loss L ce , optimizing the generator via L ce +L p con makes the inter-class features separated (i.e., larger inter-class distance) and intra-class features compact (i.e., smaller intra-class distance). As a result, L p con enhances the final adaptation performance by 1% accuracy gains.\nAblation studies on prototype adaptation. We next ablate the losses in prototype adaptation. As shown in Table 5, compared with the conventional contrastive loss L con , our weighted contrastive loss L w con can achieve more promising performance on VisDA. This result verifies the ability of our method to alleviate pseudo label noise. Besides, L elr can also improve the performance, since it prevents the model from memorizing pseudo label noise. When combining all the losses (i.e., L w con and L elr ), our method obtains the best performance." }, { "figure_ref": [], "heading": "EXPERIMENT OF IMBALANCE-AGNOSTIC SF-UDA", "publication_ref": [ "b21", "b61", "b62", "b34", "b47", "b8", "b49", "b52", "b65", "b34", "b37", "b17", "b54", "b30", "b21" ], "table_ref": [], "text": "This section evaluates T-CPGA for handling imbalance-agnostic SF-UDA. Subsequently, we discuss the use of CLIP and the target label-distribution-aware classifier.\nDatasets. To simulate target class-distribution-agnostic scenarios, inspired by [22], we construct the following datasets. 1) VisDA-I is a variant of the VisDA [62], which is 12-class synthesis-to-real object recognition task. The source domain has two inverse distributions, i.e., forward long-tailed distribution (FLT) and backward long-tailed distribution (BLT), while the target domain has three, i.e., FLT, BLT and a relative balance distribution (Bal). Note that we term the class distribution of the original target domain in the VisDA as Bal. Hence, such a dataset has 6 tasks with different class distribution shifts. Moreover, we use an imbalance factor to measure the degree of imbalance, i.e., µ= Nmax Nmin , where N max and N min denote the number of samples in the maximum class and minimum class, respectively. 2) Office-Home-I is a variant of the Office-Home [63], which contains three distinct domains, i.e., Clipart (Cl), Product images (Pr) and Real-World images (Rw). Each domain has three class distributions (i.e., FLT, BLT and Bal), where Bal denotes the vanilla class distribution in the Office-Home. 3) DomainNet-S constructed by Tan et al. [35] consists of four domains (Real (R), Clipart (C), Painting (P), Sketch (S)) with 40 classes. Since each domain of DomainNet-S is imbalanced, we directly use it for imbalance-agnostic SF-UDA.\nBaselines. We compare T-CPGA with five categories of baselines: 1) source-only model: ResNet [48]; 2) UDA methods: DANN [9], MDD [50], MCC [53], ToAlign [66]; 3) CI-UDA methods: COAL [35], PCT [38]; 4) SF-UDA methods: SHOT [18], BAIT [55], NRC [31], our CPGA; 5) imbalanced SF-UDA method: ISFDA [22].\nImplementation details. We implement all the baselines based on their official codes or reimplementation 1 . For the network architecture, we use RetNet-50, pre-trained on ImageNet, as the backbone for Office-Home-I and DomainNet-S, while adopting ResNet-101 for VisDA-I. Due to the page limitation, we provide more implementation details in the supplementary.\nEvaluation protocol. We use overall accuracy to measure how well the model matches the target class distribution, and also adopt average per-class accuracy for evaluation. 
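Both metrics are straightforward to compute; the NumPy sketch below makes the distinction explicit: overall accuracy is sensitive to how well the model matches the (possibly skewed) target class distribution, while average per-class accuracy weights every class equally.

```python
import numpy as np

def overall_and_per_class_accuracy(y_true, y_pred, num_classes):
    """y_true, y_pred: arrays of target labels and model predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    overall = (y_true == y_pred).mean()
    per_class = [
        (y_pred[y_true == k] == k).mean()
        for k in range(num_classes) if (y_true == k).any()
    ]
    return overall, float(np.mean(per_class))
```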
Due to the page limitation, we put the results in terms of overall accuracy in the main paper, and more detailed results in terms of per-class accuracy in Appendix D.\n1. https://github.com/thuml/Transfer-Learning-Library" }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Results of Imbalance-agnostic SF-UDA", "publication_ref": [ "b21", "b5", "b68", "b69", "b69", "b64", "b64", "b39" ], "table_ref": [ "tab_9" ], "text": "We verify the effectiveness of our T-CPGA in handling diverse class distribution shifts on three datasets, i.e., Office-Home-I, DomainNet-S and VisDA-I. Specifically, on Office-Home-I, we present the results on six types of class distribution shifts regarding the Cl→Pr task in Table 7 and those regarding the Cl→Rw task in Table 8, while the results for other tasks (e.g., Pr→Rw) are provided in Appendix E. Moreover, we report the results on DomainNet-S in Table 9, where each task corresponds to a distinct type of class distribution shift.\nIn light of the results on Office-Home-I and DomainNet-S, we draw the following observations: 1) UDA and CI-UDA methods are incapable to alleviate the domain discrepancy when confronted with agnostic class distribution shifts, which leads to relatively poor performance. 2) Recent state-of-the-art SF-UDA methods outperform UDA and CI-UDA methods, but they assume implicitly that the source and target domains are class-balanced. As a result, these methods exhibit inadequate performance in imbalanceagnostic SF-UDA. 3) ISFDA [22] is a better SF-UDA method when compared to other SF-UDA methods. ISFDA considers two opposite class distributions (FLT→BLT and BLT→FLT), resulting in better performance in the two tasks than other SF-UDA baselines, as evidenced in Tables E. 6-E.7. However, ISFDA depends on the prior of the source class distribution to train a class-balanced model, which is infeasible in real imbalance-agnostic SF-UDA. Furthermore, ISFDA cannot perform well on other types of class distribution shifts beyond FLT→BLT and BLT→FLT. 4) Unlike the above baselines, our proposed method, T-CPGA, demonstrates superior performance, indicating that it can accurately perceive the target class distribution and effectively leverage the source model's knowledge to solve imbalance-agnostic SF-UDA.\nWe further investigate the effectiveness of T-CPGA under various imbalance ratios and report the results on VisDA-I with three ratios (i.e., 10, 50, 100) in Figure 4. Specifically, our T-CPGA achieves the best performance on all ratios and maintains stable performance even if the imbalance ratio is 100, whereas baselines suffer from performance degradation when the imbalance ratio is high. This further demonstrates the practicability of T-CPGA in handling wide imbalance ratio scenarios of imbalanceagnostic SF-UDA.\nWe further use t-SNE [69] to visualize the features learned by the source-only model (ResNet-101), the CLIP model, and the model trained by our T-CPGA. We randomly selected 50 samples per class from the validation set of VisDA-I (FLT→BLT, imbalance ratio 100) for visualization. As shown in Figure 5, the feature distribution of the source-only model appears chaotic, while CLIP is only slightly better than the source-only model. In contrast, the feature distribution of T-CPGA is more discriminative, exhibiting both intra-class compactness and interclass separation. This is achieved by our target-aware contrastive prototype alignment strategy. 
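For reference, this kind of feature visualization can be reproduced with an off-the-shelf t-SNE implementation; the snippet below is a generic sketch using scikit-learn and matplotlib, not the exact plotting code used for the figures.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_tsne(features, labels, out_path="tsne.png"):
    """features: (n, d) penultimate-layer features; labels: (n,) class ids.
    Projects the features to 2-D and colors the points by class."""
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(np.asarray(features))
    plt.figure(figsize=(6, 6))
    plt.scatter(emb[:, 0], emb[:, 1], c=np.asarray(labels), cmap="tab20", s=8)
    plt.axis("off")
    plt.savefig(out_path, dpi=200, bbox_inches="tight")
    plt.close()
```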
Note that previous work [70] has shown that learning discriminative image representations can facilitate classifier learning in imbalanced cases [70]. Therefore, this visualization analysis further confirms the effectiveness of T-CPGA in addressing imbalance-agnostic SF-UDA. 10). It is important to note that simply fine-tuning CLIP cannot achieve better performance in imbalanceagnostic SF-UDA due to the lack of true target annotations. Target neighborhood clustering. To enhance the contrastive alignment, we further resort to feature clustering to make the target features more compact. Inspired by [65] that the intraclass samples in the same domain are generally more closer, we propose to close the distance between each target sample and its nearby neighbors. To this end, we maintain a memory bank Q={q 1 , q 2 , ..., q nt } to restore all target features, which are updated when new features are extracted in each iteration. Based on the bank, for the i-th sample's feature q i , we can compute its normalized similarity with any feature q j by s i,j = exp(φ(q i ,q j )/τ ) n t l=1,l =i exp(φ(q i ,q l )/τ ) . Motivated by that minimizing the entropy of the normalized similarity helps to learn compact features for similar data [65], we further train the extractor via a neighborhood clustering term:\nL nc = - nt j=1,j =i s i,j log(s i,j ). (C.1)\nNote that the entropy minimization here does not use pseudo labels, so the learned compact target features are (to some degree) robust to pseudo label noise.\nImplementation details of CPGA. We set the learning rate and epoch to 0.01 and 40 for VisDA and to 0.001 and 400 for Office-31 and Office-Home. For hyper-parameters, we set η, β, τ and batch size to 0.05, 0.9, 0.07 and 64, respectively. Besides, we set λ=7 for Office-31 and Office-home while λ=5 for VisDA. Following [40], the dimension of noise z is 100.\nImplementation details of T-CPGA. The new added target labeldistribution-aware classifier C lda is a fully connected layer. We set the learning rate and epoch to 0.001 and 300 for Office-Home-I and to 0.01 and 400 for VisDA-I and DomainNet-S. For hyperparameters, we set the same values as in CPGA. " }, { "figure_ref": [], "heading": "APPENDIX D MORE EXPERIMENTAL RESULTS OF CPGA", "publication_ref": [ "b17", "b54", "b78", "b54" ], "table_ref": [ "tab_15", "tab_15", "tab_15", "tab_15", "tab_15" ], "text": "Comparison with SOTA methods of CPGA. We verify the effectiveness of our method on the Office-Home dataset. From Table D.2, the results show that: (1) CPGA outperforms all the conventional unsupervised domain adaptation methods which need to use the source data. (2) CPGA achieves the competitive performance compared with the state-of-the-art source-free UDA methods, i.e., SHOT [18] and BAIT [55]. Besides, we also provide our reimplemented results of the published source-free UDA methods on VisDA and Office-31 based on their published source codes (See Table D.1 and Table D.4).\nInfluence of hyper-parameters of CPGA. On the one hand, we evaluate the sensitivity of two hyper-parameters λ and η on VisDA via an unsupervised reverse validation strategy [79] based on the source prototypes. For convenience, we set η = 0.05 when studying λ, and set λ = 5 when studying η. As shown in Table D.5, the proposed method achieves the best performance when setting λ = 5 and η = 0.05 on VisDA. The results also demonstrate that our method is non-sensitive to the hyperparameters. On the other hand, we provide more results for the hyper-parameters λ and β on VisDA. 
As shown in Table D.3, our method achieves the best performance with the setting β=0.9 and λ=5 on VisDA.\nVisualization of optimization curve of CPGA. Figure D.1 shows our method converges well in terms of the total loss and accuracy in the training phase. Also, the curve on the validation set means our method does not suffer from pseudo label noise.\nCompared CPGA with BAIT. As shown in Figure D.2, BAIT [55] may overfit to mistaken divisions of certain and uncertain sets, leading to poor generalization abilities. In contrast, our method is more robust and can conquer the issue of pseudo label noise." }, { "figure_ref": [], "heading": "APPENDIX E MORE EXPERIMENTAL RESULTS OF T-CPGA", "publication_ref": [ "b80", "b81" ], "table_ref": [ "tab_13", "tab_13", "tab_13" ], "text": "To further verify the effectiveness of T-CPGA in handling imbalance-agnostic UDA, we report Per-Class Accuracy and Overall Accuracy of each task on the Office-Home-I dataset (From Optimization of target-aware classifier. In T-CPGA, we leverage the standard cross-entropy loss to train the additional target labeldistribution-aware classifier G t . However, there may be concerns regarding how to ensure equal treatment of each category to achieve higher Per-Class Accuracy rather than Overall Accuracy.\nTo further explore this, we adopt the balanced softmax loss [81] and the seesaw loss [82] to train a balanced target-aware classifier and get the variants of T-CPGA (i.e., T-CPGA (Bal-CE) and T-CPGA (Seasaw)). For the balanced softmax loss, it proposes a meta sampler to explicitly learn the current best sampling rate to prevent the model from overfitting to head (majority) classes or tail (minority) classes. As for the seesaw loss, according to the ratio of accumulated sample numbers, it adjusts the negative sample gradient which is applied to the corresponding category. In this way, it is able to effectively balance the positive and negative sample gradients of different categories, which helps the model treat samples of each class in a balanced way. As shown in Table E.2, the experiments conducted on VisDA-I datasets with varying imbalance ratios, have demonstrated that: 1) compared with the standard cross-entropy loss, the use of the balanced softmax loss or the seesaw loss results in less performance degradation as the imbalance ratio increases. 2) However, due to the reliance on label frequency to regulate the training process, these two losses are susceptible to pseudo label noise, leading to biased model adaptation. Consequently, they fail to yield significant performance gains when dealing with imbalance agnostic SF-UDA.\nMore Discussions on CLIP. For the fine-tuning of CLIP, we report the results on three types of label distributions (i.e., FLT, BLT and Bal) regarding the Cl→Pr task and those regarding the Cl→Pr task in Figure E.1. Due to the inevitable noise in pseudo labels, the experimental results indicate that fine-tuning CLIP performance degrades in all six tasks, demonstrating that simply fine-tuning CLIP cannot achieve better performance in imbalanceagnostic SF-UDA due to the lack of true target annotations.\nSince publicly available CLIP checkpoints are limited to ResNet-50, ViT-B/32, or larger models, we present the experiments of training T-CPGA in a small model architecture. Specifically, we adopt the MobileNet-V2 (pre-trained on ImageNet) as the backbone and evaluate T-CPGA on the Cl→Pr and Cl→Rw tasks of the Office-Home-I dataset. 
From Table E.1, our T-CPGA is able to achieve competitive performance even with a small-sized backbone, which also demonstrates the effectiveness of the proposed methods in solving imbalance-agnostic SF-UDA.\nTo further explore the use of CLIP in the testing phase, we incorporate CLIP zero-shot prediction into the testing phase (i.e., averaging the outputs of G y , G t and CLIP Zero-shot Prediction → T-CPGA (Combination)) as shown in Table E.3. Note that in the testing phase of T-CPGA, the input is first passed through the feature extractor G e and then transmitted separately to both the fixed classifier, G y , and the target label-distribution-aware classifier, G t to obtain the final logit via averaging the output. Experimental results show that: 1) Compared with the CLIP Zeroshot Prediction (i.e., only CLIP), the combination of CLIP and T-CPGA achieves better performance. However, 2) compared with our T-CPGA, this variant does not bring additional performance improvement, indicating that T-CPGA fully utilized the ability of CLIP when generating pseudo-labels and thus it is no need to integrate CLIP into the testing phase. " }, { "figure_ref": [], "heading": "SUPPLEMENTARY MATERIALS FOR \"IMBALANCE-AGNOSTIC SOURCE-FREE DOMAIN ADAPTATION VIA AVATAR PROTOTYPE ALIGNMENT\"", "publication_ref": [ "b4" ], "table_ref": [], "text": "In this supplementary, we first provide more discussions on the conventional Unsupervised Domain Adaptation (UDA) methods. In addition, we also provide more implementation details and more experimental results for both CPGA and T-CPGA. The organization of the supplementary materials is as follows:\n1) In Appendix A, we review the literature on vanilla unsupervised domain adaptation methods. 2) In Appendix B, we provide more details of the early learning regularization term L elr . 3) In Appendix C, we provide more implementation details of both CPGA and T-CPGA. 4) In Appendix D, we provide more detailed experimental results of CPGA. 5) In Appendix E, we provide more detailed experimental results of T-CPGA." }, { "figure_ref": [], "heading": "APPENDIX A REVIEW OF VANILLA UDA", "publication_ref": [ "b10", "b53", "b70", "b71", "b72", "b8", "b48", "b65", "b73", "b49", "b57", "b58", "b37", "b7", "b59", "b74", "b75" ], "table_ref": [], "text": "Unsupervised domain adaptation (UDA) seeks to leverage a labelrich source domain to improve the model performance on an unlabeled target domain [11], [54], [71], [72]. In this field, Most existing methods alleviate the domain discrepancy either by adding adaptation layers to match high-order moments of distributions, e.g., DDC [73], or by devising a domain discriminator to learn domain-invariant features in an adversarial manner, e.g., DANN [9] and MCD [49]. Recent adversarial-based approaches mainly focus on two levels, i.e., feature-level and distribution-level. At the feature-level, ToAlign [66] proposes to select the corresponding source features to achieve task-oriented domain alignment via ignoring the task-irrelevant source features.\nAt the distribution-level, CLS [74] proposes to align both conditional and class distribution shifts while MDD [50] introduces Margin Disparity Discrepancy to measure distribution-level discrepancy which is subsequently minimized to facilitate domain alignment. Besides, prototypical methods and contrastive learning have also been introduced to UDA. 
For instance, TPN [58], PAL [59] and PCT [38] attempt to align the source and target domains based on the learned prototypical feature representations. In addition, CAN [8] and CoSCA [60] leverage contrastive learning to minimize intra-class distance and maximize inter-class distance explicitly. As CLIP has been successfully applied in recent studies, CLIP-based domain adaptation methods have emerged as well.\nFor instance, AP [75] adopts CLIP for domain generalization by combining domain prompt inference with CLIP. Additionally, StyleGAN-NADA [76] adopts CLIP for image generation via leveraging CLIP to discover global directions of disentangled change in the latent space.\nAlthough conventional UDA methods continue to evolve and improve, the increasing emphasis on privacy protection laws has led to restrictions on the availability of source domain data. Furthermore, practical data may follow any class distributions rather than only relatively balanced class distributions. To this end, we investigate a more practical task called imbalance-agnostic SF-UDA. In this task, only a source pre-trained model and unlabeled target data are available, and the class distributions of both domains are unknown and could be arbitrarily skewed. " }, { "figure_ref": [], "heading": "APPENDIX B EARLY LEARNING REGULARIZATION", "publication_ref": [ "b45", "b46", "b46" ], "table_ref": [], "text": "To further prevent the model from memorizing noise, we propose to regularize the learning process via an early learning regularizer. Since DNNs first memorize the clean samples with correct labels and then the noisy data with wrong labels [46], the model in the \"early learning\" phase can be more predictable to the noisy data. Therefore, we seek to use the early predictions of each sample to regularize learning. To this end, we devise a memory bank H={h 1 , h 2 , ..., h nt } to record non-parametric predictions of each target sample, and update them based on new predictions via a momentum strategy. Formally, for the i-th sample, we predict its non-parametric prediction regarding the k-th prototype by\n, and update the momentum by:\nwhere\nand β denotes the momentum coefficient. Based on the memory bank, for the i-th data, we further train the model via an early learning regularizer L elr , proposed in [47]:\nThis regularizer enforces the current prediction to be close to the prediction momentum, which helps to prevent overfitting to label noise. Note that the use of L elr here is different from [47], which focuses on classification tasks and uses parametric predictions." }, { "figure_ref": [], "heading": "APPENDIX C MORE IMPLEMENTATION DETAILS", "publication_ref": [ "b76", "b17" ], "table_ref": [], "text": "Architecture of the generator. As shown in Table C.1, the generator consists of an embedding layer, two FC layers and two deconvolution layers. Similar to ACGAN [77], given an input noise z∼U (0, 1) and a label y∈R K , we first map the label into a vector using the embedding layer. After that, we combine the vector with the given noise by element-wise multiplication and then feed it into the following layers. Since we propose to obtain feature prototypes instead of images, we reshape the output of the generator into a feature vector with the same dimensions as the last FC layer.\nTraining of the generator. In stage one, we train the generator by optimizing L ce +L p con . The batch size is set to 128. We use the SGD optimizer with learning rate = 0.001. In stage two, to achieve SHOT [18] 65 " } ]
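To complement the generator description above, here is a minimal sketch of a conditional feature-prototype generator in the ACGAN style it references: a label embedding is combined with the noise by element-wise multiplication and mapped to a feature vector. PyTorch is assumed, the layer sizes are illustrative, and the two deconvolution layers of the reported architecture are collapsed into fully connected layers for brevity, so this is not the exact CPGA generator.

```python
import torch
import torch.nn as nn

class PrototypeGenerator(nn.Module):
    """Maps (noise z, class label y) to a class-conditional feature prototype."""
    def __init__(self, num_classes, noise_dim=100, feat_dim=256):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, noise_dim)    # label -> vector
        self.net = nn.Sequential(                                # FC stack standing in for FC + deconv layers
            nn.Linear(noise_dim, 512), nn.BatchNorm1d(512), nn.ReLU(inplace=True),
            nn.Linear(512, feat_dim),
        )

    def forward(self, z, y):
        h = z * self.label_emb(y)                                # element-wise combination of noise and label
        return self.net(h)

# toy usage: one prototype per class, noise drawn from U(0, 1) as described in the text
num_classes, noise_dim = 12, 100
gen = PrototypeGenerator(num_classes, noise_dim)
y = torch.arange(num_classes)
z = torch.rand(num_classes, noise_dim)
prototypes = gen(z, y)                                           # (num_classes, feat_dim)
print(prototypes.shape)
```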
Source-free Unsupervised Domain Adaptation (SF-UDA) aims to adapt a well-trained source model to an unlabeled target domain without access to the source data. One key challenge is the lack of source data during domain adaptation. To handle this, we propose to mine the hidden knowledge of the source model and exploit it to generate source avatar prototypes (i.e., representative features for each source class). To this end, we propose a Contrastive Prototype Generation and Adaptation (CPGA) method. CPGA consists of two stages: 1) Prototype generation: by exploring the classification boundary information of the source model, we train a prototype generator to generate source prototypes. 2) Prototype adaptation: based on the prototypes and target pseudo labels, we develop a robust contrastive prototype adaptation strategy to align each pseudo-labeled target data to the corresponding source prototypes. Extensive experiments on three UDA benchmark datasets demonstrate the superiority of CPGA. However, existing SF-UDA studies (including our CPGA) implicitly assume the class distributions of both source and target domains to be balanced. This hinders the applications of existing SF-UDA to real scenarios, in which the class distributions are usually skewed and agnostic. To address this issue, we study a more practical SF-UDA task, termed imbalance-agnostic SF-UDA, where the class distributions of both the unseen source domain and unlabeled target domain are unknown and could be arbitrarily skewed (e.g., long-tailed, or even inversely longtailed). This task is much more challenging than vanilla SF-UDA due to the co-occurrence of covariate shifts and unidentified class distribution shifts between the source and target domains. To address this task, we extend CPGA and propose a new Target-aware Contrastive Prototype Generation and Adaptation (T-CPGA) method. Specifically, for better prototype adaptation in the imbalanceagnostic scenario, T-CPGA applies a new pseudo label generation strategy to identify unknown target class distribution and generate accurate pseudo labels, by utilizing the collective intelligence of the source model and an additional contrastive language-image pretrained model. Meanwhile, we further devise a target label-distribution-aware classifier to adapt the model to the unknown target class distribution. We empirically show that T-CPGA significantly outperforms CPGA and other SF-UDA methods in imbalance-agnostic SF-UDA, e.g., 25.1% and 22.5% overall accuracy gains on Cl→Pr and Cl→Rw tasks of the imbalance-agnostic Office-Home dataset.
Imbalance-Agnostic Source-Free Domain Adaptation via Avatar Prototype Alignment
[ { "figure_caption": "•M. Tan, Y. Zhang and Z. Qiu are co-first authors. Corresponding to Y. Liu. • H. Lin, Z. Qiu, S. Niu, D. Liu, Q. Du, Y. Liu and M. Tan are with", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FixedFig. 1 :1Fig. 1: An overview of CPGA. CPGA contains two stages: (1) Prototype generation: under the guidance of the fixed classifier, a generator Gg is trained to generate feature prototypes via Lce and L p con . (2) Prototype adaptation: in each training batch, we use the learned prototype generator to generate one prototype for each class. Based on the generated prototypes and pseudo labels obtained by clustering, we align each pseudo-labeled target feature to the corresponding class prototype by training a domain-invariant feature extractor via L w con and L elr . Note that the classifier Gy is fixed during the whole training phase.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig.2: Visualizations of the generated feature prototypes by the generator trained with different losses, which shows the corresponding visual results of Table6. Compared with training with only crossentropy Lce, the contrastive loss L p con encourages the prototypes of the same category to be more compact and those of different categories to be more separated. Better viewed in color.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Overall Accuracy (%) on the VisDA-I dataset (ResNet-101). The number after VisDA-I is the imbalance ratio. TABLE 10: Ablation studies of source bias compensation and target pseudo label generation for T-CPGA on the DomainNet-S dataset (ResNet-50) in terms of overall accuracy (%). We first show T-CPGA w/o target label-distribution-aware classifier Gt (i.e., Lce). Meanwhile, to validate the effectiveness of our pseudo label generation strategy, we show T-CPGA with pseudo label generation only by CLIP [28].", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: The t-SNE visualizations on the VisDA-I validation set (i.e., FLT→BLT, imbalance ratio 100) generated by the source pretrained model (ResNet-101), CLIP zero-shot prediction and our T-CPGA. Since different colors represent different classes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "TABLE 11 :11Compare CLIP zero-shot prediction and our T-CPGA on the Office-Home-I (ResNet-50) and VisDA-I (ResNet-101) datasets in terms of Overall Accuracy (%). Method Office-Home-I VisDA-I-10 VisDA-I-50 VisDA-I-100 Avg. CLIP (zero-shot prediction) Discussion on CLIP. One might wonder why we do not use CLIP directly to classify target samples in imbalance-agnostic SF-UDA, given its impressive performance in other settings. However, our proposed method, T-CPGA, offers two significant advantages in real-world imbalance-agnostic SF-UDA applications. First, T-CPGA has better performance over CLIP in various imbalanceagnostic UDA datasets. As shown in Table 11, T-CPGA is more effective than CLIP zero-shot prediction, as it can generate more discriminative feature representation for classification (cf. Figure 5(b) vs 5(c)), and generate more accurate pseudo labels for domain alignment (cf. 
Table", "figure_data": "", "figure_id": "fig_5", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure E. 1 (1cf. Appendix E) demonstrates that fine-tuning CLIP with self-training (with inevitable noisy pseudo labels) yields declining performance compared to CLIP zero-shot prediction. In contrast, T-CPGA employs target-aware contrastive prototype alignment to mitigate the risk of memorizing noisy labels, making it more suitable for imbalance-agnostic SF-UDA. Second, T-CPGA can be used to train various model architectures (cf.", "figure_data": "", "figure_id": "fig_6", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. E. 1 :1Fig. E.1: Overall Accuracy of fine-tuned CLIP on the Product and Real-World domains with increasing epochs. Here, FLT, BLT and Bal denote the type of the label distribution.", "figure_data": "", "figure_id": "fig_7", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. E. 2 :2Fig. E.2: Per-Class Accuracy (%) on the VisDA-I dataset (ResNet-101). The number after VisDA-I is the imbalance ratio.", "figure_data": "", "figure_id": "fig_8", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Unlabeled target data Dt={xi} n t i=1 ; Source model {Ge, Gy}; Training epoch E, M ; Parameters β, τ , λ. 1 Initialize Projector Gp, Generator Gg; // ** Stage 1: Prototype Generation ** // 2 for e = 1 → E do Generate prototypes p based on Gg; // Learn representative prototypes Compute Lce and L p con based on Eqns. (3) and (4); Generate prototypes p based on the learned Gg; // ** Stage 2: Prototype Adaptation ** // 8 for m = 1 → M do Extract target data features Ge(x) based on Ge; Compute L elr based on Eqn. (B.2) (cf. Appendix B); Update target feature extractor Ge based on Eqn. (2); 15 end Output: Ge and Gy.", "figure_data": "410Obtain target pseudo labels based on Eqn. (5);11Obtain contrastive features ht based on Gp;// Conduct class-wise domain alignment12Compute L w con based on Eqn. (4);// Prevent memorizing label noise1314", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Unlabeled target data Dt={xi} n t i=1 ; Source model {Ge, Gy}; target label-distribution-aware classifier Gt; Training epoch E, M ; Parameters β, τ , λ. 1 Initialize Projector Gp; Generator Gg. // ** Stage 1: Prototype Generation ** // 2 for e = 1 → E do Compute Lce and L p con based on Eqns. (3) and (4); Extract target data features Ge(x) based on Ge; // Conduct target-aware pseudo label generation.", "figure_data": "5Update generator Gg based on Eqn. (1);6 end7 Generate prototypes p based on the learned Gg;// ** Stage 2: Prototype Adaptation ** //8 for m = 1 → M do9Obtain pseudo labels based on Eqn. (10);Obtain contrastive features h t based on G p ;Obtain confidence weights w t i based on Eqn. (11);// Target-aware weighted contrastive alignment.Compute L wt con based on w t i and Eqn. (12);", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "3: Pseudo-label distribution discrepancy for different methods on the VisDA-I dataset (long-tailed → inversely long-tailed, imbalance ratio 100). The pseudo-label distribution discrepancy means the difference in the amount of each category between ground truths and pseudo labels (or predictions) of compared methods. 
The results show that T-CPGA can iteratively achieve more accurate pseudo labels with a better initialization via CLIP, while CPGA overfits noisy labels when it exists unidentified class distribution shifts.", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overall Accuracy (%) on the Office-31 (ResNet-50).", "figure_data": "MethodSource-free A→D A→W D→W W→D D→A W→A Avg.ResNet-50 [48]68.968.496.799.362.560.7 76.1MCD [49]92.288.698.5100.069.569.7 86.5CDAN [10]92.994.198.6100.071.069.3 87.7MDD [50]90.490.498.799.975.073.7 88.0CAN [8]95.094.599.199.670.366.4 90.6DMRL [51]93.490.899.0100.073.071.2 87.9BDG [52]93.693.699.0100.073.272.0 88.5MCC [53]95.695.498.6100.072.673.9 89.4SRDC [54]95.895.799.2100.076.777.1 90.8PrDA [29]92.291.198.299.571.071.2 87.2SHOT [18]93.190.998.899.974.574.8 88.7BAIT [55]92.094.698.1100.074.675.2 89.1MA [20]92.793.798.599.875.377.8 89.6CPGA (ours)94.494.198.499.876.076.6 89.9", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Per-class Accuracy (%) on the large-scale VisDA dataset (ResNet-101).", "figure_data": "MethodSource-freeplanebicyclebuscarhorseknifemcyclpersonplantsktbrdtraintruckPer-classResNet-101 [48]55.153.361.959.180.617.979.731.281.026.573.58.552.4CDAN [10]85.266.983.050.884.274.988.174.583.476.081.938.073.9SAFN [56]93.661.384.170.694.179.091.879.689.955.689.024.476.1SWD [57]90.882.581.770.591.769.586.377.587.463.685.629.276.4TPN [58]93.785.169.281.693.561.989.381.493.581.684.549.980.4PAL [59]90.950.572.382.788.388.390.379.889.779.288.139.478.3MCC [53]88.780.380.571.590.193.285.071.689.473.885.036.978.8CoSCA [60]95.787.485.773.595.372.891.584.894.687.987.936.882.9PrDA [29]86.981.784.663.993.191.486.671.984.558.274.542.776.7SHOT [18]92.681.180.158.589.786.181.577.889.584.984.349.379.6MA [20]94.873.468.874.893.195.488.684.789.184.783.548.181.6BAIT [55]93.783.284.565.092.995.488.180.890.089.084.045.382.7CPGA (Ours)95.689.075.464.991.797.589.783.893.993.487.769.086.0", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparisons of the existing domain adaptation methods with source data or prototypes on Office-31 (ResNet-50).", "figure_data": "MethodA→D A→W D→W W→D D→A W→A Avg.DANN (with source data) 79.7 82.0 96.9 99.1 68.2 67.4 82.2DANN (with prototypes) 83.7 81.1 97.5 99.8 63.4 63.6 81.5DMAN (with source data) 83.3 85.7 97.1 100.0 65.1 64.4 82.6DMAN (with prototypes) 86.3 84.2 97.7 100.0 64.7 64.5 82.9ADDA (with source data) 82.9 79.9 97.4 99.4 64.9 63.6 81.4ADDA (with prototypes) 83.5 81.9 97.2 100.0 63.8 63.0 81.6", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study of the losses (i.e., L w con and L elr ) in terms of per-class accuracy (%) on VisDA. Here, Lcon indicates L w con without the confidence weight w.", "figure_data": "BackboneLconL w conLelrPer-class (%)52.480.983.686.0", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation studies on prototype generation in stage one with different losses. Inter-class distance and intra-class distance are based on cosine distance (range from 0 to 2). 
We report per-class accuracy (%) after training the model on VisDA.", "figure_data": "ObjectiveInter-class distance Intra-class distance Per-class (%)Lce0.78603.343 × e -485.0Lce + L p con1.00342.670 × e -686.0", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Overall Accuracy (%) of Cl→Pr Task with different class distribution shifts on the Office-Home-I dataset (ResNet-50). SF and CI indicate source-free and class-imbalanced.", "figure_data": "MethodSF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.ResNet-50 [48]53.8843.9348.1944.5154.2651.3949.36DANN [9]65.9045.1051.5043.4066.9059.0055.30MDD [50]69.4148.9255.2446.3268.2161.8658.33MCC [53]53.2842.9247.0239.1154.4148.1947.49ToAlign [66]69.6656.2265.4052.4271.5464.5063.29COAL [35]64.0658.7463.3757.1161.8164.0561.52PCT [38]67.9459.2966.9755.3470.7367.2464.58SHOT [18]69.6658.7466.5056.3570.4372.1865.64BAIT [55]65.9853.2061.8454.1864.8461.8460.31NRC [31]71.7764.5872.8559.4369.5772.7068.48CPGA (Ours)65.7356.1760.3753.7866.0064.1661.03ISFDA [22]67.5966.3573.1556.7568.0671.0567.16T-CPGA (Ours)84.8886.2587.3884.7886.2087.2086.12", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Overall Accuracy (%) of Cl→Rw Task with different class distribution shifts on the Office-Home-I dataset (ResNet-50). SF and CI indicate source-free and class-imbalanced.", "figure_data": "MethodSF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.ResNet-50 [48]54.4344.9350.0145.4158.6654.0551.25DANN [9]66.8044.7057.2046.0071.7065.2058.60MDD [50]69.6750.3658.3048.2871.4369.8261.31MCC [53]54.9943.5051.4640.7863.2951.1850.87ToAlign [66]71.3555.9569.0253.7172.5570.3765.49COAL [35]61.9458.8268.2158.9868.3268.3764.11PCT [38]70.2359.8670.4856.4271.0369.0666.18SHOT [18]68.9561.8574.2760.0272.3972.9668.41BAIT [55]65.2851.5661.7251.1670.7963.0060.59NRC [31]65.4463.7772.1461.8570.7175.2468.19CPGA (Ours)62.6159.4666.6754.9170.3166.8663.47ISFDA [22]68.4067.6071.0661.7770.7971.7768.56T-CPGA (Ours)85.1685.7987.1585.0085.8787.0386.00", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Overall Accuracy (%) on the DomainNet-S dataset (ResNet-50). SF and CI indicate source-free and class-imbalanced.", "figure_data": "MethodSFCIC→PC→RC→SP→CP→RP→SR→CR→PR→SS→CS→PS→RAvg.ResNet-50 [48]56.9676.5858.0257.9282.4965.7466.4674.5660.1960.1562.7074.4366.35DANN [9]61.7081.7065.5059.3077.3061.0074.1077.3071.6073.1069.0079.6070.93MDD [50]70.3086.7272.7062.3185.8069.2079.5879.2473.2477.4174.8784.2976.31MCC [53]51.9481.0660.2363.1284.1957.4066.5261.1655.7762.6255.5574.8564.53ToAlign [66]70.2086.9871.8667.2084.8673.7478.7180.1073.7077.2974.2283.9376.90COAL [35]73.5084.6571.0369.9987.2067.1575.9979.3761.6177.2375.3585.2275.69PCT [38]73.2489.2175.2475.0788.4775.5178.5881.1774.8279.7478.5886.7779.70SHOT [18]76.9089.0772.5774.6388.9274.2876.6777.6271.2474.8175.3986.9278.25BAIT [55]81.9590.4876.7476.3087.2876.2877.9782.1674.2081.6879.2088.0581.02NRC [31]77.9390.4776.0778.2290.3175.7480.0778.6274.4980.8280.8291.0681.22CPGA (Ours)68.0684.9166.5769.0684.7269.5374.3279.3463.7875.3174.3284.1374.50ISFDA [22]77.3889.3073.7877.9189.7372.6180.0780.4472.0777.6076.7687.3179.58T-CPGA (Ours)86.5993.3085.0889.3692.9985.5490.1086.5985.4989.7386.7393.0388.71", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Target label-distribution-aware classifier. 
As we mentioned in Section 5.2, unidentified class distribution shifts would cause the fixed source classifier to provide unreliable predictions. Therefore, we devise a target label-distribution-aware classifier that enables T-CPGA to match the target label distribution and accurately classify target samples. This design can be verified by the results inTable 10, where our T-CPGA with the target label-distributionaware classifier performs much better than that without this classifier on DomainNet-S. overcome the lack of source data by generating feature prototypes for each class via contrastive learning in the first stage. Based on the generated prototypes, we develop a robust contrastive prototype adaptation strategy to mitigate domain shifts and pseudo label noise in the second stage. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of CPGA in handling SF-UDA. In addition to SF-UDA, we have explored a more practical task, namely imbalance-agnostic SF-UDA, where the class distribution does not necessarily be balanced. To address it, we have extended CPGA to Target-aware Contrastive Prototype Generation and Adaptation (T-CPGA). Like CPGA, T-CPGA consists of two stages: 1) it holds the same first stage as CPGA to handle the absence of source data. 2) To avoid the negative effect of the unidentified class distribution shift, we design a novel target-aware contrastive prototype alignment strategy. Extensive experiments on three UDA variant datasets verify the effectiveness of T-CPGA in handling imbalance-agnostic SF-UDA.", "figure_data": "8 CONCLUSION In this paper, we have proposed a Contrastive Prototype Genera-tion and Adaptation (CPGA) method to resolve SF-UDA. Specif-Epochs 0.04 0.03 0.02 0.01 0.00 Total Loss Train Validation (a) Total loss curve 0 25 50 75 100 125 150 175 Epochs 0.82 0.84 0.86 0.88 0.94 0.92 0.90 Accuracy Train Validation 0.80 (b) Accuracy curve Fig. D.1: Optimization curves of CPGA on Office-31(A→W). ically, we 0 25 50 75 100 125 150 175 Epochs 0 3 6 9 12 15 0.50 0.55 0.60 0.65 0.70 0.80 0.75 Accuracy Ours BAITFig. D.2: Testing curves of CPGA and BAIT on VisDA dataset.class-wise domain alignment, we generate feature prototypes forK classes in each epoch.", "figure_id": "tab_13", "figure_label": "E", "figure_type": "table" }, { "figure_caption": "Table E.4 to Table E.15) and different imbalance ratios on the VisDA-I dataset (From Table E.16 to Table E.20). The results in terms of Overall Accuracy on the DomainNet-S dataset (i.e., Table E.22) and the intuitive histograms for the VisDA-I TABLE D.1: Classification accuracies (%) on large-scale VisDA dataset (ResNet-101). We adopt underlines to denote reimplemented results.", "figure_data": "MethodSource-free plane bicycle buscarhorse knife mcycl person plant sktbrd train truck Per-classSHOT [18]92.681.180.1 58.5 89.786.181.577.889.584.984.3 49.379.6SHOT [18]88.585.977.9 49.8 90.290.882.079.088.584.485.6 50.579.4BAIT [55]93.783.284.5 65.0 92.995.488.180.890.089.084.0 45.382.7BAIT [55]93.875.486.1 64.0 93.996.488.581.288.988.786.9 39.982.0CPGA (Ours)95.689.075.4 64.9 91.797.589.783.893.993.487.7 69.086.0", "figure_id": "tab_14", "figure_label": "", "figure_type": "table" }, { "figure_caption": "2: Classification accuracies (%) on the Office-Home dataset (ResNet-50). 
We adopt underlines to denote reimplemented results.", "figure_data": "MethodSource-free Ar→Cl Ar→Pr Ar→Rw Cl→Ar Cl→Pr Cl→Rw Pr→Ar Pr→Cl Pr→Rw Rw→Ar Rw→Cl Rw→Pr Avg.ResNet-50 [48]34.950.058.037.441.946.238.531.260.453.941.259.946.1MCD [49]48.968.374.661.367.668.857.047.175.169.152.279.664.1CDAN [10]50.770.676.057.670.070.057.450.977.370.956.781.665.8MDD [50]54.973.777.860.071.471.861.253.678.172.560.282.368.1BNM [78]52.373.980.063.372.974.961.749.579.770.553.682.267.9BDG [52]51.573.478.765.371.573.765.149.781.174.655.184.868.7SRDC [54]52.376.381.069.576.278.068.753.881.776.357.185.071.3PrDA [29]48.473.476.964.369.871.762.745.376.669.850.579.065.7SHOT [18]56.978.181.067.978.478.167.054.681.873.458.184.571.6SHOT [18]57.577.980.366.578.376.665.855.781.774.061.284.271.6BAIT [55]57.477.582.468.077.275.167.155.581.973.959.584.271.6BAIT [55]52.271.372.559.970.669.960.353.978.268.458.980.766.4CPGA(ours)59.378.179.865.475.576.465.758.081.072.064.483.371.6", "figure_id": "tab_15", "figure_label": "D", "figure_type": "table" }, { "figure_caption": "", "figure_data": "λβ0.50.70.90.99381.283.083.983.0581.382.284.183.2779.781.683.383.0", "figure_id": "tab_16", "figure_label": "D", "figure_type": "table" }, { "figure_caption": "", "figure_data": ".4: Classification accuracies (%) on the Office-31 dataset(ResNet-50). We adopt underlines to denote reimplemented results.MethodSource-free A→D A→W D→W W→D D→A W→A Avg.SHOT [18]93.190.998.899.974.574.888.7SHOT [18]91.490.099.1100.074.873.688.2BAIT [55]92.094.698.1100.074.675.289.1BAIT [55]91.387.497.699.771.467.285.8CPGA (Ours)94.494.198.499.876.076.689.9", "figure_id": "tab_17", "figure_label": "D", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Parameterλη135790.001 0.005 0.01 0.050.1Acc.83.3 85.0 86.0 85.5 85.385.585.685.5 86.0 83.0SF-UDA, which fully demonstrates the effectiveness of T-CPGA.", "figure_id": "tab_18", "figure_label": "D", "figure_type": "table" }, { "figure_caption": "Overall Accuracy (%) of Cl→Pr and Cl→Rw tasks with different class distribution shifts on the Office-Home-I dataset (MobileNet-V2 and ResNet-50).", "figure_data": "MobileNet-V2, Cl→PrResNet-50, Cl→PrMethodFLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.MethodFLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.MobileNet-V2 [80]40.9827.7134.4426.0140.7135.1234.16ResNet-50 [48]53.8843.9348.1944.5154.2651.3949.36T-CPGA (Ours)84.8380.4087.3484.0785.7987.1884.94T-CPGA (Ours)84.8886.2587.3884.7886.2087.2086.12MethodMobileNet-V2, Cl→RwMethodResNet-50, Cl→RwFLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.MobileNet-V2 [80]40.5430.4936.2228.8146.3738.4036.81ResNet-50 [48]54.4344.9350.0145.4158.6654.0551.25T-CPGA (Ours)82.0485.4786.6783.1683.3286.5784.54T-CPGA (Ours)85.1685.7987.1585.0085.8787.0386.00", "figure_id": "tab_19", "figure_label": "E1", "figure_type": "table" }, { "figure_caption": "Per-Class Accuracy (%) of different class distribution shifts and imbalance ratios on the VisDA-I dataset.", "figure_data": "MethodVisDA-I-10FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.T-CPGA (CE)88.0988.5989.9488.4989.9288.91 88.99T-CPGA (Bal-CE)88.6388.8689.6088.9589.6488.21 88.98T-CPGA (Seasaw)88.6789.0389.8288.7989.9588.84 89.18MethodVisDA-I-50FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.T-CPGA (CE)85.1085.9689.7486.0089.9086.00 87.12T-CPGA (Bal-CE)85.4887.0389.6586.3589.6985.55 87.29T-CPGA (Seasaw)85.4987.0089.8086.0189.9385.86 87.35MethodVisDA-I-100FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal 
Avg.T-CPGA (CE)89.9285.1484.0383.6089.9283.60 86.03T-CPGA (Bal-CE)89.4887.4883.2085.5989.8886.88 87.08T-CPGA (Seasaw)89.8684.9586.0085.2789.8485.46 86.90", "figure_id": "tab_20", "figure_label": "E2", "figure_type": "table" }, { "figure_caption": "3: Per-Class Accuracy (%) of Cl→Pr and Cl→Rw tasks with different class distribution shifts on the Office-Home-I dataset.", "figure_data": "MethodCl→PrFLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.CLIP Zero-shot Prediction84.0883.4584.1884.0883.4584.18 83.90T-CPGA (Combination)85.5184.8386.5085.5284.8486.17 85.56T-CPGA (Ours)85.5584.9286.5085.5284.8486.19 85.59MethodCl→RwFLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg.CLIP Zero-shot Prediction85.0583.4684.0785.0583.4684.07 84.19T-CPGA (Combination)85.6783.9085.2285.6983.9385.04 84.91T-CPGA (Ours)85.6783.9085.2585.6983.9385.06 84.92", "figure_id": "tab_21", "figure_label": "E", "figure_type": "table" }, { "figure_caption": "Per-class Accuracy (%) on the Office-Home-I dataset (ResNet-50). SF and CI indicate source-free and class-imbalanced.", "figure_data": "MethodSFCICl→PrCl→RwPr→ClPr→RwRw→ClRw→PrAvg.", "figure_id": "tab_22", "figure_label": "E4", "figure_type": "table" } ]
Hongbin Lin; Mingkui Tan; Yifan Zhang; Zhen Qiu; Shuaicheng Niu; Dong Liu; Qing Du; Yanxia Liu
[ { "authors": "S Sankaranarayanan; Y Balaji; C D Castillo", "journal": "", "ref_id": "b0", "title": "Generate to adapt: Aligning domains using generative adversarial networks", "year": "2018" }, { "authors": "J Hoffman; E Tzeng; T Park", "journal": "", "ref_id": "b1", "title": "Cycada: Cycle-consistent adversarial domain adaptation", "year": "2018" }, { "authors": "M Long; Y Cao; J Wang", "journal": "", "ref_id": "b2", "title": "Learning transferable features with deep adaptation networks", "year": "2015" }, { "authors": "M Long; H Zhu; J Wang", "journal": "", "ref_id": "b3", "title": "Deep transfer learning with joint adaptation networks", "year": "2017" }, { "authors": "C Chen; Z Fu; Z Chen", "journal": "", "ref_id": "b4", "title": "Homm: Higher-order moment matching for unsupervised domain adaptation", "year": "2020" }, { "authors": "B Sun; K Saenko", "journal": "", "ref_id": "b5", "title": "Deep coral: Correlation alignment for deep domain adaptation", "year": "2016" }, { "authors": "C Chen; Z Chen; B Jiang", "journal": "", "ref_id": "b6", "title": "Joint domain alignment and discriminative feature learning for unsupervised deep domain adaptation", "year": "2019" }, { "authors": "G Kang; L Jiang", "journal": "", "ref_id": "b7", "title": "Contrastive adaptation network for unsupervised domain adaptation", "year": "2019" }, { "authors": "Y Ganin; V Lempitsky", "journal": "", "ref_id": "b8", "title": "Unsupervised domain adaptation by backpropagation", "year": "2015" }, { "authors": "M Long; Z Cao; J Wang", "journal": "NeurIPS", "ref_id": "b9", "title": "Conditional adversarial domain adaptation", "year": "2018" }, { "authors": "Y Zhang; Y Wei; Q Wu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b10", "title": "Collaborative unsupervised domain adaptation for medical image diagnosis", "year": "2020" }, { "authors": "S Niu; J Wu; Y Zhang", "journal": "", "ref_id": "b11", "title": "Efficient test-time model adaptation without forgetting", "year": "2022" }, { "authors": "S Niu; J Wu; Y Zhang", "journal": "", "ref_id": "b12", "title": "Towards stable test-time adaptation in dynamic wild world", "year": "2023" }, { "authors": "H Lin; Y Zhang; Z Qiu", "journal": "", "ref_id": "b13", "title": "Prototype-guided continual adaptation for class-incremental unsupervised domain adaptation", "year": "2022" }, { "authors": "J Dong; Y Cong; G Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b14", "title": "Where and how to transfer: knowledge aggregation-induced transferability perception for unsupervised domain adaptation", "year": "2021" }, { "authors": "J Li; Z Du; L Zhu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Divergence-agnostic unsupervised domain adaptation by adversarial attacks", "year": "2021" }, { "authors": "Y Luo; C Ren; D Dai", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b16", "title": "Unsupervised domain adaptation via discriminative manifold propagation", "year": "2020" }, { "authors": "J Liang; D Hu; J Feng", "journal": "", "ref_id": "b17", "title": "Do we really need to access the source data? 
source hypothesis transfer for unsupervised domain adaptation", "year": "2020" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza", "journal": "NeurIPS", "ref_id": "b18", "title": "Generative adversarial networks", "year": "2014" }, { "authors": "R Li; Q Jiao; W Cao", "journal": "", "ref_id": "b19", "title": "Model adaptation: Unsupervised domain adaptation without source data", "year": "2020" }, { "authors": "T Karras; M Aittala; J Hellsten", "journal": "", "ref_id": "b20", "title": "Training generative adversarial networks with limited data", "year": "2020" }, { "authors": "X Li; J Li; L Zhu", "journal": "ACM MM", "ref_id": "b21", "title": "Imbalanced source-free domain adaptation", "year": "2021" }, { "authors": "H Xia; H Zhao; Z Ding", "journal": "", "ref_id": "b22", "title": "Adaptive adversarial network for sourcefree domain adaptation", "year": "2021" }, { "authors": "Y Zhang; B Kang; B Hooi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b23", "title": "Deep long-tailed learning: A survey", "year": "2023" }, { "authors": "B Kang; S Xie; M Rohrbach", "journal": "", "ref_id": "b24", "title": "Decoupling representation and classifier for long-tailed recognition", "year": "2020" }, { "authors": "Y Zhang; B Hooi; L Hong", "journal": "", "ref_id": "b25", "title": "Self-supervised aggregation of diverse experts for test-agnostic long-tailed recognition", "year": "2021" }, { "authors": "Z Qiu; Y Zhang; H Lin", "journal": "", "ref_id": "b26", "title": "Source-free domain adaptation via avatar prototype generation and adaptation", "year": "2021" }, { "authors": "A Radford; J W Kim; C Hallacy", "journal": "", "ref_id": "b27", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Y Kim; D Cho; P Panda", "journal": "", "ref_id": "b28", "title": "Progressive domain adaptation from a source pre-trained model", "year": "2020" }, { "authors": "J Liang; D Hu; Y Wang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b29", "title": "Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer", "year": "2021" }, { "authors": "S Yang; J Van De Weijer; L Herranz", "journal": "", "ref_id": "b30", "title": "Exploiting the intrinsic neighborhood structure for source-free domain adaptation", "year": "2021" }, { "authors": "S Yang; Y Wang; J Van De Weijer", "journal": "", "ref_id": "b31", "title": "Generalized source-free domain adaptation", "year": "2021" }, { "authors": "J Dong; Z Fang; A Liu", "journal": "", "ref_id": "b32", "title": "Confident anchor-induced multi-source free domain adaptation", "year": "2021" }, { "authors": "Y Zhu; X Wu; Y Li", "journal": "IEEE Transactions on Artificial Intelligence", "ref_id": "b33", "title": "Self-adaptive imbalanced domain adaptation with deep sparse autoencoder", "year": "2022" }, { "authors": "S Tan; X Peng; K Saenko", "journal": "", "ref_id": "b34", "title": "Class-imbalanced domain adaptation: An empirical odyssey", "year": "2020" }, { "authors": "W Shi; R Zhu; S Li", "journal": "", "ref_id": "b35", "title": "Pairwise adversarial training for unsupervised class-imbalanced domain adaptation", "year": "2022" }, { "authors": "Y.-H H Tsai; C.-A Hou; W.-Y Chen", "journal": "", "ref_id": "b36", "title": "Domain-constraint transfer coding for imbalanced unsupervised domain adaptation", "year": "2016" }, { "authors": "K Tanwisuth; X Fan; H Zheng", "journal": "", "ref_id": "b37", 
"title": "A prototype-oriented framework for unsupervised domain adaptation", "year": "2021" }, { "authors": "J Snell; K Swersky; R Zemel", "journal": "NeurIPS", "ref_id": "b38", "title": "Prototypical networks for few-shot learning", "year": "2017" }, { "authors": "S Xu; H Li; B Zhuang", "journal": "", "ref_id": "b39", "title": "Generative low-bitwidth data free quantization", "year": "2020" }, { "authors": "A Van Den Oord; Y Li; O Vinyals", "journal": "", "ref_id": "b40", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Y Zhang; B Hooi; D Hu", "journal": "", "ref_id": "b41", "title": "Unleashing the power of contrastive self-supervised visual models via contrast-regularized fine-tuning", "year": "2021" }, { "authors": "Z Pei; Z Cao; M Long", "journal": "", "ref_id": "b42", "title": "Multi-adversarial domain adaptation", "year": "2018" }, { "authors": "C Chen; W Xie; W Huang", "journal": "", "ref_id": "b43", "title": "Progressive feature alignment for unsupervised domain adaptation", "year": "2019" }, { "authors": "T Chen; S Kornblith; M Norouzi", "journal": "", "ref_id": "b44", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "D Arpit; S Jastrzebski; N Ballas", "journal": "", "ref_id": "b45", "title": "A closer look at memorization in deep networks", "year": "2017" }, { "authors": "S Liu; J Niles-Weed; N Razavian", "journal": "NeurIPS", "ref_id": "b46", "title": "Early-learning regularization prevents memorization of noisy labels", "year": "2020" }, { "authors": "K He; X Zhang; S Ren", "journal": "", "ref_id": "b47", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "K Saito; K Watanabe; Y Ushiku", "journal": "", "ref_id": "b48", "title": "Maximum classifier discrepancy for unsupervised domain adaptation", "year": "2018" }, { "authors": "Y Zhang; T Liu; M Long", "journal": "", "ref_id": "b49", "title": "Bridging theory and algorithm for domain adaptation", "year": "2019" }, { "authors": "Y Wu; D Inkpen; A El-Roby", "journal": "", "ref_id": "b50", "title": "Dual mixup regularized learning for adversarial domain adaptation", "year": "2020" }, { "authors": "G Yang; H Xia; M Ding", "journal": "", "ref_id": "b51", "title": "Bi-directional generation for unsupervised domain adaptation", "year": "2020" }, { "authors": "Y Jin; X Wang; M Long", "journal": "", "ref_id": "b52", "title": "Minimum class confusion for versatile domain adaptation", "year": "2020" }, { "authors": "H Tang; K Chen; K Jia", "journal": "", "ref_id": "b53", "title": "Unsupervised domain adaptation via structurally regularized deep clustering", "year": "2020" }, { "authors": "S Yang; Y Wang; J Van De Weijer", "journal": "", "ref_id": "b54", "title": "Unsupervised domain adaptation without source data by casting a bait", "year": "2020" }, { "authors": "R Xu; G Li; J Yang", "journal": "", "ref_id": "b55", "title": "Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation", "year": "2019" }, { "authors": "C.-Y Lee; T Batra; M H Baig", "journal": "", "ref_id": "b56", "title": "Sliced wasserstein discrepancy for unsupervised domain adaptation", "year": "2019" }, { "authors": "Y Pan; T Yao; Y Li", "journal": "", "ref_id": "b57", "title": "Transferrable prototypical networks for unsupervised domain adaptation", "year": "2019" }, { "authors": "D Hu; J Liang; Q Hou", "journal": "", "ref_id": "b58", "title": "Panda: Prototypical 
unsupervised domain adaptation", "year": "2020" }, { "authors": "S Dai; Y Cheng; Y Zhang", "journal": "", "ref_id": "b59", "title": "Contrastively smoothed class alignment for unsupervised domain adaptation", "year": "2020" }, { "authors": "K Saenko; B Kulis; M Fritz", "journal": "", "ref_id": "b60", "title": "Adapting visual category models to new domains", "year": "2010" }, { "authors": "X Peng; B Usman; N Kaushik", "journal": "", "ref_id": "b61", "title": "Visda: The visual domain adaptation challenge", "year": "2017" }, { "authors": "H Venkateswara; J Eusebio; S Chakraborty", "journal": "", "ref_id": "b62", "title": "Deep hashing network for unsupervised domain adaptation", "year": "2017" }, { "authors": "R Müller; S Kornblith; G E Hinton", "journal": "", "ref_id": "b63", "title": "When does label smoothing help?", "year": "2019" }, { "authors": "K Saito; D Kim; S Sclaroff", "journal": "", "ref_id": "b64", "title": "Universal domain adaptation through self supervision", "year": "2020" }, { "authors": "G Wei; C Lan; W Zeng", "journal": "", "ref_id": "b65", "title": "Toalign: Task-oriented alignment for unsupervised domain adaptation", "year": "2021" }, { "authors": "E Tzeng; J Hoffman; K Saenko", "journal": "", "ref_id": "b66", "title": "Adversarial discriminative domain adaptation", "year": "2017" }, { "authors": "Y Zhang; H Chen; Y Wei", "journal": "", "ref_id": "b67", "title": "From whole slide imaging to microscopy: Deep microscopy adaptation network for histopathology cancer image classification", "year": "2019" }, { "authors": "L V D Maaten; G Hinton", "journal": "Journal of machine learning research", "ref_id": "b68", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "P Wang; K Han; X.-S Wei", "journal": "", "ref_id": "b69", "title": "Contrastive learning based hybrid networks for long-tailed image classification", "year": "2021" }, { "authors": "Y Yan; W Li; M K Ng", "journal": "", "ref_id": "b70", "title": "Learning discriminative correlation subspace for heterogeneous domain adaptation", "year": "2017" }, { "authors": "J Liang; R He; Z Sun", "journal": "", "ref_id": "b71", "title": "Distant supervised centroid shift: A simple and efficient approach to visual domain adaptation", "year": "2019" }, { "authors": "E Tzeng; J Hoffman; N Zhang", "journal": "", "ref_id": "b72", "title": "Deep domain confusion: Maximizing for domain invariance", "year": "2014" }, { "authors": "X Liu; Z Guo; S Li", "journal": "", "ref_id": "b73", "title": "Adversarial unsupervised domain adaptation with conditional and label shift: Infer, align and iterate", "year": "2021" }, { "authors": "X Zhang; Y Iwasawa; Y Matsuo", "journal": "", "ref_id": "b74", "title": "Amortized prompt: Lightweight fine-tuning for CLIP in domain generalization", "year": "2021" }, { "authors": "R Gal; O Patashnik; H Maron", "journal": "ACM Trans. 
Graph", "ref_id": "b75", "title": "Stylegan-nada: Clip-guided domain adaptation of image generators", "year": "2022" }, { "authors": "A Odena; C Olah; J Shlens", "journal": "", "ref_id": "b76", "title": "Conditional image synthesis with auxiliary classifier gans", "year": "2017" }, { "authors": "S Cui; S Wang; J Zhuo", "journal": "", "ref_id": "b77", "title": "Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations", "year": "2020" }, { "authors": "Y Ganin; E Ustinova; H Ajakan", "journal": "JMLR", "ref_id": "b78", "title": "Domain-adversarial training of neural networks", "year": "2016" }, { "authors": "M Sandler; A Howard; M Zhu", "journal": "", "ref_id": "b79", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "2018" }, { "authors": "J Ren; C Yu; X Ma", "journal": "NeurIPS", "ref_id": "b80", "title": "Balanced meta-softmax for long-tailed visual recognition", "year": "2020" }, { "authors": "J Wang; W Zhang; Y Zang", "journal": "", "ref_id": "b81", "title": "Seesaw loss for long-tailed instance segmentation", "year": "2021" }, { "authors": " Table E.9", "journal": "ResNet", "ref_id": "b82", "title": "Overall Accuracy (%) of Pr→Cl task on the Office-Home dataset (ResNet-50). Method SF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg", "year": "" }, { "authors": "Table E 10", "journal": "Method SF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg. ResNet", "ref_id": "b83", "title": "Per-Class Accuracy (%) of Pr→Rw task on the Office-Home dataset (ResNet-50)", "year": "" }, { "authors": "E Table", "journal": "ResNet", "ref_id": "b84", "title": "Overall Accuracy (%) of Pr→Rw task on the Office-Home dataset (ResNet-50). Method SF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg", "year": null }, { "authors": "Table E 12", "journal": "Method SF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg. ResNet", "ref_id": "b85", "title": "Per-Class Accuracy (%) of Rw→Cl task on the Office-Home dataset (ResNet-50)", "year": "" }, { "authors": "Table E 13", "journal": "ResNet", "ref_id": "b86", "title": "Overall Accuracy (%) of Rw→Cl task on the Office-Home dataset (ResNet-50). Method SF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg", "year": "" }, { "authors": "E Table", "journal": "ResNet", "ref_id": "b87", "title": "Method SF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg", "year": null }, { "authors": "E Table", "journal": "ResNet", "ref_id": "b88", "title": "Method SF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg", "year": null }, { "authors": "E Table", "journal": "ResNet", "ref_id": "b89", "title": "Method SF CI FLT→FLT FLT→BLT FLT→Bal BLT→FLT BLT→BLT BLT→Bal Avg", "year": null } ]
[ { "formula_coordinates": [ 3, 402.26, 379.81, 59.4, 13.25 ], "formula_id": "formula_0", "formula_text": "D t = {x i } nt i=1" }, { "formula_coordinates": [ 4, 123.09, 613.52, 176.91, 16.66 ], "formula_id": "formula_1", "formula_text": "min θg L ce (θ g ) + L p con (θ g ),(1)" }, { "formula_coordinates": [ 4, 100.59, 634.42, 199.41, 17.04 ], "formula_id": "formula_2", "formula_text": "min {θe,θp} L w con (θ e , θ p ) + λL elr (θ e , θ p ),(2)" }, { "formula_coordinates": [ 4, 395.45, 690.24, 168.55, 9.65 ], "formula_id": "formula_3", "formula_text": "L ce = -y log G y (p),(3)" }, { "formula_coordinates": [ 5, 61.57, 400.51, 238.43, 25.07 ], "formula_id": "formula_4", "formula_text": "L p con =-log exp(φ(p, o + )/τ ) exp(φ(p, o + )/τ )+ K-1 j=1 exp(φ(p, o - j )/τ ) ,(4)" }, { "formula_coordinates": [ 5, 422.32, 41.94, 24.6, 17.18 ], "formula_id": "formula_5", "formula_text": "ŷk i q i n t i=1 ŷk i" }, { "formula_coordinates": [ 5, 404.59, 134.2, 159.41, 16.66 ], "formula_id": "formula_6", "formula_text": "ȳi = arg max k ŷi ,(5)" }, { "formula_coordinates": [ 5, 375.58, 419.81, 188.42, 26.04 ], "formula_id": "formula_7", "formula_text": "w i = exp(φ(q i , c ȳi )/τ ) K k=1 exp(φ(q i , c k )/τ ) ,(6)" }, { "formula_coordinates": [ 5, 323.39, 573.34, 240.61, 27.61 ], "formula_id": "formula_8", "formula_text": "L w con = -w i log exp(u i v + /τ ) exp(u i v + /τ )+ K-1 j=1 exp(u i v - j /τ ) ,(7)" }, { "formula_coordinates": [ 6, 428.55, 533.22, 94.47, 19.43 ], "formula_id": "formula_10", "formula_text": "d pdd = K i=1 |y i pl -y i gt | y i gt ." }, { "formula_coordinates": [ 7, 91.13, 260.02, 208.87, 34.28 ], "formula_id": "formula_11", "formula_text": "a c = max k1 σ(ψ(x i )) -max k2,k2 =k1 σ(ψ(x i )), a p = max k1 ŷi -max k2,k2 =k1 ŷi ,(9)" }, { "formula_coordinates": [ 7, 143.69, 349.81, 91.08, 9.65 ], "formula_id": "formula_12", "formula_text": "[ā c , āp ] = σ([a c , a p ])." }, { "formula_coordinates": [ 7, 123.88, 381.48, 176.13, 9.79 ], "formula_id": "formula_13", "formula_text": "ỹi = āc σ(ψ(x i )) + āp ŷi .(10)" }, { "formula_coordinates": [ 7, 146.39, 563.35, 149.66, 16.66 ], "formula_id": "formula_14", "formula_text": "w t i = max k ỹi , (11" }, { "formula_coordinates": [ 7, 296.04, 566.04, 3.96, 8.24 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 7, 56.79, 615.73, 243.21, 27.61 ], "formula_id": "formula_16", "formula_text": "L wt con = -w t i log exp(u i v + /τ ) exp(u i v + /τ )+ K-1 j=1 exp(u i v - j /τ ) ,(12)" }, { "formula_coordinates": [ 7, 392.77, 240.71, 171.23, 12.69 ], "formula_id": "formula_17", "formula_text": "L t ce = -ỹ i log G t (q i ),(13)" }, { "formula_coordinates": [ 14, 113.9, 523.17, 186.1, 29.79 ], "formula_id": "formula_18", "formula_text": "L nc = - nt j=1,j =i s i,j log(s i,j ). (C.1)" } ]
10.1145/3589334.3645525
2024-02-24
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b43", "b5", "b14", "b36", "b28", "b33", "b4", "b16", "b24", "b26", "b41", "b37", "b42", "b17", "b44" ], "table_ref": [], "text": "Cold-start is a long-standing challenge in the recommendation system research [44]. It demands the system's capability to infer recommendations for new items. To solve the cold-start issue, it is crucial to integrate the raw item attributes into the model for offering beneficial information, which have been verified as an effective scheme [6,15,37] in the centralized recommendation service setting. Generally, the service provider can collect all the users' personal data (e.g., interaction records) and the items' raw attributes for model construction, as shown in Figure 1 (a). By learning the correlations between item attributes and user interaction records, the system can make predictions for the new items. The centralized method (a) saves raw item attributes on the server but exposes private user interaction records. Traditional FedRecSys (b) secures the interaction records but exposes the item attributes to the clients. Our IFedRec (c) can protect these two types of security-sensitive information.\nHowever, with the serious social concerns about the exploitation of user privacy [29,34], developing recommendation models while protecting user's private data from being leaked has attracted increasing attention. As an emerging privacy-preserving recommendation framework, Federated Recommendation System (FedRecSys) [5,17,25,27,42] deploys individual models on the devices (clients), and a server can optimize a common model by commanding the local model parameter aggregation and distribution. Privacy can be guaranteed as users preserve private data locally, which prevents accessibility from the server or other users. Although impressive progress has been shown [38,43], there is still a lack of solutions for cold-start recommendation models under the federated setting.\nGiven the remarkable success of cold-start recommendation models in the centralized setting, the intuitive idea to develop the federated version is to deploy the centralized model on each device, that is, each client downloads all the raw item attributes from the server and trains the local model with personal interaction records, as shown in Figure 1 (b). However, the dissemination of raw item attributes outside of the service provider poses a significant risk to the system. Firstly, the raw item attributes are crafted carefully with expert effort, and the disclosure can lead to substantial damage to commercial properties. Moreover, publicly available raw item attributes are susceptible to malicious usage and may incur hostile adversarial attacks [18,45]. Hence, it is crucial to preserve the raw item attributes on the server. The challenge of constructing a coldstart FedRecSys lies in how to promote the system learning while preserving the security of private interaction data on the client and the raw item attributes on the server.\nIn this paper, we present a novel Item-aligned Federated aggregation framework for cold-start Recommendation (IFedRec), which is the first effort to achieve cold items recommendation in federated setting. To realize the cold-start FedRecSys while preserving user data and raw item attributes safely, we propose a coherent learning process for two item representations from the client and server. 
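To make this division concrete before it is elaborated in the following sentences, the sketch below sets up the two components as small PyTorch modules: an on-device recommender built from ID embeddings and a server-side meta attribute network that maps raw item attributes to a representation of the same size. The module names, layer sizes and the sigmoid rating head are illustrative assumptions for the sketch, not the released IFedRec architecture.

```python
import torch
import torch.nn as nn

class ClientRecModel(nn.Module):
    """On-device part: ID-based item/user embeddings plus a rating head (kept private)."""
    def __init__(self, num_users, num_items, dim=32):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)     # the part shared with the server via aggregation
        self.user_emb = nn.Embedding(num_users, dim)     # stays on the device
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, user_ids, item_ids):
        h = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return torch.sigmoid(self.score(h)).squeeze(-1)

class MetaAttributeNet(nn.Module):
    """Server part: maps raw item attributes to an attribute representation of the same size."""
    def __init__(self, attr_dim, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(attr_dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, attrs):
        return self.net(attrs)

# toy shapes: one local user, 100 warm items with 20-dimensional raw attributes
client = ClientRecModel(num_users=1, num_items=100)
server_net = MetaAttributeNet(attr_dim=20)
scores = client(torch.zeros(4, dtype=torch.long), torch.tensor([1, 5, 7, 9]))
attr_repr = server_net(torch.randn(100, 20))
print(scores.shape, attr_repr.shape)
```

The learning process that couples these two representations is described next.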
The client maintains item embeddings based on user interaction records to capture user preferences, while the server incorporates a meta attribute network that represents each item from its raw attributes. We also devise an item representation alignment mechanism to bridge the connection between item attributes and user preferences, enabling cold-start recommendation. Figure 1 (c) demonstrates how our IFedRec framework effectively executes cold-item recommendation by leveraging item attributes, while simultaneously ensuring that the private interaction data and the raw item attributes are never exposed.
To implement this idea, we develop a two-phase learning framework, i.e., learning on warm items and inference on cold items. In the learning phase, the server aggregates the local item embeddings to obtain the global one, which is then used as supervision to train the meta attribute network on the server. For each client, the local model training is calibrated by minimizing the distance between the local item embedding and the item attribute representation from the server. This mechanism injects the attribute information into the recommendation model, which enhances item representation learning and promotes recommendation prediction. In the inference phase, the server learns the attribute representations of cold items. Each client can then utilize these attribute representations, along with the user-specific recommendation model, to make personalized recommendations. We integrate our framework into two representative FedRecSys, which gain significant performance improvements over their original versions when dealing with cold-start scenarios. Our IFedRec achieves state-of-the-art performance on four cold-start recommendation datasets, outperforming both federated and centralized baselines across comprehensive metrics. Moreover, we empirically demonstrate the robustness of IFedRec even when only a few clients participate in each communication round, which indicates its potential for practical applications. Additionally, by integrating the local differential privacy technique, our IFedRec strikes a balance between model performance and system noise injection, which sheds light on the construction of privacy-protection-enhanced FedRecSys.
In summary, our main contributions are listed as follows:
• We present a novel framework, IFedRec. To the best of the authors' knowledge, it is the first effort to solve cold-start recommendation under the federated setting where there are no interactions for the new items.
• Our method achieves state-of-the-art performance in extensive experiments, and in-depth analysis supports the significance of cold-item recommendation.
• The proposed item semantic alignment mechanism can be easily integrated into existing federated recommendation frameworks to improve their cold-start recommendation performance." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Cold-Start Recommendation", "publication_ref": [ "b45", "b35", "b45", "b9", "b30", "b3" ], "table_ref": [], "text": "Cold-start recommendation research focuses on addressing the challenge of providing quality recommendation services for new items [46]. Several approaches have been proposed to tackle this issue, including collaborative filtering techniques [36,46], content-based methods [10,31] and hybrid models [4]. Collaborative filtering methods infer item similarities from historical user interactions and identify items that tend to be consumed together.
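As a toy illustration of this collaborative-filtering idea, the snippet below computes item-item cosine similarities from a small binary interaction matrix. It is a simplified example under an assumed implicit-feedback setting, not an implementation of any specific cited method, and the comments note why a brand-new item would receive no collaborative signal.

```python
import numpy as np

# rows = users, columns = items; 1 means the user interacted with the item
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
], dtype=float)

# cosine similarity between item columns: items consumed by the same users score high
norms = np.linalg.norm(R, axis=0, keepdims=True)
norms[norms == 0] = 1.0                      # guard against items with no interactions
item_sim = (R / norms).T @ (R / norms)       # (num_items, num_items)
np.fill_diagonal(item_sim, 0.0)

# the most similar existing item for each item; a new item would contribute an all-zero
# column and thus obtain no similarity signal, which is the cold-start limitation at issue
print(np.round(item_sim, 2))
print("nearest neighbour per item:", item_sim.argmax(axis=1))
```

This reliance on past interactions is precisely what breaks down for cold items, which motivates the attribute-based methods discussed next.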
Content-based methods leverage the item attributes to understand the item characteristics so that the system can analyze the correlations between new items and existing items and make recommendations. Hybrid models combine both collaborative filtering and content-based methods, which extract meaningful features from item attributes and integrate them into the collaborative filtering framework to discover the correlations with user interactions. " }, { "figure_ref": [], "heading": "Phase I: Learning", "publication_ref": [], "table_ref": [], "text": "𝓛 \"#'%\" ⊕ Client Warm item id 𝓨 # 𝓨 … # 𝓨" }, { "figure_ref": [], "heading": "Cold item attribute", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Item attribute representation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Global Item Embedding", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Item attribute representation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Item attribute representation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_5" ], "heading": "Item attribute representation", "publication_ref": [], "table_ref": [], "text": "Figure 2: The framework of IFedRec. During the learning phase, the client uploads the item embedding to the server for global aggregation, and other recommendation modules are preserved locally to capture user personalization. On the server side, we elaborate a meta attribute network to learn item attribute representation based on raw item attributes. Besides, an item representation alignment mechanism is developed to align two item representations, i.e., L 𝑔𝑙𝑜𝑏𝑎𝑙 and R. During the inference phase, the server first learns the cold item attribute representation, and then each client can make personalized recommendations by integrating it with locally preserved recommendation modules." }, { "figure_ref": [], "heading": "Federated Recommendation System", "publication_ref": [ "b19", "b20", "b27", "b38", "b2", "b7", "b8", "b12", "b18", "b22", "b25", "b39", "b40", "b42", "b44", "b31" ], "table_ref": [], "text": "Federated recommendation system encapsulates recommendation model within the federated learning framework [20,21,28,39], which has recently drawn widespread attention due to the urgency of user privacy protection. Generally, each user is regarded as a client who trains a recommendation model with locally reserved private data, and a server coordinates the collaborative optimization among all clients by aggregating model parameters. Various recommendation benchmark architectures have been adapted to the FedRecSys frameworks [3,8,9,13,19,23,26,40,41,43,45]. However, existing FedRecSys models focus on recommending items with historical interactions, and the cold-start recommendation has rarely been studied. After thorough investigation, we found that only one FedRecSys model [32] is proposed for the item cold-start recommendation, which still depends on a small number of interactions of the new items. In this paper, we explore the setting that the system recommends the new items without any interactions, which is rather challenging and realistic in practical applications." }, { "figure_ref": [], "heading": "PRELIMINARY", "publication_ref": [], "table_ref": [], "text": "Federated Cold-Start Recommendation. Let U denote the user set with 𝑛 = |U| users. 
I 𝑤𝑎𝑟𝑚 represents the warm item set whose items have been interacted with by users, and I 𝑐𝑜𝑙𝑑 is the cold item set whose items have never been interacted with by any user. Under the federated learning framework, each user is regarded as a client, whose model F 𝜃 consists of three modules, i.e., an item embedding module P, a user embedding module Q and a rating prediction module S. Given the item attribute matrix X 𝑤𝑎𝑟𝑚 and each user's interaction records Y 𝑤𝑎𝑟𝑚 𝑢 , federated cold-start recommendation aims to learn a recommendation model F 𝜃 with the optimization objective,
min 𝜃 𝑛 ∑︁ 𝑖=1 L 𝑖 (𝜃 )(1)
where L 𝑖 (𝜃 ) is the loss of the 𝑖-th client and 𝜃 := (𝑝, 𝑞, 𝑠) denotes the model parameters. Then, the system can make recommendations for each user about cold items based on the item attributes X 𝑐𝑜𝑙𝑑 . Mathematically, the cold-item prediction can be formulated as follows,
Y 𝑐𝑜𝑙𝑑 𝑢 = F 𝜃 (X 𝑐𝑜𝑙𝑑 )(2)" }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "In this section, we begin by introducing the overall framework of the proposed method. We then delve into the details of the learning phase workflow and summarize it as an optimization algorithm. Furthermore, we demonstrate the application of the inference phase specifically for recommending cold items. Finally, we present our IFedRec that enhances privacy protection by incorporating the local Differential Privacy technique. We summarize the notations used in our method in Appendix A." }, { "figure_ref": [ "fig_5" ], "heading": "Framework Overview", "publication_ref": [], "table_ref": [], "text": "Introducing raw item attributes is crucial to achieve cold-start recommendation. However, simply utilizing the raw item attributes to learn item embeddings may risk commercial property damage and expose the FedRecSys to adversarial attacks. In this context, we develop a novel Item-aligned Federated aggregation for cold-start Recommendation (IFedRec) model, whose overall framework is illustrated in Figure 2. We elaborate two phases: first modeling the item information, and then utilizing the trained model to infer the cold-start items. During the learning phase, each client trains a recommendation model locally, and the server learns a meta attribute network globally. We present an item representation alignment mechanism to align the two item representations, so that the system can learn enhanced item representations and achieve cold-start recommendation. During the inference phase, the server first learns the cold item attribute representation, and then each user can make a prediction using it with the help of locally preserved personalized recommendation modules." }, { "figure_ref": [], "heading": "Learning on the Warm Items", "publication_ref": [], "table_ref": [], "text": "To achieve a model that can make recommendations on new items, we first train the model on the warm items based on the user interaction records and raw item attributes. To be specific, we alternately perform the following two steps: First, the server trains the global meta attribute network M 𝜙 with the item attributes.
Second, each client 𝑢 updates the local recommendation model F 𝜃 𝑢 with the historical interaction records. Meanwhile, an item representation alignment mechanism is introduced to align the item attribute representation from the server and the item embedding from the client. Next, we detail the two steps below." }, { "figure_ref": [], "heading": "Global meta attribute network learning.", "publication_ref": [ "b15", "b21" ], "table_ref": [], "text": "Under the federated learning optimization framework, the server is responsible for coordinating all clients to train a globally shared model; the client-side and server-side modules involved are sketched in the code below. 
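To make the setup above concrete, the following is a minimal PyTorch sketch of the client model F 𝜃 = (P, Q, S) and the server-side meta attribute network M 𝜙 . It is an illustrative reading of the paper rather than the authors' released code; the class names, the single-vector user embedding (one user per client), and the 200-dimensional latent size and layer counts (taken from the implementation details in Appendix E) are assumptions made for the sketch.

```python
import torch
import torch.nn as nn


class ClientRecModel(nn.Module):
    """Per-client recommendation model F_theta = (P, Q, S); each client hosts one user."""

    def __init__(self, num_items, emb_dim=200):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, emb_dim)      # P: exchanged with the server
        self.user_emb = nn.Parameter(torch.randn(emb_dim))    # Q: kept private on the device
        self.predictor = nn.Sequential(                       # S: kept private on the device
            nn.Linear(2 * emb_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, 1)
        )

    def forward(self, item_ids):
        p_v = self.item_emb(item_ids)                         # (B, d) item embeddings
        q_u = self.user_emb.expand_as(p_v)                    # broadcast the single user vector
        logits = self.predictor(torch.cat([p_v, q_u], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)              # predicted interaction probabilities


class MetaAttributeNetwork(nn.Module):
    """Server-side M_phi: maps raw item attributes to attribute representations r_v."""

    def __init__(self, attr_dim, emb_dim=200):
        super().__init__()
        self.net = nn.Linear(attr_dim, emb_dim)               # a one-layer MLP, as in Appendix E

    def forward(self, item_attrs):                            # (num_items, attr_dim)
        return self.net(item_attrs)
```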
In our method, we regard the item embedding module P as the shared component, which is learned from user interactions. Both user embedding and rating prediction modules are regarded as private components and preserved locally. Once the clients have completed the local model training, they upload the item embeddings to server. Then, the server aggregates all received item embeddings into a global one, which depicts the common item characteristics derived from user preferences. Particularly, we adopt the naive average aggregation formulation due to its simplicity and no additional computational overhead, which is as follows,\n𝑝 := 1 𝑛 𝑛 ∑︁ 𝑖=1 𝑝 𝑢(3)\nwhere 𝑝 𝑢 denotes the item embedding parameter of client 𝑢 and 𝑛 is the total number of clients. Other weight-based aggregation methods [16,22] are also promising for better performance. After aggregation, the global item embedding would be distributed to clients so that the common item characteristics can be exchanged among clients. Generally, the server holds rich attributes of items, including both warm items and cold items. The item attribute information can be used to bridge the connection between items, which paves the way to cold item recommendation. Specifically, we propose a meta attribute network M 𝜙 to learn the item attribute representation based on item attributes and deploy it on the server. Compared with the on-device deployment, we preserve the raw item attributes on the service provider, which guarantees the data safety from exposure and alleviates the potential damage of malicious utilization.\nParticularly, We formulate the learning of M 𝜙 as,\n𝑟 𝑣 := M 𝜙 (𝑥 𝑣 ) (4)\nwhere 𝜙 is the model parameter. The 𝑥 𝑣 and 𝑟 𝑣 are the attribute and learned representation of item 𝑣, respectively. Item embedding alignment. We regard the global item embedding 𝑝 as the supervision to train the meta attribute network M 𝜙 , so that we can construct the connection between the item attributes and the user interaction records with item embedding as the intermediary. Then, for the cold items, which have only attribute information, our method can calculate the attribute representation and make recommendations for them. Particularly, considering the properties of the regression task, we adopt the mean square error as the loss function and formulate it as,\nL (𝑝; 𝜙) := 1 𝑚 𝑚 ∑︁ 𝑣=1 (𝑟 𝑣 -𝑝 (𝑣)) 2 (5\n)\nwhere 𝑚 is the number of warm items. 𝑟 𝑣 and 𝑝 (𝑣) are the learned attribute representation and global item embedding of item 𝑣.\nBased on the loss L in Eq. ( 5), we update the meta attribute network parameter 𝜙 via stochastic gradient descent algorithm and the 𝑡-th update step is,\n𝜙 𝑡 := 𝜙 𝑡 -1 -𝛾 𝜕 𝜙 𝑡 -1 L (𝑝; 𝜙)(6)\nwhere 𝛾 is the parameter update learning rate." }, { "figure_ref": [], "heading": "Local recommendation model update.", "publication_ref": [], "table_ref": [], "text": "Based on the recommendation model F 𝜃 , where 𝜃 := (𝑝, 𝑞, 𝑠), we formulate the model prediction of user 𝑢 about item 𝑣 as,\nY 𝑢𝑣 := S(P 𝑣 , Q 𝑢 )(7)\nwhere P 𝑣 and Q 𝑢 denote the embedding of item 𝑣 and user 𝑢, respectively. Particularly, we discuss the typical implicit feedback recommendation task, i.e., Y 𝑢𝑣 = 1 if there is an interaction between user 𝑢 and item 𝑣; otherwise Y 𝑢𝑣 = 0. With the binary-value nature of implicit feedback, we define the recommendation loss of user 𝑢 as the binary cross-entropy loss,\nL 𝑢 (Y 𝑢𝑣 ; 𝜃 𝑢 ) := - ∑︁ (𝑢,𝑣) ∈𝐷 𝑢 log Ŷ𝑢𝑣 - ∑︁ (𝑢,𝑣 ′ ) ∈𝐷 - 𝑢 log(1 -Ŷ𝑢𝑣 ′ ) (8)\nwhere D - 𝑢 is the negative samples set of user 𝑢. 
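Picking up the server-side steps described above (Eqs. (3)-(6)), one possible implementation sketch is given below. It assumes each client uploads its full item-embedding table as a tensor and uses plain SGD on a mean-squared-error objective; the function names are illustrative and not part of any published API.

```python
import torch


def aggregate_item_embeddings(client_item_embs):
    """Eq. (3): plain average of the item-embedding tables uploaded by the clients."""
    return torch.stack(client_item_embs, dim=0).mean(dim=0)        # (num_warm_items, d)


def train_meta_attribute_network(meta_net, warm_attrs, global_item_emb,
                                 epochs=1, lr=1e-2):
    """Eqs. (4)-(6): fit M_phi so that M_phi(x_v) matches the aggregated embedding p(v)."""
    opt = torch.optim.SGD(meta_net.parameters(), lr=lr)
    target = global_item_emb.detach()                              # p is treated as fixed supervision
    for _ in range(epochs):                                        # E_1 server epochs (E_1 = 1 already works well)
        opt.zero_grad()
        r = meta_net(warm_attrs)                                   # (num_warm_items, d)
        loss = ((r - target) ** 2).mean()                          # mean squared error, Eq. (5)
        loss.backward()
        opt.step()
    return meta_net(warm_attrs).detach()                           # attribute representations r for the clients
```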
It is worth noting that other loss metrics can also be adopted, and here we take the binary cross-entropy loss as an example. To construct D - 𝑢 conveniently, we first count all the uninteracted items of user 𝑢 as,
I - 𝑢 := I 𝑤𝑎𝑟𝑚 \\I 𝑢(9)
where I 𝑢 is the set of warm items interacted with by user 𝑢. Then, we uniformly sample negative items from I - 𝑢 , with the sampling ratio set according to the number of items the user has interacted with. Item attribute representation alignment. The local recommendation model learns a unique item embedding for each item, which depicts the item's characteristics. Meanwhile, the server can learn a latent representation based on the raw item attributes, which is effective complementary information that can be further used to enhance client model training, leading to a more comprehensive local item embedding.
To this end, we propose to align the local item embedding module with the globally learned item attribute representation. Particularly, we regard the item attribute representation as a regularization term to enrich the supervision information of the recommendation model, and reformulate the local model training loss as,
L 𝑡𝑜𝑡𝑎𝑙 := L 𝑢 (Y 𝑢𝑣 ; 𝜃 𝑢 ) + 𝜆R (𝑝 𝑢 , 𝑟 )(10)
where 𝑝 𝑢 is the item embedding module parameter of user 𝑢 and 𝑟 denotes the item attribute representation learned from raw item attributes on the server side.
Based on the local training loss L 𝑡𝑜𝑡𝑎𝑙 , we can update the recommendation model parameter 𝜃 𝑢 via the stochastic gradient descent algorithm. Notably, we adopt an alternating update scheme for the different modules, i.e., we first update the locally preserved user embedding module Q and rating prediction module S to adapt the recommendation model to the global item embedding, and then update the local item embedding P with the tuned Q and S. The 𝑡-th update step is formulated as,
(𝑞 𝑡 𝑢 , 𝑠 𝑡 𝑢 ) := (𝑞 𝑡 -1 𝑢 , 𝑠 𝑡 -1 𝑢 ) -𝜂 1 𝜕 (𝑞 𝑡 -1 𝑢 ,𝑠 𝑡 -1 𝑢 ) L 𝑡𝑜𝑡𝑎𝑙 𝑝 𝑡 𝑢 := 𝑝 𝑡 -1 𝑢 -𝜂 2 𝜕 𝑝 𝑡 -1 𝑢 L 𝑡𝑜𝑡𝑎𝑙(11)
where 𝜂 1 and 𝜂 2 are the learning rates of the corresponding modules. Overall, the optimization objective of IFedRec on the warm items can be summarized as,
min 𝜙,{𝜃 𝑢 } 1 𝑛 𝑛 ∑︁ 𝑢=1 L 𝑢 (Y 𝑢𝑣 ; 𝑝 𝑢 , 𝑞 𝑢 , 𝑠 𝑢 ) + 𝜆R (𝑝 𝑢 , 𝑟 )(12)
where 𝑛 is the number of clients. The L 𝑢 is the supervised loss of the 𝑢-th client. 𝑝 𝑢 , 𝑞 𝑢 , and 𝑠 𝑢 are item embedding, user embedding, and rating prediction module parameters, respectively. R (•, •) is the regularization term and 𝜆 is the regularization coefficient. The 𝑟 is learned by the meta attribute network M 𝜙 with item attributes X 𝑤𝑎𝑟𝑚 as input. Particularly, we aggregate the clients' item embeddings to achieve the global item embedding 𝑝 and take it as the supervision to optimize the meta attribute network on the server. We summarize the optimization procedure on the warm items set into Algorithm 1 in Appendix B." }, { "figure_ref": [], "heading": "Inference on the Cold Items", "publication_ref": [], "table_ref": [], "text": "During the learning phase, the system is optimized with the warm items, and the learned model can be used for inferring cold-item recommendations. When new items I 𝑐𝑜𝑙𝑑 arrive, the server first calculates the item attribute representation 𝑟 𝑐𝑜𝑙𝑑 via the meta attribute network. Then, the clients can combine 𝑟 𝑐𝑜𝑙𝑑 with the locally preserved user embedding Q and rating prediction module S to make personalized recommendations."
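The client-side procedure of Eqs. (8)-(11), i.e., negative sampling, the binary cross-entropy loss, the alignment regularizer, and the alternating update of (Q, S) and P, might look as follows in code, reusing the ClientRecModel sketched earlier. The mean-squared distance used for R(𝑝 𝑢 , 𝑟 ), the negative-sampling ratio of 5, the SGD optimizers, and the function signature are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def local_client_update(model, interacted, num_warm_items, attr_repr,
                        lam=1.0, lr_qs=1e-2, lr_p=1e-2, neg_ratio=5, steps=1):
    """One round of local training with the item-representation alignment term.

    model          : ClientRecModel (see the earlier sketch)
    interacted     : 1-D LongTensor of warm item ids the user interacted with
    attr_repr      : (num_warm_items, d) attribute representations r from the server (constant here)
    """
    # Eq. (9): negative candidates are all uninteracted warm items; sample a fixed ratio of them.
    mask = torch.ones(num_warm_items, dtype=torch.bool)
    mask[interacted] = False
    negatives = torch.nonzero(mask).squeeze(1)
    idx = torch.randperm(negatives.numel())[: neg_ratio * interacted.numel()]
    negatives = negatives[idx]

    items = torch.cat([interacted, negatives])
    labels = torch.cat([torch.ones_like(interacted, dtype=torch.float),
                        torch.zeros_like(negatives, dtype=torch.float)])

    opt_qs = torch.optim.SGD([model.user_emb, *model.predictor.parameters()], lr=lr_qs)
    opt_p = torch.optim.SGD(model.item_emb.parameters(), lr=lr_p)

    for _ in range(steps):
        # Eq. (10): BCE recommendation loss plus the alignment regularizer lambda * ||p_u - r||^2.
        def total_loss():
            preds = model(items)
            rec = F.binary_cross_entropy(preds, labels)
            align = ((model.item_emb.weight - attr_repr.detach()) ** 2).mean()
            return rec + lam * align

        # Eq. (11): alternating updates -- first Q and S, then the item embedding P.
        opt_qs.zero_grad(); total_loss().backward(); opt_qs.step()
        opt_p.zero_grad(); total_loss().backward(); opt_p.step()

    return model.item_emb.weight.detach()   # uploaded to the server for aggregation
```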
}, { "figure_ref": [], "heading": "Privacy-Protection Enhanced IFedRec with Local Differential Privacy", "publication_ref": [ "b6", "b1" ], "table_ref": [], "text": "To further enhance the privacy-protection, we can integrate privacypreserving techniques into FL optimization framework, such as Differential Privacy [7] and Homomorphic Encryption [2], and the key idea is to prevent the server from inferring the client private information through the received model parameters. In our method, each client uploads the item embedding module to the server for exchanging common information, which may be maliciously used to infer sensitive user information. To handle the issue, we present a privacy-protection enhanced IFedRec by equipping it with the local Differential Privacy technique. Particularly, each client 𝑢 adds a zero-mean Laplacian noise to the item embedding before uploaded to the server, which can be formulated as,\n𝑝 𝑢 = 𝑝 𝑢 + 𝐿𝑎𝑝𝑙𝑎𝑐𝑒 (0, 𝛿)(13)\nwhere 𝛿 is the noise strength. As a result, the server receives an encrypted item embedding from clients, which reduces the risk of user privacy exposure." }, { "figure_ref": [], "heading": "EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct experiments to evaluate our method and explore the following research questions: Q1: How does IFedRec perform compared with the federated models and the state-of-the-art centralized models? Q2: Why does IFedRec work well on cold-item recommendation? Q3: How do the key hyper-parameters impact the performance? Q4: How well does IFedRec converge w.r.t. the client's amount? Q5: How does IFedRec perform under noise injection?" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b32", "b0", "b46" ], "table_ref": [], "text": "We evaluate the proposed IFedRec on two cold-start recommendation benchmark datasets, i.e., CiteULike [33] and XING [1], which have rich item attribute information. Particularly, we extract three dataset subsets from the original XING dataset according to user amount and mark them as XING-5000, XING-10000 and XING-20000, respectively. For a fair comparison, we follow the warm items and cold items division of [47]. For CiteULike, we select 80% items as the warm items, which serve as the training set to learn the model, and keep the other items as cold items. Then, we sample 30% items from the cold items as the validation set and take the remaining cold items as the test set. For three XING datasets, we divide the training set (warm items), validation set and test set (cold items) according to the ratio of 6:1:3. The dataset statistics and more descriptions about the datasets can be found in Appendix C." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b5", "b11", "b46", "b8", "b24", "b42", "b10", "b34", "b46", "b5", "b11", "b13", "b23" ], "table_ref": [], "text": "Evaluation metrics. We adopt three ranking metrics to evaluate model performance, i.e., Precision@k, Recall@k and NDCG@k, which are common used evaluation metrics [6,12,47]. Particularly, we report results of 𝑘 = {20, 50, 100} in units of 1e-2 in this paper.\nBaselines. We consider two branches of baselines: federated coldstart recommendation methods and centralized cold-start recommendation methods. For federated methods, we compare with the federated multi-view matrix factorization framework FedMVMF [9], and we adapt two state-of-the-art FedRecSys models [25,43] into the cold-start setting (CS_FedNCF and CS_PFedRec). 
Besides, we choose two representative content-enhanced centralized recommendation models, i.e., VBPR [11] and DCN [35], and construct their federated versions (FedVBPR and FedDCN) for a more comprehensive comparison. For centralized methods, we survey recent cold-start recommendation papers and take two of the latest models (Heater [47] and GAR [6]) as our baselines. Besides, we also adapt two representative recommendation architectures [12,14] into the cold-start setting (CS_NCF and CS_MF). More details about baselines can be found in Appendix D. Implementation details. We implement the proposed method with the PyTorch framework [24]. Specifically, we integrate two state-of-the-art federated recommendation methods into our framework, named IFedNCF and IPFedRec, respectively. Detailed implementation and parameter configurations are summarized in Appendix E." }, { "figure_ref": [], "heading": "Comparison Analysis with Baselines (Q1)", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "We compare the model performance with federated baselines and centralized baselines, and then analyze the experimental results.
Compared with federated cold-start baselines. As shown in Table 1, we have two observations: First, our method consistently performs much better than all federated baselines. Particularly, FedMVMF, CS_FedNCF, FedVBPR and FedDCN achieve better performance than CS_PFedRec. Recalling the optimization procedure of these methods, they utilize both the user-item interaction information and the item attribute information during the model training phase, indicating that the item attributes are essential for cold-item recommendation. In contrast, CS_PFedRec only takes the item attributes as a similarity measure to obtain cold-item embeddings, and thus performs poorly in cold-item recommendation. In our method, the proposed item representation alignment mechanism bridges the connection between the attribute representation learned from raw attributes and the item embedding learned from interaction records during optimization. As a result, it helps the meta attribute network learn latent item representations that depict user preferences, and the informative item attribute representation is beneficial for cold-item recommendation.
Second, integrating existing FedRecSys architectures into our IFedRec framework (IFedNCF and IPFedRec) achieves outstanding performance improvements in all settings. Our method is a general cold-start FedRec framework, which can be easily instantiated with existing FedRecSys architectures. Compared with the vanilla FedNCF and PFedRec, our IFedNCF and IPFedRec deploy the meta attribute network on the server side and add an extra item embedding regularization term to the local model's training, which neither changes the recommendation model architecture nor introduces extra computational overhead for clients. Compared with centralized cold-start baselines. In addition to the federated baselines, we also conduct experiments to compare our model's performance against centralized baselines. From Table 2, we can see that our IFedRec achieves better performance than the centralized baselines on all datasets. Taking Heater as an example, the performance gains (@20) of our method on the CiteULike dataset are 13.86%, 8.74% and 9.34% on the three evaluation metrics, respectively.
A similar performance gain trend is also shown on the other three datasets. We analyze the reason from two aspects: First, for centralized models, all users share the same module parameters in the system. In comparison, our method preserves the user embedding and rating prediction modules as personalized components, which is helpful in capturing user preferences and promoting personalized recommendations. Second, compared with centralized models, there are more parameters in our method, which enables the system to possess a stronger representation capacity, allowing it to better capture complex patterns and features present in the data and achieve better performance." }, { "figure_ref": [], "heading": "Ablation Studies (Q2)", "publication_ref": [], "table_ref": [], "text": "We design model variants to explore the effectiveness of the key modules in our method. For a thorough analysis, we conduct experiments based on IFedNCF and IPFedRec on four datasets and report the results of @20 on three metrics." }, { "figure_ref": [], "heading": "CenRec", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "Integrate the attribute network into the local recommendation model. To give a more thorough understanding of cold-start federated recommendation model construction, we build a model variant by deploying the attribute network on each client, named \"w/ LAN\". Particularly, the local recommendation model replaces the item embedding module with the attribute network, which takes the item attributes as input. As shown in Table 3, our method achieves superior performance over this model variant.
Compared with it, our IFedRec learns two item representations, which enhances the system's ability to identify different items. By effectively learning the item attribute representations, the system can better understand the inherent item characteristics, such as item similarities and item-item relationships, which in turn leads to more accurate recommendations that align with users' preferences.
In addition, our IFedRec maintains the raw item attributes on the server to avert the potential damage from malicious exploitation.
Remove the item representation alignment mechanism from IFedRec. To verify the efficacy of our proposed item representation alignment mechanism for cold-start recommendation, we construct a variant \"w/o IRAM\" by removing it from our method. Hence, the learning process of the model is modified as follows: First, the system optimizes a federated recommendation model whose on-device model is trained with only the recommendation loss. Second, the attribute network on the server is initialized with random parameters and never updated. As shown in Table 3, removing the item representation alignment mechanism from our method degrades the performance significantly. In our method, by aligning the two item representations, the client achieves a more comprehensive item embedding enhanced with the attribute representation, which promotes local recommendation model training. Besides, the meta attribute network trained with the item embedding can absorb the user preferences towards items, facilitating cold-item recommendation."
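For reference, the ranking metrics reported in Tables 1-3 (Precision@k, Recall@k and NDCG@k) can be computed per user as in the sketch below and then averaged over users. This follows the standard definitions of these metrics and is not necessarily the authors' exact evaluation script.

```python
import numpy as np


def ranking_metrics_at_k(ranked_items, relevant_items, k=20):
    """Precision@k, Recall@k and NDCG@k for a single user.

    ranked_items   : list of cold item ids sorted by predicted score (descending)
    relevant_items : set of cold item ids the user actually interacted with
    """
    top_k = ranked_items[:k]
    hits = np.array([1.0 if it in relevant_items else 0.0 for it in top_k])

    precision = hits.sum() / k
    recall = hits.sum() / max(len(relevant_items), 1)

    # NDCG@k: DCG of the predicted ranking divided by the DCG of an ideal ranking.
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = (hits * discounts).sum()
    ideal_hits = min(len(relevant_items), k)
    idcg = discounts[:ideal_hits].sum()
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return precision, recall, ndcg
```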
}, { "figure_ref": [], "heading": "Impact of Hyper-parameters (Q3)", "publication_ref": [], "table_ref": [], "text": "In this section, we study the impact of two key hyper-parameters of IFedRec: the coefficient 𝜆 of item attribute representation regularization on the client and the training epochs 𝐸 1 of the meta attribute network on the server. Particularly, we take the CiteULike dataset as an example and conduct experiments based on IFedNCF and IPFedRec. Due to limited space, we summarize the results in the main text and detailed configurations can be found in Appendix F. Regularization coefficient 𝜆. As shown in Figure 3, we can see that: The performance change trends of IFedNCF and IPFedRec are similar, i.e., as the coefficient increases, the performance first gets better and then decreases. When the regularization coefficient is large, the local recommendation model is injected with too much globally learned item attribute representation information, which interferes with the local model's learning from user preference. As a result, the local item embedding is biased and cannot well characterize user personalization, which leads to a decrease in model performance. The optimal regularization coefficient values for IFedNCF and IPFedRec appear in 1.0 and 10.0, respectively. Meta attribute network training epoch 𝑬 1 . As shown in Figure 4, we find that the performance of IFedNCF is slightly improved as the server training epochs increase. For the IPFedRec, the model gets the best performance when 𝐸 1 = 1. Hence, one-step optimization is enough to achieve satisfactory performance, which is efficient without much computational overhead." }, { "figure_ref": [], "heading": "Convergence with Clients Amount (Q4)", "publication_ref": [], "table_ref": [], "text": "In this section, we investigate the convergence of our proposed IFedRec. Due to limited space, here we summarize the results and conclusions briefly and more details can be found in Appendix G. As shown in Figure 5, our method can achieve outstanding performance at a small sampling ratio, e.g., IPFedRec gets 0.4035 on Recall@20, which also outperforms other baselines. On the other hand, more clients participating in a communication round would accelerate model convergence. In summary, IFedRec supports the FedRecSys to optimize with insufficient client participation, which is common in physical scenarios." }, { "figure_ref": [], "heading": "Privacy-Protection Enhanced IFedRec (Q5)", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "In this section, we investigate the performance of our IFedRec enhanced with the local Differential Privacy technique. Particularly, we set the Laplacian noise strength from 0.1 to 0.5 with an interval of 0.1 and also conduct the experiment on the CiteULike dataset. We give the experimental results of IFedNCF and IPFedRec of @20 on three metrics. As shown in Table 4, model performance degrades as the noise strength 𝛿 increases, while the performance drop is slight if 𝛿 is not too large. Hence, a moderate noise strength, e.g., 0.2 is desirable to achieve a good trade-off between model performance and privacy protection ability." 
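A minimal sketch of the local differential privacy step of Eq. (13) analyzed above, using the moderate noise strength 𝛿 = 0.2 suggested by the results; the function name is a placeholder for illustration.

```python
import torch


def ldp_protect_item_embedding(item_emb_weight, delta=0.2):
    """Eq. (13): add zero-mean Laplace(0, delta) noise to the item embedding before upload."""
    noise = torch.distributions.Laplace(0.0, delta).sample(item_emb_weight.shape)
    return item_emb_weight.detach() + noise
```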
}, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce IFedRec, the first effort that addresses the new items recommendation scenario in the federated setting.\nOur two-phase learning framework enables the learning of two item representations to protect private user interaction data while preserving item attributes on the server. The proposed item representation alignment mechanism maintains the correlations between item attributes and user preferences. Then, the cold item could be inferred by the item attribute representations learned by the server.\nExtensive experiments and in-depth analysis demonstrate the remarkable performance improvement of our model compared to state-of-the-art baselines, particularly in learning cold items. As a general cold-start recommendation framework, IFedRec can be easily combined with existing techniques to explore additional scenarios, such as recommendation diversity and fair recommendation." }, { "figure_ref": [], "heading": "A NOTATIONS", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We summarize the notations in the following Table 5." }, { "figure_ref": [], "heading": "Notation Notation Notation Descriptions Descriptions Descriptions U", "publication_ref": [], "table_ref": [], "text": "The user set 𝑛\nThe number of users I 𝑤𝑎𝑟𝑚\nThe warm item set I 𝑢\nThe interacted warm item set of user 𝑢 I -" }, { "figure_ref": [], "heading": "𝑢", "publication_ref": [], "table_ref": [], "text": "The negative warm item set of user 𝑢 𝑚\nThe number of warm items I 𝑐𝑜𝑙𝑑\nThe cold item set X 𝑤𝑎𝑟𝑚\nThe attribute matrix of warm items X 𝑐𝑜𝑙𝑑\nThe attribute matrix of cold items " }, { "figure_ref": [], "heading": "B ALGORITHMS", "publication_ref": [], "table_ref": [], "text": "We summarize the optimization procedure on the warm items set into Algorithm 1." }, { "figure_ref": [], "heading": "C DATESETS", "publication_ref": [ "b29", "b46" ], "table_ref": [ "tab_8" ], "text": "We introduce the datasets below and the detailed statistics are summarized in Table 6.\nCiteULike is collected from an article recommendation service platform, where the registered users create personal citation libraries recording interested articles. There are 5, 551 users, 16, 980 articles and 204, 986 user-article interactions in the dataset. Each article has a title and abstract, which can be utilized as the auxiliary item information. Following the preprocess procedure of [30,47], we first calculate the tf-idf to generate an 8, 000 dimension attribute vector for each item, and then utilize SVD to reduce the dimensions to 300. Hence, we obtain a 16, 980 × 300 item attribute matrix X.\nXING is collected from the ACM RecSys 2017 Challenge, which has 106, 881 users, 20, 519 items and 4, 306, 183 user-item interactions. Each item has a 2, 738-dimensional attribute. Particularly, we conduct three subsets by sampling different user population sizes, i.e., 5, 000, 10, 000 and 20, 000. The items amount of three subset are 18, 769, 20, 256 and 20, 510, and the total interactions are 191, 603, 383, 156 and 768, 471, respectively." }, { "figure_ref": [], "heading": "D BASELINES", "publication_ref": [ "b5", "b11", "b13", "b8", "b24", "b42", "b10" ], "table_ref": [], "text": "We introduce the details about baselines as follows:\n• Heater [47]: This method first pretrains a collaborative filtering model with user-item rating information to obtain user embedding and item embedding. 
Then, it trains a recommendation model based on user/item attributes by regularizing the distance between pretrained user/item embedding and learned latent user/item representation. • GAR [6]: This method presents a generative adversarial recommendation model architecture. A generator takes item attributes as input and learns the latent item representation, and a recommender takes the pretrained embeddings as input and predicts rating. The model is optimized by an adversarial loss between the generator and recommender. • CS_NCF: We replace the item embedding module of NCF [12] with a one-layer MLP to learn latent item representation with item attributes and keep other details unchanged. When new items come, each user makes recommendations with their raw attributes based on the trained model. • CS_MF: We first train MF [14] with the warm items. For the cold item, we find the top-k similar warm items by calculating the item-item attribute similarity, and then take the averaged trained top-k warm item embeddings as the cold item representation to make a prediction based on the trained user embeddings. • FedMVMF [9]: This is a matrix factorization method based on multiple data sources, i.e., user-item interaction information and item attribute information. Particularly, each user maintains the user embedding locally and other model parameters are updated on the server. • CS_FedNCF: We adapt the FedNCF [25] into cold-start setting by replacing the item embedding module with a one-layer MLP and keep other details unchanged. The cold item recommendation method is the same as used in CS_NCF. • CS_PFedRec: We first train PFedRec [43] with the warm items.\nFor the cold items, we adopt the same prediction method as in CS_MF.\n• FedVBPR: VBPR model [11] is a content enhanced recommendation model, which integrates the visual item features into the model to heighten the collaborative filtering framework. We adapt it into the federated learning framework and obtain Fed-VBPR. • FedDCN: DCN is a deep and cross network architecture, which can capture the complex interactions across multiple item features. We adapt it into the federated learning framework and obtain FedDCN." }, { "figure_ref": [], "heading": "E IMPLEMENTATION DETAILS", "publication_ref": [], "table_ref": [], "text": "For a fair comparison, we set the latent representation dimension as 200 and the mini-batch size as 256 for all methods. For the learning rate hyper-parameter, we tune it via grid search on the validation set. Besides, we resample negative items in each epoch/round and set the sampling ratio as 5 for all methods. For our method, we instantiate IFedRec with two representative FedRecSys architectures, i.e., FedNCF and PFedRec, and obtain IFedNCF and IPFedRec, respectively. For IFedNCF, we take a two-layer MLP as the rating prediction module. For IPFedRec, we set a one-layer MLP as the rating prediction module following the original paper. On the server side, we deploy a one-layer MLP as the meta attribute network, for 𝑒 from 1 to 𝐸 1 do 5:\nCompute L (𝑝 𝑡 ; 𝜙) with Eq. ( 5)\n6:\nUpdate 𝜙 𝑡 with Eq. ( 6)\n7:\nend for 8:\nCompute warm items representation 𝑟 𝑡 𝑤𝑎𝑟𝑚 with Eq. ( 4)\n9:\n𝑆 𝑡 ← (select a client subset randomly from all 𝑛 clients with sampling ratio 𝛼) for batch 𝑏 ∈ B do On the server side, we deploy a meta attribute network, which takes raw item attributes as input and learns the item latent representation with the global item embedding as supervision. 
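Putting the earlier sketches together, the round structure of Algorithm 1 (train the meta attribute network for 𝐸 1 epochs, sample a client subset with ratio 𝛼, run local updates, then aggregate the uploaded item embeddings) could be organized as below. The `clients` container with `.model` and `.interacted` attributes is a hypothetical stand-in for per-device state, and the loop is an illustration of the procedure rather than the authors' implementation; note that with partial participation the average of Eq. (3) is taken over the clients sampled in that round.

```python
import random
import torch


def train_ifedrec(meta_net, warm_attrs, clients, num_warm_items, emb_dim=200,
                  rounds=100, sample_ratio=0.1, server_epochs=1):
    """Server loop mirroring Algorithm 1, using train_meta_attribute_network,
    local_client_update and aggregate_item_embeddings from the earlier sketches."""
    global_item_emb = torch.randn(num_warm_items, emb_dim)        # initial shared item embedding

    for _ in range(rounds):
        # Server: fit M_phi on the current global item embedding (Eqs. 4-6).
        attr_repr = train_meta_attribute_network(
            meta_net, warm_attrs, global_item_emb, epochs=server_epochs)

        # Sample a subset of clients for this communication round (sampling ratio alpha).
        selected = random.sample(clients, max(1, int(sample_ratio * len(clients))))

        # Clients: receive the global item embedding, run local updates, upload item embeddings.
        uploads = []
        for client in selected:
            client.model.item_emb.weight.data.copy_(global_item_emb)
            uploads.append(local_client_update(
                client.model, client.interacted, num_warm_items, attr_repr))

        # Server: aggregate the received item embeddings (Eq. 3).
        global_item_emb = aggregate_item_embeddings(uploads)

    return meta_net, global_item_emb
```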
Particularly, we set the Meta attribute network training epochs from 1 to 10 with an interval of 1. For brevity, we report the results on Recall@20." }, { "figure_ref": [], "heading": "G CONVERGENCE WITH CLIENTS AMOUNT DETAILS", "publication_ref": [], "table_ref": [], "text": "We take the CiteULike dataset as an example to conduct experiments. In federated optimization, there is a trade-off between model convergence efficiency and the client's amount of participation in each communication round. Generally, the larger the number of clients sampled in a training round, the faster the federated model converges. In practical scenarios, due to communication overhead and client computation power limitations, the server usually can only collect a limited number of clients each time to train the model. Especially in the recommendation scenario, the number of clients is large, and it is more difficult to collect enough clients for model training, which poses a challenge for the FedRecSys to train the model with limited clients. To this end, we conduct experiments to simulate the setting. Particularly, we constrain the client sampling ratio in each communication round from 0.1 to 0.5 with an interval of 0.1." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "Chunxu Zhang and Bo Yang are supported by the National Key R&D Program of China under Grant No. 2021ZD0112500; the National Natural Science Foundation of China under Grant Nos. U22A2098, 62172185, 62206105 and 62202200. We would like to appreciate Prof. Xiangyu Zhao of City University of Hong Kong for his invaluable contributions, guidance, and support, which greatly enhanced the quality of this paper." } ]
Federated recommendation systems usually train a global model on the server without direct access to users' private data on their own devices. However, this separation of the recommendation model and users' private data poses a challenge in providing quality service, particularly when it comes to new items, namely cold-start recommendations in federated settings. This paper introduces a novel method called Item-aligned Federated Aggregation (IFedRec) to address this challenge. It is the first research work in federated recommendation to specifically study the cold-start scenario. The proposed method learns two sets of item representations by leveraging item attributes and interaction records simultaneously. Additionally, an item representation alignment mechanism is designed to align the two item representations and learn the meta attribute network on the server within a federated learning framework. Experiments on four benchmark datasets demonstrate IFedRec's superior performance in cold-start scenarios. Furthermore, we also verify that IFedRec maintains good robustness when the system faces limited client participation.
When Federated Recommendation Meets Cold-Start Problem: Separating Item Attributes and User Interactions
[ { "figure_caption": "Figure 1 :1Figure 1: Three cold-start recommendation systems comparison. The centralized method (a) saves raw item attributes on the server but exposes private user interaction records. Traditional FedRecSys (b) secures the interaction records but exposes the item attributes to the clients. Our IFedRec (c) can protect these two types of security-sensitive information.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "records Y 𝑤𝑎𝑟𝑚 𝑢 , federated cold-start recommendation aims to learn a recommendation model F 𝜃 with optimization objective,", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Impact of the regularization coefficient. The horizontal axis is the value of the regularization coefficient 𝜆, and the vertical axis is the Recall metric.", "figure_data": "", "figure_id": "fig_2", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "12 :12Compute 𝐿 𝑡𝑜𝑡𝑎𝑙 with Eq.(10) 13:Update (𝑝 𝑢 , 𝑞 𝑢 , 𝑠 𝑢 ) with Eq.(11) 14:end for 15: end for 16: Return 𝑝 𝑢 to server whose input dimension is the same as the item attribute size and the out dimension is 200. Notably, two centralized baselines Heater and GAR require the pretrained collaborative filtering representations as model input. Hence, we train a matrix factorization model with a latent factor of 200. We report the average results of five repetitions for all experiments.", "figure_data": "", "figure_id": "fig_3", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "FHYPER-PARAMETER ANALYSIS DETAILS F.1 Regularization term coefficient 𝜆During the training phase, we add the item attribute semantic representation as the regularization term of the local recommendation model by minimizing the distance between it and the local item embedding. According to the validation set performance, we set the regularization term coefficient values on IFedNCF with {0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 5.0, 10.0}, and on IPFedRec with {0.1, 0.5, 1.0, 5.0, 10.0, 15.0, 20.0, 30.0}. For conciseness, we only report the results on metric Recall because the patterns on the other two metrics show similar results as Recall.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "F. 
22Meta attribute network training epoch 𝐸 1", "figure_data": "", "figure_id": "fig_5", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "ServerItem EmbeddingMetaGlobalItemWarm item attributeAttribute Network𝓛 !\"#$%\"Item EmbeddingEmbedding … Item EmbeddingMeta Attribute NetworkItemEmbedding𝓛 &#&%\"② Recommendation on the client𝓡Rating Prediction…Item EmbeddingUser User id EmbeddingEmbeddingUserPredictionRating", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Experimental results of the federated baselines and our method.", "figure_data": "Recall6.1814.3424.971.962.703.810.962.614.500.772.424.59FedMVMFPrecision1.571.481.300.770.460.360.520.550.480.510.660.62NDCG5.5510.0414.422.601.781.981.021.852.430.651.502.39Recall1.493.837.210.222.373.150.440.771.510.161.211.72CS_FedNCFPrecision0.370.390.360.140.410.290.240.170.170.100.330.23NDCG1.763.164.380.260.961.200.370.420.670.160.670.93Recall1.372.664.670.130.541.540.292.102.420.161.211.72CS_PFedRecPrecision0.330.250.240.090.130.180.190.440.260.190.330.23NDCG1.401.922.540.150.340.930.351.020.990.160.670.93Recall18.7329.8839.552.033.023.630.420.821.260.401.351.86FedVBPRPrecision3.752.461.660.780.560.360.240.190.140.270.360.24NDCG13.2416.0717.910.951.371.410.350.480.570.320.740.98Recall1.423.576.590.320.651.140.430.831.520.240.801.43FedDCNPrecision0.350.380.350.170.150.130.220.190.170.140.180.16NDCG1.102.443.600.270.460.660.510.460.660.210.460.64Recall42.32 59.92 72.89 23.48 42.05 55.4526.97 41.57 55.37 26.36 41.44 54.48IFedNCFPrecision9.705.803.6513.669.556.3714.389.026.0616.25 10.236.75OursNDCG Recall34.29 41.5137.61 59.6338.74 72.7120.93 27.41 29.46 21.77 37.30 53.1821.65 24.66 27.02 21.99 25.30 27.22 25.92 40.33 54.64 24.67 40.07 53.58IPFedRecPrecision9.485.813.6712.758.766.1213.848.775.9715.299.926.66NDCG33.4837.69 39.0719.7424.7728.3420.6623.9026.5220.5324.4926.91", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results of the centralized baselines and our method. \"CenRec\" denotes the centralized baseline sand \"Ours\" represents that we integrate two state-of-the-art federated models into our framework. 
The best results are bold.", "figure_data": "Recall37.1755.1368.5214.5116.0918.1616.6019.4822.4816.9419.7422.55HeaterPrecision8.925.503.525.702.691.608.734.192.448.864.212.43NDCG31.3635.9537.688.977.787.4814.0012.0611.1313.1811.3210.66Recall5.458.8113.071.443.225.490.743.386.160.852.876.11GARPrecision1.420.910.660.690.550.460.370.670.620.450.510.39NDCG3.434.425.480.891.572.320.801.862.870.852.022.97Recall29.4146.4361.8518.4232.0345.1921.8035.2647.5419.6533.0045.98CS_NCFPrecision7.064.703.1810.767.495.2811.727.685.2212.098.095.65NDCG24.9330.5233.7116.3820.8523.8817.5820.9823.3215.9119.7622.72Recall1.012.304.320.481.041.990.360.891.780.410.931.73CS_MFPrecision0.250.240.230.240.220.220.200.320.400.260.240.22NDCG0.871.622.590.360.590.970.300.540.880.350.580.87Recall42.32 59.92 72.89 23.48 42.05 55.4526.97 41.57 55.37 26.36 41.44 54.48IFedNCFPrecision9.705.803.6513.669.556.3714.389.026.0616.25 10.236.75OursNDCG Recall34.29 41.5137.61 59.6338.74 72.7120.93 27.41 29.46 21.77 37.30 53.1821.65 24.66 27.02 21.99 25.30 27.22 25.92 40.33 54.64 24.67 40.07 53.58IPFedRecPrecision9.485.813.6712.758.766.1213.848.775.9715.299.926.66NDCG33.4837.69 39.0719.7424.7728.3420.6623.9026.5220.5324.4926.91MethodsCiteULike Recall Precision NDCG Recall Precision NDCG Recall Precision NDCG Recall Precision NDCG XING-5000 XING-10000 XING-20000IFedNCF42.329.7034.29 23.4813.6620.9326.9714.3821.65 26.3616.2521.99w/ LAN38.739.0131.601.590.670.850.860.470.791.540.911.35w/o ISAM0.850.220.790.550.170.250.320.190.270.250.150.20IPFedRec41.519.4833.48 21.7712.7519.7425.9213.8420.66 24.6715.2920.53w/ LAN38.738.9331.272.000.770.950.580.360.530.180.100.12w/o ISAM1.050.261.030.270.150.210.420.240.380.460.270.39", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of privacy-protection IFedRec with variousLaplacian noise strength 𝝀.", "figure_data": "5HFDOO#5HFDOO#5RXQGV3UHFLVLRQ# 1'&*#3HUIRUPDQFH5RXQGV3UHFLVLRQ# 1'&*#3HUIRUPDQFHD,)HG1&)E,3)HG5HFFigure 5: Convergence analysis about the client amount par-ticipated in each communication round. The horizontal axisis the client sampling ratio, and the left vertical axis is thenumber of communication rounds, the right vertical axis ismodel performance on three metrics.Methods Metrics00.1Noise strength 𝛿 0.2 0.30.40.5Recall42.32 41.81 41.87 41.23 41.09 40.84IFedNCFPrecision 9.709.669.599.329.088.79NDCG34.29 33.83 33.62 33.15 33.16 32.86Recall41.51 41.10 40.48 40.11 40.57 39.52IPFedRecPrecision 9.489.499.319.499.459.03NDCG33.48 33.50 33.30 32.68 32.13 31.49", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The loss function of meta attribute network L 𝑢 (Y 𝑢𝑣 ; 𝜃 𝑢 )The loss function of user 𝑢's recommendation model L 𝑡𝑜𝑡𝑎𝑙 The overall loss function of user 𝑢's recommendation model with regularizer Notation table.", "figure_data": "Y 𝑤𝑎𝑟𝑚 𝑢The user 𝑢's interaction records on warm itemsY 𝑐𝑜𝑙𝑑 𝑢The user 𝑢's interaction records on cold itemsY 𝑢𝑣The model prediction of user 𝑢 about item 𝑣F 𝜃The federated cold-start recommendation modelP, Q, S The item embedding module, user embedding module,and rating prediction module𝜃 =(𝑝, 𝑞, 𝑠)Model parametersM 𝜙The meta attribute network𝑟 𝑣The attribution representation of item 𝑣L (𝑝; 𝜙)", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Statistics of four cold-start recommendation datasets. 
The items are divided into three subsets, where items in the training set are warm items and others are cold items. Algorithm 1 Item-Guided Federated Aggregation for Cold-Start Recommendation -Learning on the Warm Items ServerExecute: 1: Initialize item embedding module parameter 2: Initialize meta attribute network parameter 3: for each round 𝑡 = 1, 2, ... do", "figure_data": "4:", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "for each client 𝑢 ∈ 𝑆 𝑡 in parallel do𝑝 𝑡 +1 𝑢 ← ClientUpdate(𝑡, 𝑢, 𝑝 𝑡 , 𝑟 𝑡 𝑤𝑎𝑟𝑚 )Initialize 𝑞 𝑢 and 𝑠 𝑢 with the latest updates 7: Count all uninteracted items set I - 𝑢 with Eq. (9) 8: Sample negative feedback 𝐷 - 𝑢 from I - 𝑖 9: B ← (split 𝐷 𝑢 ∪ 𝐷 - 𝑢 into batches of size 𝐵) 10: for 𝑒 from 1 to 𝐸 2 do", "figure_data": "11:12:end for13:3:Initialize user embedding module parameter 𝑞 𝑢4:Initialize rating prediction module parameter 𝑠 𝑢5: else6:11:", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" } ]
Chunxu Zhang; Guodong Long; Tianyi Zhou; Zijian Zhang; Peng Yan; Bo Yang
[ { "authors": "Fabian Abel; Yashar Deldjoo; Mehdi Elahi; Daniel Kohlsdorf", "journal": "", "ref_id": "b0", "title": "Recsys challenge 2017: Offline and online evaluation", "year": "2017" }, { "authors": "Abbas Acar; Hidayet Aksu; Mauro Selcuk Uluagac; Conti", "journal": "ACM Computing Surveys (Csur)", "ref_id": "b1", "title": "A survey on homomorphic encryption schemes: Theory and implementation", "year": "2018" }, { "authors": "Muhammad Ammad-Ud-Din; Elena Ivannikova; A Suleiman; Were Khan; Qiang Oyomno; Kuan Fu; Adrian Eeik Tan; Flanagan", "journal": "", "ref_id": "b2", "title": "Federated collaborative filtering for privacy-preserving personalized recommendation system", "year": "2019" }, { "authors": "Fahad Anwaar; Naima Iltaf; Hammad Afzal; Raheel Nawaz", "journal": "Journal of computational science", "ref_id": "b3", "title": "HRS-CE: A hybrid framework to integrate content embeddings in recommender systems for cold start items", "year": "2018" }, { "authors": "Di Chai; Leye Wang; Kai Chen; Qiang Yang", "journal": "IEEE Intelligent Systems", "ref_id": "b4", "title": "Secure federated matrix factorization", "year": "2020" }, { "authors": "Zefan Hao Chen; Feiran Wang; Xiao Huang; Yue Huang; Yishi Xu; Peng Lin; Zhoujun He; Li", "journal": "", "ref_id": "b5", "title": "Generative adversarial framework for cold-start item recommendation", "year": "2022" }, { "authors": "Woo-Seok Choi; Matthew Tomei; Jose ; Rodrigo Sanchez Vicarte; Pavan Kumar Hanumolu; Rakesh Kumar", "journal": "IEEE", "ref_id": "b6", "title": "Guaranteeing local differential privacy on ultra-low-power systems", "year": "2018" }, { "authors": "Yongjie Du; Deyun Zhou; Yu Xie; Jiao Shi; Maoguo Gong", "journal": "Applied Soft Computing", "ref_id": "b7", "title": "Federated matrix factorization for privacy-preserving recommender systems", "year": "2021" }, { "authors": "Adrian Flanagan; Were Oyomno; Alexander Grigorievskiy; Suleiman A Kuan E Tan; Muhammad Khan; Ammad-Ud-Din", "journal": "Springer", "ref_id": "b8", "title": "Federated multi-view matrix factorization for personalized recommendations", "year": "2020" }, { "authors": "Wenjing Fu; Zhaohui Peng; Senzhang Wang; Yang Xu; Jin Li", "journal": "", "ref_id": "b9", "title": "Deeply fusing reviews and contents for cold start users in cross-domain recommendation systems", "year": "2019" }, { "authors": "Ruining He; Julian Mcauley", "journal": "", "ref_id": "b10", "title": "VBPR: visual bayesian personalized ranking from implicit feedback", "year": "2016" }, { "authors": "Xiangnan He; Lizi Liao; Hanwang Zhang; Liqiang Nie; Xia Hu; Tat-Seng Chua", "journal": "", "ref_id": "b11", "title": "Neural collaborative filtering", "year": "2017" }, { "authors": "Mubashir Imran; Hongzhi Yin; Tong Chen; Quoc Viet; Hung Nguyen; Alexander Zhou; Kai Zheng", "journal": "ACM Transactions on Information Systems", "ref_id": "b12", "title": "ReFRS: Resource-efficient federated recommender system for dynamic and diversified user preferences", "year": "2023" }, { "authors": "Yehuda Koren; Robert Bell; Chris Volinsky", "journal": "Computer", "ref_id": "b13", "title": "Matrix factorization techniques for recommender systems", "year": "2009" }, { "authors": "Hoyeop Lee; Jinbae Im; Seongwon Jang; Hyunsouk Cho; Sehee Chung", "journal": "", "ref_id": "b14", "title": "Melu: Meta-learned user preference estimator for cold-start recommendation", "year": "2019" }, { "authors": "Shuangtong Li; Tianyi Zhou; Xinmei Tian; Dacheng Tao", "journal": "", "ref_id": "b15", "title": "Learning to collaborate in 
decentralized learning of personalized models", "year": "2022" }, { "authors": "Zhiwei Li; Guodong Long; Tianyi Zhou", "journal": "", "ref_id": "b16", "title": "Federated Recommendation with Additive Personalization", "year": "2023" }, { "authors": "Zhuoran Liu; Martha Larson", "journal": "", "ref_id": "b17", "title": "Adversarial item promotion: Vulnerabilities at the core of top-n recommenders that use images to address cold start", "year": "2021" }, { "authors": "Zhiwei Liu; Liangwei Yang; Ziwei Fan; Hao Peng; Philip S Yu", "journal": "ACM Transactions on Intelligent Systems and Technology (TIST)", "ref_id": "b18", "title": "Federated social recommendation with graph neural network", "year": "2022" }, { "authors": "Guodong Long; Ming Xie; Tao Shen; Tianyi Zhou; Xianzhi Wang; Jing Jiang", "journal": "World Wide Web", "ref_id": "b19", "title": "Multi-center federated learning: clients clustering for better personalization", "year": "2023" }, { "authors": "Jie Ma; Tianyi Zhou; Guodong Long; Jing Jiang; Chengqi Zhang", "journal": "", "ref_id": "b20", "title": "Structured Federated Learning through Clustered Additive Modeling", "year": "2023" }, { "authors": "Brendan Mcmahan; Eider Moore; Daniel Ramage; Seth Hampson; Blaise Aguera Y Arcas", "journal": "PMLR", "ref_id": "b21", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "Khalil Muhammad; Qinqin Wang; O' Diarmuid; Elias Reilly-Morgan; Barry Tragos; Neil Smyth; James Hurley; Aonghus Geraci; Lawlor", "journal": "", "ref_id": "b22", "title": "Fedfast: Going beyond average for faster training of federated recommender systems", "year": "2020" }, { "authors": "Adam Paszke; Sam Gross; Soumith Chintala; Gregory Chanan; Edward Yang; Zachary Devito; Zeming Lin; Alban Desmaison; Luca Antiga; Adam Lerer", "journal": "", "ref_id": "b23", "title": "Automatic differentiation in pytorch", "year": "2017" }, { "authors": "Vasileios Perifanis; Pavlos S Efraimidis", "journal": "Knowledge-Based Systems", "ref_id": "b24", "title": "Federated neural collaborative filtering", "year": "2022" }, { "authors": "Liang Qu; Ningzhi Tang; Ruiqi Zheng; Quoc Viet; Hung Nguyen; Zi Huang; Yuhui Shi; Hongzhi Yin", "journal": "", "ref_id": "b25", "title": "Semi-decentralized Federated Ego Graph Learning for Recommendation", "year": "2023" }, { "authors": "Karan Singhal; Hakim Sidahmed; Zachary Garrett; Shanshan Wu; John Rush; Sushant Prakash", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Federated reconstruction: Partially local federated learning", "year": "2021" }, { "authors": "Yue Tan; Yixin Liu; Guodong Long; Jing Jiang; Qinghua Lu; Chengqi Zhang", "journal": "", "ref_id": "b27", "title": "Federated learning on non-iid graphs via structural knowledge sharing", "year": "2023" }, { "authors": "Paul Voigt; Axel Von Dem Bussche", "journal": "Springer International Publishing", "ref_id": "b28", "title": "The eu general data protection regulation (gdpr). 
A Practical Guide", "year": "2017" }, { "authors": "Maksims Volkovs; Guangwei Yu; Tomi Poutanen", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Dropoutnet: Addressing cold start in recommender systems", "year": "2017" }, { "authors": "Maksims Volkovs; Guang Wei Yu; Tomi Poutanen", "journal": "", "ref_id": "b30", "title": "Content-based neighbor models for cold start in recommender systems", "year": "2017" }, { "authors": "Abdel Omar; Gaith Wahab; Jamal Rjoub; Robin Bentahar; Cohen", "journal": "Information Sciences", "ref_id": "b31", "title": "Federated against the cold: A trust-based federated learning approach to counter the cold start problem in recommendation systems", "year": "2022" }, { "authors": "Chong Wang; David M Blei", "journal": "", "ref_id": "b32", "title": "Collaborative topic modeling for recommending scientific articles", "year": "2011" }, { "authors": "Qinyong Wang; Hongzhi Yin; Tong Chen; Junliang Yu; Alexander Zhou; Xiangliang Zhang", "journal": "The VLDB Journal", "ref_id": "b33", "title": "Fast-adapting and privacy-preserving federated recommender system", "year": "2021" }, { "authors": "Ruoxi Wang; Bin Fu; Gang Fu; Mingliang Wang", "journal": "", "ref_id": "b34", "title": "Deep & cross network for ad click predictions", "year": "2017" }, { "authors": "Jian Wei; Jianhua He; Kai Chen; Yi Zhou; Zuoyin Tang", "journal": "Expert Systems with Applications", "ref_id": "b35", "title": "Collaborative filtering and deep learning based recommendation system for cold start items", "year": "2017" }, { "authors": "Yinwei Wei; Xiang Wang; Qi Li; Liqiang Nie; Yan Li; Xuanping Li; Tat-Seng Chua", "journal": "", "ref_id": "b36", "title": "Contrastive learning for cold-start recommendation", "year": "2021" }, { "authors": "Chuhan Wu; Fangzhao Wu; Lingjuan Lyu; Tao Qi; Yongfeng Huang; Xing Xie", "journal": "Nature Communications", "ref_id": "b37", "title": "A federated graph neural network framework for privacy-preserving personalization", "year": "2022" }, { "authors": "Peng Yan; Guodong Long", "journal": "", "ref_id": "b38", "title": "Personalization Disentanglement for Federated Learning", "year": "2023" }, { "authors": "Wei Yuan; Chaoqun Yang; Quoc Viet; Hung Nguyen; Lizhen Cui; Tieke He; Hongzhi Yin", "journal": "", "ref_id": "b39", "title": "Interaction-level membership inference attack against federated recommender systems", "year": "2023" }, { "authors": "Wei Yuan; Hongzhi Yin; Fangzhao Wu; Shijie Zhang; Tieke He; Hao Wang", "journal": "", "ref_id": "b40", "title": "Federated unlearning for on-device recommendation", "year": "2023" }, { "authors": "Chunxu Zhang; Guodong Long; Tianyi Zhou; Peng Yan; Zijjian Zhang; Bo Yang", "journal": "", "ref_id": "b41", "title": "Graph-guided Personalization for Federated Recommendation", "year": "2023" }, { "authors": "Chunxu Zhang; Guodong Long; Tianyi Zhou; Peng Yan; Zijian Zhang; Chengqi Zhang; Bo Yang", "journal": "", "ref_id": "b42", "title": "Dual Personalization on Federated Recommendation", "year": "2023" }, { "authors": "Shuai Zhang; Lina Yao; Aixin Sun; Yi Tay", "journal": "ACM Comput. 
Surv", "ref_id": "b43", "title": "Deep Learning Based Recommender System: A Survey and New Perspectives", "year": "2019-02" }, { "authors": "Shijie Zhang; Hongzhi Yin; Tong Chen; Zi Huang; Quoc Viet; Hung Nguyen; Lizhen Cui", "journal": "", "ref_id": "b44", "title": "Pipattack: Poisoning federated recommender systems for manipulating item promotion", "year": "2022" }, { "authors": "Zhihui Zhou; Lilin Zhang; Ning Yang", "journal": "ACM", "ref_id": "b45", "title": "Contrastive Collaborative Filtering for Cold-Start Item Recommendation", "year": "2023-04-30" }, { "authors": "Ziwei Zhu; Shahin Sefati; Parsa Saadatpanah; James Caverlee", "journal": "", "ref_id": "b46", "title": "Recommendation for new users and new items via randomized training and mixtureof-experts transformation", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 132.93, 194.38, 333.24, 112.69 ], "formula_id": "formula_0", "formula_text": "𝓛 \"#'%\" ⊕ Client Warm item id 𝓨 # 𝓨 … # 𝓨" }, { "formula_coordinates": [ 3, 402.64, 501.67, 156.1, 9.16 ], "formula_id": "formula_2", "formula_text": "Y 𝑐𝑜𝑙𝑑 𝑢 = F 𝜃 (X 𝑐𝑜𝑙𝑑 )(2)" }, { "formula_coordinates": [ 4, 149.57, 509.7, 145.02, 24.75 ], "formula_id": "formula_3", "formula_text": "𝑝 := 1 𝑛 𝑛 ∑︁ 𝑖=1 𝑝 𝑢(3)" }, { "formula_coordinates": [ 4, 412.43, 102.87, 146.31, 8.43 ], "formula_id": "formula_4", "formula_text": "𝑟 𝑣 := M 𝜙 (𝑥 𝑣 ) (4)" }, { "formula_coordinates": [ 4, 384.5, 241.99, 171.07, 24.75 ], "formula_id": "formula_5", "formula_text": "L (𝑝; 𝜙) := 1 𝑚 𝑚 ∑︁ 𝑣=1 (𝑟 𝑣 -𝑝 (𝑣)) 2 (5" }, { "formula_coordinates": [ 4, 555.57, 250.27, 3.17, 7.94 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 4, 387.93, 330.35, 170.81, 11.24 ], "formula_id": "formula_7", "formula_text": "𝜙 𝑡 := 𝜙 𝑡 -1 -𝛾 𝜕 𝜙 𝑡 -1 L (𝑝; 𝜙)(6)" }, { "formula_coordinates": [ 4, 405.42, 404.04, 153.32, 8.43 ], "formula_id": "formula_8", "formula_text": "Y 𝑢𝑣 := S(P 𝑣 , Q 𝑢 )(7)" }, { "formula_coordinates": [ 4, 324.89, 487.88, 233.85, 22.42 ], "formula_id": "formula_9", "formula_text": "L 𝑢 (Y 𝑢𝑣 ; 𝜃 𝑢 ) := - ∑︁ (𝑢,𝑣) ∈𝐷 𝑢 log Ŷ𝑢𝑣 - ∑︁ (𝑢,𝑣 ′ ) ∈𝐷 - 𝑢 log(1 -Ŷ𝑢𝑣 ′ ) (8)" }, { "formula_coordinates": [ 4, 406.32, 562.43, 152.42, 11.14 ], "formula_id": "formula_10", "formula_text": "I - 𝑢 := I 𝑤𝑎𝑟𝑚 \\I 𝑢(9)" }, { "formula_coordinates": [ 5, 111.16, 124.8, 183.42, 8.43 ], "formula_id": "formula_11", "formula_text": "L 𝑡𝑜𝑡𝑎𝑙 := L 𝑢 (Y 𝑢𝑣 ; 𝜃 𝑢 ) + 𝜆R (𝑝 𝑢 , 𝑟 )(10)" }, { "formula_coordinates": [ 5, 92.85, 263.35, 201.74, 29.6 ], "formula_id": "formula_12", "formula_text": "(𝑞 𝑡 𝑢 , 𝑠 𝑡 𝑢 ) := (𝑞 𝑡 -1 𝑢 , 𝑠 𝑡 -1 𝑢 ) -𝜂 1 𝜕 (𝑞 𝑡 -1 𝑢 ,𝑠 𝑡 -1 𝑢 ) L 𝑡𝑜𝑡𝑎𝑙 𝑝 𝑡 𝑢 := 𝑝 𝑡 -1 𝑢 -𝜂 2 𝜕 𝑝 𝑡 -1 𝑢 L 𝑡𝑜𝑡𝑎𝑙(11)" }, { "formula_coordinates": [ 5, 77.3, 300.96, 4.39, 4.02 ], "formula_id": "formula_13", "formula_text": "𝜂" }, { "formula_coordinates": [ 5, 394.98, 176.02, 163.76, 8.43 ], "formula_id": "formula_15", "formula_text": "𝑝 𝑢 = 𝑝 𝑢 + 𝐿𝑎𝑝𝑙𝑎𝑐𝑒 (0, 𝛿)(13)" } ]
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b13", "b52", "b45", "b41" ], "table_ref": [], "text": "Video object segmentation (VOS) is a crucial task [1; 25; 24; 55; 50; 49] in computer vision that involves segmenting primary objects in a video sequence. This task has numerous applications, such as video editing, autonomous driving and robotics. VOS tasks can be broadly classified into semi-supervised VOS [25; 36; 4; 43; 5; 27] and unsupervised VOS [49; 20; 7; 16; 28; 35; 48; 52]. Semi-supervised VOS utilizes the segmentation mask of the first frame during inference to initialize the model, with the aim of tracking and segmenting specific objects throughout the entire sequence. In contrast, unsupervised VOS relies on the model to discover and extract the masks of the most salient objects without any prior information. Unsupervised VOS is gaining popularity due to its ability to automatically perform segmentation without the need for manual annotation, making it suitable for real-time applications. However, current state-of-the-art unsupervised VOS methods [7; 16; 28; 22; 54; 19] still require a large-scale manually annotated video dataset [29; 44; 31; 9] for training. The annotation of video object segmentation is prohibitively expensive, which needs to provide both the mask and trajectory. Even annotating masks based on coarse polygons takes several times longer than annotating video bounding boxes [6]. The high cost of annotating masks makes it challenging to scale existing VOS. Hence, a mask-free setting [14] may be necessary to handle this task more efficiently. boxes and points. SAM's superior performance in this area has gained widespread recognition. Naturally, we consider the following question: Can we employ SAM in unsupervised VOS by providing simple prompts to liberate the labor-expensive video annotation. This approach could potentially make this task more accessible and cost-effective, while still achieving high-quality results.\nThe main challenge is that SAM inherently loses the capability to discover specific instances [53] and associate the same identities over the sequence [46]. A simple way is manually locating the same target object in different poses frame by frame, and then activate SAM with prompt for segmentation. Although good performance can be achieved in this manner, it is not only time-consuming and labor-intensive, but also deviates from the core principles of unsupervised video object segmentation. We therefore revisit the need for automatically generating video prompts on SAM.\nTo this end, we propose UVOSAM, a mask-free approach based on SAM for unsupervised video object segmentation. The proposed UVOSAM consists of SAM and a novel video salient object tracking (VSOT) model, which inherits the most architecture of an advanced video instance segmentation methods IDOL [42] while removing the mask prediction branch. This approach provides more discriminative object features with better temporal consistency and guarantees more accurate association results. More specifically, VSOT is applied firstly to discovery the salient objects and provide the complete trajectories of them by leveraging both the spatial and temporal correspondence. Then, SAM is employed to obtain precise object mask results frame by frame. Our main contributions can be concluded as follows: 1) we first extend the SAM applications to the unsupervised VOS. 
Rather than using SAM to directly process each video frame, we propose a new pipeline which first detects salient objects and generates stable trajectories to prompt the SAM. 2) Our proposed method shows superior performance in complex video scenes and outperforms current mask-supervised unsupervised VOS methods by a large margin." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Unsupervised Video Object Segmentation", "publication_ref": [ "b36", "b55", "b34", "b47", "b21", "b53" ], "table_ref": [], "text": "Unsupervised VOS aims to segment class-agnostic salient objects without any manual guidance. Early works [11; 37; 45] consider that the pixels of objects share same motion patterns across consecutive frames and utilize motion cues captured by optical flow to guide segmentation process. For example, MP-Net [37] based on encoder-decoder architecture takes the optical flow as input for moving object segmentation. One limitation to these methods is that their performances are heavily affected by the quality of optical flow, especially in occlusion and fast motion scenarios. Some works [56; 52; 35; 48; 7] also learn to attend to appearance features of objects. MATNet [56] employs the MAT-block to transform appearance features into motion-attentive representations. RTNet [35] reciprocally transforms the appearance features and the motion features to discover primary objects. AMC-Net [48] introduces multi-modality co-attention gate to promote deep collaboration of appearance and motion information for accurate segmentation. By combining motion cues and appearance cues, these methods are able to capture prominent regions of salient objects and achieve more accurate segmentation.\nFor multiple objects segmentation, most methods employ the track-by-detect paradigm where object proposals are generated by instance segmentation models and then tracked by re-identification models. UnOVOST [22] first groups segments into short tracklets and then merges them into long-term object tracks based on their similarity. Target-Aware [54] employs a target-aware adaptive tracking framework to associate proposals across frames, achieving more robust matching results. Different from these methods which require annotation-expensive segmentation datasets for training, our maskfree UVOSAM achieves accurate segmentation by supplying simple prompts to vision foundation models, making the challenging Unsupervised VOS more accessible." }, { "figure_ref": [], "heading": "Vision Foundation Models", "publication_ref": [ "b50", "b7", "b37", "b31", "b39", "b14", "b38", "b56" ], "table_ref": [], "text": "Recently, foundation models have arisen significant interest for their impressive generalization capability on various downstream tasks. In natural language processing, the milestone works are GPT series [33; 34; 3], which demonstrate strong ability on many language generation tasks. Motivated by this, some models were proposed to further explore the potential of zero-shot generalization, including GLM [51], PaLM [8], LLaMA [38] and so on.\nIn the field of computer vision, research of foundation models are still in its infancy. The pioneering work CLIP [32], trained on web-scale image-text pairs, exhibits promising zero-shot capability and is widely adopted in many multi-modal tasks for feature alignment. 
BLIP [18; 17] performs multi-modal pre-training with a dataset bootstrapped from web-scale noisy data, facilitating zeroshot performance on text-to-video retrieval and VQA. To build a universal detection framework, UniDetector [40] utilizes images of multiple sources and heterogeneous label spaces for training, whose zero-shot performance surpasses the supervised baselines with large margins. More recently, SAM [15], the first promptable foundation model for segmentation tasks, was released and has gained massive attention for its ability to segment any object in any scenarios with various prompts, such as box, point and so on. Concurrent and share the similar idea with SAM, SegGPT [39] and SEEM [57] were proposed as generalist models to perform arbitrary segmentation tasks with different prompt types. Our method focuses on employing vision foundation models to perform mask-free Unsupervised VOS, enhancing segmentation performance in a more efficient way." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "We introduce a novel paradigm, termed as UVOSAM, which is based on SAM for unsupervised video object segmentation without requiring any video mask labels. More specifically, we first train a video salient object tracking network in a mask-free setting, which is employed to generate the trajectories of the targets automatically. The trajectories serve as prompts to SAM, which produce video masks on a frame-by-frame basis. In Section 3.1, we first revisit SAM, followed by the methodology of the video salient object tracking network in Section 3.2. Lastly, in Section 3.3, we present the overall pipeline of UVOSAM." }, { "figure_ref": [], "heading": "A revisit of SAM", "publication_ref": [], "table_ref": [], "text": "SAM is a fundamental model that has been trained using more than one billion masks and showcases remarkable zero-shot proficiency. Its main components comprise an image encoder (i.e. E i ), a prompt encoder (i.e. E p ), and a lightweight mask decoder (i.e. D m ). To be more precise, SAM takes an image I ∈ R H×W ×3 and its corresponding prompts P (e.g., points, bounding boxes, text, or masks) as inputs. These two stream inputs are then embedded by the image encoder and prompt encoder, respectively, which can be formulated as follows:\nF i = E i (I) ∈ R h×w×c F p = E p (P ) ∈ R k×c(1)\nwhere h, w and c denote the height, width and channel number of the feature map, and k represents the length of the prompt tokens. After that F i and F p are sent to D m , there are several learnable tokens F t concatenated to F p as prefix for flexible prompting , these two embeddings are then interacted via cross-attention and the zero-shot mask is obtained by decoding the interactive features, which can be described as:\nM = D m ( Attn(F i , concat(F p , F t )) ) ∈ R H×W (2)" }, { "figure_ref": [], "heading": "VSOT", "publication_ref": [], "table_ref": [], "text": "Our VSOT builds upon the advancements of the VIS method, IDOL. As an online VIS method, IDOL leverages an additional association head to instance segmentation models. By employing contrastive loss, it can extract ReID features that allow for accurate instance association. Since our VSOT does not require mask prediction, we have discarded the original mask head. VSOT focuses on two main components: salient object detection and object association.\nSalient object detection. VSOT employs DeformableDETR as the fundamental detector. 
, { "figure_ref": [], "heading": "VSOT", "publication_ref": [], "table_ref": [], "text": "Our VSOT builds upon the advancements of the VIS method IDOL. As an online VIS method, IDOL adds an association head on top of an instance segmentation model. By employing a contrastive loss, it can extract ReID features that allow for accurate instance association. Since our VSOT does not require mask prediction, we discard the original mask head. VSOT focuses on two main components: salient object detection and object association.
Salient object detection. VSOT employs Deformable DETR as the fundamental detector. Specifically, a CNN backbone is first utilized to extract multi-scale feature maps from the input frame of a video. These feature maps, along with fixed positional encodings and N learnable object queries, are then provided to the Deformable DETR module. The object queries are transformed into output embeddings by the transformer decoder, which are then decoded by a 3-layer feed-forward network into box coordinates and class labels. We only generate a binary category result to separate the foreground objects from the background.
Object Association. To distinguish objects across multiple frames, VSOT utilizes contrastive learning between frames to learn distinct representations. Furthermore, for each key frame, it dynamically selects the most relevant positive and negative examples from the reference frame to minimize false positives and further enhance the quality of the embedding. During inference, VSOT utilizes a memory-bank-based association strategy to improve the association quality. Assuming N predicted objects with N ReID embeddings {d_i^T}_{i=1}^N ∈ R^C and M trajectories, each containing a group of T-1 embeddings {d_j^t} (j = 1, ..., M; t = 1, ..., T-1) from the previous T-1 frames, the association of frame T is established as follows. To obtain a global embedding for each trajectory, a temporally weighted softmax score is first utilized, which can be defined as:
e_j = ( Σ_{t=1}^{T-1} d_j^t · (τ + (T-1)/t) ) / ( Σ_{t=1}^{T-1} (τ + (T-1)/t) )    (3)
where τ represents the time constant. The similarity is then computed between {d_i^T}_{i=1}^N and e_j to perform the association." }
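As a concrete reading of Eq. (3) and the matching step that follows it, the sketch below aggregates a trajectory's stored ReID embeddings with the temporal weights τ + (T-1)/t and scores current detections against trajectories by cosine similarity. The greedy argmax assignment is a simplification for illustration; the exact association rule of VSOT/IDOL (memory bank handling, bi-directional matching, etc.) is not reproduced here.

```python
import numpy as np

def trajectory_embedding(d_j: np.ndarray, tau: float) -> np.ndarray:
    """Temporally weighted aggregation of one trajectory's embeddings, Eq. (3).

    d_j: (T-1, C) ReID embeddings d_j^t collected from frames t = 1 .. T-1.
    """
    T = d_j.shape[0] + 1
    t = np.arange(1, T, dtype=np.float64)        # t = 1 .. T-1
    w = tau + (T - 1) / t                        # per-frame weights from Eq. (3)
    return (d_j * w[:, None]).sum(axis=0) / w.sum()

def associate(det_embs: np.ndarray, traj_embs: np.ndarray):
    """Cosine similarity between N detections (N, C) and M trajectory embeddings (M, C)."""
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    e = traj_embs / np.linalg.norm(traj_embs, axis=1, keepdims=True)
    sim = d @ e.T                                # (N, M) similarity matrix
    return sim.argmax(axis=1), sim.max(axis=1)   # matched trajectory id and its score
```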
, { "figure_ref": [], "heading": "The overall pipeline of the proposed UVOSAM", "publication_ref": [], "table_ref": [], "text": "Figure 1: The overall illustration of our proposed UVOSAM, mainly consisting of VSOT and SAM. VSOT is trained using only bounding box annotations to generate the trajectories of video salient objects, which then serve as the prompts for SAM. By freezing the entire SAM, we aim to retain its valuable pre-trained knowledge while also ensuring that our pipeline achieves mask-free training for unsupervised VOS approaches.
Figure 1 illustrates the proposed UVOSAM approach for unsupervised video object segmentation, which generates high-quality masks without the need for manual annotations. UVOSAM is a two-stage paradigm that consists of two main components: VSOT and SAM. In the VSOT stage, the backbone extracts per-frame features that are fed into the transformer to refine the object queries of each frame. The object queries are decoded into objectness score, location, and ReID embedding using the class head, box head, and ReID head, respectively. The decoded outputs are then sent to the tracking head, which matches existing tracklets and identifies new ones. The SAM stage involves the processing of the input frames and the trajectories obtained from the preceding VSOT stage. More specifically, the images are fed into the image encoder, whereas the trajectories are passed to the prompt encoder. The final video object segmentation outputs are then produced by the mask decoder." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b29", "b46", "b9", "b11", "b20" ], "table_ref": [], "text": "Datasets. We evaluate UVOSAM on two popular video segmentation datasets: DAVIS2017-unsupervised [30] and Youtube-VIS 2019 [47]. The DAVIS2017-unsupervised dataset consists of 120 high-quality videos in total. These videos are further split into 60 for train, 30 for val and 30 for test-dev. The Youtube-VIS 2019 dataset consists of 2238 training, 302 validation, and 343 test videos. Each video has been manually annotated at pixel level, and the number of semantic categories is 40. Since our VSOT only produces category-agnostic trajectories, the category label has to be provided alongside SAM's outputs when evaluating performance on video instance segmentation. Following IDOL, we use a split of the Youtube-VIS 2019 training set as the validation set, so that the specific category of each object can be obtained from the ground truth.
Evaluation Metrics. For DAVIS2017-unsupervised, we employ the official evaluation measures, including region similarity J, boundary accuracy F and the overall metric J&F. For Youtube-VIS 2019, we adopt Average Precision (AP) and Average Recall (AR) to evaluate the performance of our model. In addition, the Intersection over Union (IoU) is also computed over the whole video.
Implementation Details. We employ pre-trained SAM with the ViT-H [10] image encoder as the base segmentation model. For VSOT, we select ResNet-50 [12] as the default backbone network. All hyper-parameters of the network architecture are the same as in the official IDOL; we only change the classification head to a binary output that identifies whether an object is foreground. We use the AdamW [21] optimizer with a base learning rate of 5 × 10^-5, β_1 = 0.9, β_2 = 0.999, and weight decay of 10^-4. VSOT is trained on 4 NVIDIA A100 GPUs with 4 pairs of frames per GPU." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 1 illustrates the results of all compared methods on the DAVIS2017-unsupervised valid set. Our UVOSAM achieves a J&F score of 78.9, which outperforms current mask-supervised methods by a large margin. We also produce trajectories with bounding boxes using the ground truth for each object, and then leverage these carefully curated bounding boxes as prompts for SAM, which we refer to as "Human-prompts". It can be clearly seen that when the prompts are sufficiently accurate, SAM can produce ideal segmentation results despite complex video scenes." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Methods", "publication_ref": [ "b55", "b1", "b21", "b53", "b18", "b25", "b12", "b40", "b41" ], "table_ref": [ "tab_1" ], "text": "Table 1 (DAVIS2017-unsupervised val, J&F / J-Mean / F-Mean): MATNet [56] 58.6 / 56.7 / 60.4; STEm-Seg [2] 64.7 / 61.5 / 67.8; UnOVOST [22] 67.9 / 66.4 / 69.3; Target-Aware [54] 65.0 / 63.7 / 66.2; Propose-Reduce [19] 68.3 / 65.0 / 71.6.
We report our comparisons on the YoutubeVIS-19 split set in Table 2 (AP / AP50 / AP75): IFC [13] 51.8 / - / -; SeqFormer [41] 57.0 / - / -; IDOL [42] 56.1 / - / -; UVOSAM 58.4 / 87.9 / 62.1; Human-prompts + SAM 64.5 / 95.8 / 70.9. The results of IFC, SeqFormer and IDOL are provided with the ground-truth IDs for a fair comparison. Without any masks in training, our UVOSAM provides an absolute gain of 2.3 AP over the baseline IDOL. By visualizing outputs on YoutubeVIS-2019 in Figure 2, we observe that SAM masks are often accurate with crisper boundaries. As a result, we hypothesize that the lower AP scores of UVOSAM are largely attributed to the relatively poor quality of annotations in this dataset.
We also compare the performance of our UVOSAM with the state-of-the-art method INO [26] on the DAVIS2017-unsupervised dataset. The results are presented in Figure 3, which demonstrates that UVOSAM outperforms INO in various challenging video sequences. More specifically, in the dogjump sequence, UVOSAM demonstrates remarkable robustness to occlusions and scale changes, making it a reliable toolkit for video object segmentation. 
Moreover, it is capable of generating finely detailed boundary masks even in challenging background situations that closely resemble the target object, as illustrated in the goat sequences." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b22" ], "table_ref": [ "tab_2", "tab_4", "tab_4", "tab_4", "tab_6", "tab_7" ], "text": "All the ablation experiments are conducted on the challenging valid set of DAVIS2017-unsupervised with the same implementation details as described in Section 4.1 unless otherwise specified.
Tracking framework. We conduct experiments to study the effect of different tracking frameworks, including the track-by-detect multi-object tracking (MOT) framework and the proposed video salient object tracking (VSOT) framework. For the implementation of the former, we adopt YOLO-v8 and DeepOCSORT [23] for detection and association, respectively. As shown in Table 3, the VSOT-based model surpasses the MOT-based counterpart by large margins on all evaluation metrics, which demonstrates that our approach can generate more complete trajectories by effectively exploiting spatial-temporal correlations, and thus achieve more precise segmentation results.
Prompt types for SAM. We investigate the effect of different prompt settings for SAM on performance, including box, point and box&point, as shown in Table 4. For simplicity, the box prompt and point prompt are generated as the bounding rectangle of the foreground object and randomly sampled points of the foreground object, respectively. From the first two rows of Table 4, we can see that the box prompt substantially improves the J&F score, J-Mean and F-Mean by 2.9, 2.1 and 3.5, respectively, compared to the point prompt. This demonstrates that the box prompt provides more structural information about the objects, which helps SAM generate more accurate masks. From the last row of Table 4, the combination of box and point prompts further boosts the performance, which exhibits a rough upper bound of our model.
Different settings of point prompt. To investigate the effect of the number of points in the point prompt on performance, we vary it as {1, 3, 5, 10}. As shown in Table 5, the model performance is enhanced consistently with more points in the point prompt. In particular, when the point number increases from 1 to 3, the model enjoys a significant gain of 29.6 in J&F score. This is because using more points can help SAM eliminate semantic ambiguity and thus locate objects more accurately. These results motivate us to explore the combination of a box with more points, as discussed below.
Prompt combination. In Table 6, we study the effect of different prompt combinations for SAM. It can be seen that the model performance increases slowly with more points and saturates with 5 points. This demonstrates that a small set of points can provide additional structural information about objects, while too many points may confuse the model when differentiating objects. " }, { "figure_ref": [ "fig_4" ], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although UVOSAM has demonstrated outstanding performance under the mask-free setting, there are two significant limitations that hinder its ability to understand complex videos. 
Firstly, it struggles with detecting slender objects, as depicted in Figure 4 (a). The bounding box for such objects, like ropes, is consistently uncertain and inaccurate, causing SAM to receive erroneous prompts; regular objects may also appear slender due to occlusion and rapid motion, exacerbating the issue. Secondly, there are instances of trajectory drift or tracking disruption when objects obstruct each other or their scale changes (please refer to Figure 4 (b)). These failure cases indicate that our VSOT has scope for improvement." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, to tackle the challenge of expensive labeling for unsupervised VOS, we propose a simple yet effective framework called UVOSAM. UVOSAM is a two-stage paradigm that first leverages a video salient object tracking model to automatically discover the main objects and generate their trajectories. It then employs SAM to sequentially generate the segmentation results with the trajectories as prompts. We evaluate our approach on two popular video segmentation datasets, and the results show that UVOSAM outperforms state-of-the-art mask-supervised methods. We hope that our work will inspire future research on label-efficient unsupervised VOS by prompting vision foundation models." } ]
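To make the two-stage pipeline concrete, the sketch below strings VSOT and SAM together. `vsot_track` is a placeholder for the trained tracker (not the authors' released API), the trajectory format is an assumption, and the predictor is assumed to be a `SamPredictor` built from a frozen ViT-H SAM checkpoint as described in Section 4.1.

```python
import numpy as np
from segment_anything import SamPredictor  # predictor built from a frozen ViT-H SAM

def uvosam(frames, vsot_track, predictor: SamPredictor):
    """frames: list of HxWx3 RGB uint8 arrays; vsot_track(frames) is assumed to return
    {track_id: {frame_idx: [x0, y0, x1, y1]}} box trajectories of the salient objects."""
    trajectories = vsot_track(frames)
    video_masks = [dict() for _ in frames]
    for idx, frame in enumerate(frames):
        predictor.set_image(frame)                   # encode each frame once
        for tid, boxes in trajectories.items():
            if idx not in boxes:
                continue                             # object absent in this frame
            masks, _, _ = predictor.predict(
                box=np.asarray(boxes[idx]),          # box prompt from the trajectory
                multimask_output=False,
            )
            video_masks[idx][tid] = masks[0]         # one binary mask per track per frame
    return video_masks
```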
Unsupervised video object segmentation has made significant progress in recent years, but the manual annotation of video mask datasets is expensive and limits the diversity of available datasets. The Segment Anything Model (SAM) has introduced a new prompt-driven paradigm for image segmentation, unlocking a range of previously unexplored capabilities. In this paper, we propose a novel paradigm called UVOSAM, which leverages SAM for unsupervised video object segmentation without requiring video mask labels. To address SAM's limitations in instance discovery and identity association, we introduce a video salient object tracking network that automatically generates trajectories for prominent foreground objects. These trajectories then serve as prompts for SAM to produce video masks on a frame-by-frame basis. Our experimental results demonstrate that UVOSAM significantly outperforms current mask-supervised methods. These findings suggest that UVOSAM has the potential to improve unsupervised video object segmentation and reduce the cost of manual annotation.
UVOSAM: A Mask-free Paradigm for Unsupervised Video Object Segmentation via Segment Anything Model
[ { "figure_caption": "positive and negative examples from the reference frame to minimize false positives and further enhance the quality of the embedding. During inference, VSOT utilizes a memory bank-based association strategy to improve the association quality. Assuming N predicted objects with N ReID embeddings d T i N i=1 ∈ R C and M trajectories that contain a group of T -1 embeddings d t j j=M,t=T -1 j=1,t=0", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The Mask quality from Ground truth and UVOSAM on YoutubeVIS-19 split set. The annotations of YoutubeVIS are not always accurate in identifying the foreground object, as they sometimes label the background areas near the edge of the object as foreground, which are highlighted with red dotted region. In contrast, the masks generated by UVOSAM are often accurate. Best viewed zoomed in.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Qualitative comparison between our UVOSAM and start-of-the-art method. The five frames in each row are from the same video.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "(a). The bounding box for such objects, like ropes, is consistently uncertain and inaccurate, causing SAM to receive erroneous prompts. Additionally, regular objects may also appear slender due to occlusion and rapid motion, exacerbating the issue. Secondly, there are instances of trajectory drift or tracking disruption when objects obstruct each other, or their scale changes (please refer to Figure4 (b)). These failure cases indicate that our VSOT has scope for improvement.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Visualisation of the failed cases.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "The performance comparison of our method and state-of-the-art methods on DAVIS2017unsupervised valid set.", "figure_data": ".365.071.6INO [26]72.568.776.3UVOSAM78.975.582.0Human-prompts + SAM87.383.591.0", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The performance comparison of our method and state-of-the-art methods on YoutubeVIS-19 split set.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The performance comparison of different tracking framework on DAVIS2017 valid set.", "figure_data": "Methods J &F J -M ean F-M eanMOT61.159.562.7VSOT78.975.582.0", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The performance comparison of different prompt settings for SAM on DAVIS2017 valid set.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The performance comparison of different point settings of point prompt for SAM on DAVIS2017 valid set.", "figure_data": "CombinationJ &F J -M ean F-M eanbox & 1 point87.784.091.1box & 3 points88.184.491.8box & 5 points88.384.691.9box & 10 points87.884.191.4", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The performance comparison of different prompt combination for SAM on DAVIS2017 valid set.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Zhenghao Zhang; Zhichao Wei; Shengfan Zhang; Zuozhuo Dai; Siyu Zhu; Alibaba Group
[ { "authors": "N Araslanov; S Schaub-Meyer; S Roth", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Dense unsupervised learning for video segmentation", "year": "2021" }, { "authors": "A Athar; S Mahadevan; A Osep; L Leal-Taixé; B Leibe", "journal": "Springer", "ref_id": "b1", "title": "Stem-seg: Spatio-temporal embeddings for instance segmentation in videos", "year": "2020" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Y Chen; C Hao; A X Liu; E Wu", "journal": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)", "ref_id": "b3", "title": "Appearance-consistent video object segmentation based on a multinomial event model", "year": "2019" }, { "authors": "Y Chen; C Hao; A X Liu; E Wu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b4", "title": "Multilevel model for video object segmentation based on supervision optimization", "year": "2019" }, { "authors": "B Cheng; O Parkhi; A Kirillov", "journal": "", "ref_id": "b5", "title": "Pointly-supervised instance segmentation", "year": "2022" }, { "authors": "S Cho; M Lee; S Lee; C Park; D Kim; S Lee", "journal": "", "ref_id": "b6", "title": "Treating motion as option to reduce motion dependency in unsupervised video object segmentation", "year": "2023" }, { "authors": "A Chowdhery; S Narang; J Devlin; M Bosma; G Mishra; A Roberts; P Barham; H W Chung; C Sutton; S Gehrmann", "journal": "", "ref_id": "b7", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "H Ding; C Liu; S He; X Jiang; P H Torr; S Bai", "journal": "", "ref_id": "b8", "title": "Mose: A new dataset for video object segmentation in complex scenes", "year": "2023" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner", "journal": "", "ref_id": "b9", "title": "Transformers for image recognition at scale", "year": "2020" }, { "authors": "K Fragkiadaki; P Arbelaez; P Felsen; J Malik", "journal": "", "ref_id": "b10", "title": "Learning to segment moving objects in videos", "year": "2015" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b11", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "S Hwang; M Heo; S W Oh; S J Kim", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Video instance segmentation using inter-frame communication transformers", "year": "2021" }, { "authors": "L Ke; M Danelljan; H Ding; Y W Tai; C K Tang; F Yu", "journal": "", "ref_id": "b13", "title": "Mask-free video instance segmentation", "year": "2023" }, { "authors": "A Kirillov; E Mintun; N Ravi; H Mao; C Rolland; L Gustafson; T Xiao; S Whitehead; A C Berg; W Y Lo", "journal": "", "ref_id": "b14", "title": "Segment anything", "year": "2023" }, { "authors": "M Lee; S Cho; S Lee; C Park; S Lee", "journal": "", "ref_id": "b15", "title": "Unsupervised video object segmentation via prototype memory network", "year": "2023" }, { "authors": "J Li; D Li; S Savarese; S Hoi", "journal": "", "ref_id": "b16", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "J Li; D Li; C Xiong; S Hoi", "journal": "PMLR", "ref_id": "b17", "title": 
"Blip: Bootstrapping language-image pre-training for unified visionlanguage understanding and generation", "year": "2022" }, { "authors": "H Lin; R Wu; S Liu; J Lu; J Jia", "journal": "", "ref_id": "b18", "title": "Video instance segmentation with a propose-reduce paradigm", "year": "2021" }, { "authors": "D Liu; D Yu; C Wang; P Zhou", "journal": "", "ref_id": "b19", "title": "F2net: Learning to focus on the foreground for unsupervised video object segmentation", "year": "2021" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b20", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "J Luiten; I E Zulfikar; B Leibe", "journal": "", "ref_id": "b21", "title": "Unovost: Unsupervised offline video object segmentation and tracking", "year": "2020" }, { "authors": "G Maggiolino; A Ahmad; J Cao; K Kitani", "journal": "", "ref_id": "b22", "title": "Deep oc-sort: Multi-pedestrian tracking by adaptive re-identification", "year": "2023" }, { "authors": "J Miao; Y Wei; Y Yang", "journal": "", "ref_id": "b23", "title": "Memory aggregation networks for efficient interactive video object segmentation", "year": "2020" }, { "authors": "S W Oh; J Y Lee; N Xu; S J Kim", "journal": "", "ref_id": "b24", "title": "Video object segmentation using space-time memory networks", "year": "2019" }, { "authors": "X Pan; P Li; Z Yang; H Zhou; C Zhou; H Yang; J Zhou; Y Yang", "journal": "", "ref_id": "b25", "title": "In-n-out generative learning for dense unsupervised video segmentation", "year": "2022" }, { "authors": "K Park; S Woo; S W Oh; I S Kweon; J Y Lee", "journal": "", "ref_id": "b26", "title": "Per-clip video object segmentation", "year": "2022" }, { "authors": "G Pei; F Shen; Y Yao; G S Xie; Z Tang; J Tang", "journal": "Springer", "ref_id": "b27", "title": "Hierarchical feature alignment network for unsupervised video object segmentation", "year": "2022" }, { "authors": "F Perazzi; J Pont-Tuset; B Mcwilliams; L Van Gool; M Gross; A Sorkine-Hornung", "journal": "", "ref_id": "b28", "title": "A benchmark dataset and evaluation methodology for video object segmentation", "year": "2016" }, { "authors": "J Pont-Tuset; F Perazzi; S Caelles; P Arbeláez; A Sorkine-Hornung; L Van Gool", "journal": "", "ref_id": "b29", "title": "The 2017 davis challenge on video object segmentation", "year": "2017" }, { "authors": "J Qi; Y Gao; Y Hu; X Wang; X Liu; X Bai; S Belongie; A Yuille; P H Torr; S Bai", "journal": "International Journal of Computer Vision", "ref_id": "b30", "title": "Occluded video instance segmentation: A benchmark", "year": "2022" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b31", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "A Radford; K Narasimhan; T Salimans; I Sutskever", "journal": "", "ref_id": "b32", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "OpenAI blog", "ref_id": "b33", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "S Ren; W Liu; Y Liu; H Chen; G Han; S He", "journal": "", "ref_id": "b34", "title": "Reciprocal transformations for unsupervised video object segmentation", "year": "2021" }, { "authors": "H Seong; J Hyun; E Kim", "journal": "Springer", "ref_id": "b35", "title": "Kernelized memory network for video 
object segmentation", "year": "2020" }, { "authors": "P Tokmakov; K Alahari; C Schmid", "journal": "", "ref_id": "b36", "title": "Learning motion patterns in videos", "year": "2017" }, { "authors": "H Touvron; T Lavril; G Izacard; X Martinet; M A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar", "journal": "", "ref_id": "b37", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "X Wang; X Zhang; Y Cao; W Wang; C Shen; T Huang", "journal": "", "ref_id": "b38", "title": "Seggpt: Segmenting everything in context", "year": "2023" }, { "authors": "Z Wang; Y Li; X Chen; S N Lim; A Torralba; H Zhao; S Wang", "journal": "", "ref_id": "b39", "title": "Detecting everything in the open world: Towards universal object detection", "year": "2023" }, { "authors": "J Wu; Y Jiang; S Bai; W Zhang; X Bai", "journal": "Springer", "ref_id": "b40", "title": "Seqformer: Sequential transformer for video instance segmentation", "year": "2022" }, { "authors": "J Wu; Q Liu; Y Jiang; S Bai; A Yuille; X Bai", "journal": "Springer", "ref_id": "b41", "title": "In defense of online models for video instance segmentation", "year": "2022" }, { "authors": "H Xie; H Yao; S Zhou; S Zhang; W Sun", "journal": "", "ref_id": "b42", "title": "Efficient regional memory network for video object segmentation", "year": "2021" }, { "authors": "N Xu; L Yang; Y Fan; D Yue; Y Liang; J Yang; T Y V Huang", "journal": "", "ref_id": "b43", "title": "A large-scale video object segmentation benchmark", "year": "2018" }, { "authors": "C Yang; H Lamdouar; E Lu; A Zisserman; W Xie", "journal": "", "ref_id": "b44", "title": "Self-supervised video object segmentation by motion grouping", "year": "2021" }, { "authors": "J Yang; M Gao; Z Li; S Gao; F Wang; F Zheng", "journal": "", "ref_id": "b45", "title": "Track anything: Segment anything meets videos", "year": "2023" }, { "authors": "L Yang; Y Fan; N Xu", "journal": "", "ref_id": "b46", "title": "Video instance segmentation", "year": "2019" }, { "authors": "S Yang; L Zhang; J Qi; H Lu; S Wang; X Zhang", "journal": "", "ref_id": "b47", "title": "Learning motion-appearance co-attention for zero-shot video object segmentation", "year": "2021" }, { "authors": "Y Yang; A Loquercio; D Scaramuzza; S Soatto", "journal": "", "ref_id": "b48", "title": "Unsupervised moving object detection via contextual information separation", "year": "2019" }, { "authors": "Z Yang; Y Wei; Y Yang", "journal": "Springer", "ref_id": "b49", "title": "Collaborative video object segmentation by foreground-background integration", "year": "2020" }, { "authors": "A Zeng; X Liu; Z Du; Z Wang; H Lai; M Ding; Z Yang; Y Xu; W Zheng; X Xia", "journal": "", "ref_id": "b50", "title": "Glm-130b: An open bilingual pre-trained model", "year": "2022" }, { "authors": "K Zhang; Z Zhao; D Liu; Q Liu; B Liu", "journal": "", "ref_id": "b51", "title": "Deep transport network for unsupervised video object segmentation", "year": "2021" }, { "authors": "R Zhang; Z Jiang; Z Guo; S Yan; J Pan; H Dong; P Gao; H Li", "journal": "", "ref_id": "b52", "title": "Personalize segment anything model with one shot", "year": "2023" }, { "authors": "T Zhou; J Li; X Li; L Shao", "journal": "", "ref_id": "b53", "title": "Target-aware object discovery and association for unsupervised video multi-object segmentation", "year": "2021" }, { "authors": "T Zhou; F Porikli; D J Crandall; L Van Gool; W Wang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b54", "title": "A survey on 
deep learning technique for video segmentation", "year": "2022" }, { "authors": "T Zhou; S Wang; Y Zhou; Y Yao; J Li; L Shao", "journal": "", "ref_id": "b55", "title": "Motion-attentive transition for zero-shot video object segmentation", "year": "2020" }, { "authors": "X Zou; J Yang; H Zhang; F Li; L Li; J Gao; Y J Lee", "journal": "", "ref_id": "b56", "title": "Segment everything everywhere all at once", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 210.31, 441.83, 293.69, 11.72 ], "formula_id": "formula_0", "formula_text": "F i = E i (I) ∈ R h×w×c F p = E p (P ) ∈ R k×c(1)" }, { "formula_coordinates": [ 3, 206.88, 522.26, 297.12, 11.72 ], "formula_id": "formula_1", "formula_text": "M = D m ( Attn(F i , concat(F p , F t )) ) ∈ R H×W (2)" }, { "formula_coordinates": [ 4, 236.42, 198.45, 267.58, 30.63 ], "formula_id": "formula_2", "formula_text": "e j = T -1 t=1 d t j × (τ + (T -1)/t) T -1 t=1 τ + (T -1)/t(3)" } ]
10.1109/IJCNN54540.2023.10191605
2023-11-01
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "occurrence of objects based on semantic segmentation. Firstly, the semantic spatial relation module (SSRM) is designed to explore the spatial relation among objects within a scene. With the help of semantic segmentation, this module decouples the spatial information from the image, effectively avoiding the influence of irrelevant features.\nSecondly, both spatial context features from the SSRM and deep features from the Image Feature Extraction Module are used to distinguish the coexisting object across different scenes. Finally, utilizing the discriminative features mentioned above, we employ the self-attention mechanism to explore the long-range co-occurrence among objects, and further generate a semantic-guided feature representation for indoor scene recognition. Experimental results on three widely used scene datasets demonstrate the effectiveness and generality of the proposed method. The code will be made publicly" }, { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b6", "b12", "b13", "b14", "b15", "b12", "b13", "b16", "b4", "b6", "b16", "b4", "b17", "b18", "b19", "b20", "b2", "b21", "b22", "b23" ], "table_ref": [], "text": "Scene recognition is a fundamental research topic in computer vision, aimed at predicting the scene category of an image, such as \"bedroom\" and \"beach.\" Although many studies have focused on outdoor scene recognition [1,2], an increasing number of researchers have recently conducted research on indoor scene recognition [3,4,5,6,7] due to its wide range of applications in smart video intelligence, robotics and so on [8].\nAs shown in Fig. 1, compared with outdoor scene recognition, indoor scene recognition presents more significant challenges due to the intra-class diverse spatial layout and the coexisting objects in different scene categories. Therefore, it is crucial to develop effective methods for indoor scene representation.\nExisting scene recognition methods can be broadly classified into handcrafted featurebased and deep learning-based methods. Hand-crafted features, such as LBP [9] and OTC [10], utilize color and texture to recognize scenes and have yielded noteworthy results. However, their limitation is that they only use low-level features such as shape and gradient information, which makes them unsuitable for dealing with large-scale datasets or complex scenes. Recently, Deep Neural Networks (DNNs) based methods attract a lot of interest and obtain higher performance, since DNNs can acquire a more advanced feature representation for images. However, the performance of DNNs for indoor scene recognition is still limited due to the complex relationships among objects within indoor scenes. Some recent studies [11,12] that explore the scene essence also suggest that scene classification is closely related to the inter-object context it contains. Thus, fully exploiting the object information is crucial for recognizing indoor scenes.\nAs shown in Fig. 1, the varying spatial layouts amplify the intra-class distinctions of indoor scenes, while the presence of coexisting objects in different indoor scenes easily leads to category confusion. Inspired by this phenomenon, we propose SpaCoNet, which aims to simultaneously model the Spatial layout and long-range Co-occurrence of objects for indoor scene recognition.\nFig. 1. Some examples of different scene datasets. 
Images in the bedroom and hospital room categories can easily be confused. Similar ambiguity can be found in the classroom and restaurant. In contrast, the simple composition makes the variation between the different outdoor scene categories quite obvious.\nSeveral similar strategies have been proposed recently to obtain intra-scene object information to assist scene recognition, achieving encouraging results. To model the spatial context, [7] proposes to utilize the object-relation-object triplet to explore the spatial order relationships among objects. Approaches [13,14] analyze the spatial metric relationships among object regions for scene recognition. However, it is noteworthy that spatial relationships encompass not only order and metric relationships but also topological relationships (e.g., the concept of a pillow being surrounded by a bed) [15], which are inherently more complex to formulate. Relying solely on artificial definitions is evidently insufficient for analyzing such diverse spatial relationships within scenes. Motivated by the powerful fitting ability of neural networks [16], this paper proposes using neural networks to adaptively model spatial relationships in an end-to-end manner. [13,14] also assign semantic labels to the backbone features to analyze the spatial metric relationships among object features. However, the effectiveness of these methods mainly depends on the features extracted by the backbone network. It should be noted that space-irrelevant information, such as color, present in these features may impede the optimization of the network's capacity to represent spatial context during training. In contrast, our objective is to thoroughly explore all spatial relationships among objects in the scene. Accordingly, this paper proposes a Semantic Spatial Relation Module, which first decouples the spatial information from the image with the help of semantic segmentation, and then explores the spatial relationships among objects, thereby ensuring the purity of the network input, avoiding the negative effects of irrelevant information.\nIn addition to spatial relationships, object co-occurrence in different scenes is also a significant contributor to scene recognition. Some methods [17,5,7] conclude the discriminative objects associated with scene categories by computing the probability distribution of objects within scenes. However, the discriminative objects in some scenes may be the same (e.g., the bed in the bedroom and the bed in the hospital room shown in Fig. 1), which poses a significant challenge to the above methods. In this paper, we propose a novel idea to solve this problem inspired by an interesting phenomenon. Specifically, even if the discriminative objects in the bedroom and the hospital room are the same, humans can still easily identify the differences between these two scenes because the same objects exhibit different characteristics in different scene classes (e.g., apparent variances between the beds in the above two scenes). Building on this insight, before exploring co-occurrence among objects across scenes, we first assign relevant features of the input scene to objects, enabling the network to distinguish similar objects like humans do. However, since the same object has different characteristics in various scenes, manual statistical methods [17,5] are no longer appropriate for exploring the co-occurrence. 
Fortunately, the advent of Transformer provides a flexible architecture with a multi-head attention mechanism, which can capture the long-range dependencies among sequential features. Several transformer-based methods [18,19,20,21] consider the global interactions of local features with the attention mechanism to obtain discriminative features. Inspired by them, we introduce a global-local dependency encoder, which can establish long-range co-occurrence among object features in an end-to-end manner.\nIn summary, the main contributions of this paper are presented as follows.\n1. We propose SpaCoNet, a framework that simultaneously models the Spatial relation and Co-occurrence of objects for indoor scene recognition.\n2. A semantic spatial relation module is designed to explore the spatial context among object regions, which decouples the spatial information from the image with the help of semantic segmentation, thus avoiding the negative effects of irrelevant features and ensuring high interpretability. In this module, we also design a simple yet efficient Adaptive Confidence Filter to alleviate the semantic ambiguity caused by limited segmentation precision.\n3. We propose distinguishing the same objects in different scenes by assigning them scene-related features, as a basis for fully exploring the long-range co-occurrence among objects across scenes using attention mechanisms.\n4. The effectiveness of the proposed approach is evaluated on three different scene datasets, namely MIT-67 [3], SUN397 [22], and Places365 [23]. The experimental findings demonstrate the effectiveness and generality of our approach.\nA preliminary version of this work was presented in the conference paper [24]. We make significant extensions from different aspects as follows: 1) We optimize the use of spatial relationships between semantic regions within scenes, and evaluate new datasets to demonstrate the generalization of the proposed approach; 2) We propose a richer type of semantic relationship, i.e., a long-range cooccurrence among objects, to provide more discriminative information; 3) We conduct comprehensive ablation studies for the global-local dependency encoder and different variants of the aggregation methods, to show the effectiveness of each new proposed module; 4) Visualizations for the learned SpaCoNet are provided to present the characteristics of the proposed method." }, { "figure_ref": [], "heading": "Related works", "publication_ref": [], "table_ref": [], "text": "This section briefly reviews several related researches and examines the differences and connections between related works and our proposed approach." }, { "figure_ref": [], "heading": "Scene recognition", "publication_ref": [ "b24", "b8", "b9", "b25", "b26", "b2", "b27", "b15", "b28", "b22", "b28", "b29" ], "table_ref": [], "text": "Conventional scene recognition methods usually rely heavily on handcrafted feature extraction. Gist of the scene proposes to use Generalized Search Trees (GIST) to generate a global low-dimensional representation for each image, but ignore the local structure information of the scene [25]. To cope with this problem, some researchers focus on local visual descriptors (such as Local Binary Patterns(LBP) [9] and Oriented Texture Curves (OTC) [10]), and use the Bag-of-Visual-Word (BOVW) [26] to integrate these local visual descriptors into image representation, but do not take spatial structure information into account. 
For this reason, Spatial Pyramid Matching (SPM) [27] has been proposed as a component of BOVW, extracting subregions' features and compensating for the missing spatial structure information. Based on the above, Quattoni et al. [3] propose a prototype-based model for indoor scenes that combines local and global discriminative information. However, the features used by the above methods are hand-crafted and low-level, which is limited to distinguish blurred or high-similarity scenes.\nIn recent years, deep neural networks have made significant progress in computer vision. Several network architectures [28,16] have been used to facilitate the development of image classification. Accordingly, many approaches [29,23] attempt to extract visual representations for scene recognition through deep neural networks. For example, Dual CNN-DL [29] proposes a new dictionary learning layer to replace the traditional FCL and ReLu, which simultaneously enhances features' sparse representation and discriminative ability by determining the optimal dictionary. Lin et al. [30] propose to transform convolutional features to the ultimate image representation for scene recognition by a hierarchical coding algorithm. These methods utilize convolutional neural networks to extract scene representations, significantly improving the recognition results. However, they still fall far short of the results achieved in tasks such as image classification. This phenomenon might be due to the lack of effective distinction of co-existing objects within scenes by CNNs. With this in mind, several approaches employ the object information in the scene for scene recognition." }, { "figure_ref": [], "heading": "Semantic segmentation-based modeling", "publication_ref": [ "b30", "b31", "b32", "b13", "b32", "b13", "b16", "b3", "b4", "b16", "b4", "b16", "b13", "b18" ], "table_ref": [], "text": "Semantic segmentation techniques, which can assign semantic labels to each pixel inside an image, have been widely used in diverse applications in recent years. Xu et al. [31] propose distinguishing the background and foreground through a semantic parsing network, and remedy the raised negative transfer caused by variant background to address the challenge of person reidentification. SGUIE-Net [32] uses object information as high-level guidance to obtain visually superior underwater image enhancement by region-wise enhancement feature learning.\nSince the classification of a scene is related to the objects it contains, several approaches [33,14] have used the semantic segmentation result to provide auxiliary information for scene classification. In SAS-Net [33], the semantic features generated by the semantic segmentation score tensor are used to add weights to different positions of the feature map generated by the RGB image, so that the network pays more attention to the discriminative regions in the scene image. ARG-Net [14] detects the scene's foreground and background regions through semantic segmentation technology, combining them with the feature map obtained by the backbone network to establish the context relationship between regional features. Besides exploring contextual relationships, there are some approaches [17,4,5] that propose to use the object co-occurrence across scenes for scene recognition. SDO [17] proposes to use the co-occurrence probability of objects in different scenes to find discriminative objects, so that the negative effects of co-occurring objects in multiple scenes can be excluded. Zhou et al. 
[5] use a probabilistic method to establish the co-occurrence relationship of objects across scenes, and combine the representative objects with the global representation of the scene to obtain a better scene representation.
However, due to the limited precision of semantic segmentation techniques, all semantic segmentation-based methods face the negative impact of semantic ambiguity. This issue is typically mitigated using a confidence threshold [17,14], but as the data volume increases, such methods become inflexible and have limited effectiveness. To address this issue, this paper proposes a simple yet effective method to adaptively filter ambiguous points based on each image's specific state, achieving remarkable results." }, { "figure_ref": [], "heading": "Attention mechanism-based modeling", "publication_ref": [], "table_ref": [], "text": "FCT-Net [19] proposes a Transformer-based framework that combines a CNN to improve the discriminative ability of features for scene classification. Inspired by the methods above, in this paper, we explore the long-range co-occurrence of object semantic features within scenes using the multi-head self-attention mechanism." }, { "figure_ref": [], "heading": "Our method", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a detailed description of the proposed SpaCoNet, which consists of five modules used jointly to predict the scene category. We first outline the entire model and then introduce each module in five subsections." }, { "figure_ref": [ "fig_3" ], "heading": "Semantic Spatial Relation Module", "publication_ref": [ "b14", "b15", "b33", "b34", "b15", "b16", "b13", "b32", "b33", "b34", "b35" ], "table_ref": [], "text": "Spatial relationships, as discussed in [15], encompass topological relationships (e.g., a pillow being surrounded by a bed), order relationships (e.g., a chair is behind a table), and metric relationships (e.g., the distance between an artboard and a person). Relying solely on artificial definitions is insufficient for analyzing such diverse spatial relationships within scenes. Motivated by the powerful fitting ability of neural networks [16], we propose the Semantic Spatial Relation Module (SSRM) as an adaptive approach to model spatial relationships in an end-to-end manner. Firstly, to avoid the negative impact of space-irrelevant information during training, we input the given image I ∈ R^{w×h×3} into a semantic segmentation network to obtain the semantic segmentation score tensor M ∈ R^{w×h×l} (l = 150) [34,35]. The score tensor focuses solely on pure spatial information, filtering out non-essential elements.
Then, we use M as the input for SSRM, ensuring that the module is devoid of spatially irrelevant information and can fully explore the spatial context. The SSRM generates a spatial context feature F_S ∈ R^{(w/32)×(h/32)×c}, which is a concentrated representation of the spatial contextual information in I. Based on ResNet50 [16], we design a more suitable architecture for SSRM according to the specific characteristics of the segmentation score tensor. Compared with the original ResNet50, the proposed network is more efficient and requires less computing power. Fig. 3 illustrates the framework of the proposed SSRM.
Due to the precision limitation of the semantic segmentation network, errors in segmentation results are inevitable.
Many existing methods [17,14] have attempted to address this issue by applying confidence thresholds to filter ambiguous points. Still, these methods are inflexible and have limited efficacy, especially when dealing with large datasets. In contrast, to alleviate the adverse effects caused by semantic ambiguity, we propose an Adaptive Confidence Filter (ACF) that dynamically adapts to the state of each image, allowing flexible filtering of every semantic segmentation score tensor. Specifically, ACF slides a filter with a kernel size of 2 × 2 over each channel of M. Within each domain the filter covers, ACF keeps only the pixel with the highest confidence, and generates the output M ′ ∈ R (w/2)×(h/2)×l after processing all the channels of M. Since each channel value of M ′ (i, j) represents the predicted semantic probability of the corresponding region in image I [33,34,35], the Channel Attention Module (ChAM) [36] is introduced between ResBlocks to help the network focus better on the critical semantic categories in the image.
By exploring the spatial relation between object regions within M ′ , we can obtain the semantic-based spatial feature F S , which encompasses comprehensive spatial context relationships." }, { "figure_ref": [], "heading": "Image Feature Extraction Module", "publication_ref": [ "b22", "b36" ], "table_ref": [], "text": "In addition to spatial relationships, object co-occurrence across scenes is also a significant contributor to scene recognition. Of course, the spatial relation also contains shallow multi-object co-occurrence information. Still, its focus is on exploring the positional relationships between objects, independent of the properties of the objects themselves, which is also why we use the decoupled M as the input to the SSRM to explore spatial relations. However, since M does not contain information about the characteristics of the objects themselves (color, texture, etc.), the SSRM lacks the ability to distinguish discriminative objects that may appear in different scenes, leading to under-exploration of object co-occurrence. Therefore, we propose an Image Feature Extraction Module (IFEM) for I to extract the complete image deep feature F I . In this way, we can use F I and F S to link scene-related features with objects in subsequent modules, which allows us to fully explore the co-occurrence among objects within scenes.
Specifically, to fully use the information contained in an image, we combine the SSRM with PlacesCNN, which serves as the Image Feature Extraction Module. The PlacesCNN used in this study consists of an almost complete ResNet50 architecture pretrained on the Places365 dataset [23]. Notably, we exclude the pooling layer from the architecture, since it induces translation invariance and thus blurs the distinctions between the node features in the top-level semantic feature layer [37]. This blurring hinders the exploration of the global-local dependencies that follow, so we remove the layer to enhance the model's ability to identify and distinguish such dependencies.
Given an image I ∈ R w×h×3 , IFEM generates an advanced deep feature F I ∈ R (w/16)×(h/16)×c . The features F S and F I are then forwarded into the semantic node feature aggregation module to link scene-related features with objects. Note that, for tractability, we interpolate F S to the same resolution as F I using bilinear interpolation."
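For illustration, the filtering step described above admits a very compact implementation. The following PyTorch sketch treats ACF as per-channel 2 × 2 max-pooling over the score tensor, which matches the behaviour described here but is our reading rather than the authors' released code; all tensor shapes and the random input are placeholders.

import torch
import torch.nn as nn

class AdaptiveConfidenceFilter(nn.Module):
    """Keeps, in every 2x2 spatial window, only the highest-confidence
    responses of the score tensor M: (B, l, w, h) -> (B, l, w/2, h/2)."""
    def __init__(self, kernel_size: int = 2):
        super().__init__()
        # Per-channel max over each window; the channel-wise argmax of the
        # output then equals the most confident class within the
        # corresponding 2x2xl block of M.
        self.pool = nn.MaxPool2d(kernel_size=kernel_size, stride=kernel_size)

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        return self.pool(scores)

# Usage: filter a score tensor and derive the refined label map L'.
acf = AdaptiveConfidenceFilter(kernel_size=2)
M = torch.randn(1, 150, 224, 224)   # stand-in for the score tensor (l = 150)
M_prime = acf(M)                    # (1, 150, 112, 112)
L_prime = M_prime.argmax(dim=1)     # refined label map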
}, { "figure_ref": [], "heading": "Semantic node feature aggregation module", "publication_ref": [], "table_ref": [], "text": "To differentiate the same objects across various scenes, we propose the semantic node feature aggregation module, which allocates objects with input scene-related features. By successively feeding spatial features F S and deep features F I to this module, we can obtain their respective semantic feature sequences.\nFor each image, we first obtain its semantic segmentation score tensor M from the SSRM, and generate the label map L from the probabilistic relationships within M. L enables us to extract the object information corresponding to each point in F S and F I .\nHowever, two issues arose:\n1. Semantic ambiguity issue: Due to the limited precision of semantic segmentation, there can be errors in L, which may negatively affect subsequent works." }, { "figure_ref": [ "fig_6" ], "heading": "Resolution mismatch issue:", "publication_ref": [], "table_ref": [], "text": "The resolution of L is the same as the resolution of the input image, which is different from the resolution of the feature map.\nTo address these issues, we perform feature aggregation using the method described below, and the framework of this module is illustrated in Fig. 5. For the Semantic ambiguity issue, we first apply the proposed ACF on the score tensor M to obtain M ′ . Subsequently, the refined tensor is used to generate the label\nmap L ′ ∈ R w 2 × h 2 .\nIn this way, we enhance the confidence level of the segmentation results.\nFor the Resolution mismatch issue, after obtaining L ′ , we use the nearest-neighbor interpolation algorithm to reduce its resolution to be consistent with the feature map F I and F S .\nSubsequently, for each object o in l semantic categories, we generate a binary map L o , as follows:\nL o i, j =          1, L ′ i, j = o 0, L ′ i, j o (1)\nWe apply the binary map L o over the feature map F I and F S , respectively, to extract the image features and spatial features related to object o. Then, we perform average pooling to obtain the deep feature vector x i rgb ∈ R c and the spatial feature vector x i spa ∈ R c of object o, which can be expressed as:\nx i rgb = AveragePooling (F I ⊙ L o ) x i spa = AveragePooling (F S ⊙ L o )(2)\nwhere ⊙ represents the Hadamard product.\nFinally, the feature vectors stack together to form the deep aggregation feature X rgb ∈ R l×c and the spatial aggregation feature X spa ∈ R l×c . The above process can be formulated as follows:\nX rgb = concat(x 0 rgb , x 1 rgb , x 2 rgb , ..., x 149 rgb ) X spa = concat(x 0 spa , x 1 spa , x 2 spa , ..., x 149 spa )(3)\nBoth X rgb and X spa are then fed into the Global-Local Dependency Module to explore the co-occurrence among semantic feature sequences." }, { "figure_ref": [ "fig_7" ], "heading": "Global-Local Dependency Modeling", "publication_ref": [ "b17" ], "table_ref": [], "text": "Since the same object has different characteristics in various scenes, manual methods are no longer appropriate for exploring the co-occurrence among semantic feature sequences. To address this issue, we propose this module to establish a long-range correlation among semantic feature sequences by utilizing the attention mechanism. This relationship can then be used to modify the global feature representation.\nTo be specific, from the semantic node feature aggregation module, we get the deep feature sequence X rgb ∈ R l×c and the spatial feature sequence X spa ∈ R l×c . 
Since our goal is to use the long-range cooccurrence among semantic feature sequences to modify the global representation, we make changes to the two feature sequences by using their respective global features as feature nodes, which is obtained by performing global average pooling on F I and F S , and thus obtain X 1 rgb ∈ R (l+1)×c and X 1 spa ∈ R (l+1)×c .\nX 1 rgb = concat(X rgb , GlobalAvgPooling(F I )) X 1 spa = concat(X spa , GlobalAvgPooling(F S ))(4)\nMeanwhile, considering the necessity of recording object semantic categories corresponding to each node feature, position embedding P rgb emb ∈ R (l+1)×c and P spa emb ∈ R (l+1)×c are further added to generate coded semantic feature sequences X 2 spa ∈ R (l+1)×c and RGB feature sequence X 2 rgb ∈ R (l+1)×c . Hence, the inputs of two Encoders are formulated as follows:\nX 2 rgb = X 1 rgb + P rgb emb X 2 spa = X 1 spa + P spa emb(5)\nConsidering the disparities between these two feature sequences, we first explore their internal correlations individually, and then combine them to explore the intrinsic relationship of the overall information. The overall process is shown in Fig. 6. We employ the attention mechanism to explore the long-range dependencies among local semantic feature nodes and between them and the global feature node. Inspired by\nVision Transformer [18], an Encoder block consists of three main components: Layer normalization, Multi-head Self-Attention (MSA), and Multi-Layer Perceptron (MLP).\nLayer normalization is used to normalize and smooth data distribution, thereby improving the model's generalization ability. The MSA follows it. Next, a residual connection is employed to convey information to alleviate overfitting. Afterward, Layer normalization is applied again, and the output of the Encoder block is obtained through MLP. The entire operation of the Encoder for processing the image aggregation feature X 2 rgb ∈ R (l+1)×c and the spatial aggregation feature X 2 spa ∈ R (l+1)×c is represented by follows:\nX 3 rgb = X 2 rgb + MS A(LN(X 2 rgb )) X 4 rgb = X 3 rgb + MLP(LN(X 3 rgb )) X 3 spa = X 2 spa + MS A(LN(X 2 spa )) X 4 spa = X 3 spa + MLP(LN(X 3 spa ))(6)\nWith the Encoder, we individually explore the long-range dependencies among the internal nodes of X 2 rgb and X 2 spa . Subsequently, we merge them by taking the elementwise Maximum:\nX o = Max(X 4 spa , X 4 rgb )(7)\nTo better utilize the long-range dependency among nodes for scene representation, the output X o is input to the Decoder to mitigate the disparities between image features and spatial features and explore the overall information's intrinsic relationships. The structure of the Decoder is similar to that of the Encoder and can be formulated as follows:\nX 1 o = X o + MS A(LN(X o )) X 2 o = X 1 o + MLP(LN(X 1 o ))(8)\nWith the implementation of the Encoder and Decoder, we establish a robust longrange co-occurrence among all feature nodes. This integration process generates an optimized representation X 2 o ." }, { "figure_ref": [ "fig_7" ], "heading": "Scene Recognition Module", "publication_ref": [], "table_ref": [], "text": "To \nF o ∈ R c from X 2\no as the final global representation (as shown in Fig. 6). These representations are fed into a fully connected network to obtain the final scene prediction, and the cross entropy function is used as the final loss:\nL = - N n=1 y i log exp(F o (n)) N m=1 exp(F o (m)) (9\n)\nwhere y is the ground truth and F o is the output of this module." 
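Putting Eqs. (4)-(8) together, a rough PyTorch sketch of the Global-Local Dependency Modeling is shown below. The embedding dimension, the number of heads, and the use of nn.TransformerEncoderLayer as a stand-in for the Encoder and Decoder blocks are illustrative assumptions, not the exact configuration of SpaCoNet.

import torch
import torch.nn as nn

class GlobalLocalDependency(nn.Module):
    """Illustrative Encoder/Decoder fusion over l+1 semantic nodes (Eqs. (4)-(8))."""
    def __init__(self, num_nodes: int = 151, dim: int = 2048, heads: int = 8):
        super().__init__()
        self.pos_rgb = nn.Parameter(torch.zeros(1, num_nodes, dim))
        self.pos_spa = nn.Parameter(torch.zeros(1, num_nodes, dim))

        def block():
            # Pre-norm block: LayerNorm -> MSA -> residual -> LayerNorm -> MLP -> residual.
            return nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                              batch_first=True, norm_first=True)
        self.encoder_rgb = block()   # one Encoder per modality
        self.encoder_spa = block()
        self.decoder = block()       # the "Decoder" is read here as another self-attention block

    def forward(self, X_rgb, X_spa, F_I_gap, F_S_gap):
        # Append the global node obtained by global average pooling, Eq. (4),
        # then add learned position embeddings, Eq. (5).
        x_rgb = torch.cat([X_rgb, F_I_gap.unsqueeze(1)], dim=1) + self.pos_rgb
        x_spa = torch.cat([X_spa, F_S_gap.unsqueeze(1)], dim=1) + self.pos_spa
        x_rgb = self.encoder_rgb(x_rgb)    # Eq. (6)
        x_spa = self.encoder_spa(x_spa)
        x = torch.maximum(x_rgb, x_spa)    # element-wise maximum, Eq. (7)
        x = self.decoder(x)                # Eq. (8)
        return x[:, -1]                    # fully modified global node F_o

# Toy usage: batch of 2, 150 object nodes with 2048-d features.
m = GlobalLocalDependency()
F_o = m(torch.randn(2, 150, 2048), torch.randn(2, 150, 2048),
        torch.randn(2, 2048), torch.randn(2, 2048))   # -> (2, 2048)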
}, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "This section begins by presenting the benchmark datasets and performing ablation experiments to assess the impact of each module on the proposed method. Subsequently, we compare the proposed method with the state-of-the-art methods." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b2", "b2", "b21", "b21", "b22" ], "table_ref": [], "text": "MIT-67 dataset [3] consists of 67 indoor scene classes with a total of 15620 images, and each scene category contains at least 100 images. Following the recommendations by the authors [3], each class has 80 images for training and 20 images for testing. The evaluation of the MIT dataset is challenging due to the large intra-class variation of indoor scenes.\nSUN397 [22] is a large dataset covering indoor and outdoor scenes. It contains 397 scene categories spanning 175 indoor scene categories and 220 outdoor scene categories, where each category contains at least 100 RGB images. In this study, we focus on the 175 indoor scene categories to evaluate our proposed approach. Following the evaluation protocol of the original paper [22], we randomly select 50 images from each scene class for training and another 50 for testing.\nPlaces365 Dataset [23] is one of the largest scene recognition datasets, which in- \ncludes" }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b33", "b34", "b37", "b32", "b32", "b38" ], "table_ref": [], "text": "Vision Transformer Adapter [34] that is pretrained on ADE20K dataset [35] is used as the semantic segmentation network. Given an image I ∈ R w×h×3 , its output is a score tensor M ∈ R w×h×l . This tensor can be used to generate the semantic label map L ∈ R w×h for I. The semantic prediction probability of location (i, j) in I is denoted by\nM i, j ∈ R 1×1×l\n, where l represents the number of semantic labels (l = 150). In L, each pixel (i, j) is assigned a value L i j , representing the semantic label of its corresponding pixel in the input image.\nIn order to thoroughly explore the spatial information in the input scene, it is essential to eliminate the impact of spatially irrelevant information on the network parameters when training the SSRM. As a result, a two-stage training procedure is implemented. Initially, we train the SSRM and IFEM separately. Subsequently, in the second stage, we fix the weights of SSRM and IFEM and train the succeeding modules only. To ease the training process, we use ALI-G [38] to optimize the network parameters. ALI-G requires only an initial learning rate hyperparameter, which is set to 0.1 in all our experiments. In the second stage of training, to prevent over-fitting, the dropout regularization function is used in the final classifier with an omission probability of 0.8.\nThanks to PyTorch and SAS-Net [33], our method is implemented under their opensource framework. When evaluating the final performance, the standard 10-crop testing method [33,39] is used for comparison with other methods." }, { "figure_ref": [], "heading": "Ablation analysis", "publication_ref": [], "table_ref": [], "text": "In this part, we conduct ablation studies to evaluate the effectiveness of the proposed method." 
}, { "figure_ref": [], "heading": "Evaluation of the Semantic Spatial Relation Module", "publication_ref": [], "table_ref": [ "tab_0", "tab_0", "tab_0" ], "text": "In this subsection, we evaluate the proposed SSRM on the MIT-67 and SUN397 datasets, and study the effect of the Adaptive Confidence Filter (ACF) on the recognition performance. The experimental results are presented in Table 1. As shown in Table 1, filtering the semantic segmentation score tensor by ACF significantly improves recognition performance. When compared to inputting the original score tensor, the use of ACF in SSRM increases the recognition accuracy by 3.43% to 5.75% on MIT-67 and 2.365% to 3.705% on SUN397, while reducing the number of Flops by 17.89G. Additionally, ChAM further improves the accuracy of SSRM with only a slight increase in Flops. Next, we investigate the impact of using ACF with different filtering domains on the recognition performance of SSRM. Our results\nshow that using a 2*2ACF improves the accuracy of SSRM on MIT-67 and SUN397 by 1.12% and 0.929%, respectively, compared to using a 4*4ACF. This phenomenon suggests that while ACF reduces the negative impact of semantic ambiguity, it may also cause the input tensor to lose some object information. Therefore, when selecting the filter domain size of ACF, both the loss of object information and the presence of semantic ambiguity should be considered. However, it is worth noting that even though 4*4ACF results in more object information being lost, it still leads to a notable improvement of 3.43% and 2.776% in the accuracy of SSRM on MIT-67 and SUN397 datasets, respectively, which also indicates the importance of ACF in SSRM. Furthermore, we pretrain the SSRM on the Places365 dataset, achieving higher accuracy as demonstrated in the last row of Table 1." }, { "figure_ref": [], "heading": "Evaluation of different methods of feature aggregation", "publication_ref": [], "table_ref": [ "tab_1", "tab_1", "tab_0", "tab_0", "tab_2" ], "text": "Combining the label map generated by the semantic segmentation and the feature map output by the backbone is essential for the semantic node aggregation module.\nThe output (i.e., the semantic feature sequence) of this module directly influences the exploration of feature dependencies by the subsequent module, so the preliminary experiments are used to select the appropriate aggregation mode. To generate a suitable semantic feature sequence, we try a series of methods to integrate the label map and feature map, as shown in Table 2. Among them, f iltering represents using ACF with the specified kernel size to filter the semantic segmentation score tensor, in the same way as the disambiguation filtering process in SSRM. Nearest means that the label map is interpolated to the specified size using nearest-neighbor interpolation, Bilinear means that the feature map is interpolated to the specified size using bilinear interpolation. For the sake of clarity in our presentation, we have assigned numbers to each of the aggregation methods. In this part, we first combine the output of SSRM and IFEM directly through Maximum and feed it to the classifier, using the result as the baseline, numbered as experiment 0.\nAs shown in Table 2, our approach consistently outperforms the baseline regardless 1, further highlighting the superiority of the ACF in methods that utilize semantic segmentation. 
Additionally, comparing experiments 2 and 5 with experiments 3 and 6, it can be seen that using a 4 * 4 filter domain produces slightly weaker results than using a 2 * 2 filter domain, echoing the results presented in Table 1, again indicating the need to consider both disambiguation filtering and preservation of object information in the image when processing the score tensor using ACF.\nUpon comparing experiments 2, 5, and 8, it is evident that finer semantic label assignment to features (i.e., interpolation of the feature map to a larger size) hurts final scene recognition. This phenomenon may be due to the interpolation algorithm causing a shift in feature position, which increases the possibility of assigning wrong semantic labels to features. Additionally, the comparison between experiments 5 and 6 also confirms this conclusion, where the large interpolation range results in a 2*2ACF filtered score tensor producing lower recognition than a 4*4ACF filtered score tensor, a result that should have been the opposite. Fortunately, using un-interpolated feature maps expedites the processing of this module. In summary, we finally choose the configu-ration from experiment 2 to generate the feature sequence. Specifically, the 2*2ACF is first used to process the score tensor to generate a suitable label map. This label map is interpolated to the size of the feature map by nearest-neighbor interpolation. Finally, the label map and feature map are combined to produce the final semantic feature sequences. To explore the complementary information of the features generated by IFEM and SSRM, we compare four different Encoder combination methods, namely, Product, Concatenation, Addition, and element-wise Maximum. Table 3 illustrates the results on the MIT-67 and SUN397 datasets. The element-wise maximum combination method outperforms the alternative methods. Compared to the concatenation methods, the maximum method consumes fewer computational resources while achieving higher performance. Compared to the product and addition methods, the maximum method consumes the same computational resources while preserving the key information more cleanly, which allows the Decoder to decode dependencies in greater depth. In summary, we finally choose the element-level Maximum to combine the encoded features." }, { "figure_ref": [], "heading": "Evaluation of different Encoder combination methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_11", "fig_11" ], "heading": "Network components analysis", "publication_ref": [], "table_ref": [ "tab_3", "tab_3", "tab_4" ], "text": "We present a detailed ablation study of our method on the MIT-67 dataset in Ta- 4 demonstrate that all the proposed modules positively boost the final recognition result. In Table 4, the best results are marked in bold and show improvements of up to 5.776% over the baseline. Moreover, it could be observed that compared to the baseline, the accuracy is improved by 3.761% after incorporating the spatial contextual information extracted by SSRM into the network, and if the Encoder and Decoder fuse the longrange dependencies among objects on top of this, the accuracy can be improved again by 2.015%. Also, for the Encoder-Decoder, we try using only the output of the Encoder without the Decoder for scene recognition, yielding slightly lower results than those produced using the full Encoder-Decoder. 
This phenomenon is because the Decoder mitigates the differences in modal features between PlacesCNN and SSRM, thus better exploring the complementary information between them. To better understand the learned feature representation, we evaluate our model and the baseline (PlacesCNN) on the test sets of MIT-67, Places365 7, Places365 14, and SUN397. The results of these evaluations are presented in Table 5. Additionally, we extract the features that will be fed into the classifier and use t-SNE to visualize them by plotting their 2-dimensional representations in Fig. 7. Each point represents an image, and points with the same color indicate images of the same category. The first row of Fig. 7 shows the visualization of the output features from PlacesCNN, while the second row shows the visualization of the features output by our method." }, { "figure_ref": [ "fig_11" ], "heading": "Feature visualization", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "As demonstrated in Table 5 and Fig. 7, the proposed method significantly reduces the differences among scenes of the same category and increases the differences among scenes of different categories. By leveraging the spatial contextual relationships and long-range dependency among objects within scenes, our method is able to learn a more effective feature representation and achieve better performance. " }, { "figure_ref": [], "heading": "Comparison with state-of-the-art methods", "publication_ref": [ "b16", "b3", "b32", "b13", "b39", "b4", "b32", "b4", "b40", "b41", "b42", "b38", "b12", "b13", "b40", "b41", "b42" ], "table_ref": [], "text": "To demonstrate the superior performance of our method, we compare SpaCoNet with existing state-of-the-art methods on MIT-67, Places-7, Places-14, and SUN397\ndatasets. The results are presented in Tables 6, 7 and 8. It is observed that our Spa-CoNet outperforms most existing methods. Compared to the methods [17,4,33,14,40,5] that utilize object information for scene recognition, our method gains better performance, demonstrating that it is effective to exploit the spatial contextual relationships and long-range dependency among objects. Furthermore, our method outperforms current multi-branch-based approaches [33,5,41,42,43], which aim to obtain multi-scale information of the scene. This phenomenon highlights the effectiveness of using object information as an additional source. Moreover, while some methods aim to improve accuracy by increasing the input size [39,13,14,41,42,43], our proposed method achieves superior performance using a 224 × 224 input size, resulting in lower consumption of arithmetic power. Thus, our experiments confirm the superiority and generalization of the proposed method for indoor scene recognition. " }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a framework, SpaCoNet, to simultaneously model the Spatial relation and Co-occurrence of objects for indoor scene recognition. Initially, we introduce a semantic spatial region module to model the spatial contextual relationships within a scene, in which an adaptive confidence filter is introduced to mitigate the negative impact of errors in the semantic segmentation results. 
Additionally, to explore co-occurrence among objects across scenes and distinguish the same objects in different scenes, we reformulate these objects as feature nodes, use attention mechanisms to model the long-range co-occurrence among object semantic features, and generate discriminative scene representation. Our approach is shown to be more competitive than existing approaches through comprehensive experiments.\nHowever, the performance of the proposed method exhibits limitations due to the semantic segmentation technique used. Specifically, the segmentation technique used in this study is capable of segmenting 150 semantic objects; however, it does not cover all objects present in the scene, which is one of the reasons why we incorporate global features into the semantic feature sequence. Two potential strategies can be considered to address this issue. The first strategy is to train segmentation techniques that can effectively segment a larger number of semantic objects. However, this strategy requires significant effort to annotate the dataset, making it a resource-intensive endeavor. Alternatively, the second strategy is to use an unsupervised or semi-supervised approach to enable the network to autonomously recognize objects within the scene, which is a direction we plan to explore in the future." }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "This work was jointly supported by the Key Development Program for Basic Research of Shandong Province under Grant ZR2019ZD07, the National Natural Science Foundation of China-Regional Innovation Development Joint Fund Project under Grant U21A20486, the Fundamental Research Funds for the Central Universities under Grant 2022JC011." } ]
Exploring the semantic context in scene images is essential for indoor scene recognition. However, due to the diverse intra-class spatial layouts and the coexisting interclass objects, modeling contextual relationships to adapt various image characteristics is a great challenge. Existing contextual modeling methods for indoor scene recognition exhibit two limitations: 1) During training, space-independent information, such as color, may hinder optimizing the network's capacity to represent the spatial context. 2) These methods often overlook the differences in coexisting objects across different scenes, suppressing scene recognition performance. To address these limitations, we propose SpaCoNet, which simultaneously models the Spatial relation and Co-
Semantic-guided spatial relation and object co-occurrence modeling for indoor scene recognition
[ { "figure_caption": "Since the attention mechanism and its variant, the Transformer, can effectively model long-range dependencies, it has been utilized in various applications. SCViT-Net[21] uses the multi-head self-attention mechanism to model the global interactions of the local structural features for remote sensing image classification. Wang et al.[20] introduce a hybrid CNN-Transformer feature extraction network to combine local details, spatial context, and semantic information for visual place recognition.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Pipeline of the proposed SpaCoNet for indoor scene representation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. shows the overall process of SpaCoNet. Our framework contains five modules: (a) semantic spatial relation module, (b) image feature extraction module, (c) semantic node feature aggregation module, (d) global-local dependency module, and (e) scene classifier. (a) provides feature maps that characterize the spatial relationships among object regions within the scene. (b) provides feature maps from PlacesCNN, which is pretrained on the large dataset Places365[23], to obtain the advanced representation of image information. The feature maps from (a) and (b), as well as the semantic segmentation label map of the input, are sent to (c). (c) performs feature aggregation on these two feature maps guided by the label map to generate two semantic feature sequences, which are sent to (d). (d) then explores the long-range co-occurrence among", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Semantic Spatial Relation Module (SSRM), where the part surrounded by the red dashed box represents the confidence filtering stage, which is used to handle semantic ambiguities.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. An example of Adaptive Confidence Filter.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "w 2 × h 222×l after processing all the channels of M. Fig.4illustrates an example of ACF. Compared to the segmentation map corresponding to M, in which internal points represent channels with the highest confidence within the 1 × 1 × l range, each pixel point in the segmentation map corresponding to M ′ represents the channel with the highest confidence within the 2 × 2 × l range in which it is located. By leveraging the coverage of the discriminative domain instead of a fixed threshold, ACF filters the semantic segmentation map, enabling it to adjust to the unique characteristics of each image. Consequently, ACF improves the precision and generalizability of the semantic segmentation map. Moreover, ACF reduces the subsequent networks' input size, which reduces the computational cost of SSRM. We provide a comparative analysis of this concept in Section 4.3.1.Next, the filtered semantic segmentation score tensor is processed using ResBlocks to explore spatial relation among object regions. ResBlocks includes BasicBlocks 2, 3, and 4 of the original ResNet50. Inspired by the fact that each channel value in M ′ (i, j)", "figure_data": "", "figure_id": "fig_5", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. 
The aggregation process of the Semantic Node Feature Aggregation Module for image features or spatial features. Note that this module handles these two feature maps separately.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Detailed structure of Global-Local Dependency Module.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "optimize the scene representation, we employ attention mechanisms (Encoder and Decoder) to model the long-range dependency between global node and local nodes. To prevent over-fitting, we only adopt a one-layer Encoder and one-layer Decoder. After the Global-Local Dependency Modeling, we obtain the optimized representation X 2 o . In classification processing, we extract the fully modified global feature node", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "about 1.8 million training images and 365 scene categories. This paper uses a simplified version, and only indoor scene categories are considered. To ensure a fair comparison with other indoor scene recognition methods[5], we used the same two scene class settings as them, namely Places365-7 and Places365-14. Places 365-7 contains seven indoor scenes: Bath, Bedroom, Corridor, Dining Room, Kitchen, Living Room, and Office. Places 365-14 contain 14 indoor scenes: Balcony, Bedroom, Dining Room, Home Office, Kitchen, Living Room, Staircase, Bathroom, Closet, Garage, Home Theater, Laundromat, Playroom, and Wet Bar. For the test set, we use the same setup as the official dataset[23].", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "ble 4 .4We evaluate the effect of four components: PlacesCNN (baseline), Semantic spatial relation module, Encoder, and Decoder (Global-Local dependency module). In this part, we average the output features of PlacesCNN and use them as inputs to the classifier to obtain the classification results as the baseline. The results in Table", "figure_data": "", "figure_id": "fig_10", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. 
Feature distributions of scene categories (different colors representing different categories).", "figure_data": "", "figure_id": "fig_11", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Ablation results for different architectures for the SSRM", "figure_data": "ArchitecturePretraining MIT-67SUN397Flops(G)resnet50Scratch64.40350.52927.24*4ACF + resnet50Scratch70.14952.8949.312*2ACF + resnet50Scratch69.62753.0359.31resnet50 ChAMScratch69.85154.62427.34*4ACF + resnet50 ChAMScratch73.28457.49.322*2ACF + resnet50 ChAMScratch74.40358.3299.32#2*2ACF +resnet50 ChAMPlaces36581.64266.9539.32# indicates that the model's parameters are pretrained on Places365.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation results for different feature aggregation methods", "figure_data": "ExperimentScore mapLabel mapFeature mapMIT-670---88.7311-Nearest to 14 * 14-89.85122 * 2 f iltering Nearest to 14 * 14-90.74634 * 4 f iltering Nearest to 14 * 14-90.1494--Bilinear to 224 * 224 89.77652 * 2 f iltering-Bilinear to 112 * 112 90.22464 * 4 f iltering-Bilinear to 56 * 5690.2997-Nearest to 56 * 56Bilinear to 56 * 5690.07582 * 2 f iltering Nearest to 56 * 56Bilinear to 56 * 5690.672of the method used to aggregate the label map and feature map. This phenomenon sug-gests that Global-Local Dependency Modeling can obtain more discriminative scenerepresentations. Moreover, upon experiments 1, 2, and 3; experiments 4, 5, and 6; andexperiments 7 and 8, we observe that using the ACF on the score tensor consistentlyyields better results than using the original score tensor, aligning with the phenomenonobserved in Table", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation results for different Encoder combination methods", "figure_data": "MethodMIT-67 SUN397Product89.92575.965Concatenation90.29975.859Addition90.37375.965Maximum90.74676.153", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study of all components", "figure_data": "PlacesCNN SSRM Encoder Decoder Accuracy✓---84.970-✓--81.642✓✓--88.731✓✓✓-90.075✓✓✓✓90.746Improvement Over Baseline (PlacesCNN)5.776", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study of feature visualization", "figure_data": "MethodMIT-67 Places365 7 Places365 14 SUN397Baseline (PlacesCNN)84.97093.087.64373.129Ours90.74694.28689.71476.153", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "State-of-the-art results on MIT-67 dataset", "figure_data": "ApproachesPublicationNetwork Input Size AccuracyPlaces365+VGGNet16[23]TPAMI'18224×22476.53Dual CNN-DL[29]AAAI'18224×22476.56NNSD+ICLC[30]TMM'20224×22484.3Multi-Resolution CNNs[39]TIP'17336×33686.7SDO[17]PR'18224×22486.76SAS-Net[33]PR'20224×22487.1CCF-Net[43]KBS'21512×51287.3DeepScene-Net[44]Expert Syst. Appl'22224×22471.0ARG-Net[14]TMM'22448×44888.13PL + AP + AI + IM[42]TMM'19448×44888.06LGN[13]TIP'20448×44888.06MR-Net[41]Appl. 
Soft Comput'22448×44888.08FCT-Net[19]IJMIR'22224×22489.17CSDML[45]PR'22224×22488.28CSRRM[24]IJCNN'23224×22488.731Ours-224×22490.746", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "State-of-the-art results on Places365-14 and Places365-7 dataset.", "figure_data": "ApproachesPublication Network Input SizePlaces-14 Places-7Word2Vec[6]ICRA'18224×22483.7-Deduce[4]IROS'19224×224-88.1BORM-Net[5]IROS'21224×22485.890.1OTS-Net[40]IROS'21224×22485.990.1CSRRM[24]IJCNN'23224×22488.71493.429Ours-224×22489.71494.286", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "State-of-the-art results on SUN397 dataset.", "figure_data": "ApproachesPublicationNetwork Input Size AccuracyPlaces365+VGGNet16[23]TPAMI'18224×22463.24Dual CNN-DL[29]AAAI'18224×22470.13NNSD+ICLC[30]TMM'20224×22464.78Multi-Resolution CNNs[39]TIP'17336×33672.0SDO[17]PR'18224×22473.41SAS-Net[33]PR'20224×22474.04ARG-Net[14]TMM'22448×44875.02PL + AP + AI + IM[42]TMM'19448×44874.12LGN[13]TIP'20448×44873.85MR-Net[41]Appl. Soft Comput'22448×44873.98FCT-Net[19]IJMIR'22224×22476.06AdaNFF[2]PR'22256×25674.18Ours-224×22476.153", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" } ]
Chuanxin Song; Hanbo Wu; Xin Ma; Yibin Li
[ { "authors": "X Zhu; J Men; L Yang; K Li", "journal": "International Journal of Machine Learning and Cybernetics", "ref_id": "b0", "title": "Imbalanced driving scene recognition with class focal loss and data augmentation", "year": "2022" }, { "authors": "Z Zou; W Liu; W Xing", "journal": "Pattern Recognition", "ref_id": "b1", "title": "Adanff: A new method for adaptive nonnegative multifeature fusion to scene classification", "year": "2022" }, { "authors": "A Quattoni; A Torralba", "journal": "IEEE", "ref_id": "b2", "title": "Recognizing indoor scenes", "year": "2009" }, { "authors": "A Pal; C Nieto-Granda; H I Christensen", "journal": "IEEE", "ref_id": "b3", "title": "Deduce: Diverse scene detection methods in unseen challenging environments", "year": "2019" }, { "authors": "L Zhou; J Cen; X Wang; Z Sun; T L Lam; Y Xu; Borm ", "journal": "IEEE", "ref_id": "b4", "title": "Bayesian object relation model for indoor scene recognition", "year": "2021" }, { "authors": "B X Chen; R Sahdev; D Wu; X Zhao; M Papagelis; J K Tsotsos", "journal": "", "ref_id": "b5", "title": "Scene classification in indoor environments for robots using context based word embeddings", "year": "2018" }, { "authors": "X Song; S Jiang; B Wang; C Chen; G Chen", "journal": "IEEE Transactions on Image Processing", "ref_id": "b6", "title": "Image representations with spatial object-to-object relations for rgb-d scene recognition", "year": "2019" }, { "authors": "L Xie; F Lee; L Liu; K Kotani; Q Chen", "journal": "Pattern Recognition", "ref_id": "b7", "title": "Scene recognition: A comprehensive survey", "year": "2020" }, { "authors": "T Ojala; M Pietikainen; T Maenpaa", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b8", "title": "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns", "year": "2002" }, { "authors": "R Margolin; L Zelnik-Manor; A Tal", "journal": "Springer", "ref_id": "b9", "title": "Otc: A novel local descriptor for scene classification", "year": "2014" }, { "authors": "M A Islam; M Kowal; S Jia; K G Derpanis; N D Bruce", "journal": "", "ref_id": "b10", "title": "Global pooling, more than meets the eye: Position information is encoded channel-wise in cnns", "year": "2021" }, { "authors": "J Qiu; Y Yang; X Wang; D Tao", "journal": "", "ref_id": "b11", "title": "Scene essence", "year": "2021" }, { "authors": "G Chen; X Song; H Zeng; S Jiang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b12", "title": "Scene recognition with prototype-agnostic scene layout", "year": "2020" }, { "authors": "H Zeng; X Song; G Chen; S Jiang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b13", "title": "Amorphous region context modeling for scene recognition", "year": "2020" }, { "authors": "M Egenhofer", "journal": "", "ref_id": "b14", "title": "A mathematical framework for the definition of topological relations", "year": "1990" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b15", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "X Cheng; J Lu; J Feng; B Yuan; J Zhou", "journal": "Pattern Recognition", "ref_id": "b16", "title": "Scene recognition with objectness", "year": "2018" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly; J Uszkoreit; N Houlsby", "journal": "ICLR", "ref_id": "b17", "title": "An image is worth 16x16 words: Transformers for image recognition 
at scale", "year": "2021" }, { "authors": "Y Xie; J Yan; L Kang; Y Guo; J Zhang; X Luan", "journal": "International Journal of Multimedia Information Retrieval", "ref_id": "b18", "title": "Fct: fusing cnn and transformer for scene classification", "year": "2022" }, { "authors": "Y Wang; Y Qiu; P Cheng; J Zhang", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b19", "title": "Hybrid cnn-transformer features for visual place recognition", "year": "2022" }, { "authors": "P Lv; W Wu; Y Zhong; F Du; L Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b20", "title": "Scvit: A spatial-channel feature preserving vision transformer for remote sensing image scene classification", "year": "2022" }, { "authors": "J Xiao; J Hays; K A Ehinger; A Oliva; A Torralba", "journal": "IEEE", "ref_id": "b21", "title": "Sun database: Large-scale scene recognition from abbey to zoo", "year": "2010" }, { "authors": "B Zhou; A Lapedriza; A Khosla; A Oliva; A Torralba", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b22", "title": "Places: A 10 million image database for scene recognition", "year": "2017" }, { "authors": "C Song; X Ma", "journal": "", "ref_id": "b23", "title": "Srrm: Semantic region relation model for indoor scene recognition", "year": "2023" }, { "authors": "A Oliva", "journal": "Elsevier", "ref_id": "b24", "title": "Gist of the scene", "year": "2005" }, { "authors": "L Fei-Fei; P Perona", "journal": "IEEE", "ref_id": "b25", "title": "A bayesian hierarchical model for learning natural scene categories", "year": "2005" }, { "authors": "S Lazebnik; C Schmid; J Ponce", "journal": "IEEE", "ref_id": "b26", "title": "Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories", "year": "2006" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "Communications of the ACM", "ref_id": "b27", "title": "Imagenet classification with deep convolutional neural networks", "year": "2017" }, { "authors": "Y Liu; Q Chen; W Chen; I Wassell", "journal": "", "ref_id": "b28", "title": "Dictionary learning inspired deep network for scene recognition", "year": "2018" }, { "authors": "L Xie; F Lee; L Liu; Z Yin; Q Chen", "journal": "IEEE Transactions on Multimedia", "ref_id": "b29", "title": "Hierarchical coding of convolutional features for scene recognition", "year": "2019" }, { "authors": "S Xu; L Luo; J Hu; B Yang; S Hu", "journal": "Knowledge-Based Systems", "ref_id": "b30", "title": "Semantic driven attention network with attribute learning for unsupervised person re-identification", "year": "2022" }, { "authors": "Q Qi; K Li; H Zheng; X Gao; G Hou; K Sun", "journal": "IEEE Transactions on Image Processing", "ref_id": "b31", "title": "Sguie-net: Semantic attention guided underwater image enhancement with multi-scale perception", "year": "2022" }, { "authors": "A López-Cifuentes; M Escudero-Vinolo; J Bescós; Á García-Martín", "journal": "Pattern Recognition", "ref_id": "b32", "title": "Semantic-aware scene recognition", "year": "2020" }, { "authors": "Z Chen; Y Duan; W Wang; J He; T Lu; J Dai; Y Qiao", "journal": "ICLR", "ref_id": "b33", "title": "Vision transformer adapter for dense predictions", "year": "2023" }, { "authors": "B Zhou; H Zhao; X Puig; S Fidler; A Barriuso; A Torralba", "journal": "", "ref_id": "b34", "title": "Scene parsing through ade20k dataset", "year": "2017" }, { "authors": "S Woo; J Park; J.-Y Lee; I S Kweon", "journal": "", "ref_id": 
"b35", "title": "Cbam: Convolutional block attention module", "year": "2018" }, { "authors": "J T Springenberg; A Dosovitskiy; T Brox; M Riedmiller", "journal": "", "ref_id": "b36", "title": "Striving for simplicity: The all convolutional net", "year": "2015" }, { "authors": "L Berrada; A Zisserman; M P Kumar", "journal": "PMLR", "ref_id": "b37", "title": "Training neural networks for and by interpolation", "year": "2020" }, { "authors": "L Wang; S Guo; W Huang; Y Xiong; Y Qiao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b38", "title": "Knowledge guided disambiguation for large-scale scene classification with multi-resolution cnns", "year": "2017" }, { "authors": "B Miao; L Zhou; A S Mian; T L Lam; Y Xu", "journal": "IEEE", "ref_id": "b39", "title": "Object-to-scene: Learning to transfer object knowledge to indoor scene recognition", "year": "2021" }, { "authors": "C Lin; F Lee; L Xie; J Cai; H Chen; L Liu; Q Chen", "journal": "Applied Soft Computing", "ref_id": "b40", "title": "Scene recognition using multiple representation network", "year": "2022" }, { "authors": "H Zeng; X Song; G Chen; S Jiang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b41", "title": "Learning scene attribute for scene recognition", "year": "2019" }, { "authors": "C Sitaula; S Aryal; Y Xiang; A Basnet; X Lu", "journal": "Knowledge-Based Systems", "ref_id": "b42", "title": "Content and context features for scene image representation", "year": "2021" }, { "authors": "P S Yee; K M Lim; C P Lee", "journal": "Expert Systems with Applications", "ref_id": "b43", "title": "Deepscene: Scene classification via convolutional neural network with spatial pyramid pooling", "year": "2022" }, { "authors": "Y Wang; P Liu; Y Lang; Q Zhou; X Shan", "journal": "Pattern Recognition", "ref_id": "b44", "title": "Learnable dynamic margin in deep metric learning", "year": "2022" } ]
[ { "formula_coordinates": [ 10, 134.27, 607.12, 31.37, 10.88 ], "formula_id": "formula_0", "formula_text": "M ′ ∈ R" }, { "formula_coordinates": [ 13, 133.77, 324.21, 69.24, 12.18 ], "formula_id": "formula_1", "formula_text": "map L ′ ∈ R w 2 × h 2 ." }, { "formula_coordinates": [ 13, 261.51, 445.03, 215.97, 33.96 ], "formula_id": "formula_2", "formula_text": "L o i, j =          1, L ′ i, j = o 0, L ′ i, j o (1)" }, { "formula_coordinates": [ 13, 239.19, 564.74, 238.29, 30.11 ], "formula_id": "formula_3", "formula_text": "x i rgb = AveragePooling (F I ⊙ L o ) x i spa = AveragePooling (F S ⊙ L o )(2)" }, { "formula_coordinates": [ 14, 230.56, 149.6, 246.92, 30.21 ], "formula_id": "formula_4", "formula_text": "X rgb = concat(x 0 rgb , x 1 rgb , x 2 rgb , ..., x 149 rgb ) X spa = concat(x 0 spa , x 1 spa , x 2 spa , ..., x 149 spa )(3)" }, { "formula_coordinates": [ 14, 217, 465.68, 260.48, 30.15 ], "formula_id": "formula_5", "formula_text": "X 1 rgb = concat(X rgb , GlobalAvgPooling(F I )) X 1 spa = concat(X spa , GlobalAvgPooling(F S ))(4)" }, { "formula_coordinates": [ 14, 267.94, 590.45, 209.54, 32.4 ], "formula_id": "formula_6", "formula_text": "X 2 rgb = X 1 rgb + P rgb emb X 2 spa = X 1 spa + P spa emb(5)" }, { "formula_coordinates": [ 15, 244.13, 492.86, 233.35, 66.02 ], "formula_id": "formula_7", "formula_text": "X 3 rgb = X 2 rgb + MS A(LN(X 2 rgb )) X 4 rgb = X 3 rgb + MLP(LN(X 3 rgb )) X 3 spa = X 2 spa + MS A(LN(X 2 spa )) X 4 spa = X 3 spa + MLP(LN(X 3 spa ))(6)" }, { "formula_coordinates": [ 15, 262.78, 619.11, 214.7, 13.08 ], "formula_id": "formula_8", "formula_text": "X o = Max(X 4 spa , X 4 rgb )(7)" }, { "formula_coordinates": [ 16, 254.45, 175.99, 223.03, 30.15 ], "formula_id": "formula_9", "formula_text": "X 1 o = X o + MS A(LN(X o )) X 2 o = X 1 o + MLP(LN(X 1 o ))(8)" }, { "formula_coordinates": [ 16, 156.38, 387.31, 69.46, 11.26 ], "formula_id": "formula_10", "formula_text": "F o ∈ R c from X 2" }, { "formula_coordinates": [ 16, 238.31, 444.36, 235.3, 29.68 ], "formula_id": "formula_11", "formula_text": "L = - N n=1 y i log exp(F o (n)) N m=1 exp(F o (m)) (9" }, { "formula_coordinates": [ 16, 473.61, 454.48, 3.87, 8.9 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 17, 133.77, 289.43, 25.45, 8.9 ], "formula_id": "formula_13", "formula_text": "cludes" }, { "formula_coordinates": [ 17, 134.27, 553.84, 52.63, 11.26 ], "formula_id": "formula_14", "formula_text": "M i, j ∈ R 1×1×l" } ]
10.1145/3539618.3591966
2023-05-22
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b22", "b23", "b3", "b4", "b12", "b2", "b6", "b1", "b15", "b25", "b27", "b0", "b7", "b16", "b17", "b19", "b29", "b31", "b33", "b10", "b13", "b14", "b30", "b11", "b21", "b20", "b32" ], "table_ref": [], "text": "Query reformulation refers to the refinement of a seed query to obtain desired results by bridging the linguistic chasm of query [23]. Empirically, approximately 28% of queries are revised from the original query [24]. It can be categorized into three approaches: (i) query reduction [4,5,13], removing extraneous query terms, (ii) query expansion [3,7], adding extra query terms, and (iii) query refinement [2,16,26,28], modifying the original query terms. Among them, query reduction is particularly beneficial for reducing long queries to better reflect user intent by narrowing down a search space. Interestingly, about 27% of zero-hit queries, where users do not click on any documents, can be turned into successful ones by removing unnecessary query terms on an e-commerce site [1]. In this sense, query reduction is a simple yet effective way to reflect users' information needs.\nExisting query reduction methods have been widely studied in two directions. First, conventional studies [8,17,18,20,30,32,34] focus on identifying salient features to improve the quality of search results in a limited resource, e.g., the TREC dataset [11]. Given a query, they define an optimal reduction by measuring the retrieval effectiveness for all reductions. Although they improve retrieval performance, it is difficult to measure the generalized performance due to the limited dataset (e.g., query-document relevance not being fully annotated). It is thus non-trivial to evaluate whether the defined optimal reduction effectively captures the actual user intent. Second, several studies [14,15,31] collect original and reduced query pairs from user search logs and analyze the characteristics of the dropped terms. Since they utilize real-world search logs, it is possible to capture hidden user intent in general scenarios. To fully exploit search logs, we need to consider three factors: (i) the meaning of query terms can vary depending on the context, (ii) the original and reduced queries are semantically consistent since they imply the same intent, and (iii) there may be inevitable noise in search logs. However, most existing methods are based on simple rule-based or RNN-based methods and do not take them into account well.\nThe pre-trained language model (PLM), e.g., BERT [12], has recently achieved incredible performance improvements in the IR community. For document ranking, monoBERT [22] adopts BERT to capture complex matching signals between a query and documents. For query expansion, CEQE [21] and BERT-QE [33] use BERT to select terms or chunks in documents that are contextually similar to the query based on pseudo-relevance feedback. Inspired by these studies, we attempt to leverage the PLM to better capture the contextual meaning of queries.\nTo this end, we propose a novel PLM-based query reduction model, called Contextualized Query Reduction (ConQueR), using real-world search logs. We develop two methods with different views: core term extraction and sub-query selection. (i) For core term extraction, it takes the original query as input and distinguishes whether each term is important at the term level. That is, it validates whether a given term deviates from user intent in the context. 
(ii) For sub-query selection, it takes the original query and its candidate sub-query as input. It then determines whether a given sub-query sufficiently represents the original query at the sequence level. Hence, it evaluates whether the candidate sub-query is semantically complete and it is suitable for reflecting the original intent.\nFinally, we aggregate the two methods because they complement each other by tackling query reduction at different levels. For example, since the core term extraction method determines importance at the term level, it may remove a subset of the terms that only make sense when they exist together (e.g., \"Silicon Valley\"). Meanwhile, the sub-query selection method at the sequence level helps preserve the coherent semantics between the original query and its sub-query; however, it may also yield a high relevance score even if the sub-query is not sufficiently reduced. Therefore, it is beneficial to combine them to identify the most appropriate sub-query among the candidates. We additionally adopt a robust training strategy with a truncated loss to deal with noisy search logs. Experimental results show that the proposed model outperforms existing models with gains of up to 8.45% on real-world search logs." }, { "figure_ref": [ "fig_0" ], "heading": "PROPOSED MODEL", "publication_ref": [ "b9", "b11" ], "table_ref": [], "text": "In this section, we propose a novel query reduction model, namely Contextualized Query Reduction (ConQueR). To effectively reduce a long query while preserving the semantics of the original query, ConQueR exploits the contextualized representations of queries using PLMs [10,12] with two different views. As shown in Figure 1, one extracts the core terms from the query at the term level; it determines whether each term is necessary and can leave only the appropriate terms that capture the user's intent. The other selects the most appropriate sub-query among the candidate sub-queries at the sequence level; it evaluates whether a given sub-query is suitably reduced or not by measuring the coherence of the original and the sub-query. It evaluates whether the sub-query properly captures the semantics of the user intent. Since they learn query reduction at different granularities, we finally combine them to produce synergies in performance. Furthermore, we adopt a robust training strategy with a truncated loss to deal with noisy samples in search logs." }, { "figure_ref": [], "heading": "Core Term Extraction", "publication_ref": [ "b11" ], "table_ref": [], "text": "The core term extraction method determines the importance of each term in the query. It effectively predicts important terms and drops all nonessential terms. Model architecture. Given a query 𝑞, we first tokenize it and take them as input, including two special tokens [CLS] and [SEP]. where each embedding vector e 𝑖 is combined with a positional embedding. Each embedding vector is passed into the transformer encoder and processed into hidden vector h 𝑖 ∈ R 𝑘 for 𝑖 ∈ {[CLS], 1, . . . , |𝑞|, [SEP]} where 𝑘 is the dimension of embedding vectors. Note that we follow the conventional structure in BERT [12]. We evaluate whether each query term is important or not at a term level. That is, each hidden vector h 𝑖 is projected into term importance score ŷ𝑖 .\ne [CLS] , e 1 , . . . , e |𝑞 | , e [SEP] ,(1)\nŷ𝑖 = 𝜎 (w 𝑐 h 𝑖 + 𝑏 𝑐 ),(2)\nwhere 𝜎 (•) is the sigmoid function, w 𝑐 ∈ R 1×𝑘 is a weight parameter, and 𝑏 𝑐 is a bias. 
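To make the term-level scoring head of Eq. (2) concrete, the following sketch pairs a pre-trained encoder with a per-token linear-plus-sigmoid layer. The English checkpoint name and the toy query are placeholders (the experiments later use an ELECTRA-base model on Korean search logs), and the mapping from subword tokens back to whole query terms is glossed over.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CoreTermExtractor(nn.Module):
    """Scores every query token with a retention probability, as in Eq. (2)."""
    def __init__(self, plm_name: str = "google/electra-base-discriminator"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        self.scorer = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        return torch.sigmoid(self.scorer(h)).squeeze(-1)   # (batch, seq_len)

tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = CoreTermExtractor()
batch = tokenizer(["cheap flights to new york city tonight"],
                  return_tensors="pt", padding=True)
scores = model(batch["input_ids"], batch["attention_mask"])
keep = scores > 0.5   # tokens retained in the reduced query (special tokens ignored in practice)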
Here, we determine whether each term should be retained or dropped; the retained terms indicate core terms reflecting the user's actual information need, and the dropped terms are unnecessary terms disrupting desired search results.
Training & inference. As the core term extraction method learns reduction at the term level, we adopt a binary cross-entropy loss for each term:
L core = − ∑_{i=1}^{|𝑞|} [ 𝑦 𝑖 log ŷ𝑖 + (1 − 𝑦 𝑖 ) log(1 − ŷ𝑖 ) ], (3)
which is defined over an original and reduced query pair (𝑞, 𝑞 + ) ∈ R, where R is the training set of query pairs. 𝑦 𝑖 is 1 if the ground-truth reduced query 𝑞 + contains the 𝑖-th query term 𝑞 𝑖 , and 0 otherwise. At inference, we remove the terms with an importance score ŷ𝑖 less than 0.5." }, { "figure_ref": [], "heading": "Sub-query Selection", "publication_ref": [ "b21" ], "table_ref": [], "text": "The sub-query selection method takes the original query and a candidate sub-query as input to the transformer encoder. It determines whether the given sub-query suitably reduces the original query. The coherence of the two queries is evaluated by the cross-encoder mechanism, which performs all-to-all interactions across all terms of both queries, as discussed in [22]. Model architecture. Given the original query 𝑞 and a candidate sub-query 𝑞 ′ , we tokenize them and pass the concatenated sequence into the transformer encoder:
e [CLS] , e 1 , . . . , e |𝑞 | , e [SEP] , e ′ 1 , . . . , e ′ |𝑞 ′ | , e [SEP] . (4)
By passing them to the PLM, we utilize the contextualized hidden vector h [CLS] to quantify the coherence of the two sequences. It is projected into a sub-query score 𝑠 sub (𝑞, 𝑞 ′ ) for the pair of the query 𝑞 and its sub-query 𝑞 ′ :
𝑠 sub (𝑞, 𝑞 ′ ) = w 𝑠 h [CLS] + 𝑏 𝑠 ,(5)
where w 𝑠 ∈ R 1×𝑘 is a weight parameter, and 𝑏 𝑠 is a bias.
Training & inference. As the sub-query selection method learns reduction at the sequence level, we adopt the negative log-likelihood of the positive query pair (𝑞, 𝑞 + ) as the loss:
L sub = −log [ exp(𝑠 sub (𝑞, 𝑞 + )) / ( exp(𝑠 sub (𝑞, 𝑞 + )) + ∑_{𝑞 − ∈ N (𝑞)} exp(𝑠 sub (𝑞, 𝑞 − )) ) ], (6)
where N (𝑞) is the set of negative sub-queries for the original query 𝑞. Although it is possible to utilize all sub-queries except for the ground truth as the negative set, doing so significantly increases the training time. We thus sample a subset of them. (Empirically, we set the size of N (𝑞) to five and re-sample them for each epoch.)
We infer the best sub-query with a greedy search instead of enumerating all the sub-queries, i.e., all subsets of query terms, which incurs prohibitive complexity. We first generate all possible candidates that delete only a single term from the original query and compute the score for these |𝑞| candidates to find the top-1 sub-query. Likewise, we generate new |𝑞| − 1 candidates by deleting one term from the previous top-1 and scoring them. We repeatedly generate candidates and select the top-1 sub-query until the top-1 sub-query no longer changes." }, { "figure_ref": [], "heading": "Aggregating Two Methods", "publication_ref": [ "b7", "b28" ], "table_ref": [], "text": "Since the two methods capture contextual information at multifaceted levels, i.e., term and sequence levels, they complement each other. However, it is non-trivial to aggregate them because they derive their reductions in distinct ways; one removes terms in the original query using term importance scores, while the other selects the most appropriate sub-query among the candidates using sub-query scores. To bridge the gap between the two methods, we additionally obtain sub-query scores from the core term extraction method. 
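The greedy inference described above can be sketched as follows. The scoring callback and the exact stopping rule (stop once no single deletion improves on the current sub-query) are assumptions for illustration; score_fn would wrap 𝑠 sub (𝑞, 𝑞 ′ ) or the aggregated score defined next.

from typing import Callable, List, Tuple

def greedy_reduce(query_terms: List[str],
                  score_fn: Callable[[List[str], List[str]], float]) -> List[str]:
    """Greedy sub-query search: at each step, try deleting every single remaining
    term, keep the highest-scoring candidate, and stop when no deletion improves
    on keeping the current sub-query (one plausible reading of the stopping rule)."""
    best = list(query_terms)
    best_score = score_fn(query_terms, best)
    while len(best) > 1:
        candidates = [best[:i] + best[i + 1:] for i in range(len(best))]
        scored: List[Tuple[float, List[str]]] = [
            (score_fn(query_terms, c), c) for c in candidates]
        top_score, top_sub = max(scored, key=lambda x: x[0])
        if top_score <= best_score:   # top-1 unchanged -> stop
            break
        best, best_score = top_sub, top_score
    return best

# Example wiring (model.score is a hypothetical wrapper around the scorer):
# score_fn = lambda q, sub: model.score(" ".join(q), " ".join(sub))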
Specifically, we define a sub-query score using probabilities for terms that are present or deleted in the subquery 𝑞 ′ . First, we derive the probability that each query term is retained (or removed) as follows:\n𝑝 core (𝑞 𝑖 |𝑞, 𝑞 ′ ) = ŷ𝑖 , if 𝑞 𝑖 ∈ 𝑞 ′ , 1 -ŷ𝑖 , otherwise, for 𝑖 ∈ {1, . . . , |𝑞|}.(7)\nThe term importance score ŷ𝑖 from (2) equals the retention probability, so that 1 -ŷ𝑖 means the removal probability. Then, we estimate the score of the sub-query 𝑞 ′ using by averaging the term probabilities.\n𝑠 core (𝑞, 𝑞 ′ ) = 1 |𝑞| |𝑞 | ∑︁ 𝑖=1 𝑝 core (𝑞 𝑖 |𝑞, 𝑞 ′ ).(8)\nFinally, we aggregate the two methods by summing the scores, 𝑠 sub (𝑞, 𝑞 ′ ) in Eq. ( 5) and 𝑠 core (𝑞, 𝑞 ′ ) in Eq. (8).\n𝑠 (𝑞, 𝑞 ′ ) = 𝑠 sub (𝑞, 𝑞 ′ ) + 𝛼 • 𝑠 core (𝑞, 𝑞 ′ ),(9)\nwhere 𝛼 is a hyperparameter to control the weight of 𝑠 core (𝑞, 𝑞 ′ ).\n(When 𝛼 = 4, we empirically observe the best performance.) To select the final sub-query, we use the same greedy search as in the sub-query selection method, i.e., we repeatedly create candidate sub-queries and select the top-1 sub-query until it is no longer changed.\nDenoising training strategy. For robust training, we adopt a lossbased denoising strategy for two methods. The truncated loss is a technique for dynamically pruning large-loss samples during the training process [29]. Assuming that the large-loss samples are noisy, we dismiss the top 𝜖 (𝑇 )% large-loss samples from the training set R, where 𝜖 (𝑇 ) is a drop rate function with respect to the training epoch 𝑇 . Specifically, we define the drop rate function as 𝜖 (𝑇 ) = 𝑚𝑖𝑛( 𝜖 𝑚𝑎𝑥 𝛾 𝜖 𝑁 -1 (𝑇 -1), 𝜖 𝑚𝑎𝑥 ), where 𝜖 𝑚𝑎𝑥 , 𝛾 and 𝜖 𝑁 are hyperparameters that control the drop rate per epoch." }, { "figure_ref": [], "heading": "EVALUATION 3.1 Experimental Setup", "publication_ref": [ "b14", "b30", "b5", "b13", "b13", "b8", "b5", "b26", "b18", "b24", "b17", "b19", "b29", "b31", "b33", "b9" ], "table_ref": [], "text": "Datasets. We collect the search logs from a commercial web search engine 2 . The dataset consists of Korean query pairs, where two queries are successive in a session, and the latter is always a terminological subset of the former. The total number of query pairs is 239,976, while the number of unique original queries is 104,002. This means that each query is reduced to 2.3 different forms on average, reflecting the various query reformulations of users. We split them into three subsets, i.e., a training set (83,202, 80%), a validation set (10,400, 10%), and a test set (10,400, 10%), depending on the number of unique original queries. To remove the noise of the validation and test sets, we use the query pairs where the original query is always reduced to the same sub-query and appears in two or more sessions.\nCompeting models. We adopt seven existing methods as baselines. Rule-based methods [15,31] are (i) LEFTMOST (LM): deleting 𝑁 𝑞 leftmost terms, (ii) RIGHTMOST (RM): deleting 𝑁 𝑞 rightmost terms, (iii) DF: deleting the most frequently deleted 𝑁 𝑞 terms on the training set, and (iv) CDF: deleting 𝑁 𝑞 terms with the highest ratio of #deletions/#appearances on the training set. For DF and CDF, backoff schemes are noted after the semi-colon. For neural methods [6,14], GRU [14] predicts the removal probability of each query term using a bi-directional GRU [9]. SEQUER [6] is a transformer [27] model that takes the original query as input and generates a reformulated query, and it is trained from scratch in the original paper setting. 
For a fair comparison, we initialize it with a PLM (BART base [19]) and denote it as SEQUER BART . As an additional baseline, SEQUER GPT is a transformer decoder model initialized with GPT-2 (125M) [25]. ConQueR core , ConQueR sub , and ConQueR agg indicate the core term extraction, the sub-query selection, and the aggregated model, respectively. Some methods [18,20,30,32,34] are excluded since the actual ranking of sub-queries, and various query features cannot be obtained from the search logs.\nEvaluation metrics. We use an exact match score (EM), accuracy (Acc), precision (P), recall (R), and F1-score (F1). For each query, EM is 1 if the reduction is correct and 0 otherwise. For Acc, we divide the number of correct terms by the number of terms in a query. To compute P, R, and F1, we set the retention as a positive class and the removal as a negative class. All evaluation metrics are computed for each query and averaged.\nReproducibility. For the proposed models, we initialized them with ELECTRA base [10]. We used the Adam optimizer and set the maximum sequence length to 60 and 120 for ConQueR core and ConQueR sub , respectively. We set the dropout ratio to 0.2 and the maximum epoch to 5. On the validation set, we ran a grid search for batch size, learning rate, and warm-up ratio for a linear scheduler, and set them to 32, 1e-5, and 0.2, respectively. For truncated loss, 𝜖 𝑚𝑎𝑥 , 𝜖 𝑁 , and 𝛾 were set to 0.3, 4, and 2, respectively. For aggregation, 𝛼 is tuned in { 1 8 , 1\n8}. Experimental results are averaged over five runs with different seeds. For rule-based methods, we set the number of reduced terms 𝑁 𝑞 to 1 because the majority of queries exclude only one word." }, { "figure_ref": [ "fig_1" ], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Table 1 shows the overall query reduction accuracy. The key observations are as follows: (i) All proposed models consistently outperform the baselines. Although SEQUER BART and SEQUER GPT are the most competitive baselines, they are less effective than ours in capturing contextual information. ConQueR agg surpasses the best competing model by 8.45% gain in EM and 5.54% gain in Acc. (ii) ConQueR agg shows higher accuracy than ConQueR core and ConQueR sub , proving that the different characteristics of the two methods are fully exploited by aggregation. (iii) Simple rule-based methods show relatively lower accuracy than neural methods, suggesting the limitations of naive approaches that cannot account for the contextual meaning of query terms. If we set the number of removing terms 𝑁 𝑞 =2, RIGHTMOST and CDF;RM show 0.080 and 0.061 for EM, and if 𝑁 𝑞 is greater than 2, the performance is severely degraded.\nLastly, we conduct a qualitative evaluation for query reduction. Given the original query and the reduced queries of anonymized models, users were asked whether each reduction was appropriate or not. We calculated the Positive Response Ratio as 𝑢 𝑠𝑒𝑙𝑒𝑐𝑡 𝑢 #𝑢𝑠𝑒𝑟𝑠 and averaged it over all queries, where 𝑠𝑒𝑙𝑒𝑐𝑡 𝑢 ∈ {0, 1} indicates whether the user 𝑢 responded that the reduction is appropriate or not. As in Figure 2, ConQueR agg shows the best performance, indicating that the two methods are well aggregated. On average, about 35% of the users think that ConQueR agg correctly reduces the original queries. Interestingly, ConQueR core and ConQueR sub perform better on both subsets of Long and Short. 
This is because the longer the query, the more it needs to be reduced, and ConQueR core is more suited for deleting multiple terms. While ConQueR sub tends to give a high score to sub-queries that remove only a single term and are contextually similar to the original user intent." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel query reduction model, namely Contextualized Query Reduction (ConQueR), with two different views. The core term extraction method distinguishes whether each term is essential at the term level. The sub-query selection method determines whether a sub-query is semantically coherent with the original query at the sequence level. To effectively consider query reduction in a multifaceted way, two methods are aggregated and complement each other. In addition, the truncated loss is employed to reduce the effect of noisy samples from search logs. Extensive results show that the proposed model outperforms existing models on the real-world search log dataset collected from a commercial web search engine. The user study verifies that it reduces queries more appropriately than the other methods. " }, { "figure_ref": [], "heading": "A ADDITIONAL RESULTS", "publication_ref": [], "table_ref": [], "text": "Effect of the number of reduced terms. We further evaluate the proposed models depending on the number of reduced terms in Table 2. For rule-based methods, we set 𝑁 𝑞 =1 for single-term deletion and 𝑁 𝑞 =2 for multi-term deletion. ConQueR agg shows the best performance with gains of 7.75% and 10.26% in EM over the best baseline for single and multi-term deletion, respectively. ConQueR sub performs better for single term deletion, while ConQueR core performs better for multi-term deletion. This suggests that ConQueR sub tends to give a high score to sub-queries with only a single term removed because they are contextually similar to the original user intent. The main observations are as follows: (i) The proposed models consistently achieve the best performance in EM for both single and multi-term deletion. Effect of the aggregating parameter. ConQueR SS , respectively. F1 shows the highest accuracy when 𝛼=1 and the highest accuracy in EM when 𝛼 is about 0.9. Additionally, to validate the gain from the ensemble method, we also aggregate the two same components with different parameter initialization. Using two core-term or sub-query selection components achieve 0.896 and 0.893 in EM, respectively." }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENT", "publication_ref": [], "table_ref": [], "text": "This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2022-0-00680, 2022-0-01045, 2019-0-00421, 2021-0-02068, and IITP-2023-2020-0-01821)." } ]
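As an illustration of the inference procedure described in the sections above (Eqs. 5 and 7–9), the sketch below runs the greedy sub-query search with the aggregated score s(q, q') = s_sub(q, q') + α · s_core(q, q'). The two scoring inputs are stand-ins for the trained sub-query selection and core term extraction components, duplicate terms are handled naively, and the stopping rule reflects one reasonable reading of the paper's description.

```python
from typing import Callable, Dict, List

def aggregated_score(query: List[str], sub: List[str],
                     s_sub: Callable[[List[str], List[str]], float],
                     term_scores: Dict[str, float], alpha: float = 4.0) -> float:
    """s(q, q') = s_sub(q, q') + alpha * s_core(q, q')   (Eqs. 7-9).
    term_scores[t] is the retention probability y_hat(t) from the core term
    extraction head; kept terms contribute y_hat, dropped terms 1 - y_hat."""
    kept = set(sub)  # simplification: ignores repeated terms in the query
    s_core = sum(term_scores[t] if t in kept else 1.0 - term_scores[t]
                 for t in query) / len(query)
    return s_sub(query, sub) + alpha * s_core

def greedy_reduce(query: List[str], s_sub, term_scores, alpha: float = 4.0) -> List[str]:
    """Greedy search: repeatedly drop the single term whose removal yields the
    highest aggregated score, stopping once no deletion improves the score."""
    current = list(query)
    current_score = aggregated_score(query, current, s_sub, term_scores, alpha)
    while len(current) > 1:
        candidates = [current[:i] + current[i + 1:] for i in range(len(current))]
        best_score, best_cand = max(
            (aggregated_score(query, c, s_sub, term_scores, alpha), c) for c in candidates)
        if best_score <= current_score:      # top-1 sub-query no longer changes
            break
        current, current_score = best_cand, best_score
    return current
```

The default α = 4 mirrors the value the paper reports as performing best on its validation set.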
Query reformulation is a key mechanism to alleviate the linguistic chasm of queries in ad-hoc retrieval. Among various solutions, query reduction effectively removes extraneous terms from long queries and specifies a concise user intent. However, it is challenging to capture hidden and diverse user intent. This paper proposes Contextualized Query Reduction (ConQueR) using a pre-trained language model (PLM). Specifically, it reduces verbose queries with two different views: core term extraction and sub-query selection. One extracts core terms from an original query at the term level, and the other determines whether a sub-query is a suitable reduction of the original query at the sequence level. Since they operate at different levels of granularity and complement each other, they are finally aggregated in an ensemble manner. We evaluate the reduction quality of ConQueR on real-world search logs collected from a commercial web search engine. It achieves up to 8.45% gains in exact match scores over the best competing model.
ConQueR: Contextualized Query Reduction using Search Logs
[ { "figure_caption": "Figure 1 :1Figure 1: Model architecture of ConQueR. Note that while only the original query 𝑞 is used as input for core term extraction, a pair of the original query 𝑞 and the candidate subquery 𝑞 ′ is used for sub-query selection.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: User study of existing query reduction models and ours. We reported user responses on 25 queries from 88 users. Short (< 5) and Long (≥ 5) consist of 13 and 12 queries, and are set based on the length of the original queries.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig 3 shows the effect of hyperparameter 𝛼 balancing the core-term component and sub-query component. When 𝛼 is 0 or 1, ConQueR is equal to ConQueR CE or", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "[CLS] , e 1 , . . . , e |𝑞 | , e [SEP] , e |𝑞 |+1 , . . . , e |𝑞 |+ |𝑞 ′ | , e [SEP] .", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of existing query reduction models and ours. The best and second-best models are marked in bold and underlined. Significant differences (𝑝 < 0.01) between baselines and ConQueR agg are denoted with *.", "figure_data": "ModelsEMAccPRF1LEFTMOST0.170* 0.338* 0.396* 0.415* 0.403*RIGHTMOST 0.693* 0.794* 0.801* 0.838* 0.814*DF;RM0.596* 0.738* 0.761* 0.801* 0.775*CDF;RM0.697* 0.814* 0.822* 0.864* 0.836*GRU0.752* 0.882* 0.885* 0.920* 0.893*SEQUER BART 0.833* 0.884* 0.894* 0.901* 0.894*SEQUER GPT0.840* 0.885* 0.895* 0.901* 0.896*ConQueR core0.8920.9280.9350.9340.932ConQueR sub0.9050.9290.9350.9390.935ConQueR agg0.911 0.934 0.939 0.943 0.940", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of query reduction models and ours according to the number of reduced terms. The best and second-best models are marked in bold and underlined. The number in parentheses indicates the number of queries in each set. Significant differences (𝑝 < 0.01) between the baselines and ConQueR agg are denoted with *. Accuracy of ConQueR agg over varying 𝛼.", "figure_data": "0.940.940.930.93F10.91 0.92F1 EM0.91 0.92EMReduction typeModelsEMAccPRF10.90.9LEFTMOST RIGHTMOST 0.780* 0.819* 0.836* 0.836* 0.836* 0.191* 0.344* 0.407* 0.407* 0.407*0.890.89DF;RM0.670* 0.755* 0.790* 0.790* 0.790*0 0.10.30.50.70.9 1CDF;RM0.784* 0.834* 0.854* 0.854* 0.854*Single term deletion (9,235)GRU SEQUER BART SEQUER GPT0.772* 0.891* 0.899* 0.920* 0.902* 0.859* 0.893* 0.908* 0.902* 0.904* 0.864* 0.896* 0.909* 0.905* 0.906*Figure 3:ConQueR core0.9090.9370.9460.9390.941ConQueR sub0.9250.9420.9500.9470.948ConQueR agg0.9310.9460.9530.9500.951LEFTMOST0.147* 0.347* 0.262* 0.279* 0.266*RIGHTMOST 0.714* 0.819* 0.775* 0.802* 0.783*DF;RM0.542* 0.756* 0.721* 0.754* 0.730*CDF;RM0.682*0.8380.803* 0.841* 0.814*Multi term deletion (1,165)GRU SEQUER BART SEQUER GPT0.588* 0.810* 0.778* 0.634* 0.811* 0.784* 0.652* 0.804* 0.783* 0.873* 0.814* 0.925 0.823* 0.892 0.821*ConQueR core0.7580.8590.8450.8960.861ConQueR sub0.7430.8260.8150.8730.833ConQueR agg0.7520.8400.8270.8890.847", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Hye-Young Kim; Minjin Choi; Sunkyung Lee; Eunseong Choi; Jongwuk Lee
[ { "authors": "Yuki Amemiya; Tomohiro Manabe; Sumio Fujita; Tetsuya Sakai", "journal": "", "ref_id": "b0", "title": "How Do Users Revise Zero-Hit Product Search Queries?", "year": "2021" }, { "authors": "G Peter; Anick", "journal": "", "ref_id": "b1", "title": "Using terminological feedback for web search refinement: a log-based study", "year": "2003" }, { "authors": "Hiteshwar Kumar; Azad ; Akshay Deepak", "journal": "Inf. Process. Manag", "ref_id": "b2", "title": "Query expansion techniques for information retrieval: A survey", "year": "2019" }, { "authors": "Michael Bendersky; W Bruce Croft", "journal": "", "ref_id": "b3", "title": "Discovering Key Concepts in Verbose Queries", "year": "2008" }, { "authors": "Michael Bendersky; Donald Metzler; W Bruce Croft", "journal": "", "ref_id": "b4", "title": "Parameterized Concept Weighting in Verbose Queries", "year": "2011" }, { "authors": "Kaibo Cao; Chunyang Chen; Sebastian Baltes; Christoph Treude; Xiang Chen", "journal": "", "ref_id": "b5", "title": "Automated Query Reformulation for Efficient Search based on Query Logs From Stack Overflow", "year": "2021" }, { "authors": "Claudio Carpineto; Giovanni Romano", "journal": "ACM Comput. Surv", "ref_id": "b6", "title": "A Survey of Automatic Query Expansion in Information Retrieval", "year": "2012" }, { "authors": "Messaoud Chaa; Omar Nouali; Patrice Bellot", "journal": "", "ref_id": "b7", "title": "Verbose Query Reduction by Learning to Rank for Social Book Search Track", "year": "2016" }, { "authors": "Junyoung Chung; Çaglar Gülçehre; Kyunghyun Cho; Yoshua Bengio", "journal": "CoRR", "ref_id": "b8", "title": "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling", "year": "2014" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b9", "title": "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators", "year": "2020" }, { "authors": "L A Charles; Nick Clarke; Ian Craswell; Gordon V Soboroff; Cormack", "journal": "TREC", "ref_id": "b10", "title": "Overview of the TREC 2010 Web Track", "year": "2010" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b11", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Manish Gupta; Michael Bendersky", "journal": "", "ref_id": "b12", "title": "Information Retrieval with Verbose Queries", "year": "2015" }, { "authors": "Kishaloy Halder; Heng-Tze; Ellie Cheng; In Ka; Georgios Chio; Tao Roumpos; Ritesh Wu; Agarwal", "journal": "CoRR", "ref_id": "b13", "title": "Modeling Information Need of Users in Search Sessions", "year": "2020" }, { "authors": "Rosie Jones; Daniel C Fain", "journal": "", "ref_id": "b14", "title": "Query word deletion prediction", "year": "2003" }, { "authors": "Jürgen Koenemann; Nicholas J Belkin", "journal": "", "ref_id": "b15", "title": "A Case for Interaction: A Study of Interactive Information Retrieval Behavior and Effectiveness", "year": "1996" }, { "authors": "Bevan Koopman; Liam Cripwell; Guido Zuccon", "journal": "", "ref_id": "b16", "title": "Generating Clinical Queries from Patient Narratives: A Comparison between Machines and Humans", "year": "2017" }, { "authors": "Giridhar Kumaran; R Vitor; Carvalho", "journal": "", "ref_id": "b17", "title": "Reducing long queries using query quality predictors", "year": "2009" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman 
Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b18", "title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", "year": "2020" }, { "authors": "K ; Tamsin Maxwell; W Bruce Croft", "journal": "", "ref_id": "b19", "title": "Compact query term selection using topically related text", "year": "2013" }, { "authors": "Shahrzad Naseri; Jeff Dalton; Andrew Yates; James Allan", "journal": "", "ref_id": "b20", "title": "CEQE: Contextualized Embeddings for Query Expansion", "year": "2021" }, { "authors": "Rodrigo Nogueira; Kyunghyun Cho", "journal": "CoRR", "ref_id": "b21", "title": "Passage Re-ranking with BERT", "year": "2019" }, { "authors": "Jessie Ooi; Xiuqin Ma; Hongwu Qin; Siau Chuin Liew", "journal": "", "ref_id": "b22", "title": "A survey of query expansion, query suggestion and query refinement techniques", "year": "2015" }, { "authors": "Greg Pass; Abdur Chowdhury; Cayley Torgeson", "journal": "", "ref_id": "b23", "title": "A picture of search", "year": "2006" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b24", "title": "Language Models are Unsupervised Multitask Learners", "year": "2019" }, { "authors": "Eldar Sadikov; Jayant Madhavan; Lu Wang; Alon Y Halevy", "journal": "WWW", "ref_id": "b25", "title": "Clustering query refinements by user intent", "year": "2010" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "NeurIPS", "ref_id": "b26", "title": "Attention is All you Need", "year": "2017" }, { "authors": "Bienvenido Vélez; Ron Weiss; Mark A Sheldon; David K Gifford", "journal": "", "ref_id": "b27", "title": "Fast and Effective Query Refinement", "year": "1997" }, { "authors": "Wenjie Wang; Fuli Feng; Xiangnan He; Liqiang Nie; Tat-Seng Chua", "journal": "", "ref_id": "b28", "title": "Denoising Implicit Feedback for Recommendation", "year": "2021" }, { "authors": "Xiaobing Xue; J Samuel; W Bruce Huston; Croft", "journal": "", "ref_id": "b29", "title": "Improving verbose queries using subset distribution", "year": "2010" }, { "authors": "Bishan Yang; Nish Parikh; Gyanit Singh; Neel Sundaresan", "journal": "", "ref_id": "b30", "title": "A Study of Query Term Deletion Using Large-Scale E-commerce Search Logs", "year": "2014" }, { "authors": "Peilin Yang; Hui Fang", "journal": "ICTIR", "ref_id": "b31", "title": "Can Short Queries Be Even Shorter?", "year": "2017" }, { "authors": "Zhi Zheng; Kai Hui; Ben He; Xianpei Han; Le Sun; Andrew Yates", "journal": "", "ref_id": "b32", "title": "BERT-QE: Contextualized Query Expansion for Document Re-ranking", "year": "2020" }, { "authors": "Ingrid Zukerman; Bhavani Raskutti; Yingying Wen", "journal": "", "ref_id": "b33", "title": "Query Expansion and Query Reduction in Document Retrieval", "year": "2003" } ]
[ { "formula_coordinates": [ 2, 127.92, 701.49, 166.13, 8.94 ], "formula_id": "formula_0", "formula_text": "e [CLS] , e 1 , . . . , e |𝑞 | , e [SEP] ,(1)" }, { "formula_coordinates": [ 2, 405.53, 526.11, 152.67, 8.43 ], "formula_id": "formula_1", "formula_text": "ŷ𝑖 = 𝜎 (w 𝑐 h 𝑖 + 𝑏 𝑐 ),(2)" }, { "formula_coordinates": [ 2, 358.16, 632.96, 200.04, 27.63 ], "formula_id": "formula_2", "formula_text": "L core = - |𝑞 | ∑︁ 𝑖=1 𝑦 𝑖 log ŷ𝑖 + (1 -𝑦 𝑖 ) log(1 -ŷ𝑖 ),(3)" }, { "formula_coordinates": [ 3, 124.18, 270.92, 169.87, 12.37 ], "formula_id": "formula_4", "formula_text": "𝑠 sub (𝑞, 𝑞 ′ ) = w 𝑠 h [CLS] + 𝑏 𝑠 ,(5)" }, { "formula_coordinates": [ 3, 60.98, 339.66, 233.06, 24.23 ], "formula_id": "formula_5", "formula_text": "L sub = -log exp(𝑠 sub (𝑞, 𝑞 + )) exp(𝑠 sub (𝑞, 𝑞 + )) + 𝑞 -∈N (𝑞) exp(𝑠 sub (𝑞, 𝑞 -)) ,(6)" }, { "formula_coordinates": [ 3, 64.79, 685.51, 229.25, 23.64 ], "formula_id": "formula_6", "formula_text": "𝑝 core (𝑞 𝑖 |𝑞, 𝑞 ′ ) = ŷ𝑖 , if 𝑞 𝑖 ∈ 𝑞 ′ , 1 -ŷ𝑖 , otherwise, for 𝑖 ∈ {1, . . . , |𝑞|}.(7)" }, { "formula_coordinates": [ 3, 373.88, 134.83, 184.32, 27.63 ], "formula_id": "formula_7", "formula_text": "𝑠 core (𝑞, 𝑞 ′ ) = 1 |𝑞| |𝑞 | ∑︁ 𝑖=1 𝑝 core (𝑞 𝑖 |𝑞, 𝑞 ′ ).(8)" }, { "formula_coordinates": [ 3, 371.04, 192.14, 187.16, 12.37 ], "formula_id": "formula_8", "formula_text": "𝑠 (𝑞, 𝑞 ′ ) = 𝑠 sub (𝑞, 𝑞 ′ ) + 𝛼 • 𝑠 core (𝑞, 𝑞 ′ ),(9)" } ]
10.18653/v1/D19-1006
2024-02-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b3", "b23", "b7", "b32", "b22", "b3", "b7", "b14", "b6", "b2" ], "table_ref": [], "text": "Neural text generation has attracted increasing attention from both academia and industry. The canonical approach factors the generation process in an autoregressive fashion, reducing generation to a series of next-token predictions conditioned on their preceding sequences. With the development of large language models (LMs) (Brown et al., 2020;Touvron et al., 2023a,b), the estimation of the probability distribution for next-token predictions has become remarkably accurate. However, when it comes to open-ended text generation, such as story generation (Fan et al., 2018) and writing assistance (Shi et al., 2022), perhaps counter-intuitively, searching for the most likely sequences (e.g., greedy search and beam search) often results in low-quality outputs. Concretely, the generations are prone to falling into tedious and repetitive loops, a notorious issue referred to as neural text degeneration (Holtzman et al., 2020;Xu et al., 2022;Shi et al., 2024).
To address the above problem, two lines of research have been devoted to devising better decoding strategies. The canonical approaches take random samples from the LM's output distribution (Fan et al., 2018;Holtzman et al., 2020;Meister et al., 2022;Hewitt et al., 2022). The introduced stochasticity can alleviate repetitive generation; however, it also increases the chance of unnatural topic drift and semantic incoherence. More recently, another class of approaches proposes to re-rank top candidate tokens using extra objectives. Concretely, contrastive search (CS) (Su et al., 2022) uses a look-ahead mechanism and penalizes tokens compromising the isotropy of the LM's latent space (Ethayarajh, 2019). Contrastive decoding (CD) (Li et al., 2023a) searches for the token that maximizes the probability difference between the LM and another smaller LM with the same tokenization. Although better generation quality is achieved, the look-ahead mechanism in CS and the running of an external LM in CD considerably increase computational overhead. Moreover, CS relies on the isotropic property of the LM and CD depends on another LM using the same tokenization, thereby limiting their applicability.
Figure 1: FSD exploits the contrast between the LM and the anti-LM, where the probabilities from the LM and the anti-LM are used as rewards and penalties respectively. In this example, the top prediction of the LM is "driving". However, the anti-LM also gives a large penalty to "driving" because it would result in repetition. Consequently, "wearing" is selected instead, and the anti-LM is updated accordingly.
In this paper, we propose Frustratingly Simple Decoding (FSD) for addressing the degeneration issue with minimal computational cost and without any assumptions on the underlying LM. As illustrated in Figure 1, FSD works by imposing penalties on repetitive patterns that have appeared in the prefix. This is realized through an anti-LM that can capture and memorize these patterns. Specifically, at each generation step, both the LM and the anti-LM take the current prefix as input and separately produce two next-token distributions. The generation probabilities from the LM serve as rewards and those from the anti-LM act as penalties. 
FSD subtracts the penalties from the rewards, selects the token that maximizes the final score, and continuously updates the anti-LM based on the growing prefix. The anti-LM can be implemented as simple as an n-gram language model or a vectorized variant, making FSD as fast as greedy search.\nWe perform extensive experiments to demonstrate the effectiveness, efficiency, and universality of FSD. The key findings can be summarized as follows: (1) On three canonical open-ended text generation benchmarks, the generation quality of FSD not only surpasses the standard top-p sampling but also is comparable to, if not better than, recent state-of-the-art methods, according to both automatic and human evaluations. (2) FSD exhibits robustness in handling varying generation lengths, particularly demonstrating its superiority in generating longer sequences where existing state-ofthe-art methods often struggle. (3) The generation speed of FSD is as fast as greedy search (the theoretical upper bound for autoregressive generation). The speed advantage over existing state-of-theart methods amplifies as the generation length increases. (4) FSD shows versatility across a variety of models, languages, and tasks (e.g., instruction following and summarization)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b20", "b26", "b21", "b3", "b7", "b4", "b19", "b3", "b7", "b14", "b6", "b30", "b9", "b32" ], "table_ref": [], "text": "Recent years have witnessed enormous progress in neural text generation, particularly with the success of large LMs (Radford et al., 2019). The most straightforward heuristics for generating text from an LM is to find the most likely sequence estimated by the LM. Although maximizing the LM probabilities (e.g., greedy search and beam search) obtains excellent performance in close-ended text generation tasks (e.g., translation (Sutskever et al., 2014) and summarization (See et al., 2017)), these search-based methods suffer from generating non-sensical output in open-ended text generation tasks (e.g., story generation (Fan et al., 2018)). One prominent issue is that they tend to generate dull and repetitive output (Holtzman et al., 2020;Fu et al., 2021;Pillutla et al., 2021).\nDecoding Methods To tackle the above challenge, different decoding methods have been proposed, which can be broadly categorized into two classes. The first class is truncated sampling, where each token is randomly sampled from a truncated next-token distribution. For instance, topk sampling (Fan et al., 2018) only samples from the k most likely tokens. Top-p sampling (Holtzman et al., 2020) only considers the minimal set of top tokens that cover a specified percentage p of the distribution. Typical sampling (Meister et al., 2022) sorts tokens according to the differences between distribution entropy and probabilities. Hewitt et al. (2022) truncate words whose probabilities are below an entropy-dependent threshold. Although sampling-based methods reduce repetitions, the randomness at each sampling step also increases the chance of incoherence and topic drift.\nThe second class of decoding methods is still search-based but optimizes a different objective. Contrastive Search (CS) (Su et al., 2022) assumes the LM has an isotropic representation space and adds a penalty term that decreases the generation probabilities of tokens producing hidden states that are similar to the previous context. However, the look-ahead operation at each step brings considerable additional cost. 
Contrastive Decoding (CD) (Li et al., 2023a) employs an amateur LM (a smaller pre-trained LM using the same tokenization) and penalizes undesired attributes associated with the amateur model. In contrast, FSD is much more lightweight and efficient; FSD only constructs an ngram model on-the-fly, requiring no external model and introducing negligible computational cost. In addition, FSD holds the potential for broader applicability as it does not assume the existence of an amateur LM or any properties of the LM.\nTraining Methods Another group of methods attempts to improve text generation quality by fine-tuning the LMs with new training objectives. Welleck et al. (2020) propose unlikelihood training, which explicitly minimizes the generation probability of repetitive tokens. Lagutin et al. (2021) improve the generation using policy gradient with a repetition objective. Xu et al. (2022) learn to penalize probabilities of sentence-level repetitions from pseudo-repetitive data. Su et al. (2022) devise a contrastive training objective that encourages discriminative and isotropic token representations. In contrast, FSD simply employs off-the-shelf pretrained LMs and requires zero training." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Language Models", "publication_ref": [], "table_ref": [], "text": "An LM is a probability distribution over token sequences. Given a sequence x 1:t = x 1 , x 2 , . . . , x t of length t, LM assigns a probability p(x 1:t ) to the sequence, which is usually decomposed in an autoregressive fashion: p(x 1:t ) = t i=1 p(x i |x <i )." }, { "figure_ref": [], "heading": "N -gram Language Model", "publication_ref": [ "b8", "b27" ], "table_ref": [], "text": "The most traditional LM is the n-gram model, which relies on the Markov assumption (Jurafsky and Martin, 2009). In an n-gram LM, the probability of the i-th token only depends on the previous n -1 tokens, expressed as p(x i |x <i ) = p n (x i |x i-n+1:i-1 ). This probability can be computed by evaluating the relative frequency counts within a training corpus:\np n (x i |x i-n+1:i-1 ) = C(x i-n+1:i ) C(x i-n+1:i-1 )(1)\nwhere C(•) counts the number of occurrences of the input sequence within the training corpus.\nIn practice, the probability distributions are often smoothed to improve the model's generalizability. For example, the interpolation of n-gram models of different orders can help prevent the LM from assigning zero probability to unseen sequences (Tonella et al., 2014)." }, { "figure_ref": [], "heading": "Neural Language Model", "publication_ref": [ "b20", "b1", "b13" ], "table_ref": [], "text": "With the rise of deep learning, n-gram LMs have been largely superseded by neural networks, for example, the GPT family (Radford et al., 2019;Brown et al., 2020) and the LLaMA family (Touvron et al., 2023a,b). These models are trained to predict the next token by conditioning on the preceding context:\nL θ = - t i=1 log p θ (x i |x <i )\n, where θ denotes the model parameters. With the capabilities acquired by largescale pre-training, these neural LMs can be readily applied to text generation (Liu et al., 2022)." }, { "figure_ref": [], "heading": "Open-Ended Text Generation", "publication_ref": [ "b7", "b31" ], "table_ref": [], "text": "Most of our experiments are conducted on openended text generation tasks, where the input is a short prompt and the goal is to generate a fluent and coherent continuation. 
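As a minimal, runnable illustration of this setup, the snippet below produces a continuation for a short prompt with plain greedy search, the maximization-based baseline whose degeneration behaviour motivates the decoding methods compared later. The model choice and the prompt are arbitrary placeholders, not the paper's experimental configuration.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "DeepMind Company is"                       # illustrative short prefix
ids = tok(prompt, return_tensors="pt").input_ids
out = lm.generate(ids, max_new_tokens=64, do_sample=False,   # greedy search
                  pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))                            # typically drifts into repetitive loops
```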
Formally, given a prompt x_{1:l} = x_1, x_2, ..., x_l, we aim to generate the next m tokens, denoted by x_{l+1:l+m} = x_{l+1}, x_{l+2}, ..., x_{l+m}. A pre-trained neural LM can complete this task autoregressively by a series of next-token predictions:
p_θ(x_{l+1:l+m} | x_{1:l}) = ∏_{i=l+1}^{l+m} p_θ(x_i | x_{<i})
Previous works have revealed that the decoding method that selects the token at each generation step has a significant impact on the generation quality (Holtzman et al., 2020;Wiher et al., 2022). For example, greedy and beam search often result in repetitions, while sampling-based methods suffer from incoherence (Su et al., 2022;Li et al., 2023a)." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We present our proposed decoding method, Frustratingly Simple Decoding (FSD), named after its remarkably straightforward nature. We begin by introducing the intuition and the general framework of FSD ( §4.1). We then describe the implementation of FSD in the discrete version ( §4.2) and further extend it to the vectorized version ( §4.3)." }, { "figure_ref": [], "heading": "Intuition & Framework", "publication_ref": [], "table_ref": [], "text": "To produce coherent and diverse generations, it is crucial not only to select the most probable tokens but also to prevent repetitive content. While the former objective can be achieved using the original LM, the latter requires a mechanism for tracking previously generated content and reducing its likelihood of recurring. To this end, we propose the construction of an anti-LM based on the preceding context. This anti-LM is expected to assign higher scores to tokens that would cause repetitions of the preceding context. Consequently, these scores serve as penalties. By integrating the original LM and the anti-LM, we can discourage repetitive token generation and promote other contextually appropriate choices.
Formally, when decoding the t-th token, we calculate an FSD score for each candidate token v:
FSD(v | x_{<t}) = p_θ(v | x_{<t}) − α · p_ω(v | x_{<t})   (2)
where p_θ and p_ω represent the LM and the anti-LM respectively. The hyper-parameter α ≥ 0 is used to balance the two scores. In practice, we first select the top-k most probable tokens according to p_θ(· | x_{<t}), denoted by V^(k). The token in V^(k) with the largest FSD score is chosen as the t-th token." }, { "figure_ref": [], "heading": "N-gram Model as anti-LM", "publication_ref": [ "b8" ], "table_ref": [], "text": "Following the intuition described above, we start to devise the anti-LM. In principle, any language model capable of capturing patterns in a token sequence can be harnessed to implement the anti-LM. However, we note several critical design principles. First, the prediction of the anti-LM should be efficient, given that it is invoked at every decoding step. Second, the anti-LM should not assume any particular properties of the LM or the language, thus ensuring our method's universal applicability across diverse settings. Last but not least, the update of the anti-LM should be easy, as it undergoes continuous evolution with the expanding prefix.
Algorithm 1: Smoothed n-gram probability p_ω(v | x_{<t})
    Initialize r = 1, c_v = 0
    for n = N, N−1, ..., 2 do
        if p_n(v | x_{t−n+1:t−1}) ≠ 0 then
            λ_n = r · β
            r = r − λ_n
            c_v += λ_n · p_n(v | x_{t−n+1:t−1})
    c_v += r · p_1(v)
    Output: p_ω(v | x_{<t}) = c_v
One natural (and perhaps the simplest) choice is the n-gram LM, which offers two key advantages. First, all the operations (i.e., construction, prediction, and update) associated with an n-gram model add little computational overhead. 
Second, the effectiveness and efficiency of n-gram LM is scalable across different prefix lengths.\nConstruction and Update Given an input prompt x 1:l , the n-gram LM is constructed and updated as follows. Initially, the prompt x 1:l is split into n-grams. These n-grams can be stored as a set of key-value pairs D n . For each n-gram x i-n+1:i , the key is the first n-1 tokens x i-n+1:i-1 and the value is the last token x i . After generating each new token, we update D n to include the new n-gram composed by the last n tokens in the sequence.\nTo calculate next-token probabilities, we use the last n -1 tokens in the sequence as the query. We first identify all key-value pairs in D n whose key precisely matches the query and then compute the probabilities according to Eq. 1. All of the above operations introduce little computational overhead compared to the running of the original neural LM.\nSmoothed N -gram Model An ordinary n-gram model cannot penalize the m(m < n)-gram repetitions. Inspired by two common smoothing techniques in modern n-gram models, back-off and interpolation (Jurafsky and Martin, 2009), we combine n-gram models with different orders from n = 1 to N (N being the highest order). The result is a smoothed n-gram model p:\np = λ N p N + λ N -1 p N -1 + • • • + λ 1 p 1 (3)\nwhere λ n is the weight of p n and N n=1 λ n = 1. The detailed process is elaborated in Alg. 1. In brief, we enumerate n-gram models from n = N to n = 1, setting λ n to decrease exponentially with a decay factor β = 0.9, thus assigning greater weights to higher-order sub-models. The construction and update of the smoothed n-gram LM are straightforward; We only need to maintain N copies of key-value pairs (D 1 , D 2 , . . . , D N ) separately." }, { "figure_ref": [], "heading": "Vectorized N -gram Model", "publication_ref": [], "table_ref": [], "text": "We further provide a vectorized version where the keys are represented using continuous vectors instead of discrete tokens. It offers two advantages compared with the discrete version. First, it possesses the ability to penalize not only identical but also similar patterns in the preceding context, thus allowing for more generalizable pattern recognition. Second, the computation of the vectorized version can be efficiently conducted on GPU, resulting in faster decoding speed.\nSpecifically, we use the hidden states from the last layer of the original LM as the keys. Let h 1 , h 2 , . . . , h t-1 be the hidden states for the current sequence x 1:t-1 (h t-1 is used to predict the t-th token in the original LM). Each key-value pair in the discrete version (x i-n+1:i-1 , x i ) now turns to be (h i-n+1:i-1 , x i ). Accordingly, the exact querykey matching in the discrete version becomes a \"soft\" vector matching. To predict the next token, the query is h t-n+1:t-1 and the matching score between the query and a key h i-n+1:i-1 is computed as follows:\nc i = cos(cat(h i-n+1:i-1 ), cat(h t-n+1:t-1 )) (4)\nwhere cos computes cosine similarity and cat denotes vector concatenation. For a candidate token v that appears multiple times in the sequence, we take the largest matching score as its penalty score. In addition, we clip the penalty score to ensure it is always greater than or equal to zero." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b10" ], "table_ref": [], "text": "Our main experiments focus on open-ended text generation. 
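Before turning to the experimental setup, the following self-contained sketch ties the pieces of the discrete variant together: an interpolated n-gram anti-LM built from the prefix (Alg. 1) and the FSD selection rule of Eq. (2). It uses the defaults reported later (N = 3, k = 6, α = 3, β = 0.9), but it is our own simplification rather than the authors' released implementation; in particular, it omits the stopword discount and punctuation exclusions discussed in the appendix, and it recomputes the prefix at every step instead of using a KV cache.

```python
import torch
from collections import defaultdict
from transformers import AutoTokenizer, AutoModelForCausalLM

class SmoothedNgramAntiLM:
    """Discrete anti-LM: interpolated 1..N-gram counts over the current prefix."""

    def __init__(self, max_n: int = 3, beta: float = 0.9):
        self.max_n, self.beta = max_n, beta
        # maps[n]: (n-1)-token context tuple -> {next_token_id: count}
        self.maps = {n: defaultdict(lambda: defaultdict(int)) for n in range(1, max_n + 1)}

    def update(self, tokens):
        """Register the n-grams (n = 1..N) that end at the last token of `tokens`."""
        for n in range(1, self.max_n + 1):
            if len(tokens) >= n:
                self.maps[n][tuple(tokens[-n:-1])][tokens[-1]] += 1

    def penalty(self, tokens, candidate):
        """Interpolated p_omega(candidate | prefix), following Alg. 1."""
        remaining, score = 1.0, 0.0
        for n in range(self.max_n, 1, -1):
            bucket = self.maps[n].get(tuple(tokens[-(n - 1):]))
            if bucket and bucket.get(candidate):          # p_n(v | context) != 0
                lam = remaining * self.beta
                remaining -= lam
                score += lam * bucket[candidate] / sum(bucket.values())
        unigrams = self.maps[1].get((), {})
        if unigrams:
            score += remaining * unigrams.get(candidate, 0) / sum(unigrams.values())
        return score

@torch.no_grad()
def fsd_generate(model, tokenizer, prompt, max_new_tokens=128, k=6, alpha=3.0, max_n=3):
    """One FSD pass: among the top-k tokens, pick argmax of p_theta - alpha * p_omega (Eq. 2)."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids[0].tolist()
    anti = SmoothedNgramAntiLM(max_n=max_n)
    for i in range(1, len(ids) + 1):                      # build the anti-LM from the prompt
        anti.update(ids[:i])
    for _ in range(max_new_tokens):
        logits = model(torch.tensor([ids])).logits[0, -1]
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, k)
        scored = [(p - alpha * anti.penalty(ids, t), t)
                  for p, t in zip(top.values.tolist(), top.indices.tolist())]
        ids.append(max(scored)[1])
        anti.update(ids)                                   # register the newly created n-grams
    return tokenizer.decode(ids)

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
print(fsd_generate(lm, tok, "DeepMind Company is", max_new_tokens=64))
```

The vectorized variant replaces the exact context match in penalty() with the cosine similarity of Eq. (4) between concatenated hidden states of the LM, which lets similar rather than only identical patterns be penalized and runs efficiently on GPU.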
This task has been used for evaluating various decoding methods in recent works (Li et al., 2023a;Su et al., 2022;Lan et al., 2022) because it is particularly susceptible to the repetition issue. We follow the standard setups ( §5.1) and report the results in §5.2. In addition, we assess the speed of the decoding methods in §5.3, an essential aspect when considering real-world deployment. Moreover, we explore the universality of our proposed method in §5.4 from several perspectives: (1) robustness across various models, languages, and datasets (2) versatility for tackling other tasks such as instruction following (the most popular use of LLMs) and closed-ended generation." }, { "figure_ref": [], "heading": "Setup for Open-Ended Text Generation", "publication_ref": [ "b10", "b15", "b34", "b33", "b20", "b19", "b5", "b7", "b14" ], "table_ref": [], "text": "Datasets & Models Following previous works (Su et al., 2022;Li et al., 2023a;Lan et al., 2022) on three English benchmarks. That is, wikinews1 in the news domain, wikitext-103 (Merity et al., 2017) in the Wikipedia domain and bookcorpus (Zhu et al., 2015) in the story domain. For each test case, the first 32 tokens are used as the prompt and the task is to generate the following 256 tokens. We test three off-the-shelf LMs of different scales: OPT-6.7b (Zhang et al., 2022), GPT2-XL, and GPT2-Medium (Radford et al., 2019). The amateur LM used in CD is OPT-125m for OPT-6.7b and GPT2 for GPT2-XL and GPT2-Medium.\nEvaluation Metrics For automatic evaluation, we report three metrics assessing different aspects of the generations:\n• Diversity measures the degree of repetition at different n-gram levels. The calculation can be expressed as\n4 n=2 (1 -REP n ), where REP n = (1 - #unique n-grams(x)\n#total n-grams(x) ). x is the generated continuation. A higher diversity score indicates that generated outputs contain fewer repetitive segments.\n• MAUVE (Pillutla et al., 2021) measures the distribution similarity between the generated texts and reference texts.\n• Coherence (Su et al., 2022) is defined as the cosine similarity between the embeddings of the prompt x and the generated continuation\nx: COH = f (x)f (x) ∥f (x)∥∥f (x)∥\n, where f is the SimCSE (Gao et al., 2021) sentence embedding function.\nFor human evaluation, we conduct blind A/B tests with the help of proficient English speakers from a third-party grading platform. In the process of annotation, annotators are asked to compare two continuations of the same prompt and decide which one is better (or two are equally good/bad) by jointly considering fluency, coherence, and commonsense. Each case is rated by three annotators and we use majority vote.\nImplementation Details For clarity, the variant of FSD using the vectorized n-gram model is named as FSD-vec. We set n to 3 and 2 for FSD and FSD-vec respectively and k to 6 for both variants. Based on our preliminary experiments, the penalty strength α is set to 3 and 1 for FSD and FSD-vec respectively. We find this setting is quite robust and generalizes well to different scenarios.\nBaselines To show the superior performance of FSD/FSD-vec, we mainly compared it with two recent search-based decoding methods, CD (Li et al., 2023a) and CS (Su et al., 2022), since they were reported to outperform other existing decoding methods. 2 We follow the suggested hyper-parameter settings from their respective papers. 
We also compare with top-p sampling (Holtzman et al., 2020) because it is the most popular decoding method for open-ended text generation. We also include the results of typical sampling (Meister et al., 2022). We set p in top-p sampling and typical sampling to 0.95, as adopted by Li et al. (2023a)." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Automatic Evaluation Results", "publication_ref": [ "b7", "b14", "b32", "b25" ], "table_ref": [ "tab_0" ], "text": "For automatic metrics, we believe that results closer to human are better because a higher score does not always indicate a better generation. For example, a random token sequence would obtain an extremely high diversity score, and a continuation identical to the input prompt would get a full coherence score. This is also commonly adopted in previous works (Holtzman et al., 2020;Meister et al., 2022;Xu et al., 2022). Therefore, we highlight the results that are closest to human in our experiments. From Table 1, we can observe that: • For diversity (div), FSD/FSD-vec matches or outperforms all other decoding baselines in six/five out of nine settings (the combinations of three LMs and three domains). In cases where FSD and FSD-vec are not the best, the gaps between them and the best scores are minimal (< 0.03). It is worth noting that recent state-of-the-art methods (CD and CS) are very sensitive to the choices of the LMs. For example, CS fails to achieve reasonable diversity scores on all three benchmarks when using GPT2-Medium. The reason is that CS relies on the isotropy of the LM's latent space and GPT2-Medium may not fulfill this requirement. The diversity scores of CD also decrease significantly as the LM switches from GPT2-XL to GPT2-Medium, perhaps because the difference between the LM and its amateur is not sufficiently indicative of degeneration. In contrast, FSD and FSD-vec are much more stable in diversity. We attribute the success to that the operations of the anti-LM in FSD are relatively independent to the choice of the LM.\n• For coherence (coh), FSD/FSD-vec achieves the best coherence scores in seven/four out of nine settings. These results emphasize the effectiveness of FSD and FSD-vec in generating coherent and contextually-appropriate continuations. We can see that sampling-based methods (top-p and typical sampling) often deliver lower coherence scores than search-based methods (CS and CD). This confirms that sampling-based methods produce lexically diverse text at the expense of topic drift. Importantly, FSD and FSD-vec often attain better diversity and better coherence at the same time, suggesting our methods provide a better trade-off between diversity and coherence.\n• For MAUVE (mau), sampling-based methods (particularly top-p sampling) are generally better than search-based methods (CS, CD, FSD, and FSD-vec) though the gaps are often very small. However, it has been reported that the generation quality of CS and CD is better according to human evaluation. This indicates that MAUVE may not be a reliable metric which is also pointed out by Su and Xu (2022). Therefore, we turn to extensive human evaluation." }, { "figure_ref": [], "heading": "Human Evaluation Results", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "For human evaluation, we randomly select 100 prompts from each of the three benchmarks. We first compare FSD against top-p sampling and two recent state-of-theart methods, CS and CD. 
The results are shown in Table 2. We can see that, on average across settings, annotators prefer FSD 1.30x more than CD, 1.26x more than top-p sampling, and 1.14x more than CS. FSD wins all the comparisons, with the only exception of FSD vs. CS on book. Since the results show that CS is the most competitive method, we then turn to comparing FSD-vec with CS and FSD. As shown in Table 3, FSD-vec wins all the comparisons against CS and is preferred 1.49x more than CS. The quality of FSD-vec is on par with FSD." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "We find that, compared with CS, FSD is less likely to generate continuations that deviate from the topic. Table 4 shows two continuations from CS and FSD respectively. The prefix's topic is \"a musician is considering running for presidency\", but the topic of CS's output is concert tours, which is irrelevant to that of the prefix. This may be because CS tends to penalize tokens more aggressively than FSD. For instance, CS can penalize tokens that have never occurred in the preceding context, as long as they produce similar hidden states. In contrast, FSD only penalizes tokens that appear in the context and genuinely result in repetitions." }, { "figure_ref": [ "fig_2" ], "heading": "Effect of Decoding Length", "publication_ref": [], "table_ref": [], "text": "Next, we investigate the robustness of our methods in addressing the degeneration issue under different generation lengths. In Figure 2, we present the diversity scores of FSD, FSD-vec, CS and CD when the generation length is 256, 512 and 768. As seen, the diversity of human-generated text is the most stable across different lengths. The diversity of CS and CD drops dramatically as the generation length increases, resulting in a progressively larger disparity between the generated text and human-generated text. In contrast, FSD has the smallest slope, and FSD-vec exhibits a similar slope to FSD from 256 to 512 and a slightly steeper one from 512 to 768. This reveals that our methods are much more robust in reducing repetitions in longer sequence generation." }, { "figure_ref": [ "fig_0" ], "heading": "Decoding Speed", "publication_ref": [], "table_ref": [], "text": "To compare the decoding speed of different methods, we plot the decoding latency (seconds per instance) of the search-based methods in Figure 3. FSD and FSD-vec run nearly as fast as greedy search and clearly faster than CS and CD. This can be attributed to the minimal computational overhead brought by the n-gram anti-LM, as opposed to the time-consuming look-ahead mechanism in CS and the running of an amateur LM in CD. Importantly, as the generation length increases, the absolute speed gap between FSD and CS/CD becomes even more pronounced, increasing from 8/10 seconds to 20/40 seconds per instance. This highlights the efficiency advantage of our methods in generating long sequences. Note that FSD-vec is slightly faster than FSD; the reason is that the computation of the vectorized n-gram can be performed efficiently on GPUs." }, { "figure_ref": [], "heading": "Universality", "publication_ref": [ "b33", "b16" ], "table_ref": [ "tab_10", "tab_9" ], "text": "More Languages, Models and Datasets So far, our evaluation has been primarily focused on English corpora, and the types of LMs used are also limited. We here expand our evaluation to include other non-English languages using various LMs. We conduct experiments on four datasets: chinese-wiki, japanese-news, german-wiki, and french-wiki. 
We adopt a variety of popular LMs, including BLOOM-7b (BigScience, 2023), LLaMA-7b (Touvron et al., 2023a), and OPT-6.7b (Zhang et al., 2022).
The evaluation results are shown in Table 5, where we also report the results of the state-of-the-art decoding methods, CS and top-p (missing entries indicate that the LM does not support the language). As seen, FSD and FSD-vec generally outperform CS and top-p (most of the boldfaced numbers are from FSD and FSD-vec). It should be noted that for BLOOM-7b, CS fails entirely in all four languages (see the extremely low diversity scores).
Additionally, the performance of CS also exhibits greater sensitivity to different languages. For instance, when applied to the japanese dataset using OPT, its diversity and MAUVE scores are notably low. In contrast, FSD and FSD-vec deliver much more stable performance across the different settings.
Table 5: Automatic evaluation results on four non-English datasets and three LMs.
Instruction Following We also evaluate our methods in the instruction-following setting (Li et al., 2023b), i.e., reporting the win rates against text-davinci-003 on the alpaca_eval dataset with the help of GPT-4 (OpenAI, 2023). We adopt the LLaMA-2-7b-chat model (Touvron et al., 2023b) since it is among the most popular instruction-tuned models. The results of the different decoding methods are shown in Table 7.
Table 6: Summarization results on XSum (ROUGE-1/2/L) with BLOOM and OPT.
}, { "figure_ref": [], "heading": "Analysis of n", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "We study the effect of the hyperparameter n as shown in Table 9. We can observe that diversity and coherence are very stable for different n, when n > 3, the mauve begins to decrease." }, { "figure_ref": [], "heading": "Analysis of k", "publication_ref": [], "table_ref": [ "tab_13" ], "text": "We investigate the impact of the hyperparameter k, as presented in Table 10.\nWhen k is assigned a minimal value, a notably lower diversity (div) is observed. This can be attributed to the reduced search space associated with a smaller k, which consequently constrains the diversity of generated outcomes. Conversely, upon incrementing k past a specific threshold, all evaluated metrics-diversity, mauve, and coherence-demonstrate substantial stability, with only negligible fluctuations observed. This stability suggests that the effective selection space of FSD predominantly comprises a limited number of top tokens.\nDespite that the hyperparameters can take different values, we recommend using the default settings of those hyperparameters and only adjusting α to suit different tasks." }, { "figure_ref": [], "heading": "Conclusion and Future Directions", "publication_ref": [], "table_ref": [], "text": "We proposed FSD, an effective, efficient, and universal decoding method for avoiding the degeneration problem and improving generation quality. FSD constructs an anti-LM on-the-fly to penalize An intriguing future research direction could involve a more nuanced approach to repetitions. In fact, some grams (like named entities) might not require penalization at all. Therefore, researchers may develop more meticulous algorithms based on FSD to discern the contexts and conditions under which repetitions should be penalized. This would enable a more refined and context-sensitive application of repetition management in text generation." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Due to the nature of language models, we note that the generations of our method may have offensive, toxic, unfair, or unethical content. The generations of our method may also have hallucinated content and can be misleading. When deployed in realworld applications, special attention should be paid to avoid inappropriate generations. For example, one can use post-process steps such as toxicity identification and fact checking. Output :\nThe generated text x." }, { "figure_ref": [], "heading": "A. Implementation Details", "publication_ref": [], "table_ref": [], "text": "We provide the detailed pseudo code of FSD in Alg. 2." }, { "figure_ref": [], "heading": "Stopwords and Punctuations", "publication_ref": [], "table_ref": [ "tab_5", "tab_8", "tab_17" ], "text": "Stopwords significantly influence the diversity of sentence structures as they often appear at the beginning of sentences, such as \"The...\" or \"He...\". To provide finer control over the penalty applied to stopwords, we introduce a discount factor ϕ. This factor is multiplied by the second term of Eq. 2, replacing α with ϕ • α specifically for stopwords. A smaller ϕ tends to produce sentences with similar structures, as demonstrated in the example provided in Table 13. Conversely, a larger ϕ can lead to the generation of invalid sentences due to the heavy penalty imposed on stopwords at the beginning of a sentence. 
Such a heavy penalty may result in the selection of incorrect tokens, as illustrated in the example presented in Table 14.\nWe also experimentally find that penalizing punctuation can sometimes introduce grammar errors in the generated text. Specifically, when utilizing GPT2 as the base model, we have found that the punctuation symbols ĊĊ (representing \"\\n \\n\") and Ċ (representing \"\\n\") have a significant impact on the grammatical correctness of the output. An example illustrating this phenomenon is provided in Table 15. In our experiments, we do not penalize punctuation, and the punctuation set is P = { . , : \" ' ĊĊ Ċ }." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Hyper-parameter Settings", "publication_ref": [], "table_ref": [ "tab_14" ], "text": "We search ϕ from {0.2, 0.4, 0.6, 0.8, 1, 1.5}. The detailed parameter settings are listed in Table 11.\nWe also analyze the effect of using an unsmoothed n-gram anti-LM (Figure 4). We found that if an unsmoothed n-gram is applied, the best performance is achieved when n = 2. The reason for this phenomenon is that if n > 2, the unsmoothed n-gram LM cannot penalize grams with lengths smaller than n, which is manifested by the high REP-i, i < n in Figure 4. However, setting n = 2 sometimes may not be a good option due to the BPE encoding algorithm, under which a word (e.g., the name of a person) can be decomposed into multiple tokens. If penalized heavily, these words may not be recovered. " }, { "figure_ref": [], "heading": "C. More Cases", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1. FSD vs CS", "publication_ref": [], "table_ref": [], "text": "We provide more cases in Table 16, Table 17 and Table 18 for wikinews, wikitext and book respectively to compare FSD with CS." }, { "figure_ref": [], "heading": "C.2. FSD vs FSD-vec", "publication_ref": [], "table_ref": [ "tab_21", "tab_22", "tab_4", "tab_5", "tab_8", "tab_9", "tab_11" ], "text": "We provide more cases in Table 19, Table 20 and Table 21 for wikinews, wikitext and book respectively to compare FSD with FSD-vec.\nPrompt: Following months of rumours, musician Wyclef Jean today confirmed that he is considering standing for the presidency of Haiti in the country's November elections. A statement from CS: Jean's publicist reads: \"Wyclef will make a decision on his political future in the coming days and is consulting with his family and advisors. \"The announcement comes after Jean, who has been touring the world in support of his latest album A New Day In The Old Town, was forced to cancel a concert in the Dominican Republic last week due to security concerns. He had been due to play the concert in Santo Domingo on November 4 but the concert was called off after Dominican authorities raised concerns about the safety of the concert venue, which is close to the border with Haiti. In an interview with Rolling Stone, Jean said he was \"saddened\" by the cancellation, but that he had no choice but to cancel the concert. \"Hearing the news of the earthquake in Haiti, I was deeply saddened and saddened to hear of the loss of life and the destruction of homes and businesses.\nTable 12: One case of FSD versus CS. The prefix's topic is \"a musician is considering running for presidency\". But the topic of CS's output is concert tours, which is irrelevant to that of the prefix. 
By contrast, FSD always sticks to the topic of the prefix.\nPrompt: A gas explosion in the Grand Riviera Princess resort in Playa del Carmen, Mexico earlier today killed seven people and left 20 people injured, two critically according to\nGenerations: local media. The explosion occurred at around 8:30am local time (10:00am GMT) at the hotel's pool area, which is located on the second floor of the resort.The hotel is a popular destination for tourists and is known for its large pool and spa. The cause of the explosion is still unknown, but local authorities have said that the cause is not related to the fire that broke out in the pool earlier in the day. The resort's management has said that the incident is not connected to the blaze that erupted in the same area earlier in the day.The fire was extinguished by the resort's staff and the guests were evacuated. The victims of the blast have been identified as: A man in his 30s A woman in her 20s A young man in his 20s A middle-aged woman A 40-year-old man A 50-year old woman A 60-something woman A 70-some-year women A man in his 50s A woman in her 30s A 30-something man A 20-something girl A 15-some year Prompt: According to data from South Korea's national bank, the country's economy grew by 0.2% in the last quarter of 2009. The rise is now Generations: expected to be 0.3% in the first quarter of 2010. The government has been trying to boost the economy by cutting taxes and increasing spending.The central bank has also been cutting interest rates to stimulate the economy.The Bank of Korea has cut its benchmark interest rate to 1.5% from 2.5%.The rate is still higher than the rate of inflation.The country has been struggling with a huge debt burden.The debt is estimated at about $1.4 trillion.The South Korean government has been attempting to reduce the debt by selling off state-owned companies.The government has been selling off the Hyundai Motor Group, the nation's largest carmaker, and the Korea Development Bank.The state has also been selling off its stake in the Samsung Group.The sale of the two companies has been a success.The Hyundai Group has been sold for $8.6 billion.The Korea DevelopmentBank has been sold for about half that amount.The two sales have been a big success.The sales have helped the government to lower its debt to GDP ratio.The debt to the GDP is now down to about 50% of GDP.The Korean economy is expected to grow by 1.\nTable 13: No penalty on stopwords may procude sentences with similar grammatical structure.\nPrompt:Charlie, a Chimpanzee in a South African zoo who became addicted to cigarettes, has died at the age of 52, exceeding the average lifespan of a captive Generation: animal. The cause of death was not immediately known. Charlie, who was born in captivity in 1989, was diagnosed with lung cancer in 2011. He was given a lung transplant in 2013, but his health deteriorated again and he was put down. His death comes just weeks after another chimpanzee, Harambe, was shot dead by zoo officials after a child fell into his enclosure. Image copyright Getty Images Image caption Charlie was one of the last chimps to be born at an American zoo In a statement, the zoo said: \"Charlie's passing is a great loss to the animal kingdom. \"Charlie was a beloved member of our family and we will miss him dearly.\" Chimpanzees are the closest living relatives of humans, and are often used in research. They are also used for entertainment, as they are highly intelligent and can mimic human speech. 
Charlie, whose name means \"chimney sweep\" in Swahili, was born on 19 April 1989 at San Diego Zoo. He lived at the San Francisco Zoo for five years before moving to Cincinnati Zoo in July 2014. Image caption Chimps are known for their intelligence and mimicry The Prompt:Police have said that an explosion in a hotel garage in San Pedro, Los Angeles, California, that sparked an evacuation of a large area was an act of vandalism Generation: . The incident occurred at around 2:30am local time (11:00 GMT) on Thursday. A fire broke out in the garage, which is located near the hotel, and caused a huge explosion. Image copyright Getty Images Image caption The hotel was evacuated after the explosion \"It was a very loud explosion,\" said one witness. \"I was sleeping and woke up to the sound of the explosion. I looked out the window and saw a lot of smoke and fire.\" Another witness, who did not want to be named, said: \"I heard a loud bang and then I saw the fire. \"The fire was so big it was like a forest fire. It was just like something out of a movie.\" Media playback is unsupported on your device Media caption Footage shows the aftermath of the blast Los Angeles Mayor Eric Garcetti said the incident was \"an act against our city\". \"We will not tolerate this type of violence,\" he said. \"This is an attack on our way of life, our economy, our infrastructure, and our future.\" He said he had ordered the evacuation \"out of an abundance of caution\". \"Our city Table 14: Penalizing stopwords heavily may produce invalid generations. The invalid generation \"Image copyright ...\" may occur in some examples because the stopwords that have a larger probability than \"Image\" have a larger penalty. Therefore, \"Image\" is selected.\nPrompt: Apple Inc. announced Friday that it will give owners of its new iPhone 4 a free case in response to mounting concerns over the device's antenna placement. The iPhone Generation: maker said it is offering the free cases to customers who have already purchased the phone. The company said the new cases will be available in stores and online starting Friday. Apple said the cases will have a \"unique design\" that will help protect the iPhone's antennas from damage. , which is made by Corning Inc., will be sold in the U.S. and Canada for $29.99. The company also said that the case will be made in China, where the company has been making its products for years. \"We are committed to providing our customers with the best possible products and we are always looking for ways to improve our products and services,\" Apple spokeswoman Kristin Huguet said in a statement. \"We know that our fans want to protect their iPhone and this case is designed to do just that.\" Apple's decision to offer the free protective cases comes after a number of reports of people experiencing problems with their iPhones' antennas. Some people have said they have been unable to make calls or send text messages because their phones were unable to receive signals. Others have complained that their devices have lost reception or have been damaged by the antenna. a report by The Wall Street Journal last week that said some iPhone owners had been complaining\nPrompt:Wikinews reporter Iain Macdonald has performed an interview with Dr Isabella Margara, a London-based member of the Communist Party of Greece (K Generation: KE), who has been a member of the party since the 1970s. 
Dr Margaras is a well-known figure in the Greek left, having been a prominent member of the KKE since the 1980s, and having served as a deputy to the party's leader, Dimitris Koutsoumbas. She is also a former member of the Socialist Party (PASOK) and a long-time member of the European Parliament. In the interview, she discusses the current situation in Greece, the role of the EU in the country's crisis, and the possibility of a new left government in Athens. and the role of the EU in the countrys crisis,. In the article, Dr Margareta Margara also discusses the recent decision by the Greek government to impose capital controls on the banks, and the implications of this for the country. The interview was conducted by Iain MacDonald on behalf of The Independent. Iain: What is the situation in your country? Margaras: The situation is very bad. The government has imposed capital control on the banking system. This is a very serious situation. The banks are closed and there is no access to money. There is no money in the banks. There is Prompt-1: On Friday, the 189 Parties to the Nuclear Non-Proliferation Treaty (NPT) agreed to a final resolution calling for steps toward the banning of weapons FSD: of mass destruction. The resolution, which was adopted by consensus, calls for the \"immediate and complete cessation of all nuclear, chemical and biological weapons-related activities\" and \"the establishment of a comprehensive, verifiable and irreversible nuclear-weapons-free zone in the Middle East.\" It also calls for \"a comprehensive, effective and enforceable prohibition on the production, stockpiling, acquisition, transfer, use and threat of use of nuclear weapons and all other weapons of indiscriminate and disproportionate impact.\" The NPT was created in 1968 to prevent the spread of nuclear and other arms. \"The international community has been working for many years to achieve a world without nuclear weapons,\" said U.S. Ambassador to the United Nations Samantha Power. \"Today, we have taken a major step forward toward that goal.\" \"This is a historic day for the international non-proliferating movement,\" she said. \"It is a day of hope and optimism for all those who believe that the world can be a safer, more peaceful place.\" \"We are proud of the work that has been done by the 193 N.G.O. member states to reach this historic agreement,\" Power said. \"We will continue to CS: of mass destruction. In the meantime, the United States is continuing to develop nuclear weapons and ballistic missile defense systems that it can use against Iran if it were to violate the terms of the agreement. On Monday, Secretary of State John Kerry said Iran's \"threats and provocations are a clear violation of the spirit and letter\" of the NPT, which was signed by 122 countries in the 1970s. Kerry's comments came during a visit to South Korea, where he met with President Park Geun-hye and Foreign Minister Yun Byung-se. \"We have made it clear to the Iranians that their nuclear program must be subject to IAEA safeguards, and that they must cease work on a heavy water reactor that could produce plutonium,\" Kerry said. The Obama administration has been working for more than a year to reach a deal with Iran that would curb its nuclear program in exchange for relief from economic sanctions. But negotiations have hit snags, most recently over Iran's refusal to allow inspectors into Parchin, a military site where Iran is believed to have carried out experiments on nuclear weapons in the past. 
Iranian officials have said the site is for research and development, but Western intelligence Prompt-2: Inventor Walter Frederick Morrison has died of the age of 90. He was famous for being the inventor of the flying disc, more commonly identified as the fr FSD: isbee. Morrison was born in New York City in 1891. His father was a lawyer and his mother was a homemaker. Morrison's father died when he was young, and he moved to Chicago to live with his aunt. He attended the University of Chicago, where he studied engineering. In 1915, he joined the U.S. Army Air Corps, and was stationed in France during World War I. In 1919, Morrison was commissioned as an officer in the Army Signal Corps. He served in the Signal Service, which was responsible for transmitting messages across the Atlantic. After the war, Morrison went to work for the Bell Telephone Company, where his job was to design and build telephone systems. In 1928, he founded the Morrison Electric Company, which manufactured and sold electric motors and generators. In 1939, Morrison founded the American Aircraft Corporation, which made aircraft engines. In 1941, the company was sold to the General Electric Corporation, and Morrison became president of the new company. The company's first aircraft, the P-51 Mustang, was designed by Morrison. The P2V Mustang, a fighter plane, was the first American fighter to be powered by a jet engine.\nCS: sbee. Morrison's name is on the back of every disc sold, and the discs are made in his hometown of St. Louis, Missouri. The disc was invented in 1887 by a man named George Eastman, who patented it in the United States and sold it to Parker Brothers for $1,500. It was a flimsy disc with a hole in the middle that could be thrown with one hand, according to Smithsonian Magazine. In 1889, Parker Brothers began to sell the disc in Europe, and by the 1920s, there were over 2 million discs in circulation, according to Smithsonian Magazine. But the disc was not without its problems. It was difficult to make, and many people complained about the disc breaking on the way to the basket. \"I have seen it, and it is terrible,\" one golfer said, according to Smithsonian Magazine. \"The ball goes in, and you have to hold it in your hand for a long time, and then it breaks in the basket.\" The frisbee, on the other hand, was much more durable and easy to throw. \"It's like a golf club in that way,\" Morrison told National Geographic in a 2011 interview. \"\nTable 16: Comparison of CS and FSD on wikinews. We can see that Both FSD and CS can generate fluent sentences. However, FSD is more sticky to the topic on these two prompts.\nPrompt-1: <|endoftext|> With a satisfying protagonist completed, Sega turned to esteemed programmer Yuji Naka, who had impressed them through his work on Phantasy Star and the Genesis port FSD: of Sonic the Hedgehog 2. Nana was a new face to the company, but he had a reputation for being a hard worker and a good communicator. Nana's first task was to create a character that would be able to stand out from the crowd. He chose Sonic, a character who was already well known in Japan. The game's story was also designed to be unique, with the player controlling Sonic as he travels through a world that is not only different from any other Sonic game, but also from other games in the Sonic series. The game was originally going to be called Sonic Adventure, but Sega decided to change the name to Sonic the Fighters. Sega's decision to use a different name for the game was not a coincidence. 
In the early 1990s, the Japanese video game industry was in a state of flux. Sega was struggling to find its place in the market, and it was looking for a way to differentiate itself from Nintendo. In the United States, Nintendo had been dominating the market for years, and Sega had struggled to gain a foothold. As a result, Sega began to look for ways to differentiate itself. One of the ways Sega tried to do this was by creating a game that was CS: of Final Fantasy. Naka was brought in to work on Sonic the Hedgehog, and the result was a game that had everything you could want in a Sonic game: speed, platforming, a sense of humor, and, of course, lots and lots of explosions. Sonic 3 & Knuckles is an incredibly well-made game, and the story is one of the best in the series to this day. It's not perfect, but it's a great example of what can be done with the Sonic formula. In this article, we're going to take a look at some of the things that make Sonic 3 & Knuckles such a great game, and how they were implemented in the game. If you're new to Sonic, this is a great place to start. 1. The Game Over screen is the best part of the game The game over screen is where you get to see your score, how many lives you have left, and what level you're on. It's a great way to end a game, and it's one of the most memorable parts of the game. 2. There are a lot of secrets to find The levels are packed with secrets, and they're hidden all over the Prompt-2: <|endoftext|> \" Kiss You \" was written by Kristoffer Fogelmark, Kristian Lundin, Albin Nedler, Savan Kotecha, Shellback FSD: , and Jens-Christian Svenning. It was produced by the Swedish Film Institute and directed by Jonas Åkerlund. The film was released in the United States on October 21, 2010. Plot [ edit ] A young man, who is a member of the \"Kiss You\" group, is on a date with a woman. He is very nervous and tries to hide his feelings. The woman, however, is not interested in him and leaves. The man is left alone and begins to cry. Later, the man meets a man who has been in a coma for a year. The two men are friends and the man tells the young guy that he has a girlfriend. The young boy is shocked and asks the girl if she is his girlfriend. She says that she is not and that they are just friends. The man then asks if he can kiss her. The girl says no and tells him that it is too soon. The man says that if it's not too late, he will kiss her. The young couple then kiss and the woman says \"I love you\". Prompt-1: The Sons vowed to combine their wisdoms and forsake their names and worship wholeheartedly the Unmarked Ones so that they one day might return. They would FSD: be the first to bring the world back to the way it was before the fall. The Firstborn In the years that followed, the Sons of Svanir grew in power and influence. The First Born were the most powerful of their number, and they were known as the \"Firstborn.\" They were also the most secretive, for they had no need of a public face. Their true name was kept secret, but they knew it well. They were born of the same blood as their brothers, but their bodies were different. Instead of being born with a single eye, they had two. This was because the Firstborn were not born from the same mother as their brethren, but from a different mother. This was not a problem for the FirstBorn, for their mother was a woman named Nithra. She was the daughter of an Unmarked One named Kord. Kords was an old man who had lived for many years. He had been a great warrior, and he had fought in many wars. 
One day, Kord was attacked by a group of bandits. He was killed, and his body was taken to a nearby village. There, his wife, Nethra, was told of his death CS: be a force of good in the world, and their god-kings would rule with an iron fist. The Sons of Sanguinius had no need for this kind of power, but it was the only way they could keep from being wiped out by the Emperor's Children, who were the most powerful Space Marine Legion at the time. In the end, the Emperor's Children won the Great Crusade, and Sanguinius was forced to flee into the Eye of Terror, where he remained for a thousand standard years before being found by the Ultramarines. Siege of Vraks \"You are a fool, Horus. I know what it is to be an outcast, and the shame is yours.\" -Fulgrim, to Horus during the Siege of Vraks (Unknown Date.M41) In 998.M41, the Blood Angels were deployed to the world of Vraks, which had been the site of a bloody civil war between the Sons of Sanguinius and the Traitor Legion known as the World Eaters. The Sons of Sanguinius had taken the planet as a staging ground for their invasion of the Segmentum Obscurus, and were intent on using Prompt-2: But that's enough about aging, enough gloom and despondency. I shall tell you a story about when I myself was young and susceptible. It was in FSD: the early days of the war, and I was a student at the University of Chicago. The war was raging, and we were all in a state of excitement. We were not yet aware of the fact that we had been born into a world of death. I was studying in my room, with my books open on the table. Suddenly I heard a noise outside. I looked out the window, and saw a man running across the street. He was dressed in civilian clothes, but he was carrying a rifle. \"What are you doing?\" I asked. He stopped, looked at me, and then he ran back to his house. The next day, I went to the university, where I found that the man had died. It was a strange experience. For a moment I felt that I had lived in a dream. Then I remembered that it was only a dream. But it is a fact, that in those days, when the world was full of fear, there were people who were prepared to die for their country. And it is also a fact, that when we are young, we have a tendency to forget that there are people in the world who are prepared to sacrifice their lives for us CS: the summer of 1884, and my family was on a visit to New York. We stayed at the Waldorf-Astoria, which was in the heart of Times Square. The hotel was a grand, opulent edifice, with marble floors, gold-trimmed walls, and a grand ballroom that could seat two thousand people at a time. I was in the ballroom one day with my mother, father, and a few of my friends. It was a hot day, and the air was filled with steam from the boilers, which were in operation all over the hotel. We were sitting on the verandah, enjoying the cool breeze and the view of the city below. Suddenly there was a commotion in the ballroom, and a man in a black suit rushed in with a pistol in his hand. He pointed it at my mother, and demanded to know who she was. When she answered that she was my mother, the man drew his pistol and shot her point-blank in the chest. She fell to the floor, and I was left to look on in horror. My mother was dead, and there was nothing I could do to save her. The next day, when I got home, I found a\nTable 18: Comparison of CS and FSD on book.\nPrompt-1: A private security guard, defending the Panamanian based MV Almezaan, killed a Somali pirate who was attempting to hijack the ship on Tuesday." 
}, { "figure_ref": [], "heading": "FSD:", "publication_ref": [], "table_ref": [], "text": "The MV Maersk Alabama was sailing from the port of Bremerhaven in Germany to the Gulf of Aden when it was attacked by a pirate vessel. The crew of the MV Ala'a were able to board the pirate ship and kill the Somali pirates. \"The security guards of the Almezan were on board the ship when they were attacked by pirates,\" said a statement from the German Maritime Ministry. \"The guards were armed with pistols and were able to shoot and injure the pirates.\" The Almezan was able to return to Bremershaven and continue its journey to the gulf of Aden. \"We are grateful to the security forces of Germany and the United Arab Emirates for their quick response and for the safe return of the vessel,\" the statement said. \"This is a clear example of the cooperation between the maritime authorities of the two countries.\" Piracy off the coast of Somalia has been on the rise in recent years. In 2012, the UN Security Council passed a resolution to tackle the problem. The resolution called for a \"comprehensive and coordinated international response\" to the problem. It also called for the creation of a \"global maritime force said NASA Administrator Charles Bolden. \"This is a great day for NASA and the nation. We are grateful to the crew and their families for their dedication and professionalism.\" \"This is a very exciting day for the entire NASA family,\" added NASA Chief Technologist David Weaver. \"The shuttle program has been a tremendous success, and we are proud of the work of the men and women of the shuttle fleet. This is a day of celebration for all of us at NASA.\" The STS flight was the first of two planned shuttle missions to the ISS. The second mission, STS-134, is scheduled for launch in late November. \"Today's successful docking marks the end of a long journey for the space shuttle,\" noted NASA Administrator Boldin. \"It is a testament to the hard work and dedication of the astronauts and engineers who have flown on these missions. It is also a tribute to the strength of the American spirit and the ingenuity of our nation's scientists and engineers.\" \"We are very proud of the team that has worked so hard FSD-vec: 7:30 a.m. EDT. The docking was completed at 8:15 a.m. EDT, NASA officials said. \"This is a great day for all of us,\" said Rick Mastracchio, commander of the space shuttle program. \"We are very proud of the crew and the team that worked so hard to get this done.\" The shuttle's arrival at the ISS marks the first time a U.S. spacecraft has docked with the orbiting laboratory since the end of the Apollo era in 1972. \"Today is a historic moment for the United States and the world, and we are grateful to the astronauts and their families for their dedication and professionalism throughout this process,\" President Obama said in a statement. \"I want to thank the entire NASA team for their efforts to bring the Space Shuttle back to Earth safely, and I congratulate the STS-133 crew on their successful return to flight. This is a very special achievement for our nation, and it is a testament to the ingenuity and determination of the American people. We are all looking forward to seeing what they do next!\" The mission, which was originally scheduled to launch on May 20, will be the last of the current generation of shutt Joel Pagdilao told ABS-CBN News. \"There will be gusty wind, so we need to be prepared for that.\" Pagdilao added that the NDRRMC is monitoring the situation closely. 
\"Our teams are on Prompt-2: <|endoftext|> Malaysia LNG is a Liquefied Natural Gas ( LNG ) manufacturing complex located in Bintulu that currently contains eight LNG trains with a ninth FSD: train under construction. The complex is owned by the Malaysian government and is operated by Malaysia's state-owned Petronas. The complex was built in the early 1990s to supply LPG to the domestic market. In 2004, Petrolio, a subsidiary of Petronas, acquired the LNGL project and began to produce LNB. In 2009, the government of Malaysia announced that it would be building a second LGNL complex in Kuala Lumpur. This second complex, which is currently under development, will be capable of producing up to 1.5 million tonnes of LBN per year. Petronas has stated that the second facility will have a capacity of up to 2.2 million tons per annum. LNG train The first LNG train was delivered to BINTUL in April 2010. The train is capable of transporting up to 4.3 million litres of LNB per day. A second train, which was originally scheduled to be delivered in 2012, was delayed due to a number of issues. In September 2012, Petromonas announced that the first train would be completed by December 2012. However, in January 2013, the company announced that the train would not be ready FSD-vec: train under construction. The facility is owned by Malaysia's state-owned Petronas. The plant was built in the early 1990s and has been used to produce liquified natural gas for export. It is one of the largest industrial facilities in Southeast Asia. In addition to the four existing rail lines, there are plans to build a fifth line.\nLiquifed-natural gasses are produced from coal, oil, or biomass. They are then transported by pipeline to the terminal, where they are loaded onto tankers and shipped to other countries. Malaysia produces about 1.5 million tonnes of liquid fuel per year. A new railway will be constructed to transport the gas to the port. Construction of the new train began in 2009. It is expected to be completed by 2015. The first two tracks were laid in 2010. The third track was installed in 2011. The fourth and final section of the railway was opened in 2012. The LNGL project is part of the Malaysian government's plan to increase its energy security. Petrol prices have been rising steadily since the beginning of the year. According to the Petroleum Ministry, petrol costs Prompt-2: The trip ended far too soon; I was a bit confused as we pulled up to a small beach cottage on the far side of the inlet. The whit FSD: ewashed walls were covered in graffiti, and the windows were boarded up. I asked the owner if he knew where we could find a place to stay, but he didn't seem to know either. We decided to walk back to the car and head back out to the inlets. We walked along the beach for a while, until we came across a large group of people sitting on a bench. They were all dressed in white, and they were talking about something. It was hard to make out what they were saying, but I could tell they had a lot of energy. I asked one of the guys if I could join them, and he said sure. He led me to a spot on the beach where he had set up a tent. As we sat down, I noticed that the guy was wearing a white shirt with a black tie. \"What's your name?\" I asked. \"I'm John,\" he replied. He was tall, with dark hair and a beard. He was also wearing jeans and a t-shirt. John was an older man, probably in his late 50s or early 60s. He had dark skin, a thin mustache, and was balding. 
As we FSD-vec: ewashed walls were covered in graffiti, and the windows were boarded up. We walked inside, and found a large, empty room. \"I'm sorry, but there's nothing here,\" said my guide. We sat down at the table, and he told us about his life. He had been born in Mexico, but moved to the United States when he was five. His parents divorced, and he lived with his mother. He worked as a cook, and then became a security guard. When that job dried up, he started working for a construction company. When the recession hit, things got worse. The economy was in shambles, and many people lost their jobs. His boss fired him, and sent him to live with his grandmother. That's when everything changed. A few months later, his father died. It was hard for him to cope. So, after living with his grandparents for a while, he decided to move back to Mexico. But, before he left, he took some photos. One day, while walking home, he saw a man who looked like him. They talked, and eventually, they agreed to meet " } ]
We introduce a frustratingly simple, highly efficient, and surprisingly effective decoding method, termed Frustratingly Simple Decoding (FSD), for neural text generation. The idea behind FSD is straightforward: We construct an anti-language model (anti-LM) based on previously generated text, which is employed to penalize the future generation of repetitive content. The anti-LM can be implemented as simple as an n-gram language model or a vectorized variant. In this way, FSD incurs no additional model parameters and negligible computational overhead (FSD can be as fast as greedy search). Despite its simplicity, FSD is surprisingly effective and generalizes across different datasets, models, and languages. Extensive experiments show that FSD outperforms established strong baselines in terms of generation quality, decoding speed, and universality. The code is available at https://github.com/LHRYANG/FSD
A Frustratingly Simple Decoding Method for Neural Text Generation
[ { "figure_caption": "Figure 3 :3Figure 3: Decoding latency tested on GPT2-XL.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 2 :82FSD DecodingInput : the LM p θ (e.g. GPT2); the anti-LM pω;the prompt text x 1:l ; the decoding length m; the stopwords set S; the punctuation set P; 1 Construct the anti-LM pω with the prompt x 1:l ; 2 for step t = l + 1 to l + m do 3 Compute next token distribution p θ (•|x<t); 4 Get V (k) from p θ (•|x<t); 5 for candidate v ∈ V (k) do 6 Get the penalty pω(v|x<t) according to Eq. 3 (discrete version) or Eq. 4 (vectorized version); 7 xt = arg max v∈V (k) {FSD(v|x<t)}; Update pω with xt;", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Repetition rate for different n.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Calculation of Penalty pω(v|x<t) Input : prefix x<t; n-gram models with different orders from 1 to N (p1, p2, • • • pN ); candidate token v; decay factor β = 0.9 1", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Automatic evaluation results. The best results (the", "figure_data": ",", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": ". For", "figure_data": "wikinewsA is better FSD 41% FSD 45% † FSD 52% †Neutral 22% 25% 12%B is better 37% top-p 30% CS 36% CDwikiextA is better FSD 46% † FSD 39%Neutral 24% 24%B is better 30% top-p 37% CSFSD37%30%33%CDA is betterNeutralB is betterbookFSD FSD41% 38%24% 22%35% top-p 40% CSFSD 46% †19%35%CD", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Human evaluation results of FSD. † means the advantage is statistically significant as judged by Sign Test with p-value< 0.05.", "figure_data": "wikinewsA is better FSD-vec 44% † FSD-vec 36%Neutral 25% 21%B is better 31% CS 43% FSDwikiextA is better FSD-vec 51% † FSD-vec 36% †Neutral 26% 33%B is better 23% CS 31% FSDbookA is better FSD-vec 42%Neutral 20%B is better 38% CSFSD-vec37%27%36% FSD", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Human evaluation results of FSD-vec.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Jean's management company, La Nuit Productions, said: \"Wyclef is currently in the process of gathering information and will make a decision in the coming days.\"Jean, who has been a vocal supporter of the Haitian government, has previously said he would consider running for the position. The musician, who is also a member of the hip-hop group A Tribe Called Quest, has not yet announced his candidacy. Haiti's current president, Michel Martelly, is widely expected to run for re-election. 
• • •", "figure_data": "", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Case study: FSD vs CS.", "figure_data": "germanfrenchjapanesechinesedivmau cohdivmau cohdivmau cohdivmau cohHuman0.9310.77 0.8810.65 0.5610.60 0.7510.87BLOOMp=0.95 CS FSD0.93 0.45 0.69 0.90 0.95 0.63 0.56 0.08 0.51 0.72 0.82 0.86 0.20 0.72 0.72 0.50 0.90 0.64 0.04 0.23 0.53 0.07 0.57 0.80 0.93 0.79 0.76 0.90 0.90 0.67 0.59 0.11 0.53 0.75 0.75 0.84FSD-vec 0.92 0.73 0.74 0.91 0.91 0.65 0.55 0.06 0.50 0.80 0.75 0.83OPTp=0.95 CS0.91 0.70 0.73 0.89 0.70 0.60 0.73 0.61 0.55 0.83 0.60 0.72 0.84 0.72 0.60 0.42 0.18 0.53------FSD0.93 0.69 0.73 0.91 0.73 0.62 0.65 0.69 0.59---FSD-vec 0.93 0.64 0.73 0.85 0.69 0.61 0.64 0.58 0.56---LLaMAp=0.95 CS FSD0.94 0.94 0.75 0.90 0.93 0.64 0.90 0.78 0.73 0.90 0.73 0.61 0.93 0.88 0.75 0.93 0.85 0.65------------------FSD-vec 0.92 0.94 0.74 0.91 0.88 0.64------", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Automatic evaluation results on XSum.", "figure_data": "methodtop-p CDCSFSDFSD-vecsize=8) 32.0 6.7 27.8 35.8 5.9 31.2win rate 77.20-78.32 82.3281.84p=0.9527.5 2.9 23.6 24.1 2.5 20.7CS34.1 5.5 30.4 35.6 8.3 31.3FSD34.2 5.9 31.3 37.4 9.8 33.7FSD-vec33.2 5.2 29.1 37.1 8.6 32.1indicating FSD/FSD-vec can be a universal choicefor open-ended text generation.Instruction Following The latest generation ofLLMs such as ChatGPT (OpenAI, 2022) andLLaMA-2-chat (Touvron et al., 2023b) have the ca-pabilities to perform various tasks by following natu-", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Win rate of GPT-4 evaluation. CD is omitted since it requires a smaller amateur model and the model we use is already the smallest one.", "figure_data": "", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Analysis of α. The experiments are conducted on wikinews using GPT2-XL with FSD.", "figure_data": "div mauve cohn = 2 0.930.930.64n = 3 0.930.940.65n = 4 0.920.880.65", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Analysis of n. The experiments are conducted on wikinews using GPT2-XL with FSD.", "figure_data": "div mauve cohk = 2 0.690.900.66k = 4 0.910.920.65k = 6 0.930.930.64k = 8 0.940.940.64k = 10 0.940.940.64", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Analysis of k. The experiments are conducted on wikinews using GPT2-XL with FSD.", "figure_data": "repetitive generation. Extensive evaluations andanalyses confirm its effectiveness across open-ended text generation, instruction following, andsummarization tasks. In addition, FSD demon-strates better efficiency and generality comparedwith existing state-of-the-art decoding methods.", "figure_id": "tab_13", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Parameter settings of ϕ s", "figure_data": "B. Further Analysis", "figure_id": "tab_14", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "\"I'm a human being and it's my job to do what I have to do,\" he said. \"If there's a risk, I have to take it. 
But I can't go to the Dominican Republic with this threat hanging over my head.\"Jean's tour, which has taken him to South Africa, the United Arab Emirates (UAE, where he performed at the Abu Dhabi Formula One Grand Prix in October) and Brazil, is the first time he has performed in Haiti since FSD: Jean's management company, La Nuit Productions, said: \"Wyclef is currently in the process of gathering information and will make a decision in the coming days.\"Jean, who has been a vocal supporter of the Haitian government, has previously said he would consider running for the position. The musician, who is also a member of the hip-hop group A Tribe Called Quest, has not yet announced his candidacy. Haiti's current president, Michel Martelly, is widely expected to run for re-election. In a statement, Jean said: Ï am considering running for president of Haiti. I have been following the election process closely and I believe that Haiti needs a new leader who can bring about change and rebuild the country. \"I have always been a supporter of President Martello and his government. I am also a fan of the music of A Tribe called Quest and have supported them in the past. \"The Haitian people have suffered greatly under the current government and I feel that the time has come for a change.", "figure_data": "", "figure_id": "tab_16", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Penalizing punctuations can produce grammar errors.", "figure_data": "", "figure_id": "tab_17", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "Comparison of CS and FSD on wikitext. CS obviously has a topic drift on Prompt-2.", "figure_data": "", "figure_id": "tab_19", "figure_label": "17", "figure_type": "table" }, { "figure_caption": "The incident occurred at about 10:30 a.m. local time (1:30 p.m. ET) when the crew of the MV Almezaan, a cargo vessel, spotted a small boat in the water off the coast of Somalia. The security guard boarded the vessel and ordered the pirates to surrender. \"The pirate refused to comply with the order and fired a warning shot at the guard,\" said Captain John G. Smith, commander of the U.S. Navy's 7th Fleet. \"The armed guard then opened fire, killing the suspected pirate.\" The Almezaan, which is owned by the Dutch company Maersk Line, was carrying about 1,000 tons of fuel oil, food and other supplies. The Somali Pirates are known for their piracy and have been involved in several attacks on ships in the region. In April, a Somali Pirate hijacked a tanker, the MSC Vos Hestia, off the Horn of Africa. The ship was eventually rescued by the Royal Navy. Last year, a group of Somali pirates attacked a merchant shipping vessel, the SSV Punt, off the east coast of Africa. The attack resulted in the", "figure_data": "", "figure_id": "tab_20", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of FSD and FSD-vec on wikinews. <|endoftext|> A tropical depression developed near Yap on November 11. The system intensified gradually as it tracked quickly westward toward the Philippines. An anticyclone aloft FSD: over the central Pacific was responsible for the development of the system. The system was located about 1,000 miles west of Yape, Papua New Guinea, and about 2,500 miles east of Guam. It was moving west at about 15 mph. A tropical storm warning was issued for Yap and the surrounding area. On November 12, the system strengthened to a tropical cyclone. It was named Yolanda after the city in the Philippines where it formed. 
Yoland, a small town in Yapto, is located on the northern coast of the island of Yap. The National Hurricane Center said the storm was expected to strengthen further and become a hurricane by the time it reaches the western Pacific. Forecasters said Yolan will likely become a Category 1 hurricane on Friday. \"The center of this system is expected to move over land near the southern Philippines on Saturday and Sunday,\" the center said. \"The maximum sustained winds are near 45 mph, with higher gusts.\" Yolanda is about 3,200 miles south of Manila. \"Yolan is forecast to become a major hurricane later this week,\" the National Weather Service said. \"It is likely FSD-vec: over the central Pacific was responsible for the development of the storm. The typhoon, which is named after the island of Yape, formed in the western part of the Central Philippine Sea. It moved northwest at a speed of about 10 mph (16 km/h) and then turned east-northeast. It weakened slightly during the night, but remained a Category 1 hurricane with maximum sustained winds of 115 miles per hour. Yap is located about 2,000 miles southwest of Manila. A Tropical Storm Warning was issued for the northernmost parts of the country, including the provinces of Cebu, Iloilo, and Bohol. Forecasters said the cyclone could bring heavy rain and strong winds to the southern and eastern portions of the archipelago. \"We are expecting some very intense rainfall, especially in the south and southeast,\" National Disaster Risk Reduction and Management Council (NDRRMC) Director General", "figure_data": "Prompt-1:", "figure_id": "tab_21", "figure_label": "19", "figure_type": "table" }, { "figure_caption": "Comparison of FSD and FSD-vec on wikitext. The person slowly backed out of the snag. Olivia was expecting either a kid or a deranged hermit, so she was startled to see a young woman, FSD: maybe in her early twenties, standing there. She was dressed in a black dress, with a white blouse and black pants. Her hair was tied back in a ponytail, and she had a small, round face. \"Hello, Olivia,\" the woman said. \"I'm Sarah. I'm here to help you.\" Olivia's eyes widened. \"You're Sarah?\" Sarah nodded. \"Yes. I've been looking for you for a while. You're the one who saved my life.\" She smiled. \"Thank you. It was my pleasure.\" Oblivious to the fact that she'd just met a stranger, Olivia walked over to Sarah and hugged her. Sarah returned the hug, and Olivia felt her heart swell with gratitude. \"I've got to go,\" Sarah said. As Olivia stood up, she noticed that Sarah had left her purse behind. \"Where's your purse?\" Olivia asked. Sarah shrugged. \"It's in my room. I'll be back soon.\" Olivetti The next day, Sarah came back to Olivia's room. \"You look tired,\" she said. I know, I FSD-vec: maybe in her early twenties, standing there. She had long, dark hair, and wore a black dress with a white collar. Her eyes were closed, but they looked like those of a child. \"Hello,\" said the girl. \"I'm Emily. I've been waiting for you.\" Olivia's heart sank. Emily smiled. She took off her glasses and wiped her eyes. The two of them walked into the house. They sat down on the couch. It was quiet. Olivia felt as if she'd been transported back to that night. But this time, it wasn't just because of the silence. There was something else. Something different. The air was warm. And, somehow, there was a sense of peace. Olivia couldn't quite put her finger on it. Maybe it was the fact that the room was empty. 
Or maybe, just maybe, it was because there was someone sitting across from her. A tall, thin man. He didn't look at her. Instead, he stared at the floor. His face was expression", "figure_data": "Prompt-1:", "figure_id": "tab_22", "figure_label": "20", "figure_type": "table" }, { "figure_caption": "Comparison of FSD and FSD-vec on book. ", "figure_data": "", "figure_id": "tab_23", "figure_label": "21", "figure_type": "table" } ]
Haoran Yang; Deng Cai; Huayang Li; Wei Bi; Wai Lam; Shuming Shi
[ { "authors": "", "journal": "References BigScience", "ref_id": "b0", "title": "Bloom: A 176b-parameter openaccess multilingual language model", "year": "2023" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Kawin Ethayarajh", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings", "year": "2019" }, { "authors": "Angela Fan; Mike Lewis; Yann Dauphin", "journal": "", "ref_id": "b3", "title": "Hierarchical neural story generation", "year": "2018" }, { "authors": "Zihao Fu; Wai Lam; Anthony ; Man-Cho So; Bei Shi", "journal": "", "ref_id": "b4", "title": "A theoretical analysis of the repetition problem in text generation", "year": "2021-02-02" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "", "ref_id": "b5", "title": "SimCSE: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "John Hewitt; Christopher Manning; Percy Liang", "journal": "", "ref_id": "b6", "title": "Truncation sampling as language model desmoothing", "year": "2022" }, { "authors": "Ari Holtzman; Jan Buys; Li Du; Maxwell Forbes; Yejin Choi", "journal": "", "ref_id": "b7", "title": "The curious case of neural text degeneration", "year": "2020-04-26" }, { "authors": "Dan Jurafsky; James H Martin", "journal": "", "ref_id": "b8", "title": "Speech and language processing : an introduction to natural language processing, computational linguistics, and speech recognition", "year": "2009" }, { "authors": "Evgeny Lagutin; Daniil Gavrilov; Pavel Kalaidin", "journal": "", "ref_id": "b9", "title": "Implicit unlikelihood training: Improving neural text generation with reinforcement learning", "year": "2021" }, { "authors": "Tian Lan; Yixuan Su; Shuhang Liu; Heyan Huang; Xian-Ling Mao", "journal": "", "ref_id": "b10", "title": "Momentum decoding: Open-ended text generation as graph exploration", "year": "2022" }, { "authors": "Lisa Xiang; Ari Li; Daniel Holtzman; Percy Fried; Jason Liang; Tatsunori Eisner; Luke Hashimoto; Mike Zettlemoyer; ; Lewis", "journal": "", "ref_id": "b11", "title": "Contrastive decoding: Open-ended text generation as optimization", "year": "2023" }, { "authors": "Xuechen Li; Tianyi Zhang; Yann Dubois; Rohan Taori; Ishaan Gulrajani; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b12", "title": "Alpacaeval: An automatic evaluator of instruction-following models", "year": "2023" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "ACM Comput. Surv. 
Just Accepted", "ref_id": "b13", "title": "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2022" }, { "authors": "Clara Meister; Tiago Pimentel; Gian Wiher; Ryan Cotterell", "journal": "", "ref_id": "b14", "title": "Typical decoding for natural language generation", "year": "2022" }, { "authors": "Stephen Merity; Caiming Xiong; James Bradbury; Richard Socher", "journal": "", "ref_id": "b15", "title": "Pointer sentinel mixture models", "year": "2017-04-24" }, { "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "", "ref_id": "b16", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "year": "2018" }, { "authors": " Openai", "journal": "", "ref_id": "b17", "title": "Introducing chatgpt", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b18", "title": "", "year": "2023" }, { "authors": "Krishna Pillutla; Swabha Swayamdipta; Rowan Zellers; John Thickstun; Sean Welleck; Yejin Choi; Zaïd Harchaoui", "journal": "", "ref_id": "b19", "title": "MAUVE: measuring the gap between neural text and human text using divergence frontiers", "year": "2021-12-06" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b20", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Abigail See; Peter J Liu; Christopher D Manning", "journal": "", "ref_id": "b21", "title": "Get to the point: Summarization with pointer-generator networks", "year": "2017" }, { "authors": "Chufan Shi; Haoran Yang; Deng Cai; Zhisong Zhang; Yifan Wang; Yujiu Yang; Wai Lam", "journal": "", "ref_id": "b22", "title": "A thorough examination of decoding methods in the era of llms", "year": "2024" }, { "authors": "Shuming Shi; Enbo Zhao; Duyu Tang; Yan Wang; Piji Li; Wei Bi; Haiyun Jiang; Guoping Huang; Leyang Cui; Xinting Huang", "journal": "", "ref_id": "b23", "title": "Effidit: Your ai writing assistant", "year": "2022" }, { "authors": "Yixuan Su; Tian Lan; Yan Wang; Dani Yogatama; Lingpeng Kong; Nigel Collier", "journal": "", "ref_id": "b24", "title": "A contrastive framework for neural text generation", "year": "2022" }, { "authors": "Yixuan Su; Jialu Xu", "journal": "", "ref_id": "b25", "title": "An empirical study on contrastive search and contrastive decoding for open-ended text generation", "year": "2022" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "", "ref_id": "b26", "title": "Sequence to sequence learning with neural networks", "year": "2014-12-08" }, { "authors": "Paolo Tonella; Roberto Tiella; Cu Duy Nguyen", "journal": "", "ref_id": "b27", "title": "Interpolated n-grams for model based testing", "year": "2014" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b28", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Dan Bikel; Lukas Blecher; Cristian Canton Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman 
Goyal; Anthony Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel Kloumann; Artem Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; Ranjan Subramanian; Ellen Xiaoqing; Binh Tan; Ross Tang; Adina Taylor; Jian Williams; Puxin Xiang Kuan; Zheng Xu; Iliyan Yan; Yuchen Zarov; Angela Zhang; Melanie Fan; Sharan Kambadur; Aurelien Narang; Robert Rodriguez; Sergey Stojnic; Thomas Edunov; Scialom", "journal": "", "ref_id": "b29", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Sean Welleck; Ilia Kulikov; Stephen Roller; Emily Dinan; Kyunghyun Cho; Jason Weston", "journal": "", "ref_id": "b30", "title": "Neural text generation with unlikelihood training", "year": "2020-04-26" }, { "authors": "Gian Wiher; Clara Meister; Ryan Cotterell", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b31", "title": "On decoding strategies for neural text generators", "year": "2022" }, { "authors": "Jin Xu; Xiaojiang Liu; Jianhao Yan; Deng Cai; Huayang Li; Jian Li", "journal": "", "ref_id": "b32", "title": "Learning to break the loop: Analyzing and mitigating repetitions for neural text generation", "year": "2022" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b33", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Yukun Zhu; Ryan Kiros; Rich Zemel; Ruslan Salakhutdinov; Raquel Urtasun; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b34", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "year": "2015" } ]
[ { "formula_coordinates": [ 3, 105, 283.87, 185.93, 23.23 ], "formula_id": "formula_0", "formula_text": "p n (x i |x i-n+1:i-1 ) = C(x i-n+1:i ) C(x i-n+1:i-1 )(1)" }, { "formula_coordinates": [ 3, 72, 497.12, 218.27, 23.1 ], "formula_id": "formula_1", "formula_text": "L θ = - t i=1 log p θ (x i |x <i )" }, { "formula_coordinates": [ 3, 104.8, 704.11, 152.66, 30.55 ], "formula_id": "formula_2", "formula_text": "p θ (x l+1:l+m |x 1:l ) = l+m i=l+1 p θ (x i |x <i )" }, { "formula_coordinates": [ 3, 322.13, 489.66, 204.08, 9.94 ], "formula_id": "formula_3", "formula_text": "FSD(v|x <t ) = p θ (v|x <t ) -α × p ω (v|x <t ) (2)" }, { "formula_coordinates": [ 4, 73.52, 115.42, 144.28, 94.18 ], "formula_id": "formula_4", "formula_text": "Initialize r = 1, cv = 0 2 for n = N, • • • , 2 do 3 if pn(v|x<t) ̸ = 0 then 4 λn = r * β 5 r = r -λi 6 cv += λnpn(v|xt-n+1:t-1) 7 cv += rp1(v) Output : pω(v|x<t) = cv" }, { "formula_coordinates": [ 4, 102.74, 640.27, 188.2, 9.94 ], "formula_id": "formula_5", "formula_text": "p = λ N p N + λ N -1 p N -1 + • • • + λ 1 p 1 (3)" }, { "formula_coordinates": [ 4, 319.91, 360.96, 206.31, 9.94 ], "formula_id": "formula_6", "formula_text": "c i = cos(cat(h i-n+1:i-1 ), cat(h t-n+1:t-1 )) (4)" }, { "formula_coordinates": [ 5, 73.2, 576.14, 217.07, 31.82 ], "formula_id": "formula_7", "formula_text": "4 n=2 (1 -REP n ), where REP n = (1 - #unique n-grams(x)" }, { "formula_coordinates": [ 5, 73.2, 701.73, 217.07, 25.01 ], "formula_id": "formula_8", "formula_text": "x: COH = f (x)f (x) ∥f (x)∥∥f (x)∥" }, { "formula_coordinates": [ 8, 87.8, 502.78, 185.64, 29.61 ], "formula_id": "formula_9", "formula_text": "BLOOM OPT R-1 R-2 R-L R-1 R-2 R-L beam (" }, { "formula_coordinates": [ 13, 470.85, 558.43, 2.77, 29.82 ], "formula_id": "formula_10", "formula_text": "Q Q Q Q" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b3", "b4", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b3", "b4", "b16", "b2", "b3", "b4", "b9", "b10", "b11", "b17", "b18", "b19", "b20", "b21" ], "table_ref": [], "text": "Energy-based language models (ELMs), as a class of energybased models (EBMs) [1,2], parameterize an unnormalized distribution up to an unknown normalizing constant for natural sentences [3,4,5]. ELMs are radically different from popular autoregressive language models (ALMs), which are locally normalized. Unfortunately, local normalization in ALMs brings some drawbacks, e.g., ALMs are prone to exposure bias [6,7] and label bias [8,9]. ELMs potentially address these issues, as they do not require any local normalization. However, both exact computation of the normalizing constant and exact generation of samples from ELMs are generally intractable, which makes training especially difficult for ELMs.\nIn recent years, there are encouraging progresses in both theories and applications of ELMs. Applications of ELMs have covered computation of sentence likelihoods (up to a constant) [4,5,10,11,12,13], text generation [14], language model pretraining [15], calibrated natural language understanding [16], and so on. As an important application, ELMs have been successfully used as a means for calculating sentence scores in automatic speech recognition (ASR). [4,5] proposes trans-dimensional random field language model (TRF-LM) and applies to rescoring (i.e., reranking) of n-best lists for speech recognition. TRF-LMs outperform modified Kneser-Ney smoothing n-gram models [17] when both using n-gram features. Early ELMs are log-linear models [3,4,5]. Later, ELMs using neural network based energy functions have been developed [10,11,12], outperforming ALMs with similar model sizes, but they all use old-fashioned CNN or LSTM networks. The recent progress in Transformer networks [18] and large pretrained models such as BERT [19] and GPT2 [20] opens new possibility to further advancing ELMs. In this paper, we explore different architectures of energy functions and different training methods to investigate the potential capabilities of ELMs in rescoring for speech recognition.\nThe architectures of energy functions in ELMs can be very flexibly defined. In this work, we summarize and improve a suite of ELM architectures and name them SumTargetLogit, Hidden2Scalar, SumMaskedLogit and SumTokenLogit respectively. Model training of ELMs is challenging due to the intractable normalizing constant. We leave detailed discussions to the sections of Related Work and Methods.\nExtensive experiments are conducted on two widely used Chinese speech recognition datasets, AISHELL-1 [21] and WenetSpeech [22]. We adopt large pretrained language models (PLMs) as the backbones of all energy models, noise models and proposal models in this work. We compare different combinations of architectures and training methods on these two datasets. The results show that the best ELM achieves competitive results with the finetuned GPT2 and performs significantly better than the finetuned BERT. The advantage of ELM is more obvious on the large-scale WenetSpeech. Further analysis show that the ELM obtains better confidence estimate performance than the finetuned GPT2." 
}, { "figure_ref": [], "heading": "Related work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Architectures of ELMs", "publication_ref": [ "b11", "b14", "b22", "b15", "b13", "b23", "b24" ], "table_ref": [], "text": "One is generally free to choose the energy function, as long as it assigns a scalar energy to every sentence. TRF-LM [12] builds ELM in different dimensions according to sentence lengths, and directly sum the logit output from an ALM as energy, which corresponds to the SumTargetLogit architecture in this paper. Electric [15] is not strictly an ELM over sentences. It is in fact a cloze model, using contextualized encoder outputs to define conditional energies. Electric leverages the pseudo-loglikelihood (PLL) [23] to score the sentence, which inspires the SumMaskedLogit and SumTokenLogit architectures. In [16], three variants of energy functions are introduced. The first corresponds to the Hidden2Scalar architecture, and the latter two are based on a classification model, which is not relevant to the rescoring task in this paper. [14] proposes a residual energy based model for conditional text generation, where a residual energy is defined on top of an ALM. In addition to ELMs, there have also existed energy-based end-to-end speech recognition models for ASR, for which we refer readers to [24,25]." }, { "figure_ref": [], "heading": "Training methods for ELMs", "publication_ref": [ "b25", "b26", "b11" ], "table_ref": [], "text": "There are two mainstream training methods, maximum likelihood estimate (MLE) and noise contrastive estimate (NCE) [26]. In MLE, calculating gradients of the log likelihood usually resorts to Monte Carlo sampling methods. Two widelyused classes of sampling methods are importance sampling (IS) and Markov Chain Monte Carlo (MCMC) [27]. MCMC covers a range of specific algorithms, e.g., Metropolis independent sampling (MIS) is a special instance, where the proposed move is generated independent of the previous state. NCE, on the other hand, fitting unnormalized models by learning from distinguishing data samples and noise samples, where a noise distribution is required. Dynamic NCE (DNCE) [12] is an extension of NCE, which updates the noise distribution dynamically during training." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b3", "b3", "b4" ], "table_ref": [], "text": "Let x be a natural sentence (i.e., a token sequence). An energybased language model (ELM) is defined as follows\np θ (x) = exp(-E θ (x)) Z(θ)(1)\nwhere E θ (x) denotes an energy function with parameter θ, Z(θ) =\nx exp(-E θ (x)) is the normalizing constant and p θ (x) is the probability of sentence x. The design of E θ (x) and the optimization of θ are the focus of this work.\nTRF-LM: [4] proposes trans-dimensional random field language model (TRF-LM), which builds energy models in different dimensions according to sentence lengths. Let |x| be the length of sentence x, TRF-LM is defined as\np θ (x) = π |x| exp(-E θ (x)) Z |x| (θ)(2)\nwhere Z |x| (θ) is the normalizing constant for length |x|. π |x| is the prior probability of length |x|, which is usually set as the empirical length probability, calculated from training data. The motivation of introducing length probabilities π |x| is that the empirical length probabilities can serve as a control device to improve sampling from multiple distributions over different lengths [4,5]. To be differentiated from TRF-LM, the model in Eq. 1 is called globally-normalized ELM (GN-ELM)." 
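To make the scoring implied by Eq. (1) and Eq. (2) concrete, a minimal sketch is given below. It is illustrative only, not the paper's implementation: a toy energy stands in for a learned E_theta(x), and the intractable normalizing constants are omitted in this illustration, since n-best rescoring only needs relative scores.

```python
import math
import torch

# Illustrative sketch of GN-ELM (Eq. 1) vs. TRF-LM (Eq. 2) log-scores.
# `toy_energy` is a placeholder for any learned energy E_theta(x).

def toy_energy(token_ids: torch.Tensor) -> torch.Tensor:
    # Stand-in for E_theta(x); returns one scalar energy per sentence.
    return token_ids.float().sum(dim=-1) * 0.01

def gn_elm_log_score(token_ids: torch.Tensor) -> torch.Tensor:
    # Eq. (1): log p_theta(x) = -E_theta(x) - log Z(theta).
    # log Z(theta) is a single constant shared by all hypotheses, so it is
    # dropped here where only relative scores matter.
    return -toy_energy(token_ids)

def trf_lm_log_score(token_ids: torch.Tensor, length_prior: dict) -> torch.Tensor:
    # Eq. (2): log p_theta(x) = log pi_{|x|} - E_theta(x) - log Z_{|x|}(theta).
    # The empirical length prior pi_{|x|} is kept; the per-length constant
    # Z_{|x|}(theta) is not modeled in this toy illustration.
    length = token_ids.shape[-1]
    log_pi = math.log(length_prior.get(length, 1e-8))
    return log_pi - toy_energy(token_ids)

if __name__ == "__main__":
    x = torch.tensor([[5, 17, 42, 8]])   # one tokenized sentence
    prior = {4: 0.1, 5: 0.2}             # empirical length distribution from training data
    print(gn_elm_log_score(x), trf_lm_log_score(x, prior))
```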
}, { "figure_ref": [], "heading": "Architectures of Energy Functions", "publication_ref": [ "b11", "b9", "b13", "b14", "b15", "b22" ], "table_ref": [], "text": "The architectures of energy functions in ELMs can be very flexibly defined. In the following, we summarize and introduce some architectures for ELMs.\nLet x = {xi} i=1...|x| , where xi ∈ {1, • • • , V } is the i-th token in x.\nV denotes the size of token vocabulary. By abuse of notation, xi represents both the index of xi and the token itself. SumTargetLogit: Similar to [12], we borrow the architecture from ALMs. Given history x1:i-1, let the output logits to predict the next token be denoted by\nf θ (x1:i-1), whose dimension is equal to V . The k-th logit is denoted by f θ (x1:i-1)[k].\nThen, the energy is defined as\nE θ (x) = - |x| i=1 f θ (x1:i-1)[xi](3)\nThis energy function sums the logits corresponding to the target token (next token) at each position, hence it is named by SumTargetLogit. In contrast, the ALM applies local normalization (softmax) to the logits f θ (x1:i-1) to obtain the conditional probability of xi given history x1:i-1. Hidden2Scalar: The energy of SumTargetLogit is defined in uni-directional order like in ALMs. More generally, like in [10,14,15,16], we can use a bi-directional text encoder (e.g., BERT) to encode x and we denote the encoder output (hidden vectors) by enc θ (x). At position i, we have enc θ (x) [i]. Then, the energy is defined as\nE θ (x) = -Linear   |x| i=1 enc θ (x)[i]  (4)\nwhere Linear(•) denotes a trainable linear layer whose output is a scalar. SumMaskedLogit: For masked language model (MLM), e.g., BERT, pseudo-log-likelihood (PLL) is introduced for scroing sentences [23]. Inspired by this, we can define the energy function as follows:\nE θ (x) = - |x| i=1 g θ (MASK(x, i))[i][xi](5)\nwhere g θ denote the MLM, whose output, at each position, is the logits before softmax. g θ (MASK(x, i)) means masking the i-th token in x and sending the masked sequence into the MLM for a forward pass. At position i, the logit corresponding to the masked token xi is denoted as\ng θ (MASK(x, i))[i][xi].\nNotably, this architecture is much time-consuming than others since it requires |x| forward passes to calculate the energy of one sentence, therefore we do not experiment with this architecture. SumTokenLogit: To overcome the deficiency of SumMasked-Logit, we propose a simplication, i.e., omitting the masking step and feeding x directly to the MLM, so that the logits at all positions can be calculated in parallel. The energy is defined as:\nE θ (x) = - |x| i=1 g θ (x)[i][xi](6)" }, { "figure_ref": [], "heading": "Training Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Noise Contrastive Estimate", "publication_ref": [ "b25", "b25", "b11" ], "table_ref": [], "text": "Noise Contrastive Estimate (NCE) [26] optimizes the ELM by learning from discrimination between data samples and noise samples. Let q ϕ be the noise distribution with parameter ϕ, the NCE objective is formulated as:\nJNCE(θ) = E x∼p data log pθ (x) pθ (x)+νq ϕ (x) + ν E x∼q ϕ log νq ϕ (x) pθ (x)+νq ϕ (x)\nwhere ν is the ratio between the noise prior and the data prior and pθ = exp(-E θ (x)) denotes the unnormalized probability. It is important that the noise distribution q ϕ is close to the data distribution pdata so that the binary classification task is sufficiently challenging for NCE to work [26]. Dynamic NCE (DNCE): DNCE [12] was proposed with two motivations. 
One is to push the noise distribution to be close to the data distribution; the other is to prevent the model overfitting to the empirical distribution when the training data cannot represent the oracle data distribution. DNCE modifies NCE from the above two aspects, and we only adopt the first modification in this paper, that is performing maximum likelihood optimization of q ϕ 's parameter over the training data\nJDNCE(θ, ϕ) = JNCE(θ) + Ex∼p data log q ϕ (x)(7)\nAlgorithm 1 Metropolis Independence Sampling in ELM.\nInput: A target distribution p θ , a proposal distribution q ϕ , iteration number T . Randomly initialize x (0) ; for t =1 to T do Generate x ′ from the proposal q ϕ ; Accept x (t) = x ′ with probability min{1,\np θ (x ′ )q ϕ (x (t-1) )\np θ (x (t-1) )q ϕ (x ′ ) }, otherwise set x (t) = x (t-1) ; end for Return: {x (1) , ..., x (T ) }" }, { "figure_ref": [], "heading": "Maximum Likelihood Estimate", "publication_ref": [ "b27", "b26", "b9", "b26" ], "table_ref": [], "text": "The gradient of log likelihood in MLE learning of ELMs can be derived as follows:\n∂JMLE(θ) ∂θ = -Ex∼p data ∂E θ (x) ∂θ + Ex∼p θ ∂E θ (x) ∂θ (8\n)\nThe challenge is that the second expectation Ex∼p θ [ ∂E θ (x) ∂θ ] requires sampling from the unnormalized ELM p θ . Similar to [28], we compare two sampling approaches. Both methods need a proposal distribution q ϕ , which is implemented by an ALM in this paper. Note that the parameters of q ϕ are also updated during training, similar to Eq. 7.\nMetropolis Independence Sampling (MIS): MIS is a special case of Metropolis-Hasting [27] and has been applied for ELM in [10]. Algorithm 1 shows the MIS details. In experiments, we run the Markov chain for T steps, then use the samples {x (1) , ..., x (T ) } to approximate the expectation Ex∼p θ [ ∂E θ (x) ∂θ ] in Eq. 8 via Monte Carlo averaging. Importance Sampling (IS): different from the accept/reject sampling mode in MIS, IS [27] computes an importance weight w(x ′ ) = p θ (x ′ ) q ϕ (x ′ ) for each proposed sample x ′ . For N proposed samples x ′ 1 , ..., x ′ N from q ϕ , the second expectation in Eq. 8 is estimated as\nEx∼p θ ∂E θ (x) ∂θ ≈ N i=1 w(x ′ i ) ∂E θ (x ′ i ) ∂θ N i=1 w(x ′ i )(9)\nwhich theoretically is biased estimate. Note that we restart the chain after each parameter update in applying MIS, hence its gradient estimate is also biased. One research question to be addressed in this work is to compare MLE based on MIS and IS with NCE and DNCE for learning ELMs." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b20", "b21", "b28", "b29", "b19", "b30", "b18", "b3" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Datasets. We AISHELL-1 [21] and WenetSpeech [22] in experiments. AISHELL-1 is a 178-hour mandarin speech dataset and WenetSpeech is a 1000+ hours multi-domain transcribed Mandarin speech dataset. ASR Model. The main task of ELM is to rescore the n-best list output from the first-pass ASR decoding. We interpolate the score of ASR model, the score of language model and the sentence length to get the final score. In this work, the ASR n-best lists are obtained from a RNN-T [29] model, where the encoder is a Conformer [30] of 92M parameters. Note that the ASR model we use is rather competitive in terms of error rates of \"no LM\" on the two datasets in Table 1 and2. Implementation Details. We use Chinese GPT2 [20,31] as the ALM f θ in Eq. 
3 and Chinese BERT [19] as the encoder and MLM in Eq. 4 " }, { "figure_ref": [], "heading": "Results on AISHELL-1", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Table 1 shows the Character Error Rate (CER) of different models on AISHELL-1 test set. The table is mainly divided into two parts. The first part is the rescoring results of non-energy models. Pretrained GPT2 and BERT represents that we rescore the n-best lists without any finetuning. Note that since BERT is not a traditional language model, we calculate the PLL mentioned above as the score of BERT. The second part is the results of ELMs trained with different methods and architectures on transcripts of AISHELL-1, which are explained in Sec 3.\nIt can be seen that among all the ELMs, the GN-ELM with Hidden2Scalar architecture and trained with DNCE achieves the lowest CERs, which are on par with the best results achieved by the finetuned GPT2, showing the competitiveness of ELM for scoring sentence. Besides, by observing and analyzing all the ELM experiments, we have the following conclusions: DNCE outperforms NCE. We can see that the GN-ELMs trained with DNCE perform better than those trained with NCE under different architectures. This is not surprising since in DNCE, the binary classification challenge gradually increases with the optimization of noise model, which is more beneficial to optimizing ELM than a fixed noise model. GN-ELM and TRF-LM perform closely to each other. When trained with DNCE, TRF-LM performs better than GN-ELM with SumTargetLogit, while it slightly underperforms GN-ELM with Hidden2Scalar and SumTokenLogit. MLE underperforms NCE/DNCE. Most of the results of GN-ELMs trained with MLE-IS and MLE-MIS are worse than those trained with NCE/DNCE. Moreover, we find that the training process of MLE is quite unstable and not easy to converge when the hyper-parameters are not appropriately tuned. We attribute this to the difficulty of sampling in the high dimensional space of ELM (or say in other words, obtaining unbiased gradient estimates) within acceptable number of IS/MIS steps. Bi-directional architectures perform better.\nThe bidirectional architectures (Hidden2Scalar and SumTokenLogit) based on BERT are generally better than the unidirectional architecture (SumTargetLogit) except DNCE (TRF-LM)." }, { "figure_ref": [], "heading": "Results on WenetSpeech", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "According to the conclusions above, we only conduct experiments of GN-ELM and TRF-LM trained with DNCE with different architectures on transcripts of WenetSpeech. In fact, we also conducted MLE experiments on WenetSpeech but the training did not converge with the loss tending to be negative infinite. 2 shows the rescoring results on WenetSpeech. Different from the results on AISHELL-1, the GN-ELM with Sum-TokenLogit achieves the best results among all the ELMs and it even outperforms the finetuned GPT2. TRF-LMs do not achieve very good results on two test sets. Presumably, this is because the empirical length distribution obtained from the training data is not very applicable to the test set. Overall, the ELMs obtain the best performance upper on the large-scale WenetSpeech. This may reflect the benefit of ELMs in using bi-directional architecture and alleviating exposure bias and label bias." }, { "figure_ref": [ "fig_0" ], "heading": "Analysis and Discussion", "publication_ref": [ "b31", "b15", "b32", "b33" ], "table_ref": [ "tab_1", "tab_2", "tab_3", "tab_2" ], "text": "Significance Test. 
To see whether the differences in Table 1 and Table 2 are significant, we conduct matched-pair significance test [32] for several pairs of experiments, whose p-values are shown in Table 3. If we set the significance level α = 0.05, then all the experiment pairs with p-value less than 0.05 are considered be significantly different. Main observations are as 2) is significantly superior to the finetuned BERT in AISHELL-1 cross domain test and on the TEST-NET and TEST-MEETING of WenetSpeech, and it performs significantly better than the finetuned GPT2 on TEST-MEETING of WenetSpeech. Under other test situations, it achieves equally strong results as the finetuned GPT2 and BERT. b) The best TRF-LM is on par with the best GN-ELM in AISHELL-1 in-domain test but it underperforms the GN-ELM under other test situations significantly.\nConfidence Estimate Performance. The advantages of ELM also include confidence calibration [16,33]. Following [34], we plot the precision-recall curve by changing the confidence threshold and calculate the AUC. The results are shown in Figure 1. The best GN-ELM on WenetSpeech achieves a higher AUC, which illustrates that the GN-ELM has a better confidence estimate performance than the finetuned GPT2." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we explore energy-based language models with different architectures and training methods for rescoring in ASR. We summarize and improve several architectures and examine four different training methods, all using large pretrained models as backbones. Experiments are conducted on two widely-used datasets and the results show that the best ELM can achieve competitive results with the finetuned GPT2 and significantly better results than the finetuned BERT. We hope these new findings would be helpful for future work to further explore ELMs." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "† Equal contribution. This work is supported by NSFC 61976122." } ]
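As a companion to the subsection "Architectures of Energy Functions", the energy definitions can be written down in a few lines. The sketch below is a rough PyTorch illustration, not the released code: a tiny stand-in backbone replaces the pretrained GPT2/BERT used in the paper so that the snippet runs standalone, and only the energy definitions of Eqs. (3), (4) and (6) are meant to be faithful.

```python
import torch
import torch.nn as nn

V, D = 100, 32  # toy vocabulary size and hidden size

class TinyBackbone(nn.Module):
    """Stand-in for a pretrained GPT2/BERT; returns hidden states and per-position logits."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(V, D)
        self.logits = nn.Linear(D, V)

    def forward(self, x):              # x: (B, T) token ids
        h = self.emb(x)                # (B, T, D) "encoder outputs"
        return h, self.logits(h)       # hidden states, logits

backbone = TinyBackbone()
to_scalar = nn.Linear(D, 1)

def sum_target_logit_energy(x):
    # Eq. (3): use the logits at position i-1 as f(x_{1:i-1}) and sum the
    # logit of the next (target) token at each position.
    _, logits = backbone(x)
    tgt = logits[:, :-1, :].gather(-1, x[:, 1:].unsqueeze(-1)).squeeze(-1)
    return -tgt.sum(dim=-1)

def hidden2scalar_energy(x):
    # Eq. (4): pool the encoder outputs over positions, then map to a scalar.
    h, _ = backbone(x)
    return -to_scalar(h.sum(dim=1)).squeeze(-1)

def sum_token_logit_energy(x):
    # Eq. (6): feed x unmasked and sum each position's logit of its own token.
    _, logits = backbone(x)
    own = logits.gather(-1, x.unsqueeze(-1)).squeeze(-1)
    return -own.sum(dim=-1)

if __name__ == "__main__":
    x = torch.randint(0, V, (2, 6))
    print(sum_target_logit_energy(x), hidden2scalar_energy(x), sum_token_logit_energy(x))
```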
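Algorithm 1 can likewise be made runnable. The sketch below uses toy discrete target and proposal distributions as stand-ins for the unnormalized ELM exp(-E_theta(x)) and the ALM proposal q_phi; it only illustrates that MIS needs the target up to its normalizing constant, which is exactly the setting of ELMs.

```python
import random

# Runnable rendering of Algorithm 1 (Metropolis Independence Sampling) on a
# toy three-state space; the "target" is specified only up to normalization.

STATES = ["a", "b", "c"]
UNNORM_TARGET = {"a": 3.0, "b": 1.0, "c": 0.5}   # plays the role of exp(-E_theta(x))
PROPOSAL = {"a": 0.2, "b": 0.5, "c": 0.3}        # plays the role of q_phi(x)

def sample_proposal():
    r, acc = random.random(), 0.0
    for s in STATES:
        acc += PROPOSAL[s]
        if r <= acc:
            return s
    return STATES[-1]

def mis_chain(num_steps: int):
    x = random.choice(STATES)                    # random initialization
    samples = []
    for _ in range(num_steps):
        x_new = sample_proposal()
        # MIS acceptance ratio: p(x')q(x) / (p(x)q(x')), with p unnormalized.
        ratio = (UNNORM_TARGET[x_new] * PROPOSAL[x]) / (UNNORM_TARGET[x] * PROPOSAL[x_new])
        if random.random() < min(1.0, ratio):
            x = x_new
        samples.append(x)
    return samples

if __name__ == "__main__":
    random.seed(0)
    chain = mis_chain(20000)
    freqs = {s: chain.count(s) / len(chain) for s in STATES}
    z = sum(UNNORM_TARGET.values())
    print("empirical:", freqs)
    print("target   :", {s: UNNORM_TARGET[s] / z for s in STATES})
```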
Energy-based language models (ELMs) parameterize an unnormalized distribution for natural sentences and are radically different from popular autoregressive language models (ALMs). As an important application, ELMs have been successfully used as a means for calculating sentence scores in speech recognition, but they all use older CNN or LSTM networks. The recent progress in Transformer networks and large pretrained models such as BERT and GPT2 opens new possibilities for further advancing ELMs. In this paper, we explore different architectures of energy functions and different training methods to investigate the capabilities of ELMs in rescoring for speech recognition, all using large pretrained models as backbones. Extensive experiments are conducted on two datasets, AISHELL-1 and WenetSpeech. The results show that the best ELM achieves competitive results with the finetuned GPT2 and performs significantly better than the finetuned BERT. Further analysis shows that the ELM obtains better confidence estimate performance than the finetuned GPT2.
Exploring Energy-based Language Models with Different Architectures and Training Methods for Speech Recognition
[ { "figure_caption": "Figure 1 :1Figure 1: The confidence estimate performace of the finetuned GPT2 and the best ELM on the TEST-NET of WenetSpeech.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "and Eq. 6 respectively. Both pretrained models have 12 transformer layers with about 100M parameters. The noise distribution in NCE and the proposal distribution in MLE are both initialized from a finetuned GPT2, while the noise distribution in DNCE is initialized from a GPT2 without finetuning and is continuously optimized during training. We set the iteration number T = 256 in MIS sampling in Algorithm 1. Test Details. On AISHELL-1 test set, we use the ASR model trained on AISHELL-1 and that trained on WenetSpeech to generate n-best lists respectively, i.e., in-domain test and crossdomain test, the results of which are denoted as CER1 and CER2 respectively in Table1. As for WenetSpeech, we only use the ASR model trained on itself to generate n-best lists on the two test sets of WenetSpeech, TEST-NET and TEST-MEETING. The results are also denoted as CER1 and CER2 respectively in Table2.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Rescoring results on AISHELL-1. CER1 and CER2 denote the Character Error Rate (CER) in in-domain test and cross-domain test respectively.", "figure_data": "MethodArchitectureCER1 CER2No LM4.765.145-gram LM4.674.40Pretrained GPT23.223.66Pretrained BERT (PLL)3.293.66Finetuned GPT23.113.33Finetuned BERT (PLL)3.123.47SumTargetLogit3.323.39NCE (GN-ELM)Hidden2Scalar SumTokenLogit3.20 3.273.36 3.43SumTargetLogit3.253.40DNCE (GN-ELM)Hidden2Scalar SumTokenLogit3.11 3.153.34 3.43SumTargetLogit3.113.44DNCE (TRF-LM)Hidden2Scalar SumTokenLogit3.13 3.213.39 3.47SumTargetLogit3.423.61MLE-IS (GN-ELM)Hidden2Scalar SumTokenLogit3.36 3.263.48 3.41SumTargetLogit3.353.59MLE-MIS (GN-ELM)Hidden2Scalar SumTokenLogit3.26 3.253.39 3.49", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Rescoring results on WenetSpeech. CER1 and CER2 denote the CER in two test sets, TEST-NET and TEST-MEETING, respectively.", "figure_data": "MethodArchitectureCER1 CER2No LM9.6917.91Pretrained GPT29.1015.75Pretrained BERT (PLL)9.0715.69Finetuned GPT28.8215.52Finetuned BERT (PLL)8.9615.55SumTargetLogit9.0316.02DNCE (GN-ELM)Hidden2Scalar SumTokenLogit8.98 8.8115.69 15.47SumTargetLogit8.9715.77DNCE (TRF-LM)Hidden2Scalar SumTokenLogit8.95 9.0015.67 15.65Table", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Matched pair test. The two p-values of each pair correspond to CER1 and CER2 in Table1and Table2respectively. A small p-value represents a more significant difference. The best GN-ELM (both in Table1 and Table", "figure_data": "DatasetModel Pairsp-valueAISHELL-1Finetuned GPT2 DNCE (GN-ELM) + Hidden2Scalar0.979 0.828AISHELL-1Finetuned BERT DNCE (GN-ELM) + Hidden2Scalar0.8211e-5AISHELL-1DNCE (TRF-LM) + SumTargetLogit 0.939 0.002 DNCE (GN-ELM) + Hidden2ScalarWenetSpeechFinetuned GPT2 DNCE (GN-ELM) + SumTokenLogit0.577 0.015WenetSpeechFinetuned BERT DNCE (GN-ELM) + SumTokenLogit1e-70.008WenetSpeechDNCE (TRF-LM) + Hidden2Scalar DNCE (GN-ELM) + SumTokenLogit1e-71e-7follows: a)", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Hong Liu; Zhaobiao Lv; Zhijian Ou; Wenbo Zhao; Qing Xiao
[ { "authors": "D Koller; N Friedman", "journal": "MIT press", "ref_id": "b0", "title": "Probabilistic graphical models: principles and techniques", "year": "2009" }, { "authors": "Y Lecun; S Chopra; R Hadsell; M Ranzato; F Huang", "journal": "Predicting structured data", "ref_id": "b1", "title": "A tutorial on energy-based learning", "year": "2006" }, { "authors": "R Rosenfeld; S F Chen; X Zhu", "journal": "Computer Speech & Language", "ref_id": "b2", "title": "Whole-sentence exponential language models: a vehicle for linguistic-statistical integration", "year": "2001" }, { "authors": "B Wang; Z Ou; Z Tan", "journal": "", "ref_id": "b3", "title": "Trans-dimensional random fields for language modeling", "year": "2015" }, { "authors": "B Wang; Z Ou; Z Tan", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b4", "title": "Learning trans-dimensional random fields with applications to language modeling", "year": "2017" }, { "authors": "S Wiseman; A M Rush", "journal": "", "ref_id": "b5", "title": "Sequence-to-sequence learning as beam-search optimization", "year": "2016" }, { "authors": "M Ranzato; S Chopra; M Auli; W Zaremba", "journal": "", "ref_id": "b6", "title": "Sequence level training with recurrent neural networks", "year": "2016" }, { "authors": "J Lafferty; A Mccallum; F C Pereira", "journal": "", "ref_id": "b7", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "year": "2001" }, { "authors": "D Andor; C Alberti; D Weiss; A Severyn; A Presta; K Ganchev; S Petrov; M Collins", "journal": "", "ref_id": "b8", "title": "Globally normalized transition-based neural networks", "year": "2016" }, { "authors": "B Wang; Z Ou", "journal": "IEEE", "ref_id": "b9", "title": "Language modeling with neural transdimensional random fields", "year": "2017" }, { "authors": "B Wang; Z Ou", "journal": "", "ref_id": "b10", "title": "Learning neural trans-dimensional random field language models with noise-contrastive estimation", "year": "2018" }, { "authors": "B Wang; Z Ou", "journal": "", "ref_id": "b11", "title": "Improved training of neural transdimensional random field language models with dynamic noisecontrastive estimation", "year": "2018" }, { "authors": "S Gao; Z Ou; W Yang; H Xu", "journal": "", "ref_id": "b12", "title": "Integrating discrete and neural features via mixed-feature trans-dimensional random field language models", "year": "2020" }, { "authors": "Y Deng; A Bakhtin; M Ott; A Szlam; M Ranzato", "journal": "", "ref_id": "b13", "title": "Residual energy-based models for text generation", "year": "2020" }, { "authors": "K Clark; M.-T Luong; Q Le; C D Manning", "journal": "", "ref_id": "b14", "title": "Pre-training transformers as energy-based cloze models", "year": "2020" }, { "authors": "T He; B Mccann; C Xiong; E Hosseini-Asl", "journal": "", "ref_id": "b15", "title": "Joint energybased model training for better calibrated natural language understanding models", "year": "2021" }, { "authors": "S F Chen; J Goodman", "journal": "Computer Speech & Language", "ref_id": "b16", "title": "An empirical study of smoothing techniques for language modeling", "year": "1999" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b17", "title": "Attention is all you need", "year": "2017" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b18", "title": 
"Bert: Pretraining of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "OpenAI blog", "ref_id": "b19", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "H Bu; J Du; X Na; B Wu; H Zheng", "journal": "", "ref_id": "b20", "title": "Aishell-1: An opensource mandarin speech corpus and a speech recognition baseline", "year": "2017" }, { "authors": "B Zhang; H Lv; P Guo; Q Shao; C Yang; L Xie; X Xu; H Bu; X Chen; C Zeng; D Wu; Z Peng", "journal": "", "ref_id": "b21", "title": "Wenetspeech: A 10000+ hours multi-domain mandarin corpus for speech recognition", "year": "2022" }, { "authors": "A Wang; K Cho", "journal": "", "ref_id": "b22", "title": "BERT has a mouth, and it must speak: BERT as a Markov random field language model", "year": "2019" }, { "authors": "H Xiang; Z Ou", "journal": "", "ref_id": "b23", "title": "Crf-based single-stage acoustic modeling with ctc topology", "year": "2019" }, { "authors": "E Variani; K Wu; M D Riley; D Rybach; M Shannon; C Allauzen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Global normalization for streaming speech recognition in a modular framework", "year": "2022" }, { "authors": "M Gutmann; A Hyvärinen", "journal": "", "ref_id": "b25", "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "year": "2010" }, { "authors": "J S Liu", "journal": "Springer", "ref_id": "b26", "title": "Monte Carlo strategies in scientific computing", "year": "2001" }, { "authors": "T Parshakova; J.-M Andreoli; M Dymetman", "journal": "", "ref_id": "b27", "title": "Global autoregressive models for data-efficient sequence learning", "year": "2019" }, { "authors": "A Graves", "journal": "", "ref_id": "b28", "title": "Sequence transduction with recurrent neural networks", "year": "2012" }, { "authors": "A Gulati; J Qin; C.-C Chiu; N Parmar; Y Zhang; J Yu; W Han; S Wang; Z Zhang; Y Wu", "journal": "", "ref_id": "b29", "title": "Conformer: Convolutionaugmented transformer for speech recognition", "year": "2020" }, { "authors": "Z Zhao; H Chen; J Zhang; X Zhao; T Liu; W Lu; X Chen; H Deng; Q Ju; X Du", "journal": "", "ref_id": "b30", "title": "Uer: An open-source toolkit for pretraining models", "year": "2019" }, { "authors": "L Gillick; S J Cox", "journal": "", "ref_id": "b31", "title": "Some statistical issues in the comparison of speech recognition algorithms", "year": "1989" }, { "authors": "W Grathwohl; K.-C Wang; J.-H Jacobsen; D Duvenaud; M Norouzi; K Swersky", "journal": "", "ref_id": "b32", "title": "Your classifier is secretly an energy based model and you should treat it like one", "year": "2019" }, { "authors": "Q Li; Y Zhang; D Qiu; Y He; L Cao; P C Woodland", "journal": "", "ref_id": "b33", "title": "Improving confidence estimation on out-of-domain data for end-toend speech recognition", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 127.12, 328.03, 157.25, 19.75 ], "formula_id": "formula_0", "formula_text": "p θ (x) = exp(-E θ (x)) Z(θ)(1)" }, { "formula_coordinates": [ 2, 119.88, 461.58, 164.49, 20.57 ], "formula_id": "formula_1", "formula_text": "p θ (x) = π |x| exp(-E θ (x)) Z |x| (θ)(2)" }, { "formula_coordinates": [ 2, 57.6, 619.11, 226.77, 18.47 ], "formula_id": "formula_2", "formula_text": "Let x = {xi} i=1...|x| , where xi ∈ {1, • • • , V } is the i-th token in x." }, { "formula_coordinates": [ 2, 57.6, 681.57, 226.77, 18.76 ], "formula_id": "formula_3", "formula_text": "f θ (x1:i-1), whose dimension is equal to V . The k-th logit is denoted by f θ (x1:i-1)[k]." }, { "formula_coordinates": [ 2, 114.91, 717.75, 169.46, 27.53 ], "formula_id": "formula_4", "formula_text": "E θ (x) = - |x| i=1 f θ (x1:i-1)[xi](3)" }, { "formula_coordinates": [ 2, 358.06, 195.99, 181.43, 29.93 ], "formula_id": "formula_5", "formula_text": "E θ (x) = -Linear   |x| i=1 enc θ (x)[i]  (4)" }, { "formula_coordinates": [ 2, 354.94, 294.26, 184.55, 27.53 ], "formula_id": "formula_6", "formula_text": "E θ (x) = - |x| i=1 g θ (MASK(x, i))[i][xi](5)" }, { "formula_coordinates": [ 2, 420.96, 364.77, 85.35, 8.35 ], "formula_id": "formula_7", "formula_text": "g θ (MASK(x, i))[i][xi]." }, { "formula_coordinates": [ 2, 375.1, 453.09, 164.39, 27.53 ], "formula_id": "formula_8", "formula_text": "E θ (x) = - |x| i=1 g θ (x)[i][xi](6)" }, { "formula_coordinates": [ 2, 312.72, 568.83, 225.59, 17.92 ], "formula_id": "formula_9", "formula_text": "JNCE(θ) = E x∼p data log pθ (x) pθ (x)+νq ϕ (x) + ν E x∼q ϕ log νq ϕ (x) pθ (x)+νq ϕ (x)" }, { "formula_coordinates": [ 2, 344.22, 734.01, 195.27, 9.53 ], "formula_id": "formula_10", "formula_text": "JDNCE(θ, ϕ) = JNCE(θ) + Ex∼p data log q ϕ (x)(7)" }, { "formula_coordinates": [ 3, 105.65, 149.14, 56.03, 8.28 ], "formula_id": "formula_11", "formula_text": "p θ (x ′ )q ϕ (x (t-1) )" }, { "formula_coordinates": [ 3, 63.4, 256.21, 217.48, 19.75 ], "formula_id": "formula_12", "formula_text": "∂JMLE(θ) ∂θ = -Ex∼p data ∂E θ (x) ∂θ + Ex∼p θ ∂E θ (x) ∂θ (8" }, { "formula_coordinates": [ 3, 280.89, 262.31, 3.48, 7.77 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 3, 92.64, 476.05, 191.73, 28.65 ], "formula_id": "formula_14", "formula_text": "Ex∼p θ ∂E θ (x) ∂θ ≈ N i=1 w(x ′ i ) ∂E θ (x ′ i ) ∂θ N i=1 w(x ′ i )(9)" } ]
2023-05-22
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b4", "b8", "b9", "b10", "b11", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b23", "b24", "b20", "b21", "b23", "b22", "b21", "b25", "b22", "b23", "b26", "b27", "b28", "b29", "b30", "b31", "b32", "b33", "b32", "b33", "b34", "b37", "b38", "b39", "b41" ], "table_ref": [], "text": "G RAPHS, as a powerful data structure, are widely used to represent entities and their relations in a variety of domains, such as social networks in sociology and proteinprotein interaction networks in biology. Their complex features (e.g., attribute features and topology features) make the graph mining tasks very challenging.\nGraph Neural Networks (GNNs) [1]- [3], owing to the message passing mechanism that aggregates neighborhood information for learning the node representations [4], have been recognized as a type of powerful deep learning techniques for graph mining tasks [5]- [9] over the last decade. Though effective, message passing-based GNNs have a number of inherent limitations, including oversmoothing [10] and over-squashing [11] with the increment of model depth, limiting their potential capability for graph representation learning. Though recent efforts [12]- [15] have been devoted to alleviating the impact of these issues, the Manuscript received April ??th, 2023; accepted ??, 202?. This work was supported by National Natural Science Foundation (62076105,U22B2017). The first two authors contribute equally. (Corresponding author: Kun He. E-mail:[email protected].) negative influence of their inherent limitations cannot be eliminated completely.\nTransformers [16], on the other hand recently, are wellknown deep learning architectures that have shown superior performance in a variety of data with an underlying Euclidean or grid-like structure, such as natural languages [17], [18] and images [19], [20]. Due to their great modeling capability, there is a growing interest in generalizing Transformers to non-Euclidean data like graphs [21]- [24]. However, graph-structured data generally contain more complicated properties, including structural topology and attribute features, that cannot be directly encoded into Transformers as the tokens.\nExisting graph Transformers have developed three techniques to address this challenge [25]: introducing structural encoding [21], [22], using GNNs as auxiliary modules [24], and incorporating graph bias into the attention matrix [23]. By integrating structural information into the model, graph Transformers exhibit competitive performance on various graph mining tasks, outperforming GNNs on node classification [22], [26] and graph classification [23], [24] tasks in small to mediate scale graphs.\nDespite effectiveness, we observe that existing graph Transformers treat the nodes as independent tokens and construct a single sequence composed of all the node tokens to train Transformer model, leading to a quadratic complexity on the number of nodes for the self-attention calculation. Training such a model on large graphs will cost a huge amount of GPU resources that are generally unaffordable since the mini-batch training is unsuitable for graph Transformers using a single long sequence as the input. 
Meanwhile, effective strategies that make GNNs scalable to large-scale graphs, including node sampling [27], [28] and approximation propagation [29], [30], are not directly applicable to graph Transformers, as they capture the global attention of all node pairs and are independent of the message passing mechanism.\nRecent works [31], [32] apply various efficient attention calculation techniques [33], [34] into graph Transformers to achieve the linear computational complexity on the number of nodes and edges. Unfortunately, a graph could contain a great quantity of edges. For instance, a common benchmark dataset of Reddit contains around 23K nodes and 11M edges, making it hard to directly train the linear graph Transformer on such graphs [33], [34]. In other words, the current paradigm of graph Transformers makes it intractable to generalize to large graphs.\nTo address the above challenge, we propose a Neighborhood Aggregation Graph Transformer (NAGphormer) for node classification in large graphs. Unlike existing graph Transformers that regard the nodes as independent tokens, NAGphormer treats each node as a sequence and constructs tokens for each node by a novel neighborhood aggregation module called Hop2Token. The key idea behind Hop2Token is to aggregate neighborhood features from multiple hops and transform each hop into a representation, which could be regarded as a token. Hop2Token then constructs a sequence for each node based on the tokens in different hops to preserve the neighborhood information. The sequences are then fed into a Transformerbased module for learning the node representations. By treating each node as a sequence of tokens, NAGphormer could be trained in a mini-batch manner and hence can handle large graphs even on limited GPU resources.\nConsidering that the contributions of neighbors in different hops differ to the final node representation, NAGphormer further provides an attention-based readout function to learn the importance of each hop adaptively. Moreover, we provide theoretical analysis on the relationship between NAGphormer and an advanced category of GNNs, the decoupled Graph Convolutional Network (GCN) [35]- [38]. The analysis is from the perspective of self-attention mechanism and Hop2Token, indicating that NAGphormer is capable of learning more informative node representations from the multi-hop neighborhoods.\nIn this paper, we extend our conference version [39] by proposing a novel data augmentation method to further enhance the performance of NAGphormer. Data augmentation methods are known as effective techniques to improve the training effort. Recent graph data augmentation methods [40]- [42] focus on modifying the information of nodes or edges by generating new node features or graph topology structures during the training stage, showing promising effectiveness for strengthening the model performance. Nevertheless, most graph data augmentation methods focus on nodes or edges and are tailored to GNNs, which is unsuitable for NAGphormer, a Transformer method built on the features of multi-hop neighborhoods.\nBenefited from Hop2Token that transforms the graph information of each node into the sequence of multi-hop neighborhoods, we introduce a new data augmentation method, Neighborhood Augmentation (NrAug), to augment the data obtained by Hop2Token from the perspective of global mixing and local destruction. During the model training, NrAug is applied to each sequence obtained from Hop2Token with a fixed probability. 
First, we mix one sequence with another within the same batch and interpolating their labels accordingly. Then NrAug masks a portion of the sequence to get the data for subsequent network. The advantage of this method is that it can fully utilize the neighborhood information of multiple nodes and destroy the data appropriately to reduce the risk of overfitting. The overall framework is shown in Figure 1.\nWe conduct extensive experiments on various popular benchmarks, including six small datasets and three large datasets, and the results demonstrate the superiority of the proposed method. The main contributions of this work are as follows:" }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "We propose Hop2Token, a novel neighborhood aggregation method that aggregates the neighborhood features from each hop into a node representation, resulting in a sequence of token vectors that preserves neighborhood information for different hops. In this way, we can regard each node in the complex graph data as a sequence of tokens, and treat them analogously as in natural language processing and computer vision fields." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "We propose a new graph Transformer model, NAGphormer, for the node classification task. NAGphormer can be trained in a mini-batch manner depending on the output of Hop2Token, and therefore enables the model to handle large graphs. We also develop an attention-based readout function to adaptively learn the importance of different-hop neighborhoods to boost the model performance." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "We prove that from the perspective of the selfattention mechanism, compared to an advanced category of GNNs, the decoupled GCN, the proposed NAGphormer can learn more expressive node representations from the multi-hop neighborhoods." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "We further propose a novel data augmentation method NrAug that augments the neighborhood information obtained by Hop2Token from both global and local perspectives to enhance the training effect of NAGphormer." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "Extensive experiments on benchmark datasets from small to large demonstrate that NAGphormer consistently outperforms existing graph Transformers and mainstream GNNs. And the proposed NrAug can further boost the performance of NAGphormer effectively." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Graph Neural Network", "publication_ref": [ "b1", "b2", "b9", "b10" ], "table_ref": [], "text": "Graph Neural Network (GNN) has become a powerful technique for modeling graph-structured data. Based on the message-passing mechanism, GNN can simultaneously learn the node representations from topology features and attribute features. Typical GNNs, such as GCN [2] and GAT [3], leverage the features of immediate neighbors via different aggregation strategies to learn the node representations, exhibiting competitive performance on various graph Local Neighborhood Augmentation Mask Fig. 1. The overall framework of NAGphormer with Neighborhood Augmentation (NrAug). 
NAGphormer first uses a novel neighborhood aggregation module, Hop2Token, to construct a sequence for each node based on the tokens of different hops of neighbors. Then NrAug is adopted to augment the information of multi-hop neighborhoods from both global and local perspectives. After the data augmentation process, NAGphormer learns the node representations using a Transformer backbone, and an attention-based readout function is developed to adaptively aggregate neighborhood information of different hops. An MLP-based module is used in the end for label prediction. mining tasks. However, typical GNNs obey the coupled design that binds the aggregation and feature transformation modules in each GNN layer, leading to the oversmoothing [10] and over-squashing issues [11] on deeplayer GNNs. Such a problem limits the model's ability to capture deep graph structural information." }, { "figure_ref": [], "heading": "Global Neighborhood Augmentation", "publication_ref": [ "b35", "b37", "b34", "b35", "b36", "b11", "b11", "b13", "b12", "b14", "b0", "b2", "b42", "b27", "b43", "b45", "b43", "b27", "b28", "b29", "b46", "b46", "b29" ], "table_ref": [], "text": "A reasonable solution is to decouple the aggregation and feature transformation modules in each GNN layer, treating them as independent modules [36]- [38], termed decoupled Graph Convolutional Network (decoupled GCN) [35]. Decoupled GCN utilizes various propagation methods, such as personalized PageRank [36] and random walk [37], to aggregate features of multi-hop neighborhoods and further generate the node representations. Since the nonlinear activation functions between GNN layers are removed, decoupled GCN exhibits high computational efficiency and has become an advanced type of GNNs in recent years.\nBesides the decoupled strategy, recent works [12]-[15] make efforts to address the over-smoothing and oversquashing issues by developing novel training tricks [12], [14] or new graph neural network architectures [13], [15]. By introducing carefully designed techniques, the impact of over-smoothing and over-squashing problems in GNNs could be well alleviated.\nMost GNNs [1]- [3], [43] require the entire adjacency matrix as the input during training. In this way, when applying to large-scale graphs, the cost of training is too high to afford. There are two categories of strategies for generalizing GNN to large-scale graphs:\n(I) The node sampling strategy [28], [44]- [46] that samples partial nodes from the whole graph via different methods, such as random sampling from neighbors [44] and sampling from GNN layers [28], to reduce the size of nodes for model training.\n(II) The approximation propagation [29], [30], [47] that accelerates the propagation operation via several approximation methods, such as approximate PageRank [47] and sub-matrix approximation [30].\nHowever, by designing various sampling-based or approximation-based methods to reduce the training cost, these models will inevitably lead to information loss and somehow restrict their performance on large-scale networks." }, { "figure_ref": [], "heading": "Graph Transformer", "publication_ref": [ "b20", "b20", "b21", "b23", "b25", "b30", "b22", "b47", "b20", "b48", "b31", "b30", "b31", "b47", "b49" ], "table_ref": [], "text": "In existing graph Transformers, there are three main strategies to incorporate graph structural information into the Transformer architecture so as to learn the node representations:\n(I) Extracting the positional embedding from graph structure. Dwivedi et al. 
[21] utilize Laplacian eigenvectors to represent positional encodings of the original Transformer and fuse them with the raw attributes of nodes as the input. Derived from [21], Devin et al. [22] leverage the full spectrum of Laplacian matrix to learn the positional encodings.\n(II) Combining GNN and Transformer. In addition to representing structural information by the eigenvectors, Wu et al. [24] regard GNNs as an auxiliary module to extract fixed local structural information of nodes and further feed them into the Transformer to learn long-range pairwise relationships. Chen et al. [26] utilize a GNN model as the structure extractor to learn different types of structural information, such as k-subtree and k-subgraph, to capture the structure similarity of node pairs via the self-attention mechanism. Rampášek et al. [31] develop a hybrid layer that contains a GNN layer and a self-attention layer to capture both local and global information.\n(III) Integrating the graph structural bias into the selfattention matrix. There are several efforts to transform various graph structure features into attention biases and integrate them into the self-attention matrix to enable the Transformer to capture graph structural information. Ying et al. [23] propose a spatial encoding method that models the structural similarity of node pairs based on the length of their shortest path. Zhao et al. [48] propose a proximityenhanced attention matrix by considering the relationship of node pairs in different neighborhoods. Besides, by modeling edge features in chemical and molecular graphs, Dwivedi et al. [21] extend graph Transformers to edge feature representation by injecting them into the self-attention module of Transformers. Hussain et al. [49] utilize the edge features to strengthen the expressiveness of the attention matrix. Wu et al. [32] introduce the topology structural information as the relational bias to strengthen the original attention matrix.\nNevertheless, the computational complexity of most existing graph Transformers is quadratic with the number of nodes. Although GraphGPS [31] and NodeFormer [32] achieve linear complexity with the number of nodes and edges by introducing various linear Transformer backbones, Such high complexity makes these methods hard to directly handle graph mining tasks on large-scale networks with millions of nodes and edges since they require the entire graph as the input.\nRecent works [48], [50] sample several ego-graphs of each node and then utilize Transformer to learn the node representations on these ego-graphs so as to reduce the computational cost of model training. However, the sampling process is still time-consuming in large graphs. Moreover, the sampled ego-graphs only contain limited neighborhood information due to the fixed and small sampled graph size for all nodes, which is insufficient to learn the informative node representations." }, { "figure_ref": [], "heading": "Graph Data Augmentation", "publication_ref": [ "b50", "b40", "b51", "b39", "b52", "b53", "b54", "b55", "b41" ], "table_ref": [], "text": "Most current data augmentation techniques involve modifying existing data directly or generating new data with the same distribution using existing training data. However, graph data are irregular and non-Euclidean structures, making developing data augmentation techniques for graphs challenging. 
Existing graph data augmentation methods can be categorized into three groups: node augmentation, edge augmentation, and feature augmentation.\nNode augmentation methods attempt to operate on nodes in the graph. Wang et al. [51] propose a method that interpolates a pair of nodes and their ground-truth labels to produce a novel and synthetic sample for training. Verma et al. [41] present GraphMix, which trained an auxiliary Fully-Connected Network to generate better features using the node features. Feng et al. [52] propose DropNode, which removes the entire feature vector for some nodes to enhance the model robustness.\nEdge augmentation methods modify the graph connectivity by adding or removing edges. The most representative work is DropEdge [40], which randomly removes some edges from the input graph and can be plugged into exiting popular GCNs to improve the performance. Another approach is to update the graph structure with the model's predicted results, such as AdaEdge [53], GAUG [54], and MH-Aug [55].\nFeature augmentation methods seek to augment node features for better performance. FLAG [56] improves the generalization ability of GNNs through gradient-based adversarial perturbation. LAGNN [42] learns the distribution of the neighbor's node representation based on the central node representation and uses the resulting features with the raw node features to enhance the representation of GNN.\nUnlike the ideas of previous studies, which augment graph data from the perspective of nodes or edges, we propose a new augmentation method based on the output of Hop2Token and augments graph data from the perspective of neighborhood information." }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "Let G = (V, E) be an unweighted and undirected attributed graph, where\nV = {v 1 , v 2 , • • • , v n }, and n = |V |. Each node v ∈ V has a feature vector x v ∈ X,\nwhere X ∈ R n×d is the feature matrix describing the attribute information of nodes and d is the dimension of feature vector. A ∈ R n×n represents the adjacency matrix and D is the diagonal degree matrix. The normalized adjacency matrix is defined as  = D-1/2 à D-1/2 , where à denotes the adjacency matrix with self-loops and D denotes the corresponding diagonal degree matrix. The node classification task provides a labeled node set V l and an unlabeled node set V u . Let Y ∈ R n×c denote the label matrix where c is the number of classes. Given the labels Y V l , the goal is to predict the labels Y Vu for unlabeled nodes." }, { "figure_ref": [], "heading": "Graph Neural Network", "publication_ref": [ "b1", "b56", "b9", "b35", "b36", "b37" ], "table_ref": [], "text": "Graph Neural Network (GNN) has become a powerful technique to model the graph-structured data. Graph Convolutional Network (GCN) [2] is a typical model of GNN that applies the first-order approximation of spectral convolution [57] to aggregate information of immediate neighbors. A GCN layer can be written as:\nH (l+1) = σ( ÂH (l) W (l) ),(1)\nwhere\nH (l) ∈ R n×d (l) and W (l) ∈ R d (l) ×d (l+1)\ndenote the representation of nodes and the learnable parameter matrix in the l-th layer, respectively. σ(•) denotes the non-linear activation function.\nEq. (1) contains two operations, i.e., neighborhood aggregation and feature transformation, which are coupled in the GCN layer. 
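To make the coupling of these two operations in Eq. (1) explicit, a minimal single-layer sketch on a toy graph is shown below; it is illustrative only, with the symmetric normalization following the definition of Â in the Problem Formulation.

```python
import torch
import torch.nn as nn

# A minimal sketch of the GCN layer in Eq. (1): aggregation with the
# normalized adjacency A_hat = D^{-1/2} (A + I) D^{-1/2}, then a learnable
# feature transformation and a non-linearity.

def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    a_tilde = adj + torch.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_tilde * d_inv_sqrt.unsqueeze(0)

class GCNLayer(nn.Module):
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.weight = nn.Linear(d_in, d_out, bias=False)  # feature transformation W^(l)

    def forward(self, a_hat: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.weight(a_hat @ h))         # aggregation, then transformation

if __name__ == "__main__":
    adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-node path graph
    x = torch.randn(3, 8)
    layer = GCNLayer(8, 16)
    print(layer(normalized_adjacency(adj), x).shape)      # torch.Size([3, 16])
```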
Such a coupled design would lead to the over-smoothing problem [10] when the number of layers increases, limiting the model to capture deep structural information. To address this issue, the decoupled GCN [36], [37] separates the feature transformation and neighborhood aggregation in the GCN layer and treats them as independent modules. A general form of decoupled GCN is described as [38]:\nZ = K k=0 β k H (k) , H (k) = ÂH (k-1) , H (0) = f θ (X), (2)\nwhere Z denotes the final representations of nodes, H (k) denotes the hidden representations of nodes at propagation step k, β k denotes the aggregation coefficient of propagation step k,  denotes the normalized adjacency matrix, f θ denotes a neural network module and X denotes the raw attribute feature matrix. Such a decoupled design exhibits high computational efficiency and enables the model to capture deeper structural information." }, { "figure_ref": [], "heading": "Transformer", "publication_ref": [ "b15" ], "table_ref": [], "text": "The Transformer encoder [16] contains a sequence of Transformer layers, where each layer is comprised of a multihead self-attention (MSA) and a position-wise feed-forward network (FFN). The MSA module is the critical component that aims to capture the semantic correlation between the input tokens. For simplicity, we use the single-head selfattention module for description. Suppose we have an input H ∈ R n×d for the self-attention module where n is the number of tokens and d the hidden dimension. The self-attention module first projects H into three subspaces, namely Q, K and V:\nQ = HW Q , K = HW K , V = HW V ,(3)\nwhere\nW Q ∈ R d×d K , W K ∈ R d×d K and W V ∈ R d×d V are\nthe projection matrices. The output matrix is calculated by:\nH = softmax QK √ d K V.(4)\nThe attention matrix, softmax QK √ d K\n, captures the pairwise similarity of input tokens in the sequence. Specifically, it calculates the dot product between each token pair after projection. The softmax is applied row-wise." }, { "figure_ref": [], "heading": "NAGPHORMER", "publication_ref": [], "table_ref": [], "text": "In this section, we present the proposed NAGphormer in detail. To handle graphs at scale, we first introduce a novel neighborhood aggregation module called Hop2Token, then we build NAGphormer together with structural encoding and attention-based readout function. We also provide the computational complexity of NAGphormer. Finally, we conduct the theoretical analysis of NAGphormer, which brings deeper insights into the relation between NAGphormer and decoupled GCN." }, { "figure_ref": [], "heading": "Hop2Token", "publication_ref": [ "b37", "b57", "b58", "b1", "b35", "b37" ], "table_ref": [], "text": "How to aggregate information from adjacent nodes into the node representation is crucial in reasonably powerful Graph Neural Network (GNN) architectures. To inherit the desirable properties, we design Hop2Token that considers the neighborhood information of different hops.\nSpeciffically, for each node v, let N k (v) = {u ∈ V |d(v, u) ≤ k} denote its k-hop neighborhood, where d(v, u) represents distance of the shortest path between v and u. We define N 0 (v) = {v}, i.e., the 0-hop neighborhood is the node itself. 
In Hop2Token, we transform the k-hop neighborhood N k (v) into a neighborhood embedding x k v Algorithm 1 The Hop2Token Algorithm Input: Normalized adjacency matrix Â; Feature matrix X;\nPropagation step K Output: Sequences of all nodes X G 1: for k = 0 to K do 2:\nfor i = 0 to n do 3:\nX G [i, k] = X[i]; 4:\nend for 5:\nX = ÂX; 6: end for 7: return Sequences of all nodes X G ; with an aggregation operator φ. In this way, the k-hop representation of a node v can be expressed as:\nx k v = φ(N k (v)).(5)\nBy Eq. ( 5), we can calculate the neighborhood embeddings for variable hops of a node and further construct a sequence to represent its neighborhood information, i.e.,\nS v = (x 0 v , x 1 v , ..., x K v )\n, where K is fixed as a hyper-parameter. Assume x k v is a d-dimensional vector, the sequences of all nodes in graph G will construct a tensor X G ∈ R n×(K+1)×d . To better illustrate the implementation of Hop2Token, we decompose\nX G to a sequence S = (X 0 , X 1 , • • • , X K ),\nwhere X k ∈ R n×d can be seen as the k-hop neighborhood matrix. Here we define X 0 as the original feature matrix X.\nIn practice, we apply a propagation process similar to the method in [38], [58] to obtain the sequence of K-hop neighborhood matrices. Given the normalized adjacency matrix  (aka the transition matrix [59]) and X, multiplying  with X aggregates immediate neighborhood information. Applying this multiplication consecutively allows us to propagate information at larger distances. For example, we can access the 2-hop neighborhood information by Â( ÂX). Thereafter, the k-hop neighborhood matrix can be described as:\nX k = Âk X.(6)\nThe detailed implementation is drawn in Algorithm 1. The advantages of Hop2Token are two-fold. (I) Hop2Token is a non-parametric method. It can be conducted offline before the model training, and the output of Hop2Token supports mini-batch training. In this way, the model can handle graphs of arbitrary sizes, thus allowing the generalization of graph Transformer to large-scale graphs. (II) Encoding k-hop neighborhood of a node into one representation is helpful for capturing the hop-wise semantic correlation, which is ignored in typical GNNs [2], [36], [38]." }, { "figure_ref": [], "heading": "NAGphormer for Node Classification", "publication_ref": [ "b20", "b21", "b15", "b59", "b43", "b2" ], "table_ref": [], "text": "Given an attributed graph, besides the attribute information of nodes, the structural information of nodes is also a crucial feature for graph mining tasks. Hence, we construct a hybrid feature matrix by concatenating the structural feature matrix to the attribute feature matrix to preserve the structural information and attribute information of nodes simultaneously. Specifically, We adopt the eigenvectors of the graph's Laplacian matrix to capture the nodes' structural information. In practice, we select the eigenvectors corresponding to the s smallest non-trivial eigenvalues to construct the structure matrix U ∈ R n×s [21], [22]. Then we combine the original feature matrix X with the structure matrix U to preserve both the attribute and structural information:\nX = X U.(7)\nHere indicates the concatenation operator and X ∈ R n×(d+s) denotes the fused feature matrix, which is then used as the input of Hop2Token for calculating the information of different-hop neighborhoods. 
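As a rough illustration of this pipeline, the following hypothetical PyTorch sketch first concatenates the Laplacian eigenvectors to the raw features as in Eq. (7) and then runs the propagation of Algorithm 1 / Eq. (6) to build the hop-token tensor; dense tensors and a full eigendecomposition are assumptions made only to keep the example short.

```python
import torch

def hop2token(a_hat: torch.Tensor, x: torch.Tensor, k_hops: int) -> torch.Tensor:
    """Return token sequences X_G of shape (n, K+1, d): X_k = A_hat^k X, cf. Eq. (6)."""
    tokens = [x]
    h = x
    for _ in range(k_hops):
        h = a_hat @ h                    # each extra multiplication reaches one more hop
        tokens.append(h)
    return torch.stack(tokens, dim=1)    # (n, K+1, d); computed offline, so mini-batching is easy

def add_structural_encoding(x: torch.Tensor, laplacian: torch.Tensor, s: int) -> torch.Tensor:
    """Concatenate s non-trivial Laplacian eigenvectors to the features, cf. Eq. (7)."""
    _, eigvecs = torch.linalg.eigh(laplacian)   # eigenvalues in ascending order
    u = eigvecs[:, 1:s + 1]                     # skip the trivial (smallest) eigenvector
    return torch.cat([x, u], dim=1)             # (n, d + s)

# toy usage: a 4-node path graph with self-loops, symmetrically normalized
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
deg_inv_sqrt = adj.sum(1).rsqrt().diag()
a_hat = deg_inv_sqrt @ adj @ deg_inv_sqrt
lap = torch.eye(4) - a_hat
x = torch.randn(4, 6)
x_fused = add_structural_encoding(x, lap, s=2)
print(hop2token(a_hat, x_fused, k_hops=3).shape)  # torch.Size([4, 4, 8])
```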
Accordingly, the effective feature vector for node v is extended as\nx v ∈ R 1×(d+s) .\nNext, we assemble an aggregated neighborhood sequence as S v = (x 0 v , x 1 v , ..., x K v ) by applying Hop2Token. Then we map S v to the hidden dimension d m of the Transformer with a learnable linear projection:\nZ (0) v = x 0 v E; x 1 v E; • • • ; x K v E ,(8)\nwhere E ∈ R (d+s)×dm and Z\nv ∈ R (K+1)×dm . Then, we feed the projected sequence into the Transformer encoder. The building blocks of the Transformer contain multi-head self-attention (MSA) and position-wise feed-forward network (FFN). We follow the implementation of the vanilla Transformer encoder described in [16], while LayerNorm (LN) is applied before each block [60]. And the FFN consists of two linear layers with a GELU non-linearity:\nZ ( ) v = MSA LN Z ( -1) v + Z ( -1) v ,(9)\nZ ( ) v = FFN LN Z ( ) v + Z ( ) v ,(10)\nwhere = 1, . . . , L implies the -th layer of the Transformer.\nIn the end, a novel readout function is applied to the output of the Transformer encoder. Through several Transformer layers, the corresponding output Z ( ) v contains the embeddings for all neighborhoods of node v. It requires a readout function to aggregate the information of different neighborhoods into one embedding. Common readout functions include summation and mean [44]. However, these methods ignore the importance of different neighborhoods. Inspired by GAT [3], we propose an attention-based readout function to learn such importance by computing the attention coefficients between 0-hop neighborhood (i.e., the node itself) and every other neighborhood. Specifically, for the output matrix Z v ∈ R (K+1)×dm of node v, Z v 0 is the token representation of the node itself and Z v k is its khop representation. We calculate the normalized attention coefficients for its k-hop neighborhood:\nα v k = exp((Z v 0 Z v k )W a ) K i=1 exp((Z v 0 Z v i )W a ) ,(11)\nwhere W a ∈ R 1×2dm denotes the learnable projection and i = 1, . . . , K. Therefore, the readout function takes the correlation between each neighborhood and the node representation into account. The node representation is finally aggregated as follows:\nZ v out = Z v 0 + K k=1 α v k Z v k .(12)\nBased on Eq. ( 12), we could obtain the final representation matrix of all nodes Z out ∈ R n×dm . We further utilize the Multilayer Perceptron (MLP) as the classifier to predict the labels of nodes:\nŶ = MLP(Z out ),(13)\nwhere Ŷ ∈ R n×c denotes the predicted label matrix of nodes. And the loss function is described as follows:\nL = - i∈V l c j=0 Y i,j ln Ŷi,j .(14)" }, { "figure_ref": [], "heading": "Computational Complexity Analysis", "publication_ref": [], "table_ref": [], "text": "We provide the computational complexity analysis of NAGphormer on time and space. Time complexity. The time complexity of NAGphormer mainly depends on the self-attention module of the Transformer. So the computational complexity of NAGphormer is O(n(K + 1) 2 d), where n denotes the number of nodes, K denotes the number of hops and d is the dimension of parameter matrix (i.e., feature vector).\nSpace complexity. The space complexity is based on the number of model parameters and the outputs of each layer. The first part is mainly on the Transformer layer O(d 2 L), where L is the number of Transformer layers. The second part is on the attention matrix and the hidden node representations, O(b(K + 1) 2 + b(K + 1)d), where b denotes the batch size. 
Thus, the total space complexity is\nO(b(K + 1) 2 + b(K + 1)d + d 2 L).\nThe computational complexity analysis reveals that the memory cost of training NAGphormer on GPU devices is restricted to the batch size b. Hence, with a suitable b, NAGphormer can handle graph learning tasks in large-scale graphs even on limited GPU resources." }, { "figure_ref": [], "heading": "Theoretical analysis of NAGphormer", "publication_ref": [], "table_ref": [], "text": "In this subsection, we discuss the relation of NAGphormer and decoupled GCN through the lens of node representations of Hop2Token and self-attention mechanism. We theoretically show that NAGphormer could learn more informative node representations from the multi-hop neighborhoods than decoupled GCN does.\nFact 1. From the perspective of the output node representations of Hop2Token, we can regard the decoupled GCN as applying a self-attention mechanism with a fixed attention matrix S ∈ R (K+1)×(K+1) , where S K,k = β k (k ∈ {0, ..., K}) and other elements are all zeroes.\nHere K denotes the total propagation step, k represents the current propagation step, β k represents the aggregation weight at propagation step k in the decoupled GCN.\nProof. First, both Hop2Token and decouple GCN utilize the same propagation process to obtain the information of different-hop neighborhoods. So we use the same symbol H (k) i ∈ R 1×d to represent the neighborhood information of node i at propagation step k for brevity.\nFor an arbitrary node i, each element Z i,m (m ∈ {1, ..., d}) of the output representation Z i ∈ R 1×d learned by the decoupled GCN according to Eq. ( 2) is calculated as:\nZ i,m = K k=0 β k H (k) i,m . (15\n)\nOn the other hand, the output X i ∈ R (K+1)×d of Hop2Token in the matrix form for node i is described as:\nX i =        H (0) i,0 H (0) i,1 • • • H (0) i,d H (1) i,0 H (1) i,1 • • • H (1) i,d . . . . . . . . . . . . H (K) i,0 H (K) i,1 • • • H (K) i,d        . (16\n)\nSuppose we have the following attention matrix S ∈ R (K+1)×(K+1) :\nS =      0 0 • • • 0 0 0 • • • 0 . . . . . . . . . . . . β 0 β 1 • • • β K      . (17\n)\nFollowing Eq. ( 4), the output matrix T ∈ R (K+1)×d learned by the self-attention mechanism can be described as:\nT = SX i =      0 0 • • • 0 0 0 • • • 0 . . . . . . . . . . . . γ 0 γ 1 • • • γ d      , (18\n)\nwhere\nγ m = K k=0 β k H (k)\ni,m (m ∈ {1, ..., d}). Further, we can obtain each element T f inal m (m ∈ {1, ..., d}) of the final representation T f inal ∈ R 1×d of node i by using a summation readout function:\nT f inal m = K k=0 T k,m = (0+0+• • •+γ m ) = K k=0 β k H (k) i,m = Z i,m . (19)\nFinally, we can obtain Fact 1. Fact 1 indicates that the decoupled GCN, an advanced category of GNN, only captures partial information of the multi-hop neighborhoods through the incomplete attention matrix. Moreover, the fixed attention coefficients of β k (k ∈ {0, ..., K} ) for all nodes also limit the model to learn the node representations adaptively from their individual neighborhood information.\nIn contrast, our proposed NAGphormer first utilizes the self-attention mechanism to learn the representations of different-hop neighborhoods based on their semantic correlation. Then, NAGphormer develops an attention-based readout function to adaptively learn the node representations from their neighborhood information, which helps the model learn more informative node representations." 
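To tie the components of this section together, here is a compact, purely illustrative PyTorch sketch of the NAGphormer forward pass: the linear projection of hop tokens (Eq. (8)), a pre-LN Transformer encoder (Eqs. (9)-(10)), the attention-based readout (Eqs. (11)-(12)), and an MLP classifier (Eq. (13)). The use of `nn.TransformerEncoder` and the chosen sizes are assumptions for brevity, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class NAGphormerSketch(nn.Module):
    def __init__(self, in_dim: int, hidden: int, n_classes: int,
                 n_layers: int = 1, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden)                        # Eq. (8)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=n_heads, dim_feedforward=2 * hidden,
            activation="gelu", batch_first=True, norm_first=True)    # pre-LN, Eqs. (9)-(10)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.attn_proj = nn.Linear(2 * hidden, 1, bias=False)        # W_a in Eq. (11)
        self.classifier = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, n_classes))  # Eq. (13)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, K+1, in_dim) hop sequences produced by Hop2Token
        z = self.encoder(self.proj(tokens))                  # (batch, K+1, hidden)
        center, hops = z[:, :1, :], z[:, 1:, :]              # 0-hop token vs. k-hop tokens
        scores = self.attn_proj(torch.cat([center.expand_as(hops), hops], dim=-1))
        alpha = torch.softmax(scores, dim=1)                 # normalized coefficients, Eq. (11)
        out = center.squeeze(1) + (alpha * hops).sum(dim=1)  # attention-based readout, Eq. (12)
        return self.classifier(out)                          # class logits, Eq. (13)

# toy usage: a batch of 32 nodes, K = 7 hops, fused feature dimension 64
model = NAGphormerSketch(in_dim=64, hidden=128, n_classes=10)
print(model(torch.randn(32, 8, 64)).shape)  # torch.Size([32, 10])
```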
}, { "figure_ref": [], "heading": "NEIGHBORHOOD AUGMENTATION", "publication_ref": [], "table_ref": [], "text": "Benefited from the proposed Hop2Token that enables us to augment the graph data from the perspective of neighborhood information, in this section, we propose a novel graph data augmentation method called Neighborhood Augmentation (NrAug), which augments the sequences X G obtained by Hop2Token through two parts, namely Global Neighborhood Augmentation (GNA) and Local Neighborhood Augmentation (LNA), from the perspectives of global mixing and local destruction, respectively." }, { "figure_ref": [], "heading": "Global Neighborhood Augmentation", "publication_ref": [ "b60", "b60" ], "table_ref": [], "text": "Inspired by Mixup [61], Global Neighborhood Augmentation (GNA) aims to generate new training examples by mixing the information of pairwise nodes of the same training batch.\nSpecifically, we first decide whether to apply GNA with probability p aug , which is a fixed hyper-parameter. Then we randomly combine two sequences of different nodes in a mini-batch, S i v and S j v , to generate a new global augmentation sequence and its corresponding interpolating label Ỹv . The GNA sample can be described as:\nSglo v = λS i v + (1 -λ)S j v , Ỹv = λY i + (1 -λ)Y j ,(20)\nwhere we follow the setting in [61] and sample λ from the beta distribution Beta(α, β), where α and β control the shape of the beta distribution." }, { "figure_ref": [], "heading": "Local Neighborhood Augmentation", "publication_ref": [], "table_ref": [], "text": "The goal of Local Neighborhood Augmentation (LNA) is to generate augmentation data examples by randomly masking a portion of the sequence obtained by Hop2Token for each node, which could be regarded as local destruction to the original neighborhood information. Specifically, for the k-hop neighborhood representation of node v, we randomly select some neighborhood representations to be masked. The corresponding augmented example Sloc v is defined as:\nSloc v = M (x 0 v , x 1 v , ..., x K v ),(21)\nwhere M ∈ {0, 1} d×(K+1) denotes a randomly generated binary mask that controls the area to mask. The column vectors in M are d-dimensional vectors filled with all zeros or all ones. Operator represents element-wise multiplication.\nIn addition, we use a hyper-parameter τ to control the mask ratio. And we set the number of column vectors filled with zeros to (K+1)×τ while ensuring that at least one column vector is filled with all zeros." }, { "figure_ref": [], "heading": "NrAug for Data Augmentation", "publication_ref": [], "table_ref": [], "text": "In the training phase, our NrAug is adopted to generate new training samples by combining the output of two main modules, GNA and LNA. In practice, each mini-batch will randomly determine whether to perform the NrAug operator with probability p aug and returns the new sequences of all nodes in the mini-batch with the corresponding labels after augmentation. The resulting augmented data is then used to train the subsequent network. The overall process is shown in Algorithm 2. It's worth noting that the core modules, GNA and LNA, could be applied independently to achieve neighborhood data augmentation. We conduct experiments to analyze the contributions of each module to the model performance in Section 6.6." 
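For illustration, a small hypothetical PyTorch sketch of the two modules operating on the Hop2Token output is given below: GNA mixes pairs of hop-token sequences and their labels as in Eq. (20), and LNA zeroes out whole hop tokens per node as in Eq. (21). The one-hot (soft) labels and the simple composition shown in the usage lines are assumptions made for the example, not the exact training procedure.

```python
import torch

def gna(tokens: torch.Tensor, labels: torch.Tensor, alpha: float = 1.0, beta: float = 1.0):
    """Global Neighborhood Augmentation: mix pairs of node sequences in a batch, cf. Eq. (20)."""
    lam = torch.distributions.Beta(alpha, beta).sample().item()
    perm = torch.randperm(tokens.size(0))                      # random partner for every node
    mixed_tokens = lam * tokens + (1.0 - lam) * tokens[perm]
    mixed_labels = lam * labels + (1.0 - lam) * labels[perm]   # labels assumed one-hot / soft
    return mixed_tokens, mixed_labels

def lna(tokens: torch.Tensor, mask_ratio: float = 0.5) -> torch.Tensor:
    """Local Neighborhood Augmentation: mask out whole hop tokens per node, cf. Eq. (21)."""
    b, k_plus_1, _ = tokens.shape
    n_masked = max(1, int(k_plus_1 * mask_ratio))              # at least one hop is masked
    mask = torch.ones(b, k_plus_1, 1)
    for i in range(b):
        hops = torch.randperm(k_plus_1)[:n_masked]             # hops to destroy for this node
        mask[i, hops] = 0.0
    return tokens * mask

# toy usage: a batch of 4 nodes, K+1 = 5 hop tokens, 8-dim features, 3 classes
tokens = torch.randn(4, 5, 8)
labels = torch.eye(3)[torch.tensor([0, 2, 1, 0])]
aug_tokens, aug_labels = gna(lna(tokens), labels)
print(aug_tokens.shape, aug_labels.shape)  # torch.Size([4, 5, 8]) torch.Size([4, 3])
```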
}, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct extensive experiments to validate the effectiveness of NAGphormer, and the extended version of NAGphormer via NrAug called NAGphormer+. We for S i v in S batch do 6:\nS j v = random sample(S batch );\nCalculate Sglo v , Ỹ by Eq. ( 20) using S i v , S j v , λ, Y; first introduce the experimental setup. Then we report the performances of NAGphormer and NAGphormer+ against representative baselines on small-scale and large-scale realworld datasets. Finally, we provide the parameter and ablation studies to understand our proposed methods deeply." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b29", "b29", "b1", "b2", "b35", "b37", "b45", "b46", "b29", "b20", "b25", "b21", "b22", "b30", "b31", "b61", "b60" ], "table_ref": [ "tab_1" ], "text": "Here we briefly introduce the datasets, baselines, and implementation details in our experiments. Datasets. We conduct experiments on nine widely used datasets of various scales, including six small-scale datasets and three relatively large-scale datasets. For small-scale datasets, we adopt Pubmed, CoraFull, Computer, Photo, CS and Physics from the Deep Graph Library (DGL). We apply 60%/20%/20% train/val/test random splits for small-scale datasets. For large-scale datasets, we adopt AMiner-CS, Reddit and Amazon2M from [30]. The splits of large-scale datasets follow the settings of [30]. Statistics of the datasets are reported in Table 1.\nBaselines. We compare NAGphormer with 12 advanced baselines, including: (I) four full-batch GNNs: GCN [2], GAT [3], APPNP [36] and GPRGNN [38]; (II) three scalable GNNs: GraphSAINT [46], PPRGo [47] and GRAND+ [30]; (III) five graph Transformers 1 : GT [21],\n1. Another recent graph Transformer, SAT [26], is not considered as it reports OOM even in our small-scale graphs.\nSAN [22], Graphormer [23], GraphGPS [31] and Node-Former [32].\nImplementation details. Referring to the recommended settings in the official implementations, we perform hyperparameter tuning for each baseline. For the model configuration of NAGphormer, we try the number of Transformer layers in {1, 2, ..., 5}, the hidden dimension in {128, 256, 512}, and the propagation steps in {2, 3, ..., 20}. Parameters are optimized with the AdamW [62] optimizer, using a learning rate of in {1e -3, 5e -3, 1e -4} and the weight decay of {1e -4, 5e -4, 1e -5}. We also search the dropout rate in {0.1, 0.3, 0.5}. We follow the setting in [61] and set the shape parameters α and β of beta distribution in GNA to 1.0. The batch size is set to 2000. The training process is early stopped within 50 epochs. For the hyper-parameters of NrAug, we try the mask ratio τ in {0.25, 0.5, 0.75}, and the augmented probability p aug in {0.25, 0.5, 0.75, 1.0}. All experiments are conducted on a Linux server with 1 I9-9900k CPU, 1 RTX 2080TI GPU and 64G RAM." }, { "figure_ref": [], "heading": "Comparison on Small-scale Datasets", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We conduct 10 trials with random seeds for each model and take the mean accuracy and standard deviation for comparison on small-scale datasets, and the results are reported in Table 2. From the experimental results, we can observe that NAGphormer outperforms the baselines consistently on all these datasets. 
For the superiority over GNN-based methods, it is because NAGphormer utilizes Hop2Token and the Transformer model to capture the semantic relevance of different hop neighbors overlooked in most GNNs, especially compared to two decoupled GCNs, APPNP and GPRGNN. Besides, the performance of NAGphormer also surpasses graph Transformer-based methods, indicating that leveraging the local information is beneficial for node classification. In particular, NAGphormer outperforms GT and SAN, which also introduce the eigenvectors of Laplacian matrix as the structural encoding into Transformers for learning the node representations, demonstrating the superiority of our proposed NAGphormer. Moreover, We observe that Graphormer, SAN, and GraphGPS suffer from the out-of-memory error even in some small graphs, further demonstrating the necessity of designing a scalable graph Transformer for large-scale graphs. Finally, it is noteworthy that NAGphormer+ surpasses all models and leads to stateof-the-art results, which shows the performance of NAGphormer has been further improved upon incorporating NrAug, indicating that NrAug effectively augments the input data from small-scale datasets." }, { "figure_ref": [], "heading": "Comparison on Large-scale Datasets", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "To verify the scalability of NAGphormer and NAGphormer+, we continue the comparison on three large-scale datasets. For the baselines, we only compare with three scalable GNNs, as existing graph Transformers can not directly work on such large-scale datasets due to their high training cost. The results are summarized in Table 3. One can see that NAGphormer consistently outperforms the scalable GNNs on all datasets, indicating that NAGphormer can better preserve the local information of nodes and is capable of handling the node classification task in large graphs. Furthermore, NAGphormer+ boosts the performance on all the three datasets, showing that NAGphormer+ can still effectively perform the node classification task on large-scale datasets." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "To analyze the effectiveness of structural encoding and attention-based readout function, we perform a series of ablation studies on all datasets.\nStructural encoding. We compare our proposed NAGphormer and NAGphormer+ to its variant without the structural encoding module to measure the gain of structural encoding. The results are summarized in Table 4. We can observe that the gains of adding structural encoding vary in different datasets, since different graphs exhibit different topology structure. Therefore, the gain of structural encoding is sensitive to the structure of graphs. These results also indicate that introducing the structural encoding can improve the model performance for the node classification task.\nAttention-based readout function. We conduct a comparative experiment between the proposed attention-based readout function ATT. in Eq. ( 11) with previous readout functions, i.e., SIN. and SUM.. The function of SIN. utilizes the corresponding representation of the node itself learned by the Transformer layer as the final output to predict labels. And SUM. can be regarded as aggregating all information of different hops equally. We evaluate the performance of Fig. 3. On the number of propagation steps K. be attributed to its use of mixed samples generated from multiple nodes' data to capture a wider range of neighborhood information. 
Furthermore, the two methods continue to improve the performance on most datasets when combined, demonstrating that the two methods can complement each other.\nThen, we study the influence of two key hyperparameters, the mask ratio τ for LNA and the probability p aug , on the model performance. For simplicity, we set K to the value at which the searched parameter yields the best performance for NAGphormer. We interpolate the test accuracy after the grid search for hyper-parameters. As shown in Figure 5, applying NrAug on different datasets to achieve the best performance requires different hyperparameters. Generally speaking, a larger value of p aug tends to result in a more effective combination of GNA and LNA.\nIn a word, it is clear that we can easily implement NrAug leveraging the benefits of Hop2Token, which can transform irregular and non-Euclidean data into structural data. Experiments demonstrate that our NrAug can improve the model performance, further highlighting the innovative nature of NAGphormer." }, { "figure_ref": [], "heading": "Efficiency Study", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "In this subsection, we validate the efficiency of NAGphormer and NAGphormer+ on large-scale graphs. Specifically, we compare the training cost in terms of running time (s) and GPU memory (MB) of NAGphormer, NAGphormer+ and three scalable GNNs, PPRGo, Graph-SAINT and GRAND+. For scalable GNNs, we adopt the official implements on Github. However, all methods contain diverse pre-processing steps built on different programming language frameworks, such as approximate matrixcalculation based on C++ framework in GRAND+. To ensure a fair comparison, we report the running time cost including the training stage and inference stage since these stages of all models are based on Pytorch framework. The results are summarized in Table 6.\nFrom the results, we can observe that NAGphormer shows high efficiency when dealing with large graphs. For instance, on Amazon2M which contains two million nodes and 60 million edges, NAGphormer achieves almost 3× acceleration compared with the second fastest model PPRGo. The reason is that the time complexity of NAGphormer mainly depends on the number of nodes and is not related to the number of edges, while the time consumption of other methods is related to the number of both edges and nodes since these methods involve the propagation operation during the training and inference stages. And the increase in time required for NAGphormer+ compared to NAGphormer may be attributed to NrAug providing more challenging data, thereby causing the network to spend more time on learning. But the additional time required is acceptable. As for the GPU memory cost, since NAGphormer utilizes the mini-batch training, the GPU memory cost is determined by the batch size. Hence, the GPU memory cost of NAGphormer and NAGphormer+ is affordable by choosing a proper batch size even on large-scale graphs." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we first propose NAGphormer, a novel and powerful graph Transformer for the node classification task. Through two novel components, Hop2Token and attentionbased readout function, NAGphormer can handle largescale graphs and adaptively learn the node representation from multi-hop neighborhoods. The theoretical analysis indicates that NAGphormer can learn more expressive node representations than the decoupled GCN. 
Based on the property of Hop2Token, we further propose NrAug, a novel data augmentation method augmenting the neighborhood information from global and local perspectives, to enhance the training effect of NAGphormer.\nExperiments on various datasets from small to large demonstrate the superiority of NAGphormer over representative graph Transformers and Graph Neural Networks, and the effectiveness of proposed NrAug in strengthening the performance of NAGphormer. Further ablation study shows that the effectiveness of structural encoding and attentionbased readout function in NAGphormer, followed by the parameter studies. And analysis of the two components of NrAug shows that both LNA and GNA are useful for boosting the model performance. In the end, we show that NAGphormer and NAGphormer+ are efficient on memory and running time. We can conclude that our tokenized design makes graph Transformers possible to handle large graphs." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work is supported by National Natural Science Foundation (U22B2017,62076105)." }, { "figure_ref": [], "heading": "Parameter Study", "publication_ref": [], "table_ref": [], "text": "To further evaluate the performance of NAGphormer and NAGphormer+, we study the influence of two key parameters: the number of propagation steps K and the number of Transformer layers L. Specifically, we perform experiments on AMiner-CS, Reddit and Amazon2M by setting different values of K and L, respectively.\nOn parameter K. We fix L = 1 and vary the number of propagation steps K in {4, 6, • • • , 20}. Figure 3 reports the model performance. We can observe that the values of K are different for each dataset to achieve the best performance since different networks exhibit different neighborhood structures. Besides, we can also observe that the model performance does not decline significantly even if K is relatively large to 20. For instance, the performance on Reddit dataset changes slightly (< 0.1%) with the increment of K, which indicates that learning the node representations from information of multi-hop neighborhoods via the self-attention mechanism and attention-based readout function can alleviate the impact of over-smoothing and oversquashing problems. In addition, the model performance changes differently on three datasets with the increment of K. The reason may be that these datasets are different types of networks and have diverse properties. This observation also indicates that neighborhood information on different types of networks has different effects on the model performance. In practice, we set K = 16 for AMiner-CS, and set K = 10 for others since the large propagation step will bring the high time cost of Hop2Token on Amanzon2M. On parameter L. We fix the best value of K and vary L from 1 to 5 on each dataset. The results are shown in Figure 4. Generally speaking, a smaller L can achieve a high accuracy while a larger L degrades the performance of NAGphormer and NAGphormer+. Such a result can attribute to the fact that a larger L is more likely to cause over-fitting. we set L = 3 for AMiner-CS, and set L = 1 for other datasets.\nIt is worth noting that the variation trend of NAGphormer+ under different parameters is essentially consistent with that of NAGphormer, indicating that the NrAug we designed is highly suitable for NAGphormer." 
}, { "figure_ref": [], "heading": "Analysis of NrAug", "publication_ref": [], "table_ref": [], "text": "We further conduct additional experiments to deeply analyze our proposed NrAug. As mentioned in Section 5.3, the two core modules of NrAug, GNA and LNA, could be applied independently for augmenting the input data of NAGphormer. Hence, we first conduct experiments to evaluate the performance of NAGphormer via only GNA or LNA. The results are reported in Table 5.\nAs shown in " } ]
Graph Transformers, an emerging architecture for graph representation learning, suffer from quadratic complexity in the number of nodes when handling large graphs. To this end, we propose the Neighborhood Aggregation Graph Transformer (NAGphormer), which treats each node as a sequence of tokens constructed by our proposed Hop2Token module. For each node, Hop2Token aggregates the neighborhood features from different hops into different representations, producing a sequence of token vectors as one input. In this way, NAGphormer can be trained in a mini-batch manner and thus scales to large graphs. Moreover, we mathematically show that, compared with decoupled Graph Convolutional Networks, a category of advanced Graph Neural Networks (GNNs), NAGphormer can learn more informative node representations from multi-hop neighborhoods. In addition, we propose a new data augmentation method called Neighborhood Augmentation (NrAug), which builds on the output of Hop2Token and simultaneously augments neighborhood features from both global and local views to strengthen the training of NAGphormer. Extensive experiments on benchmark datasets ranging from small to large demonstrate the superiority of NAGphormer over existing graph Transformers and mainstream GNNs, as well as the effectiveness of NrAug in further boosting NAGphormer.
Tokenized Graph Transformer with Neighborhood Augmentation for Node Classification in Large Graphs
[ { "figure_caption": "Algorithm 2 1 : 2 :212The Neighborhood Augmentation Algorithm Input: Sequences of all nodes in a mini-batch S batch ; Label matrix Y; Mask ratio τ ; Probability p aug ; Shape parameters α, β of the beta distribution Output: Augmented sequences of all nodes in a batch Sbatch ; Augmented label matrix Ỹ if random(0, 1) > p aug then Sbatch = S batch ; Ỹ = Y;", "figure_data": "", "figure_id": "fig_2", "figure_label": "212", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. On the number of propagation steps L.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Statistics on datasets.", "figure_data": "Dataset# Nodes# Edges # Features # ClassesPubmed19,71744,3245003CoraFull19,793126,8428,71070Computer13,752491,72276710Photo7,650238,1637458CS18,333163,7886,80515Physics34,493495,9248,4155AMiner-CS593,4866,217,00410018Reddit232,965 11,606,91960241Amazon2M 2,449,029 61,859,14010047", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of all models in terms of mean accuracy ± stdev (%) on small-scale datasets. The best results appear in bold. OOM indicates the out-of-memory error. ± 0.12 61.76 ± 0.14 89.65 ± 0.52 92.70 ± 0.20 92.92 ± 0.12 96.18 ± 0.07 GAT 86.32 ± 0.16 64.47 ± 0.18 90.78 ± 0.13 93.87 ± 0.11 93.61 ± 0.14 96.17 ± 0.08 APPNP 88.43 ± 0.15 65.16 ± 0.28 90.18 ± 0.17 94.32 ± 0.14 94.49 ± 0.07 96.54 ± 0.07 GPRGNN 89.34 ± 0.25 67.12 ± 0.31 89.32 ± 0.29 94.49 ± 0.14 95.13 ± 0.09 96.85 ± 0.08 GraphSAINT 88.96 ± 0.16 67.85 ± 0.21 90.22 ± 0.15 91.72 ± 0.13 94.41 ± 0.09 96.43 ± 0.05 PPRGo 87.38 ± 0.11 63.54 ± 0.25 88.69 ± 0.21 93.61 ± 0.12 92.52 ± 0.15 95.51 ± 0.08 GRAND+ 88.64 ± 0.09 71.37 ± 0.11 88.74 ± 0.11 94.75 ± 0.12 93.92 ± 0.08 96.47 ± 0.04 ± 0.14 61.82 ± 0.25 91.12 ± 0.19 95.27 ± 0.17 95.68 ± 0.08 97.19 ± 0.04 NAGphormer 89.70 ± 0.19 71.51 ± 0.13 91.22 ± 0.14 95.49 ± 0.11 95.75 ± 0.09 97.34 ± 0.", "figure_data": "MethodPubmedCoraFullComputerPhotoCSPhysicsGCN 86.54 GT 88.79 ± 0.12 61.05 ± 0.38 91.18 ± 0.17 94.74 ± 0.13 94.64 ± 0.13 97.05 ± 0.05GraphormerOOMOOMOOM92.74 ± 0.14OOMOOMSAN88.22 ± 0.15 59.01 ± 0.34 89.83 ± 0.16 94.86 ± 0.10 94.51 ± 0.15OOMGraphGPS88.94 ± 0.16 55.76 ± 0.23OOM95.06 ± 0.13 93.93 ± 0.12OOMNodeFormer89.24", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": ".34 ± 0.09", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of all models in terms of mean accuracy ± stdev (%) on large-scale datasets. 
The best results appear in bold.", "figure_data": "MethodAMiner-CSRedditAmazon2MPPRGo49.07 ± 0.1990.38 ± 0.1166.12 ± 0.59GraphSAINT51.86 ± 0.2192.35 ± 0.0875.21 ± 0.15GRAND+54.67 ± 0.2592.81 ± 0.0375.49 ± 0.11NAGphormer56.21 ± 0.4293.58 ± 0.0577.43 ± 0.24NAGphormer+ 57.", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The accuracy (%) with or without structural encoding.", "figure_data": "Pumbed CorafullCSComputer Photo Physics Aminer-CS Reddit Amazon2MW/O-SE89.0670.4295.5290.4495.0297.1055.6493.4776.98NAGphormerWith-SE89.7071.5195.7591.2295.4997.3456.2193.5877.43Gain+0.64+1.09+0.23+0.78+0.47+0.24+0.57+0.11+0.45W/O-SE90.0971.8195.9091.6096.2097.2756.1493.5377.78NAGphormer+With-SE90.3872.1696.0691.9596.6197.3457.0293.7477.98Gain+0.29+0.35+0.16+0.35+0.41+0.07+0.88+0.21+0.20NAGphormerNAGphormer+AMiner-CSRedditAmazon2MAccuracy (%)48.00 50.00 52.00 54.00 56.00 58.0093.00 93.25 93.50 93.75 94.0072.00 74.00 76.00 78.00 80.004 6 8 10 12 14 16 18 204 6 8 10 12 14 16 18 204 6 8 10 12 14 16 18 20KKK", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The accuracy (%) of NAGphormer with different augmentation methods. The best results appear in bold.", "figure_data": "Pubmed CoraFull Computer PhotoCSPhysics Aminer-CS Reddit Amazon2MNAGphormer89.7071.5191.2295.49 95.7597.3456.2193.5877.43+LNA89.8971.8391.8896.36 95.8997.2355.8893.2677.37+GNA90.3172.1190.9996.36 95.9997.2556.5693.5577.60+NrAug90.3872.1691.9596.61 96.0697.3457.0293.7477.98", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The training cost on large-scale graphs in terms of GPU memory (MB) and running time (s).", "figure_data": "Aminer-CSRedditAmazon2MMemory (MB) Time (s) Memory (MB) Time (s) Memory (MB) Time (s)GraphSAINT1,64123.672,56543.155,317334.08PPRGo1,07514.211,09335.731,097152.62GRAND+1,09121.411,213197.971,123207.85NAGphormer1,82719.871,92520.722,03558.66NAGphormer+2,51826.922,70646.802,29065.22", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" } ]
Jinsong Chen; Chang Liu; Kaiyuan Gao; Gaichao Li; K He
[ { "authors": "M Chen; Z Wei; Z Huang; B Ding; Y Li", "journal": "", "ref_id": "b0", "title": "Simple and Deep Graph Convolutional Networks", "year": "2020" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b1", "title": "Semi-supervised Classification with Graph Convolutional Networks", "year": "2017" }, { "authors": "P Veličković; G Cucurull; A Casanova; A Romero; P Lio; Y Bengio", "journal": "", "ref_id": "b2", "title": "Graph Attention Networks", "year": "2018" }, { "authors": "J Gilmer; S S Schoenholz; P F Riley; O Vinyals; G E Dahl", "journal": "", "ref_id": "b3", "title": "Neural Message Passing for Quantum Chemistry", "year": "2017" }, { "authors": "K Xu; W Hu; J Leskovec; S Jegelka", "journal": "", "ref_id": "b4", "title": "How Powerful Are Graph Neural Networks", "year": "2019" }, { "authors": "W Fan; Y Ma; Q Li; Y He; Y E Zhao; J Tang; D Yin", "journal": "", "ref_id": "b5", "title": "Graph Neural Networks for Social Recommendation", "year": "2019" }, { "authors": "R Ying; R He; K Chen; P Eksombatchai; W L Hamilton; J Leskovec", "journal": "", "ref_id": "b6", "title": "Graph Convolutional Neural Networks for Webscale Recommender Systems", "year": "2018" }, { "authors": "M Zhang; Y Chen", "journal": "", "ref_id": "b7", "title": "Link Prediction Based on Graph Neural Networks", "year": "2018" }, { "authors": "D Jin; Z Liu; W Li; D He; W Zhang", "journal": "", "ref_id": "b8", "title": "Graph Convolutional Networks Meet Markov Random Fields: Semi-supervised Community Detection in Attribute Networks", "year": "2019" }, { "authors": "D Chen; Y Lin; W Li; P Li; J Zhou; X Sun", "journal": "", "ref_id": "b9", "title": "Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks From the Topological View", "year": "2020" }, { "authors": "U Alon; E Yahav", "journal": "", "ref_id": "b10", "title": "On the Bottleneck of Graph Neural Networks and its Practical Implications", "year": "2021" }, { "authors": "C Yang; R Wang; S Yao; S Liu; T Abdelzaher", "journal": "", "ref_id": "b11", "title": "Revisiting Over-smoothing in Deep GCNs", "year": "2020" }, { "authors": "W Lu; Y Zhan; Z Guan; L Liu; B Yu; W Zhao; Y Yang; D Tao", "journal": "", "ref_id": "b12", "title": "SkipNode: On Alleviating Over-smoothing for Deep Graph Convolutional Networks", "year": "2021" }, { "authors": "W Huang; Y Rong; T Xu; F Sun; J Huang", "journal": "", "ref_id": "b13", "title": "Tackling Over-Smoothing for General Graph Convolutional Networks", "year": "2020" }, { "authors": "Q Sun; J Li; H Yuan; X Fu; H Peng; C Ji; Q Li; P S Yu", "journal": "", "ref_id": "b14", "title": "Position-aware Structure Learning for Graph Topologyimbalance by Relieving Under-reaching and Over-squashing", "year": "2022" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "", "ref_id": "b15", "title": "Attention Is All You Need", "year": "2017" }, { "authors": "J Devlin; M Chang; K Lee; K Toutanova", "journal": "NAACL-HLT", "ref_id": "b16", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Y Liu; M Ott; N Goyal; J Du; M Joshi; D Chen; O Levy; M Lewis; L Zettlemoyer; V Stoyanov", "journal": "", "ref_id": "b17", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "year": "2019" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly; J Uszkoreit; N Houlsby", "journal": "", "ref_id": 
"b18", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2021" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b19", "title": "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows", "year": "2021" }, { "authors": "V P Dwivedi; X Bresson", "journal": "", "ref_id": "b20", "title": "A Generalization of Transformer Networks to Graphs", "year": "2020" }, { "authors": "D Kreuzer; D Beaini; W Hamilton; V Létourneau; P Tossou", "journal": "", "ref_id": "b21", "title": "Rethinking Graph Transformers with Spectral Attention", "year": "2021" }, { "authors": "C Ying; T Cai; S Luo; S Zheng; G Ke; D He; Y Shen; T.-Y Liu", "journal": "", "ref_id": "b22", "title": "Do Transformers Really Perform Badly for Graph Representation", "year": "2021" }, { "authors": "P Jain; Z Wu; M Wright; A Mirhoseini; J E Gonzalez; I Stoica", "journal": "", "ref_id": "b23", "title": "Representing Long-Range Context for Graph Neural Networks with Global Attention", "year": "2021" }, { "authors": "E Min; R Chen; Y Bian; T Xu; K Zhao; W Huang; P Zhao; J Huang; S Ananiadou; Y Rong", "journal": "", "ref_id": "b24", "title": "Transformer for Graphs: An Overview from Architecture Perspective", "year": "2022" }, { "authors": "D Chen; L Bray; K Borgwardt", "journal": "", "ref_id": "b25", "title": "Structure-aware transformer for graph representation learning", "year": "2022" }, { "authors": "J Chen; T Ma; C Xiao", "journal": "", "ref_id": "b26", "title": "FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling", "year": "2018" }, { "authors": "D Zou; Z Hu; Y Wang; S Jiang; Y Sun; Q Gu", "journal": "", "ref_id": "b27", "title": "Layerdependent Importance Sampling for Training Deep and Large Graph Convolutional Networks", "year": "2019" }, { "authors": "M Chen; Z Wei; B Ding; Y Li; Y Yuan; X Du; J.-R Wen", "journal": "", "ref_id": "b28", "title": "Scalable Graph Neural Networks via Bidirectional Propagation", "year": "2020" }, { "authors": "W Feng; Y Dong; T Huang; Z Yin; X Cheng; E Kharlamov; J Tang", "journal": "", "ref_id": "b29", "title": "GRAND+: Scalable Graph Random Neural Networks", "year": "2022" }, { "authors": "L Rampásek; M Galkin; V P Dwivedi; A T Luu; G Wolf; D Beaini", "journal": "", "ref_id": "b30", "title": "Recipe for a General, Powerful, Scalable Graph Transformer", "year": "2022" }, { "authors": "Q Wu; W Zhao; Z Li; D Wipf; J Yan", "journal": "", "ref_id": "b31", "title": "NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification", "year": "2022" }, { "authors": "K M Choromanski; V Likhosherstov; D Dohan; X Song; A Gane; T Sarl Ós; P Hawkins; J Q Davis; A Mohiuddin; L Kaiser; D B Belanger; L J Colwell; A Weller", "journal": "", "ref_id": "b32", "title": "Rethinking Attention with Performers", "year": "2021" }, { "authors": "M Zaheer; G Guruganesh; K A Dubey; J Ainslie; C Alberti; S Onta Ñ Ón; P Pham; A Ravula; Q Wang; L Yang; A Ahmed", "journal": "", "ref_id": "b33", "title": "Big Bird: Transformers for Longer Sequences", "year": "2020" }, { "authors": "H Dong; J Chen; F Feng; X He; S Bi; Z Ding; P Cui", "journal": "", "ref_id": "b34", "title": "On the Equivalence of Decoupled Graph Convolution Network and Label Propagation", "year": "2021" }, { "authors": "J Klicpera; A Bojchevski; S G Ünnemann", "journal": "", "ref_id": "b35", "title": "Predict then Propagate: Graph Neural Networks meet Personalized PageRank", "year": "2019" }, { "authors": "F Wu; A 
Souza; T Zhang; C Fifty; T Yu; K Weinberger", "journal": "", "ref_id": "b36", "title": "Simplifying Graph Convolutional Networks", "year": "2019" }, { "authors": "E Chien; J Peng; P Li; O Milenkovic", "journal": "", "ref_id": "b37", "title": "Adaptive Universal Generalized PageRank Graph Neural Network", "year": "2021" }, { "authors": "J Chen; K Gao; G Li; K He", "journal": "", "ref_id": "b38", "title": "NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs", "year": "2023" }, { "authors": "Y Rong; W Huang; T Xu; J Huang", "journal": "", "ref_id": "b39", "title": "DropEdge: Towards Deep Graph Convolutional Networks on Node Classification", "year": "2020" }, { "authors": "V Verma; M Qu; K Kawaguchi; A Lamb; Y Bengio; J Kannala; J Tang", "journal": "", "ref_id": "b40", "title": "GraphMix: Improved Training of GNNs for Semi-Supervised Learning", "year": "2021" }, { "authors": "S Liu; R Ying; H Dong; L Li; T Xu; Y Rong; P Zhao; J Huang; D Wu", "journal": "", "ref_id": "b41", "title": "Local Augmentation for Graph Neural Networks", "year": "2022" }, { "authors": "W Jin; T Derr; Y Wang; Y Ma; Z Liu; J Tang", "journal": "", "ref_id": "b42", "title": "Node Similarity Preserving Graph Convolutional Networks", "year": "2021" }, { "authors": "W Hamilton; Z Ying; J Leskovec", "journal": "", "ref_id": "b43", "title": "Inductive Representation Learning on Large Graphs", "year": "2017" }, { "authors": "W.-L Chiang; X Liu; S Si; Y Li; S Bengio; C.-J Hsieh", "journal": "", "ref_id": "b44", "title": "Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks", "year": "2019" }, { "authors": "H Zeng; H Zhou; A Srivastava; R Kannan; V K Prasanna", "journal": "", "ref_id": "b45", "title": "GraphSAINT: Graph Sampling Based Inductive Learning Method", "year": "2020" }, { "authors": "A Bojchevski; J Klicpera; B Perozzi; A Kapoor; M Blais; B Ózemberczki; M Lukasik; S G Ünnemann", "journal": "", "ref_id": "b46", "title": "Scaling Graph Neural Networks with Approximate Pagerank", "year": "2020" }, { "authors": "J Zhao; C Li; Q Wen; Y Wang; Y Liu; H Sun; X Xie; Y Ye", "journal": "", "ref_id": "b47", "title": "Gophormer: Ego-Graph Transformer for Node Classification", "year": "2021" }, { "authors": "M S Hussain; M J Zaki; D Subramanian", "journal": "", "ref_id": "b48", "title": "Global Self-Attention as a Replacement for Graph Convolution", "year": "2022" }, { "authors": "Z Zhang; Q Liu; Q Hu; C Lee", "journal": "", "ref_id": "b49", "title": "Hierarchical Graph Transformer with Adaptive Node Sampling", "year": "2022" }, { "authors": "Y Wang; W Wang; Y Liang; Y Cai; B Hooi", "journal": "", "ref_id": "b50", "title": "Mixup for Node and Graph Classification", "year": "2021" }, { "authors": "W Feng; J Zhang; Y Dong; Y Han; H Luan; Q Xu; Q Yang; E Kharlamov; J Tang", "journal": "", "ref_id": "b51", "title": "Graph Random Neural Networks for Semi-Supervised Learning on Graphs", "year": "2020" }, { "authors": "D Chen; Y Lin; W Li; P Li; J Zhou; X Sun", "journal": "", "ref_id": "b52", "title": "Measuring and Relieving the Over-Smoothing Problem for Graph Neural Networks from the Topological View", "year": "2020" }, { "authors": "T Zhao; Y Liu; L Neves; O Woodford; M Jiang; N Shah", "journal": "", "ref_id": "b53", "title": "Data Augmentation for Graph Neural Networks", "year": "2021" }, { "authors": "H Park; S Lee; S Kim; J Park; J Jeong; K.-M Kim; J.-W Ha; H J Kim", "journal": "", "ref_id": "b54", "title": "Metropolis-Hastings Data Augmentation for Graph Neural 
Networks", "year": "2021" }, { "authors": "K Kong; G Li; M Ding; Z Wu; C Zhu; B Ghanem; G Taylor; T Goldstein", "journal": "", "ref_id": "b55", "title": "Robust Optimization as Data Augmentation for Large-scale Graphs", "year": "2022" }, { "authors": "M Defferrard; X Bresson; P Vandergheynst", "journal": "", "ref_id": "b56", "title": "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering", "year": "2016" }, { "authors": "Q He; J Chen; H Xu; K He", "journal": "", "ref_id": "b57", "title": "Structural Robust Label Propagation on Homogeneous Graphs", "year": "2022" }, { "authors": "J Gasteiger; S Weiß; S G Ünnemann", "journal": "", "ref_id": "b58", "title": "Diffusion Improves Graph Learning", "year": "2019" }, { "authors": "R Xiong; Y Yang; D He; K Zheng; S Zheng; C Xing; H Zhang; Y Lan; L Wang; T Liu", "journal": "", "ref_id": "b59", "title": "On Layer Normalization in the Transformer Architecture", "year": "2020" }, { "authors": "H Zhang; M Cissé; Y N Dauphin; D Lopez-Paz", "journal": "", "ref_id": "b60", "title": "mixup: Beyond Empirical Risk Minimization", "year": "2018" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b61", "title": "Decoupled Weight Decay Regularization", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 312, 290.9, 252, 21.19 ], "formula_id": "formula_0", "formula_text": "V = {v 1 , v 2 , • • • , v n }, and n = |V |. Each node v ∈ V has a feature vector x v ∈ X," }, { "formula_coordinates": [ 4, 384.81, 536.78, 179.2, 12.03 ], "formula_id": "formula_1", "formula_text": "H (l+1) = σ( ÂH (l) W (l) ),(1)" }, { "formula_coordinates": [ 4, 341.94, 553.71, 171.55, 12.69 ], "formula_id": "formula_2", "formula_text": "H (l) ∈ R n×d (l) and W (l) ∈ R d (l) ×d (l+1)" }, { "formula_coordinates": [ 4, 326.64, 720.71, 237.36, 29.64 ], "formula_id": "formula_3", "formula_text": "Z = K k=0 β k H (k) , H (k) = ÂH (k-1) , H (0) = f θ (X), (2)" }, { "formula_coordinates": [ 5, 90.26, 305.45, 209.74, 11.56 ], "formula_id": "formula_4", "formula_text": "Q = HW Q , K = HW K , V = HW V ,(3)" }, { "formula_coordinates": [ 5, 77.04, 325.74, 222.96, 11.06 ], "formula_id": "formula_5", "formula_text": "W Q ∈ R d×d K , W K ∈ R d×d K and W V ∈ R d×d V are" }, { "formula_coordinates": [ 5, 115.97, 359.76, 184.04, 23.79 ], "formula_id": "formula_6", "formula_text": "H = softmax QK √ d K V.(4)" }, { "formula_coordinates": [ 5, 317.4, 117.34, 106.8, 20.63 ], "formula_id": "formula_7", "formula_text": "X G [i, k] = X[i]; 4:" }, { "formula_coordinates": [ 5, 403.92, 227.83, 160.08, 12.69 ], "formula_id": "formula_8", "formula_text": "x k v = φ(N k (v)).(5)" }, { "formula_coordinates": [ 5, 312, 271.27, 252, 22.16 ], "formula_id": "formula_9", "formula_text": "S v = (x 0 v , x 1 v , ..., x K v )" }, { "formula_coordinates": [ 5, 365.16, 328.94, 198.84, 9.68 ], "formula_id": "formula_10", "formula_text": "X G to a sequence S = (X 0 , X 1 , • • • , X K )," }, { "formula_coordinates": [ 5, 412.08, 478.35, 151.92, 12.2 ], "formula_id": "formula_11", "formula_text": "X k = Âk X.(6)" }, { "formula_coordinates": [ 6, 149.02, 97.49, 150.98, 9.51 ], "formula_id": "formula_12", "formula_text": "X = X U.(7)" }, { "formula_coordinates": [ 6, 236.03, 160.03, 63.14, 12.19 ], "formula_id": "formula_13", "formula_text": "x v ∈ R 1×(d+s) ." }, { "formula_coordinates": [ 6, 100.76, 225.73, 199.24, 12.69 ], "formula_id": "formula_14", "formula_text": "Z (0) v = x 0 v E; x 1 v E; • • • ; x K v E ,(8)" }, { "formula_coordinates": [ 6, 92.02, 349.39, 207.98, 12.69 ], "formula_id": "formula_16", "formula_text": "Z ( ) v = MSA LN Z ( -1) v + Z ( -1) v ,(9)" }, { "formula_coordinates": [ 6, 94.32, 369.38, 205.68, 12.69 ], "formula_id": "formula_17", "formula_text": "Z ( ) v = FFN LN Z ( ) v + Z ( ) v ,(10)" }, { "formula_coordinates": [ 6, 107.58, 592.88, 192.42, 27.61 ], "formula_id": "formula_18", "formula_text": "α v k = exp((Z v 0 Z v k )W a ) K i=1 exp((Z v 0 Z v i )W a ) ,(11)" }, { "formula_coordinates": [ 6, 124.78, 690.16, 175.22, 29.64 ], "formula_id": "formula_19", "formula_text": "Z v out = Z v 0 + K k=1 α v k Z v k .(12)" }, { "formula_coordinates": [ 6, 402.96, 72.97, 161.04, 12.2 ], "formula_id": "formula_20", "formula_text": "Ŷ = MLP(Z out ),(13)" }, { "formula_coordinates": [ 6, 383.19, 120.08, 180.81, 30.24 ], "formula_id": "formula_21", "formula_text": "L = - i∈V l c j=0 Y i,j ln Ŷi,j .(14)" }, { "formula_coordinates": [ 6, 312, 354.9, 145.15, 10.31 ], "formula_id": "formula_22", "formula_text": "O(b(K + 1) 2 + b(K + 1)d + d 2 L)." }, { "formula_coordinates": [ 6, 395.89, 720.71, 164.15, 29.64 ], "formula_id": "formula_23", "formula_text": "Z i,m = K k=0 β k H (k) i,m . 
(15" }, { "formula_coordinates": [ 6, 560.04, 731, 3.96, 9.14 ], "formula_id": "formula_24", "formula_text": ")" }, { "formula_coordinates": [ 7, 97.76, 74.62, 198.29, 62.65 ], "formula_id": "formula_25", "formula_text": "X i =        H (0) i,0 H (0) i,1 • • • H (0) i,d H (1) i,0 H (1) i,1 • • • H (1) i,d . . . . . . . . . . . . H (K) i,0 H (K) i,1 • • • H (K) i,d        . (16" }, { "formula_coordinates": [ 7, 296.04, 102, 3.96, 9.14 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 7, 118.11, 174.44, 177.93, 51.61 ], "formula_id": "formula_27", "formula_text": "S =      0 0 • • • 0 0 0 • • • 0 . . . . . . . . . . . . β 0 β 1 • • • β K      . (17" }, { "formula_coordinates": [ 7, 296.04, 196.12, 3.96, 9.14 ], "formula_id": "formula_28", "formula_text": ")" }, { "formula_coordinates": [ 7, 103.73, 270.66, 192.32, 51.61 ], "formula_id": "formula_29", "formula_text": "T = SX i =      0 0 • • • 0 0 0 • • • 0 . . . . . . . . . . . . γ 0 γ 1 • • • γ d      , (18" }, { "formula_coordinates": [ 7, 296.04, 292.34, 3.96, 9.14 ], "formula_id": "formula_30", "formula_text": ")" }, { "formula_coordinates": [ 7, 76.79, 331.97, 82.82, 14.05 ], "formula_id": "formula_31", "formula_text": "γ m = K k=0 β k H (k)" }, { "formula_coordinates": [ 7, 48, 390.19, 255.59, 38.84 ], "formula_id": "formula_32", "formula_text": "T f inal m = K k=0 T k,m = (0+0+• • •+γ m ) = K k=0 β k H (k) i,m = Z i,m . (19)" }, { "formula_coordinates": [ 7, 386.53, 179.76, 177.47, 27.72 ], "formula_id": "formula_33", "formula_text": "Sglo v = λS i v + (1 -λ)S j v , Ỹv = λY i + (1 -λ)Y j ,(20)" }, { "formula_coordinates": [ 7, 379.76, 388.06, 184.25, 13.14 ], "formula_id": "formula_34", "formula_text": "Sloc v = M (x 0 v , x 1 v , ..., x K v ),(21)" } ]
2023-05-25
[ { "figure_ref": [ "fig_0" ], "heading": "", "publication_ref": [ "b13", "b20", "b16", "b3", "b25", "b22", "b24", "b11" ], "table_ref": [], "text": "1 Introduction E-commerce platforms, such as Amazon and Lazada, have achieved steady development. These platforms generally provide purchasers' reviews to supply justification information for new consumers and help them make decisions. Nevertheless, the quality and usefulness of reviews can vary hugely: some are helpful with coherent and informative content while others unhelpful with trivial or irrelevant information. Due to this, the Multimodal Review Helpfulness Prediction (MRHP) task is proposed. It ranks the reviews by predicting their helpfulness scores based on the textual and visual * Corresponding Author modality of products and reviews, because helpful reviews should comprise not only precise and informative textual material, but also consistent images with text content (Liu et al., 2021;Nguyen et al., 2022). This can help consumers find helpful reviews instead of unhelpful ones, resulting in more appealing E-commerce platforms.\nIn MRHP, multimodal reviews naturally form ranking partitions based on user votings, where each partition exhibits distinct helpfulness feature level (Ma et al., 2021). As such, the MRHP score regressor's function is to assign scores to indicate the partition for hidden features of product reviews. However, current MRHP approaches employ fullyconnected neural networks (FCNNs), which cannot fulfill the partition objective. In particular, FCNNs are ineffective in feature scaling and transformation, thus being inadept at feature space splitting and failing to work efficiently in ranking problems that involve ranking partitions (Beutel et al., 2018;Qin et al., 2021). An illustration would be in Figure 1, where the helpfulness scores predicted by FC-NNs do not lucidly separate helpful and unhelpful reviews. Severely, some unhelpful reviews possess logits that can even stay in the range of helpful ones, bringing about fallacious ranking.\nIn addition to incompetent model architectures, existing MRHP frameworks also employ suboptimal loss function: they are mostly trained on a pairwise loss to learn review preferences, which unfortunately mismatches the listwise nature of review ordering prediction. Firstly, the mistmatch might empirically give rise to inefficient ranking performance (Pasumarthi et al., 2019;Pobrotyn and Białobrzeski, 2021). Second, pairwise traning loss considers all pairs of review as equivalent. In consequence, the loss cannot differentiate a pair of useful and not useful reviews from a pair of moderately useful and not useful ones, which results in a model that distinguishes poorly between useful and moderately useful reviews. To address these issues, we first propose a Gradient-Boosted Decision Tree (GBDT) as the helpfulness score regressor to utilize both its huge capacity of partitioning feature space (Leboeuf et al., 2020) and differentiability compared with standard decision trees for end-to-end training. We achieve the partition capability with the split (internal) nodes of the tree implemented with non-linear single perceptron, to route review features to the specific subspace in a soft manner.\nFurthermore, we develop a theoretical analysis to demonstrate that pairwise training indeed has lower model generalization than listwise approach. We proceed to propose a novel listwise training objective for the proposed MRHP architecture. 
We also equip our architecture with a listwise attention network that models the interaction among the reviews to capture the listwise context for the MRHP ranking task.\nIn sum, our contributions are four-fold:\n• We propose a novel gradient-boosted decision tree score predictor for multimodal review helpfulness prediction (MRHP) to partition product review features and properly infer helpfulness score distribution.\n• We propose a novel listwise attention module for the MRHP architecture that conforms to the listwise context of the MRHP task by relating reviews in the list.\n• We perform theoretical study with the motivation of ameliorating the model generalization error, and accordingly propose a novel MRHP training objective which satisfies our aim.\n• We conducted comprehensive experiments on two benchmark datasets and found that our approach significantly outperforms both textonly and multimodal baselines, and accomplishes state-of-the-art results for MRHP." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "In this section, we recall the Multimodal Review Helpfulness Prediction (MRHP) problem. Then, we introduce theoretical preliminaries which form the basis of our formal analysis of the ranking losses for the MRHP problem in the next section." }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [ "b13", "b8", "b20" ], "table_ref": [], "text": "Following (Liu et al., 2021;Han et al., 2022;Nguyen et al., 2022), we formulate MRHP as a ranking task. In detail, we consider an instance X i to consist of a product item p i , composed of product description T p i and images I p i , and its respective review list\nR i = {r i,1 , r i,2 , . . . , r i,|R i | }.\nEach review r i,j carries user-generated text T r i,j , images I r i,j , and an integer scalar label y i,j ∈ {0, 1, . . . , S} denoting the helpfulness score of review r i,j . The ground-truth result associated with X i is the descending order determined by the helpfulness score list Y i = {y i,1 , y i,2 , . . . , y i,|R i | }. The MRHP task is to generate helpfulness scores which match the groundtruth ranking order, formulated as follows:\ns i,j = f (p i , r i,j ),(1)\nwhere f represents the helpfulness prediction model taking ⟨p i , r i,j ⟩ as the input." }, { "figure_ref": [], "heading": "Analysis of Generalization Error", "publication_ref": [ "b1" ], "table_ref": [], "text": "The analysis involves the problem of learning a deep θ-parameterized model f θ : X → Y that maps the input space X to output space Y and a stochastic learning algorithm A to solve the optimization problem as follows:\nf θ * = arg min f θ E (x,y)∼P l(f θ ; (x, y)) ,(2)\nwhere P denotes the distribution of (x, y), l the loss function on the basis of the difference between ŷ = f θ (x) and y, and R true (f θ ) = E (x,y)∼P l(f θ ; (x, y)) is dubbed as the true risk.\nSince P is unknown, R true is alternatively solved through optimizing a surrogate empirical risk\nR emp (f θ D ) = 1 N N i=1 l(f θ ; (x i , y i )), where D = {(x i , y i )} N\ni=1 denotes a training dataset drawn from P that f θ D is trained upon. Because the aim of deep neural model training is to produce a model f θ that provides a small gap between the performance over D, i.e. R emp (f θ D ), and over any unseen test set from P, i.e. 
R true (f θ D ), the analysis defines the main focus to be the generalization error\nE(f θ D ) = R true (f θ D ) -R emp (f θ D )\n, the objective to be achieving a tight bound of E(f θ D ), and subsequently the foundation regarding the loss function's Lipschitzness as:\nDefinition 1. (Lipschitzness). A loss function l(ŷ, y) is γ-Lipschitz with respect to ŷ if for γ ≥ 0, ∀u, v ∈ R K , we have: |l(u, y) -l(v, y)| ≤ γ|u -v|,(3)\nwhere | • | denotes the l 1 -norm, K the dimension of the output ŷ.\nGiven the foundation, we have the connection between the properties of loss functions and the generalization error: Theorem 1. Consider a loss function that 0 ≤ l(ŷ, y) ≤ L that is convex and γ-Lipschitz with respect to ŷ. Suppose the stochastic learning algorithm A is executed for T iterations, with an annealing rate λ t to solve problem (2). Then, the following generalization error bound holds with probability at least 1δ (Akbari et al., 2021):\nE(f θ D ) = R true (f θ D ) -R emp (f θ D ) ≤ L log(2/δ) 2N + 2γ 2 T t=1 λ t 2 log(2/δ) T + 2 log(2/δ) N + 1 N .(4)\nTheorem (1) implies that by establishing a loss function L with smaller values of γ and L, we can burnish the model generalization performance." }, { "figure_ref": [ "fig_1" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we elaborate on our proposed architecture, listwise attention network, tree-based helpfulness regressor, and listwise ranking loss along with its comparison against the pairwise one from the theoretical perspective. The overall architecture is illustrated in Figure 2." }, { "figure_ref": [], "heading": "Multimodal Encoding", "publication_ref": [], "table_ref": [], "text": "Our model receives product description T p i , product images I p i , review text T r i,j , and review images I r i,j as input. We perform the encoding procedure for those inputs as follows. Textual Encoding. For both product text T p i and review text T r i,j , we index their sequences of words into the word embeddings and forward to the respective LSTM layer to yield token-wise representations:\nH p i = LSTM p (W emb (T p i )), (5) H r i,j = LSTM r (W emb (T r i,j )),(6)\nwhere t } m t=1 for product and review images, respectively. We then feed those object features into the self-attention module to obtain visual representations as:\nH p i ∈ R l p i ×d , H r i,j ∈ R l r i,\nV p i = SelfAttn({e p i t } m t=1 ),(7)\nV r i,j = SelfAttn({e\nr i,j t } m t=1 ),(8)\nwhere V p i , V r i,j ∈ R m×d , and d denotes the hidden size." }, { "figure_ref": [], "heading": "Coherence Reasoning", "publication_ref": [], "table_ref": [], "text": "We then learn intra-modal, inter-modal, and intraentity coherence among product-review elements.\nIntra-modal Coherence. There are two types of intra-modal coherence relations: (1) product textreview text and (2) product image -review image.\nInitially, we designate self-attention modules to capture the intra-modal interaction as:\nH intraM i,j = SelfAttn([H p i , H r i,j ]),(9)\nV intraM i,j = SelfAttn([V p i , V r i,j ]).(10)\nThen, intra-modal interaction features are passed to a CNN, then condensed into hidden vectors via pooling layer:\nz intraM i,j = Pool(CNN([H intraM i,j , V intraM i,j ])),(11)\nwhere review text (rt). 
Similar to the intra-modal coherence, we first perform cross-modal correlation by leveraging the self-attention mechanism:\nH pt-ri i,j = SelfAttn([H p i , V r i,j ]),(12)\nH pi-rt i,j = SelfAttn([V p i , H r i,j ]).(13)\nThereafter, we pool the above features and concatenate the pooled vectors to attain the inter-modal vector:\nz pt-ri i,j = Pool(H pt-ri i,j ),(14)\nz pi-rt i,j = Pool(H pi-rt i,j ),(15)\nz interM i,j = z pt-ri i,j , z pi-rt i,j .(16)\nIntra-entity Coherence. Analogous to the intermodal coherence, we also conduct self-attention and pooling computation, but on the (1) product text (pt) -product image (pi) and (2) review text (rt) -review image (ri) as follows:\nH pt-pi i = SelfAttn([H p i , V p i ]),(17)\nH rt-ri i,j = SelfAttn([H r i,j , V r i,j ]),(18)\nz pt-pi i = Pool(H pt-pi i ),(19)\nz rt-ri i,j = Pool(H rt-ri i,j ),(20)\nz intraR i,j = z pt-pi i , z rt-ri i,j .(21)\nEventually, the concatenation of the intra-modal, inter-modal, and intra-entity vectors becomes the result of the coherence reasoning phase:\nz i,j = z intraM i,j , z interM i,j , z intraR i,j . (22\n)" }, { "figure_ref": [], "heading": "Listwise Attention Network", "publication_ref": [], "table_ref": [], "text": "In our proposed listwise attention network, we encode list-contextualized representations to consider relative relationship among reviews. We achieve this by utilizing self-attention mechanism to relate list-independent product reviews' features {z i,1 , z i,2 , . . . , z i,|R i | } as follows:\n{z list i,j } |R i | j=1 = SelfAttn({z i,j } |R i | j=1 ),(23)\nwhere R i denotes the review list associated with product p i ." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Gradient-boosted Decision Tree for Helpfulness Estimation", "publication_ref": [], "table_ref": [], "text": "In this section, we delineate our gradient-boosted decision tree to predict helpfulness scores that efficaciously partition review features. Tree Structure. We construct a d tree -depth binary decision tree composed of internal nodes N (|N | = 2 dtree-1 -1) and leaf nodes L (|L| = 2 dtree-1 ). Our overall tree structure is depicted in Figure 2. Score Prediction. Receiving the list-attended vectors {z list i } N i=1 , our decision tree performs soft partitioning through probabilistic routing for those vectors to their target leaf nodes. In such manner, each internal node n calculates the routing decision probability as:\np left n = σ(Linear(z list )),(24)\np right n = 1 -p left n ,(25)\nwhere p left n and p right n denote the likelihood of directing the vector to the left sub-tree and right sub-tree, respectively. Thereupon, the probability of reaching leaf node l is formulated as follows:\nµ l = n∈P(l) (p left n ) 1 ln • (p right n ) 1 rn ,(26)\nwhere 1 ln denotes the indicator function of whether leaf node l belongs to the left sub-tree of the internal node n, equivalently for 1 rn , and P(l)\nthe node sequence path to leaf l. For example, in Figure 2, the routing probability to leaf 6 is µ 6 = p right 1 p left 3 p right 6 . For the score inference at leaf node l, we employ a linear layer for calculation as follows:\ns l,i,j = Linear l (z list i,j ). (27\n)\nwhere s l,i,j denotes the helpfulness score generated at leaf node l. Lastly, due to the probabilistic routing approach, the final helpfulness score f i,j is the average of the leaf node scores weighted by the probabilities of reaching the leaves:\nf i,j = f (p i , r i,j ) = l∈L s l,i,j • µ l . 
(28\n)" }, { "figure_ref": [], "heading": "Listwise Ranking Objective", "publication_ref": [ "b13", "b8", "b20" ], "table_ref": [], "text": "Since MRHP task aims to produce helpfulness order for a list of reviews, we propose to follow a listwise approach to compare the predicted helpfulness scores with the groundtruth. Initially, we convert two lists of prediction scores\n{f i,j } |R i | j=1 and groundtruth labels {y i,j } |R i | j=1 into two probability distributions. f ′ i,j = exp(f i,j ) |R i | t=1 exp(f i,t ) , y ′ i,j = exp(y i,j ) |R i | t=1 exp(y i,t ) .(29)\nSubsequently, we conduct theoretical derivation and arrive in interesting properties of the listwise computation.\nTheoretical Derivation. Our derivation demonstrates that discrimination computation of both listwise and pairwise functions (Liu et al., 2021;Han et al., 2022;Nguyen et al., 2022) satisfy the preconditions in Theorem (1). Lemma 1. Given listwise discrimination function on the total training set as\nL list = - |P | i=1 |R i | j=1 y ′ i,j log(f ′ i,j\n), where P denotes the product set, then L list is convex and γ list -Lipschitz with respect to f ′ i,j . Lemma 2. Given pairwise discrimination function on the total training set as\nL pair = |P | i=1 -f i,r + + f i,r -+ α\n+ , where r + , r -denote two random indices in R i and y i,r + > y i,r -, and α = max\n1≤j≤|R i | (y i,j ) -min 1≤j≤|R i | (y i,j ), then\nL pair is convex and γ pair -Lipschitz with respect to\nf i,r + , f i,r -.\nBased upon the above theoretical basis, we investigate the connection between L list and L pair .\nTheorem 2. Let L list and L pair are γ list -Lipschitz and γ pair -Lipschitz, respectively. Then, the following inequality holds:\nγ list ≤ γ pair . (30\n)\nTheorem 3. Let 0 ≤ L list ≤ L list and 0 ≤ L pair ≤ L pair . Then, the following inequality holds:\nL list ≤ L pair . (31\n)\nWe combine Theorem (1), (2), and (3), to achieve the following result. \n= {p i , {r i,j } |R i | j=1 } |P |\ni=1 . Then, we have the following inequality:\nE(f list D ) ≤ E(f pair D ),(32)\nwhere\nE(f D ) = R true (f D ) -R emp (f D ).\nAs in Theorem (4), models optimized by listwise function achieve a tighter bound on the generalization error than the ones with the pairwise function, thus upholding better generalization performance.\nWe provide proofs of all the lemmas and theorems in Appendix A. Indeed, empirical results in Section 4.6 also verify our theorems.\nWith such foundation, we propose to utilize listwise discrimination as the objective loss function to train our MRHP model:\nL list = - |P | i=1 |R i | j=1 y ′ i,j log(f ′ i,j ).(33)\n4 Experiments" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b13", "b13" ], "table_ref": [], "text": "For evaluation, we conduct experiments on two large-scale MRHP benchmark datasets: Lazada-MRHP and Amazon-MRHP. We present the dataset statistics in Appendix B. Amazon-MRHP (Liu et al., 2021) includes crawled product and review content from Amazon.com, the international e-commerce brand, between 2016 and 2018. All of the product and review texts are expressed in English. Lazada-MRHP (Liu et al., 2021) comprises product information and user-generated reviews from Lazada.com, a popular e-commerce platform in Southeast Asia. Both product and review texts are written in Indonesian.\nBoth datasets are composed of 3 categories: (1) Clothing, Shoes & Jewelry (Clothing), (2) Electronics (Electronics), and (3) Home & Kitchen (Home). We divide the helpfulness votes of the reviews into 5 partitions, i.e. 
[1, 2), [2, 4), [4,8), [8,16), and [16, ∞), corresponding to 5 helpfulness scores, i.e. y i,j ∈ {0, 1, 2, 3, 4}." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b4", "b23" ], "table_ref": [], "text": "For input texts, we leverage pretrained word embeddings with fastText embedding (Bojanowski et al., 2017) and 300-dimensional GloVe word vectors (Pennington et al., 2014) for Lazada-MRHP and Amazon-MRHP datasets, respectively. Each embedded word sequence is passed into an 1-layer LSTM whose hidden dimension is 128. For input images, we extract their ROI features of 2048 dimensions and encode them into 128-dimensional vectors. Our gradient-boosted decision tree score predictor respectively exhibits a depth of 3 and 5 in Lazada-MRHP and Amazon-MRHP datasets, which are determined on the validation performance. We adopt Adam optimizer, whose batch size is 32 and learning rate 1e-3, to train our entire architecture in the end-to-end fashion." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b27", "b5", "b6", "b0", "b33", "b13", "b8", "b20" ], "table_ref": [], "text": "We compare our approach with an encyclopedic list of baselines:\n• BiMPM (Wang et al., 2017): a ranking model that uses 2 BiLSTM layers to encode input sentences.\n• EG-CNN (Chen et al., 2018): a RHP baseline which leverages character-level representations and domain discriminator to improve cross-domain RHP performance.\n• Conv-KNRM (Dai et al., 2018): a CNNbased system which uses kernel pooling on multi-level n-gram encodings to produce ranking scores.\n• PRH-Net (Fan et al., 2019): a RHP baseline that receives product metadata and raw review text as input.\n• SSE-Cross (Abavisani et al., 2020): a crossmodal attention-based approach to filter nonsalient elements in both visual and textual input components.\n• DR-Net (Xu et al., 2020): a combined model of decomposition and relation networks to learn cross-modal association.\n• MCR (Liu et al., 2021): an MRHP model that infers helpfulness scores based on crossmodal attention-based encodings.\n• SANCL (Han et al., 2022): a baseline which extracts salient multimodal entries via probebased attention and applies contrastive learning to refine cross-modal representations.\n• Contrastive-MCR (Nguyen et al., 2022): an MRHP approach utilizing adaptive contrastive strategy to enhance cross-modal representations and performance optimization." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b13", "b8", "b20" ], "table_ref": [ "tab_3", "tab_4" ], "text": "Inspired by previous works (Liu et al., 2021;Han et al., 2022;Nguyen et al., 2022), we report Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG@N), where N = 3 and N = 5. We include the performance of baseline models and our approach in Table 1 and2.\nOn Amazon dataset, we consistently outperform prior methods of both textual and multimodal settings. Particularly, our architecture improves over Contrastive-MCR on MAP of 15.2 points in Clothing, NDCG@3 of 20.4 points in Electronics, and NDCG@5 of 21.0 points in Home subset. 
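For reference, the two metrics reported in this section can be computed per product as in the minimal sketch below. This is our illustration: the function names are assumptions, MAP is shown with a simple "label greater than zero" relevance threshold, and the benchmark's official evaluation script may differ in such details.

```python
# Minimal sketch of per-product MAP and NDCG@N for ranked review lists.
# Assumed simplifications: binary relevance (label > 0) for MAP and a
# linear gain for NDCG; official evaluation code may use other variants.
import math
from typing import List


def ndcg_at_n(ranked_labels: List[int], labels: List[int], n: int) -> float:
    """ranked_labels: gold helpfulness labels sorted by predicted score (desc)."""
    dcg = sum(rel / math.log2(rank + 2)
              for rank, rel in enumerate(ranked_labels[:n]))
    ideal = sum(rel / math.log2(rank + 2)
                for rank, rel in enumerate(sorted(labels, reverse=True)[:n]))
    return dcg / ideal if ideal > 0 else 0.0


def average_precision(ranked_labels: List[int]) -> float:
    """Treats a review as relevant when its helpfulness label is > 0."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_labels, start=1):
        if rel > 0:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0


# Toy usage: one product with 5 reviews.
pred_scores = [0.9, 0.2, 0.7, 0.1, 0.4]
gold_labels = [4, 0, 1, 0, 2]
order = sorted(range(len(pred_scores)), key=lambda i: -pred_scores[i])
ranked = [gold_labels[i] for i in order]
print(ndcg_at_n(ranked, gold_labels, n=3), average_precision(ranked))
```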
Furthermore, we accomplish a gain in MAP of 2.2 points in Clothing over PRH-Net, NDCG@3 of 16.4 points in Electronics and NDCG@5 of 11.8 points in Home category over Conv-KNRM baseline, where PRH-Net and Conv-KNRM are the best prior text-only baselines.\nFor Lazada dataset, which is in Indonesian, we outperform Contrastive-MCR with a significant margin of MAP of 10.4 points in Home, NDCG@5 of 11.6 points in Electronics, and NDCG@3 of 12.4 points in Clothing domain. The text-only variant of our model also gains a considerable improvement of 4.7 points of NDCG@5 in Clothing, 5.0 points of MAP in Electronics over PRH-Net, and 1.4 points of NDCG@3 in Home over Conv-KNRM model.\nThese outcomes demonstrate that our method is able to produce more sensible helpfulness scores to polish the review ranking process, not only being efficacious in English but also generalizing to other language as well. Over and above, it is worth pointing out in Lazada-Electronics, the textual setting of our approach even achieves higher helpfulness" }, { "figure_ref": [], "heading": "Setting", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Clothing Electronics Home MAP N@3 N@5 MAP N@3 N@5 MAP N@3 N@5 Setting Method Clothing Electronics Home MAP N@3 N@5 MAP N@3 N@5 MAP N@3 N@5 prediction capacity than the state-of-the-art multimodal baseline, i.e. the Contrastive-MCR model." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "To verify the impact of our proposed (1) Gradientboosted decision tree regressor, (2) Listwise ranking loss, and (3) Listwise attention network, we conduct ablation experiments on the Home category of the Amazon and Lazada datasets. GBDT Regressor. In this ablation, we substitute our tree-based score predictor with various FC-NNs score regressor. Specifically, we describe each substitution with a sequence of dimensions in its fully-connected layers, and each hidden layer is furnished with a Tanh activation function.\nAs shown in Table 3, FCNN-based score regressors considerably hurt the MRHP performance, with a decline of NDCG@3 of 16.7 points, and MAP of 6.9 points in the Amazon and Lazada datasets, respectively. One potential explanation is that without the decision tree predictor, the model lacks the partitioning ability to segregate the features of helpful and non-helpful reviews. Listwise Ranking Loss. As can be observed in Table 3, replacing listwise objective with the pairwise one degrades the MRHP performance substantially, with a drop of NDCG@3 of 11.8 scores in Amazon, and NDCG@5 of 7.3 scores in Lazada dataset. MRHP, respectively. We can attribute the improvement to the advantage of listwise attention, i.e. supplying the MRHP model with the context among product reviews to assist the model into inferring the reviews' ranking positions more precisely. " }, { "figure_ref": [], "heading": "Based on Theorem 4 and", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Analysis of Generalization Error", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Case Study", "publication_ref": [ "b20" ], "table_ref": [], "text": "In Figure 1, we present helpfulness prediction results predicted by our proposed MRHP model and Contrastive-MCR (Nguyen et al., 2022), the previous best baseline. 
While our model is capable of producing helpfulness scores that evidently sepa- rate helpful with unhelpful product reviews, scores generated by Contrastive-MCR do mingle them. Hypothetically, our method could partition product reviews according to their encoded helpfulness features to obtain inherent separation. We provide more detailed analysis of the partitioning capability of our model and individual produced helpfulness scores in Appendix D and E." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b9", "b10", "b12", "b5", "b7", "b13", "b8", "b12", "b10", "b14", "b34", "b26", "b20", "b30", "b31", "b25", "b16" ], "table_ref": [], "text": "For real-world applications, existing methods are oriented towards extracting hidden features from input samples (Kim et al., 2006;Krishnamoorthy, 2015;Liu et al., 2017;Chen et al., 2018;Nguyen et al., 2021). Modern approaches have gradually taken into account additional and useful modalities, for instance meta-data (Tuan et al., 2016;Fan et al., 2019;Qu et al., 2020), images (Liu et al., 2021;Han et al., 2022), etc. They also depend on hand-crafted features, such as argument-based (Liu et al., 2017), lexical (Krishnamoorthy, 2015;Luu et al., 2015), and semantic features (Yang et al., 2015;Luu et al., 2016;Nguyen and Luu, 2022) to utilize automatic deep representation learning to train the helpfulness predictor. Some also utilize unsupervised learning techniques to polish the learned representations of input samples (Wu et al., 2020(Wu et al., , 2023a;;Nguyen and Luu, 2021;Wu et al., 2022Wu et al., , 2023b)). Despite performance upgrade, deep neural approaches for multimodal RHP (MRHP) problem, have been shown to still be inadept at modeling partitioned and ranking data (Qin et al., 2021), which is the crucial characteristic of MRHP reviews (Ma et al., 2021). In this work, we seek to address those issues for the MRHP system with our proposed tree-based helpfulness predictor and listwise architectural framework.\nIn this paper, for the MRHP task, we introduce a novel framework to take advantage of the partitioned structure of product review inputs and the ranking nature of the problem. Regarding the partitioned preference, we propose a gradientboosted decision tree to route review features towards proper helpfulness subtrees managed by decision nodes. For the ranking nature, we propose listwise attention network and listwise training objective to capture review list-contextualized context. Comprehensive analysis provides both theoretical and empirical grounding of our approach in terms of model generalization. Experiments on two largescale MRHP datasets showcase the state-of-the-art performance of our proposed framework." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b24", "b32" ], "table_ref": [], "text": "Firstly, from the technical perspective, we have advocated the advantages of our proposed listwise loss for the MRHP task in terms of generalization capacity. Nevertheless, there are other various listwise discrimination functions that may prove beneficial for the MRHP model training, for example NeuralNDCG (Pobrotyn and Białobrzeski, 2021), ListMLE (Xia et al., 2008), etc. Moreover, despite the novelty of our proposed gradient-boosted tree in partitioning product reviews into helpful and unhelpful groups, our method does not employ prior contrastive representation learning, whose objective is also to segregate helpful and unhelpful input reviews. 
The contrastive technique might discriminate reviews of distinctive helpfulness features to bring further performance gain to multimodal review helpfulness prediction. At the moment, we leave the exploration of different listwise discrimination functions and contrastive learning as our prospective future research direction.\nSecondly, our study can be extended to other problems which involve ranking operations. For instance, in recommendation, there is a need to rank the items according to their appropriateness to present to the customers in a rational order. Our gradient-boosted decision tree could divide items into corresponding partitions in order for us to recommend products to the customer from the highly appropriate partition to the less appropriate one. Therefore, we will discover the applicability of our proposed architecture in such promising problem domain in our future work." }, { "figure_ref": [], "heading": "A Proofs", "publication_ref": [ "b1" ], "table_ref": [], "text": "Lemma 1. Given listwise loss on the total training set as\nL list = - |P | i=1 |R i | j=1 y ′ i,j log(f ′ i,j\n), where P denotes the product set, then L list is convex and γ list -Lipschitz with respect to f ′ i,j . Proof. Taking the second derivative of Equation ( 33), we have\n∇ 2 f ′ i,j L list = |P | i=1 |R i | j=1 y ′ i,j (f ′ i,j ) 2 > 0,(34)\nproving the convexity of L list . The Lipschitz property of L list can be derived from such property of the logarithm function, which states that\n| log(u) -log(v)| = log(1 + u v -1) ≤ u v -1 = 1 v (u -v) ≤ γ|u -v|, (35\n)\nwhere the first inequality stems from log(1 + x) ≤ x ∀x > -1 and γ is chosen s.\nt. |v| ≥ 1 γ . Let x = u i,j y i,j , z = v i,j\ny i,j . Applying the above result for L list , we obtain\n|log(u i,j ) -log(v i,j )| = log u i,j y i,j -log v i,j y i,j ≤ γ u i,j y i,j - v i,j y i,j ,(36)\nMultiplying both sides by y i,j , and integrating the summation on all inequalities for i ∈ {1, 2, . . . , |P |} and j ∈ {1, 2, . . . , |R i |}, we achieve\n|P | i=1 |R i | j=1 |y i,j log(u i,j ) -y i,j log(v i,j )| ≤ γ |P | i=1 |R i | j=1 |u i,j -v i,j | .(37)\nUtimately, we obtain:\n|L list (u, y) -L list (v, y)| ≤ γ list |u -v|,(38)\nWhere γ list = γ. This proves the γ list -Lipschitz property of L list .\nLemma 2. Given pairwise loss on the total training set as\nL pair = |P | i=1 -f i,r + + f i,r -+ α + , where\nr + , r -denote two random indices in R i and y i,r + > y i,r -, and α = max\n1≤j≤|R i | (y i,j ) -min 1≤j≤|R i | (y i,j ), then\nL pair is convex and γ pair -Lipschitz with respect to\nf i,r + , f i,r -. Proof. Let h pair i (⟨f i,r + , f i,r -⟩), y i ) = [-f i,r + + f i,r -+ α] + , u i = ⟨f i,u + , f i,u -⟩, v i = ⟨f i,r v + , f i,r v -⟩ be two inputs of h pair i . For θ ∈ [0, 1], we have h pair i (θu i + (1 -θ)v i , y i ) = h pair i (θ⟨f i,u + , f i,u -⟩ + (1 -θ)⟨f i,v + , f i,v -⟩, y i ) = h pair i (⟨θf i,u + + (1 -θ)f i,v + , θf i,u -+ (1 -θ)f i,v -⟩, y i ) = -(θf i,u + + (1 -θ)f i,v + ) + (θf i,u -+ (1 -θ)f i,v -) + α + = θ(-f i,u + + f i,u -+ α) + (1 -θ)(-f i,v + + f i,v -+ α) + ≤ θ[-f i,u + + f i,u -+ α] + + (1 -θ)[-f i,v + + f i,v -+ α] + = θh pair i (u i , y i ) + (1 -θ)h pair i (v, y i ). (39)\nEmploying summation of the inequality on all i ∈ {1, 2, . . . 
, |P |}, we have\nL pair (θu+(1-θ)v, y) ≤ θ |P | i=1 h pair i (u i , y i )+(1-θ) |P | i=1 h pair i (v i , y i ) = θL pair (u, y)+(1-θ)L pair (v, y),(40)\nwhich proves the convexity of L pair .\nRegarding the Lipschitz property, we first show that h pair i holds the property:\n|h pair i (u i , y i )-h pair i (v i , y i )| = (-u + i + u - i + α) -(-v + i + v - i + α) + = -u + i + u - i -v - i + u - i + .\n(41) Note that y min ≤ u + i , u - i , v + i , v - i ≤ y max , since we take the non-negative values in (41). Thus,\n|h pair i (u i , y i ) -h pair i (v i , y i )| ≤ 2(y max -y min ).(42)\nSimilarly, applying the aforementioned observation, we have:\n|u i -v i | = u + i -v + i + u - i -v - i ≥ 2(y max -y min ).(43)\nCombining ( 42) and ( 43) leads to:\n|h pair i (u i , y i ) -h pair i (v i , y i )| ≤ γ pair |u i -v i |,(44)\nsuch that γ pair ≥ 1. Adopting the summation of ( 44) on all i ∈ {1, 2, . . . , |P |}, we obtain:\n|L pair (u, y) -L pair (v, y)| = |P | i=1 h pair i (u i , y i ) - |P | i=1 h pair i (v i , y i ) ≤ γ pair |P | i=1 |u i -v i | = γ pair |u -v|.\n(45) The Lipschitz property of L pair follows result (45).\nTheorem 2. Let L list and L pair are γ list -Lipschitz and γ pair -Lipschitz, respectively. Then, the following inequality holds:\nγ list ≤ γ pair .(46)\nProof. In order to prove Theorem (2), we first need to find the formulation of γ list and γ pair . We leverage the following lemma:\nLemma 3. A function L is γ-Lipschitz, if γ satisfies the following condition (Akbari et al., 2021):\nγ = sup f i,j L ′ i,j (f i,j ) .(47)\nWith the foundation in mind, we take the derivative of L list i,j and L pair i,j :\n(L list i,j (f i,j )) ′ =      y ′ i,j log |R i | t=1 exp(f i,t ) exp(f i,j )      ′ = y ′ i,j      exp(f i,j ) |R i | t=1 exp(f i,t ) -1      = -y ′ i,j       |R i | k=1,k̸ =j exp(f i,k ) |R i | t=1 exp(f i,t )       ,(48)\n(L pair i,j (f i,j )) ′ = ±1.(49)\n(48) and (49) imply that\nL list i,j (f i,j ) ′ ≤ y ′ i,j ≤ 1 = L pair i,j (f i,j ) ′ .(50)\nCombining equation ( 50) and Lemma (3), we obtain γ list ≤ γ pair . ■ Theorem 3. Let 0 ≤ L list ≤ L list and 0 ≤ L pair ≤ L pair . Then, the following inequality holds:\nL list ≤ L pair .(51)\nProof. Adoption of Jensen's inequality on L list gives:\nL list = - |P | i=1 |R i | j=1 y ′ i,j log f ′ i,j(52)\n= |P | i=1 |R i | j=1 y ′ i,j log |R i | t=1 exp(f i,t ) exp(f i,j )(53)\n= |P | i=1 |R i | j=1 y ′ i,j   log |R i | t=1 exp f i,t -f i,j   (54) = |P | i=1 |R i | j=1 y ′ i,j   log   1 |R i | |R i | t=1 exp(f i,t )   -f i,j + log |R i |  (55)\n≤ |P | i=1 |R i | j=1 y ′ i,j   1 |R i | |R i | t=1 f i,t -f i,j + log |R i |  (56)\n≤ |P | i=1 |R i | j=1 y ′ i,j (f max -f min + log |R i |) (57) = |P |(f max -f min ) + |P | log |R i |,(58)\nwhere f min ≤ f i,j ≤ f max , ∀i, j. Now, such bounds of f i,j on L pair yields:\nL pair = |P | i=1 -f i,r + + f i,r -+ α + ≤ |P |(f max -f min ) + |P |(y max -y min ),(59)\nwhere y max = max \nE(f list D ) ≤ E(f pair D ). (61\n)\nwhere\nE(f D ) = R true (f D ) -R emp (f D ).\nThe inequality immediately follows from Theorems (1), ( 2) and (3). From Theorems (1) and ( 2), because T and N are constant, the second term of L list is always smaller than that of L pair . From Theorems (1) and (3), we realize that L list ≤ L pair , thus proving the smaller value of the first term of L list ." 
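To complement the formal argument above, the following small numerical sketch (our illustration, not part of the original proofs; the score and label values are arbitrary) computes the listwise softmax cross-entropy of Eq. (33) and a pairwise hinge objective on a toy review list and inspects their per-score gradients, the quantities bounded in Theorems 2 and 3.

```python
# Toy comparison of the listwise and pairwise objectives (illustration only).
import torch
import torch.nn.functional as F

# One product with 5 reviews: predicted scores f and gold helpfulness labels y.
f = torch.tensor([0.5, 1.2, -0.3, 0.1, 0.8], requires_grad=True)
y = torch.tensor([4.0, 1.0, 0.0, 2.0, 3.0])

# Listwise objective: cross-entropy between softmax(y) and softmax(f).
y_prob = F.softmax(y, dim=0)
loss_list = -(y_prob * F.log_softmax(f, dim=0)).sum()
grad_list, = torch.autograd.grad(loss_list, f)

# Pairwise hinge on one (helpful, unhelpful) pair with margin
# alpha = max(y) - min(y), as in the definition of L_pair.
alpha = (y.max() - y.min()).item()
r_pos, r_neg = 0, 2                      # y[0] > y[2]
loss_pair = torch.clamp(-f[r_pos] + f[r_neg] + alpha, min=0.0)
grad_pair, = torch.autograd.grad(loss_pair, f)

# Listwise gradients equal softmax(f) - softmax(y), so each entry stays below 1
# in magnitude; an active hinge contributes gradient entries of exactly 1.
print("listwise loss:", loss_list.item(), "max |grad|:", grad_list.abs().max().item())
print("pairwise loss:", loss_pair.item(), "max |grad|:", grad_pair.abs().max().item())
```

With an active hinge the pairwise term contributes unit-magnitude gradients, while the listwise gradients are differences of two probability vectors and hence bounded by one in magnitude, mirroring the comparison formalized in Theorem 2.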
}, { "figure_ref": [], "heading": "B Dataset Statistics", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "In this section, we provide dataset statistics of the Amazon and Lazada datasets on the MRHP task. All of the numerical details are included in Table 5. " }, { "figure_ref": [], "heading": "C Generalization Errors of the Models trained with Listwise and Pairwise Ranking Losses", "publication_ref": [], "table_ref": [], "text": "In this Appendix, we illustrate the empirical evolution of generalization errors of pairwise-trained and listwise-trained models on the remaining categories of the Amazon-MRHP and Lazada-MRHP datasets.\nThe discovered characteristics regarding generalization in Figures 5 and 6 agree with those in Section 4.6, corroborating the intensified generalizability of our proposed listwise ranking loss. " }, { "figure_ref": [], "heading": "D Analysis of Partitioning Function of Gradient-Boosted Decision Tree", "publication_ref": [], "table_ref": [], "text": "We examine the partitioning operation of our proposed gradient-boosted decision tree for the multimodal review helpfulness prediction. In particular, we inspect the µ = [µ 1 , µ 2 , . . . , µ |L| ] probabilities, which route review features to the target leaf nodes in a soft approach. Our procedure is to gather µ at the leaf nodes for all reviews, estimate their mean value with respect to each leaf, then plot the results on Clothing and Home of the Amazon and Lazada datasets, respectively, in Figures 7,8,9,10,and 11.\nFrom the figures, we can observe our proposed gradient-boosted decision tree's behavior of assigning high routing probabilities {µ i } |L| i=1 to different partitions of leaf nodes, with the partitions varying according to the helpfulness scale of the product reviews. In consequence, we can claim that our GBDT divides the product reviews into corresponding partitions to their helpfulness degrees, thus advocating the partitioned preference of the input reviews. " }, { "figure_ref": [ "fig_0" ], "heading": "E Examples of Product and Review Samples", "publication_ref": [ "b20" ], "table_ref": [], "text": "We articulate product and review samples in Figure 1, comprising their textual and visual content, with the helpfulness scores generated by Contrastive-MCR (Nguyen et al., 2022), whose score predictor is FCNN-based, and our GBDT-based model. 1.467 -0.724 These are fun, but I did learn that ice maker ice shaped like little half moon as many USA freezers have as their automatic ice maker, fit the curves of this class perfectly and will use surface water tension cohesion to slide up the glass inside to your mouth and act like a dam to block your drink believe it or not. So i have gotten used to that for personal use and know how to tilt the glass now, but when friends come, I use square tubes from an ice tray so I don't have to explain it to them or chance them spilling on themselves. Review 2 -Label: 1\n1.147 -0.874 If I could give less than a star I would. I am very disappointed in how low quality this product is and would not recommend buying it. Review 3 -Label: 1 6.622 -0.964 Very cool & futuristic looking.\nReview 4 -Label: 1 1.731 -0.868 These are attractive glasses which seem a good deal more classy than the cost here would imply. They feel higher end and when you plink one with your fingernail it'll give off a fine crystal like ring. They are every bit as attractive as they look in the pictures. Review 5 -Label: 3 0.494 0.882 Mixed reviews did not deviated me from getting this set. 
Just the add shape is a turn on. A very well packed box arrived bubble wrap with every glass intact. The glasses are beautiful and everything I expected. One thing though, It's interesting that there is only one picture on the page. This picture shows no detail. Used to many types of glass drinkware, the first thing I noticed is the \"seams\" on each glass (see pictures). This makes obvious the fact that these are mold made. This is the reason for 4 stars. Being using them for just a couple of weeks by the time I wrote this review. Will update as time goes on. " }, { "figure_ref": [], "heading": "Review Information", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "NN-based Score", "publication_ref": [], "table_ref": [], "text": "Tree-based Score Review 6 -Label: 1 0.044 -0.778 I hate going through the hassle of returning things but it had to be done with this purchase. Review 7 -Label: 1 0.684 -0.800 The short glasses are nice, but the tall ones break easily. SUPER easily. I had two of them break just by holding them. I will absolutely not be reordering this. Review 8 -Label: 1 0.443 -0.897 I love these. We had them in a highly stylized Japanese restaurant and were psyched to find them here. Tall glasses have a \"seam\". No tipping or breakage yet as mentioned by other reviewers. 0.281 -0.192 I really loved this and used it to carry my laptop to and from work. I used the cross-body strap. However, the metal hardware of the strap broke after three months, and the stitching where the cross-body strap attached to the purse ripped off the same week. Love this ourselves but the handles are too short for me to wear comfortably without the cross body strap. Review 2 -Label: 1 2.938 -0.138 Hello, I am Alicia and work as a researcher in the health area. Moreover, I was looking for a feminine, classical and practical bag-briefcase for my work. I would like to begin with the way you show every product. I love when I can see the inner parts and the size of the bag, not only using measures but when you show a model using the product too. Also, the selection of colour is advantageous a big picture with the tone selected. There are many models, sizes and prices. I consider that is a right price for the quality and design of the product. The products I bought have a high-quality appearance, are professional and elegant, like in the pictures! I was not in a hurry, so I was patient, and the product arrived a couple of days before the established date. The package was made thinking in the total protection of every product I bought, using air-plastic bubbles and a hard carton box. Everything was in perfect conditions. I use them for every day-work is very resistant, even in rain time I can carry many things, folders and sheet of paper, a laptop. Their capacity is remarkable. The inner part is very soft and stands the dirty. I am enjoying my bags! All the people say they are gorgeous! Review 3 -Label: 1 0.460 -0.226 This purse has come apart little by little within a month of receiving it. First the thread that held on the zipper began to unravel. Then the decorative seam covering began to come off all over the purse. Yesterday I was on my way into the grocery and the handle broke as I was walking. I've only had it a few months. Poorly made. Review 4 -Label: 1 -0.646 -0.067 I bought this because of reviews but i am extremely disappointed... 
This bag leather is too hard and i don't think i will use it Review 5 -Label: 2 5.094 -0.493 There are slight scratches on the hardware otherwise great size and it's a gorgeous bag. Got it for use while I'm in a business casual environment. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by Alibaba Innovative Research (AIR) programme with research grant AN-GC-2021-005." }, { "figure_ref": [], "heading": "Review Information", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "NN-based Score", "publication_ref": [], "table_ref": [], "text": "Tree-based Score Review 6 -Label: 1 -1.794 -0.222 Tight bag, has no flexibility. stiff. I do receive a lot of compliments. Review 7 -Label: 1 0.819 -0.284 I love this bag!!! I use it every day at work and it has held up to months of use with no sign of wear and tear. It holds my laptop, planner, and notebooks as well as my large wallet and pencil case. It holds so much! I've gotten so many compliments on it. It feels and looks high quality. Review 8 -Label: 3 0.259 0.939 This bag is perfect! It doubles as somewhat of a \"briefcase\" for me, as it fits my IPad, planner, and files, while still accommodating my wallet and normal \"purse\" items. My only complaint was that Jre scratches already on the gold metal accents when I unwrapped it from the packaging. Otherwise-great deal for the price! Review 9 -Label: 2 2.695 0.462 I believe this the most expensive looking handbag I have ever owned. When your handbag comes in its own bag, you are on to something wonderful. I also purchased a router in the same order, and I'm serious, the handbag was better wrapped and protected. Now for a review : The handbag is stiff, but I expected that from other reviews. The only reason I didn't give a five star rating is because it is not as large as I hoped. A laptop will not fit. Only a tablet. This is a regular good size purse, so don't expect to be able to carry more than usual. I probably won't be able to use it for my intented purpose, but it is so beautiful, I don't mind. Review 10 -Label: 1 -0.235 -0.189 Look is great can fit HP EliteBook 8470p (fairly bulky laptop 15 inch), but very snug. I can only fit my thin portfolio and the laptop into bag. Review 11 -Label: 1 6.290 -0.194 This bag is really great for my needs for work, and is cute enough for every day. Other reviews are correct that this is a very stiff-leather bag, but I am fine with that. I love the color and the bag is super adorable. I get so many compliments on this. Also, I travelled recently and this was a perfect bag to use as your \"personal item\" on the airplane-it zips up so you don't have to worry about things falling out and is just right for under the seat. I love the options of having handles AND the long strap. I carry an Iphone 6+ (does not fit down in the outside pocket completely but I use the middle zipper pouch for my tech), wallet, glasses, sunglasses, small makeup bag, a soapdish sized container that I use for holding charger cords (fits perfect in the inside liner pockets), and on the other side of the zipper pouch I carry an A5-sized Filofax Domino. " }, { "figure_ref": [], "heading": "Review Information", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "NN-based Score", "publication_ref": [], "table_ref": [], "text": "Tree-based Score Review 12 -Label: 3 2.262 0.923 Absolutely stunning and expensive looking for the price. 
I just came back from for a tote bad at Macy's and so I had the chance to look and feel at all the different bags both high end brand names and generic. This has a very distinguished character to it. A keeper. The size it rather big for an evening out as long as it is not a formal one. I like that it can accommodate a tablet plus all other things we women consider must haves. The silver metal accents are just of enough amount to give it ump but not superfluous to make it look tacky. The faux ostrich material feel so real. The whole bag is very well balance.\nInside it has two zippered pockets and two open pockets for cell phone and sun glasses. Outside it has one zippered pocket by the back. I won't be using the shoulder strap too much as the the handles are long enough to be carried on the shoulders. Review 13 -Label: 4 7.685 1.969 I added pictures. I hate the fact that people selling things do not give CLEAR defined pictures. This purse was well shipped. Not one scratch... and I don't think there COULD have been a scratch made in shipping. The handles and the bottom are a shiny patent leather look. The majority of the case is a faux ostrich look. It has a 'structure' to it. Not a floppy purse. There is a center divider that is soft and has a zipper to store things. One side (inside) has two pockets that do not zipper. One side (inside) has a zippered pocket. It comes with a long shoulder strap. Please see my photos. So far I really like this purse. The water bottle is a standard 16.9oz. Review 14 -Label: 2 2.309 0.584 Love this purse! When I opened the package it seemed like it was opening purse I had purchased for $450.00 it was packaged so nicely!! Every little detail of the purse was covered for shipping protection. This was/is extremely impressive to me for a purse I paid less than $40.00 for. Wow. " } ]
Multimodal Review Helpfulness Prediction (MRHP) aims to rank product reviews based on predicted helpfulness scores and has been widely applied in e-commerce by presenting customers with useful reviews. Previous studies commonly employ fully-connected neural networks (FCNNs) as the final score predictor and a pairwise loss as the training objective. However, FCNNs have been shown to split review features inefficiently, making it difficult for the model to clearly differentiate helpful from unhelpful reviews. Furthermore, the pairwise objective, which operates on review pairs, may not completely capture the MRHP goal of producing a ranking for the entire review list, and possibly induces poor generalization during testing. To address these issues, we propose a listwise attention network that explicitly captures the MRHP ranking context and a listwise optimization objective that enhances model generalization. We further propose a gradient-boosted decision tree as the score predictor to efficaciously partition product reviews' representations. Extensive experiments demonstrate that our method achieves state-of-the-art results and improved generalization performance on two large-scale MRHP benchmark datasets.
Gradient-Boosted Decision Tree for Listwise Context Model in Multimodal Review Helpfulness Prediction
[ { "figure_caption": "Figure 1 :1Figure1: Examples of helpfulness scores produced by score regressors built upon neural network and gradientboosted decision tree. We present the content of the product and review samples in Appendix E.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of our Multimodal Review Helpfulness Prediction model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Theorem 4 .4Consider two models f list D and f pair D under common settings trained to minimize L list and L pair , respectively, on dataset D", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Generalization error curves per training epoch on the Electronics category in Amazon-MRHP dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figures 33Figures 3 and 4 illustrate the approximation of the generalization error Ê(f θ D ) = R val (f θ D ) -R train (f θ D ) of the model after every epoch, where R val and R train indicate the average loss values of the trained model f θ D on the validation and training sets, respectively. Procedurally, due to different scale of the loss values, we normalize them to the range [0, 1]. The plots demonstrate that generalization errors of our MRHP model trained with the listwise ranking loss are constantly lower than those obtained by pairwise loss training, thus exhibiting better generalization performance. Additionally, as further shown in Table 4, f θ, list D incurs a smaller training-testing performance discrepancy △ MAP = |MAP training -MAP testing | than f θ, pair D , along with Figures 3 and 4 empirically substantiating our Theorem (4).", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Generalization error curves per training epoch on the Electronics category in Lazada-MRHP dataset.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": ",j). Note that Table5 revealsthat max |R i | ≤ 2043. Therefore, log |R i | ≤ 3.31, whereas y maxy min = 4, giving rise to the conclusion log |R i | ≤ y maxy min . Therefore, L list ≤ L pair , (60) which concludes the proof of Theorem (3). Theorem 4. Consider two models f list D and f pair D learned under common settings utilizing listwise and pairwise ranking losses, respectively, on dataset D = {p i , {r i,j } |R i | j=1 } |P | i=1 . 
Then, we have the following inequality:", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :Figure 6 :56Figure 5: Generalization error curves per training epoch on the Clothing category in Amazon-MRHP and Lazada-MRHP datasets.", "figure_data": "", "figure_id": "fig_7", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Mean µ i values on 2-rating reviews of Amazon-Home dataset", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure7: Mean µ i routing probabilities at the proposed GBDT's leaves for 1-rating and 2-rating reviews in Amazon-Home dataset.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Mean µ i values on 4-rating reviews of Amazon-Home dataset", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Mean µ i routing probabilities at the proposed GBDT's leaves for 3-rating and 4-rating reviews in Amazon-Home dataset.", "figure_data": "", "figure_id": "fig_11", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure9: Mean µ routing probabilities at the proposed GBDT's leaves for 0-rating and 1-rating reviews in Lazada-Clothing dataset.", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Mean µ i values on 3-rating reviews of Lazada-Clothing dataset", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Mean µ i routing probabilities at the proposed GBDT's leaves for 2-rating and 3-rating reviews in Lazada-Clothing dataset.", "figure_data": "", "figure_id": "fig_14", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Mean µ i values on 4-rating reviews of Lazada-Clothing dataset", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Mean µ i routing probabilities at the proposed GBDT's leaves for 4-rating reviews in Lazada-Clothing dataset.", "figure_data": "", "figure_id": "fig_16", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "true that the taller 18-oz glasses are delicate. If you're the kind of person who buys glassware expecting every glass to last 20 years, this set isn't for you. If you're the kind of person who enjoys over function, I'd highly recommend them. Review 10 -Label: 1 6.074 -0.844 Quality is good. Does not hold water from the underside if you put it in the dishwasher. Review 11 -Label: 1 2.615 -0.923 I have owned these glasses for 20-plus years. After breaking most of the tall ones, I looked around for months to find great glasses but still thought these were the best, so I bought more. Review 12 -Label: 3 7.529 0.836 I am sooooooo disappointed in these glasses. They are thin. Of course, right after opening we put in the dishwasher and upon taking them out it looked like they were washed with sand! We could even see the fingerprints. And we have a watersoftener! 
In the photo I have included, this is after one dishwasher washing!", "figure_data": "", "figure_id": "fig_18", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "j ×d , l p i and l r i,j denote sequence lengths of the product and review text, respectively, d the hidden dimension.", "figure_data": "Visual Encoding. We adapt a pre-trained Faster R-CNN to extract ROI features of m objects {e p i t } m t=1 and {e r i,j", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Helpfulness review prediction results on the Amazon-MRHP dataset.", "figure_data": "BiMPM57.741.8 46.052.340.5 44.156.643.6 47.6EG-CNN56.440.6 44.751.539.4 42.155.342.4 46.7Text-onlyConv-KNRM57.241.2 45.652.640.5 44.257.444.5 48.4PRH-Net58.342.2 46.552.440.1 43.957.144.3 48.1Our Model60.551.7 52.859.856.9 57.963.459.4 60.2SSE-Cross65.056.0 59.153.743.8 47.260.851.0 54.0DR-Net65.256.1 59.253.944.2 47.561.251.8 54.6MultimodalMCR SANCL66.4 67.357.3 60.2 58.6 61.554.4 56.245.0 48.1 47.0 49.962.6 63.453.5 56.6 54.3 57.4Contrastive-MCR 67.458.6 61.656.547.6 50.863.554.6 57.8Our Model82.680.3 79.374.268.0 69.881.776.5 78.8", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Helpfulness review prediction results on the Lazada-MRHP dataset.", "figure_data": "BiMPM60.052.4 57.774.467.3 72.270.664.7 69.1EG-CNN60.451.7 57.573.566.3 70.870.763.4 68.5Text-onlyConv-KNRM62.154.3 59.974.167.1 71.971.465.7 70.5PRH-Net62.154.9 59.974.367.0 72.271.665.2 70.0Our Model66.459.6 64.679.363.8 78.072.967.1 71.5SSE-Cross66.159.7 64.876.068.9 73.872.266.0 71.0DR-Net66.560.7 65.376.169.2 74.072.466.3 71.4MultimodalMCR SANCL68.8 70.262.3 67.0 64.6 68.876.8 77.870.7 75.0 71.5 76.173.8 75.167.0 72.2 68.4 73.3Contrastive-MCR 70.364.7 69.078.272.4 76.575.268.8 73.7Our Model78.577.1 79.087.986.7 88.185.678.8 83.1Dataset ModelMAP N@3 N@5Our Model81.776.5 78.8-w/ d zi,j -8-4-2-1 NN64.655.2 58.6Amazon-w/ d zi,j -32-16-8-4-2-1 NN -w/ d zi,j -32-32-32-32-1 NN 64.9 70.659.8 63.8 57.1 59.9-w/o L list -w/o LAN72.4 64.864.7 67.1 55.8 59.3Our Model85.678.8 83.1-w/ d zi,j -8-4-2-1 NN76.269.3 74.3Lazada-w/ d zi,j -32-16-8-4-2-1 NN -w/ d zi,j -32-32-32-32-1 NN 77.6 78.771.9 77.6 70.9 75.2-w/o L list -w/o LAN78.0 76.571.3 75.8 69.9 74.4", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study on the Home category of Amazon-MRHP and Lazada-MRHP datasets.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "We proceed to ablate our proposed listwise attention module and re-execute the model training. Results in Table3betray that inserting listwise attention brings about performance upgrade with 16.9 and 9.1 points of MAP in Amazon-MRHP and Lazada-", "figure_data": ", we postulate that", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Training-testing performance of our model trained with listwise and pairwise ranking losses on the Electronics category of Amazon and Lazada datasets.", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Statistics of MRHP datasets. 
Max #R/P denotes the maximum number of reviews associated with each product.", "figure_data": "Dataset | Category | Train | Dev | Test | Max #R/P
Amazon | CS&J | 12K/277K | 3K/71K | 4K/87K | 691
Amazon | Elec. | 10K/260K | 3K/65K | 3K/80K | 836
Amazon | H&K | 15K/370K | 4K/93K | 5K/111K | 2043
Lazada | CS&J | 7K/104K | 2K/26K | 2K/32K | 540
Lazada | Elec. | 4K/42K | 1K/11K | 1K/13K | 346
Lazada | H&K | 3K/37K | 1K/10K | 1K/13K | 473", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Generated helpfulness scores on reviews 1-5 for product B00005MG3K.", "figure_data": "", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Generated helpfulness scores on reviews 6-12 for product B00005MG3K.", "figure_data": "", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Generated helpfulness scores on reviews 1-5 for product B00Q82T3XE.", "figure_data": "", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" } ]
Thong Nguyen; Xiaobao Wu; Xinshuai Dong; Anh Tuan Luu; Cong-Duy Nguyen; Zhen Hai; Lidong Bing
[ { "authors": "Mahdi Abavisani; Liwei Wu; Shengli Hu; Joel Tetreault; Alejandro Jaimes", "journal": "", "ref_id": "b0", "title": "Multimodal categorization of crisis events in social media", "year": "2020" }, { "authors": "Ali Akbari; Muhammad Awais; Manijeh Bashar; Josef Kittler", "journal": "", "ref_id": "b1", "title": "How does loss function affect generalization performance of deep learning? application to human age estimation", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b2", "title": "", "year": "" }, { "authors": "Alex Beutel; Paul Covington; Sagar Jain; Can Xu; Jia Li; Vince Gatto; Ed H Chi", "journal": "", "ref_id": "b3", "title": "Latent cross: Making use of context in recurrent recommender systems", "year": "2018" }, { "authors": "Piotr Bojanowski; Edouard Grave; Armand Joulin; Tomas Mikolov", "journal": "Transactions of the association for computational linguistics", "ref_id": "b4", "title": "Enriching word vectors with subword information", "year": "2017" }, { "authors": "Cen Chen; Yinfei Yang; Jun Zhou; Xiaolong Li; Forrest Bao", "journal": "", "ref_id": "b5", "title": "Cross-domain review helpfulness prediction based on convolutional neural networks with auxiliary domain discriminators", "year": "2018" }, { "authors": "Zhuyun Dai; Chenyan Xiong; Jamie Callan; Zhiyuan Liu", "journal": "", "ref_id": "b6", "title": "Convolutional neural networks for soft-matching n-grams in ad-hoc search", "year": "2018" }, { "authors": "Chao Miao Fan; Lin Feng; Mingming Guo; Ping Sun; Li", "journal": "", "ref_id": "b7", "title": "Product-aware helpfulness prediction of online reviews", "year": "2019" }, { "authors": "Wei Han; Hui Chen; Zhen Hai; Soujanya Poria; Lidong Bing", "journal": "", "ref_id": "b8", "title": "Sancl: Multimodal review helpfulness prediction with selective attention and natural contrastive learning", "year": "2022" }, { "authors": "Soo-Min Kim; Patrick Pantel; Timothy Chklovski; Marco Pennacchiotti", "journal": "", "ref_id": "b9", "title": "Automatically assessing review helpfulness", "year": "2006" }, { "authors": "Srikumar Krishnamoorthy", "journal": "Expert Systems with Applications", "ref_id": "b10", "title": "Linguistic features for review helpfulness prediction", "year": "2015" }, { "authors": "Jean-Samuel Leboeuf; Frédéric Leblanc; Mario Marchand", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Decision trees as partitioning machines to characterize their generalization properties", "year": "2020" }, { "authors": "Haijing Liu; Yang Gao; Pin Lv; Mengxue Li; Shiqiang Geng; Minglan Li; Hao Wang", "journal": "", "ref_id": "b12", "title": "Using argument-based features to predict and analyse review helpfulness", "year": "2017" }, { "authors": "Junhao Liu; Zhen Hai; Min Yang; Lidong Bing", "journal": "", "ref_id": "b13", "title": "Multi-perspective coherent reasoning for helpfulness prediction of multimodal reviews", "year": "2021" }, { "authors": "Anh Tuan Luu; Jung-Jae Kim; See Kiong Ng", "journal": "", "ref_id": "b14", "title": "Incorporating trustiness and collective synonym/contrastive evidence into taxonomy construction", "year": "2015" }, { "authors": "Anh Tuan Luu; Yi Tay; Siu Cheung Hui; See Kiong Ng", "journal": "", "ref_id": "b15", "title": "Learning term embeddings for taxonomic relation identification using dynamic weighting neural network", "year": "2016" }, { "authors": "Jiaqi Ma; Xinyang Yi; Weijing Tang; Zhe Zhao; Lichan Hong; Ed Chi; Qiaozhu Mei", "journal": "", "ref_id": "b16", "title": 
"Learning-torank with partitioned preference: fast estimation for the plackett-luce model", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b17", "title": "", "year": "" }, { "authors": "Thong Nguyen; Anh Tuan Luu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b18", "title": "Contrastive learning for neural topic model", "year": "2021" }, { "authors": "Thong Nguyen; Anh Tuan Luu; Truc Lu; Tho Quan", "journal": "", "ref_id": "b19", "title": "Enriching and controlling global semantics for text summarization", "year": "2021" }, { "authors": "Thong Nguyen; Xiaobao Wu; Anh-Tuan Luu; Cong-Duy Nguyen; Zhen Hai; Lidong Bing", "journal": "", "ref_id": "b20", "title": "Adaptive contrastive learning on multimodal transformer for review helpfulness predictions", "year": "2022" }, { "authors": "Thanh Thong; Anh Tuan Nguyen; Luu", "journal": "", "ref_id": "b21", "title": "Improving neural cross-lingual abstractive summarization via employing optimal transport distance for knowledge distillation", "year": "2022" }, { "authors": "Rama Kumar Pasumarthi; Xuanhui Wang; Michael Bendersky; Marc Najork", "journal": "", "ref_id": "b22", "title": "Self-attentive document interaction networks for permutation equivariant ranking", "year": "2019" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b23", "title": "Glove: Global vectors for word representation", "year": "2014" }, { "authors": "Przemysław Pobrotyn; Radosław Białobrzeski", "journal": "", "ref_id": "b24", "title": "Neuralndcg: Direct optimisation of a ranking metric via differentiable relaxation of sorting", "year": "2021" }, { "authors": "Zhen Qin; Le Yan; Honglei Zhuang; Yi Tay; Rama Kumar Pasumarthi; Xuanhui Wang; Mike Bendersky; Marc Najork; ; Zhao Li; Jialin Wang; Zhipeng Zhang; Pengcheng Zou; Junxiao Jiang; Jiaming Huang; Rong Xiao; Ji Zhang; Jun Gao", "journal": "", "ref_id": "b25", "title": "Category-aware graph neural networks for improving e-commerce review helpfulness prediction", "year": "2020" }, { "authors": "Anh Luu; Siu Cheung Tuan; See Kiong Hui; Ng", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b26", "title": "Utilizing temporal information for taxonomy construction", "year": "2016" }, { "authors": "Zhiguo Wang; Wael Hamza; Radu Florian", "journal": "", "ref_id": "b27", "title": "Bilateral multi-perspective matching for natural language sentences", "year": "2017" }, { "authors": "Xiaobao Wu; Xinshuai Dong; Thong Nguyen; Chaoqun Liu; Liangming Pan; Anh Tuan Luu", "journal": "", "ref_id": "b28", "title": "Infoctm: A mutual information maximization perspective of cross-lingual topic modeling", "year": "2023" }, { "authors": "Xiaobao Wu; Xinshuai Dong; Thong ; Thanh Nguyen; Anh Tuan Luu", "journal": "PMLR", "ref_id": "b29", "title": "Effective neural topic modeling with embedding clustering regularization", "year": "2023" }, { "authors": "Xiaobao Wu; Chunping Li; Yan Zhu; Yishu Miao", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Short text topic modeling with topic distribution quantization and negative sampling decoder", "year": "2020" }, { "authors": "Xiaobao Wu; Anh Tuan Luu; Xinshuai Dong", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Mitigating data sparsity for short text topic modeling by topic-semantic contrastive learning", "year": "2022" }, { "authors": "Fen Xia; Tie-Yan Liu; Jue Wang; Wensheng Zhang; Hang Li", 
"journal": "", "ref_id": "b32", "title": "Listwise approach to learning to rank: theory and algorithm", "year": "2008" }, { "authors": "Nan Xu; Zhixiong Zeng; Wenji Mao", "journal": "", "ref_id": "b33", "title": "Reasoning with multimodal sarcastic tweets via modeling cross-modality contrast and semantic association", "year": "2020" }, { "authors": "Yinfei Yang; Yaowei Yan; Minghui Qiu; Forrest Bao", "journal": "", "ref_id": "b34", "title": "Semantic analysis and helpfulness prediction of text for online product reviews", "year": "2015" } ]
[ { "formula_coordinates": [ 2, 402.72, 394.98, 123.6, 18.93 ], "formula_id": "formula_0", "formula_text": "R i = {r i,1 , r i,2 , . . . , r i,|R i | }." }, { "formula_coordinates": [ 2, 377.49, 531.38, 147.65, 11.36 ], "formula_id": "formula_1", "formula_text": "s i,j = f (p i , r i,j ),(1)" }, { "formula_coordinates": [ 2, 320.69, 685.2, 204.45, 23.31 ], "formula_id": "formula_2", "formula_text": "f θ * = arg min f θ E (x,y)∼P l(f θ ; (x, y)) ,(2)" }, { "formula_coordinates": [ 3, 70.87, 98.45, 218.27, 34.49 ], "formula_id": "formula_3", "formula_text": "R emp (f θ D ) = 1 N N i=1 l(f θ ; (x i , y i )), where D = {(x i , y i )} N" }, { "formula_coordinates": [ 3, 126.9, 208.65, 142.78, 19.88 ], "formula_id": "formula_4", "formula_text": "E(f θ D ) = R true (f θ D ) -R emp (f θ D )" }, { "formula_coordinates": [ 3, 70.87, 270.17, 219, 71.81 ], "formula_id": "formula_5", "formula_text": "Definition 1. (Lipschitzness). A loss function l(ŷ, y) is γ-Lipschitz with respect to ŷ if for γ ≥ 0, ∀u, v ∈ R K , we have: |l(u, y) -l(v, y)| ≤ γ|u -v|,(3)" }, { "formula_coordinates": [ 3, 70.87, 538.3, 226.07, 73.9 ], "formula_id": "formula_6", "formula_text": "E(f θ D ) = R true (f θ D ) -R emp (f θ D ) ≤ L log(2/δ) 2N + 2γ 2 T t=1 λ t 2 log(2/δ) T + 2 log(2/δ) N + 1 N .(4)" }, { "formula_coordinates": [ 3, 345.13, 222.45, 180.01, 29.89 ], "formula_id": "formula_7", "formula_text": "H p i = LSTM p (W emb (T p i )), (5) H r i,j = LSTM r (W emb (T r i,j )),(6)" }, { "formula_coordinates": [ 3, 334.36, 263.54, 120.97, 22.44 ], "formula_id": "formula_8", "formula_text": "H p i ∈ R l p i ×d , H r i,j ∈ R l r i," }, { "formula_coordinates": [ 3, 355.49, 397.79, 169.65, 21.22 ], "formula_id": "formula_9", "formula_text": "V p i = SelfAttn({e p i t } m t=1 ),(7)" }, { "formula_coordinates": [ 3, 440.03, 414.58, 85.11, 22.05 ], "formula_id": "formula_10", "formula_text": "r i,j t } m t=1 ),(8)" }, { "formula_coordinates": [ 3, 341.94, 601.34, 183.2, 14.19 ], "formula_id": "formula_11", "formula_text": "H intraM i,j = SelfAttn([H p i , H r i,j ]),(9)" }, { "formula_coordinates": [ 3, 342.18, 619.48, 182.96, 14.19 ], "formula_id": "formula_12", "formula_text": "V intraM i,j = SelfAttn([V p i , V r i,j ]).(10)" }, { "formula_coordinates": [ 3, 316.67, 696.05, 208.47, 13.94 ], "formula_id": "formula_13", "formula_text": "z intraM i,j = Pool(CNN([H intraM i,j , V intraM i,j ])),(11)" }, { "formula_coordinates": [ 4, 110.73, 305.64, 179.13, 15.47 ], "formula_id": "formula_14", "formula_text": "H pt-ri i,j = SelfAttn([H p i , V r i,j ]),(12)" }, { "formula_coordinates": [ 4, 110.73, 325.32, 179.13, 15.47 ], "formula_id": "formula_15", "formula_text": "H pi-rt i,j = SelfAttn([V p i , H r i,j ]).(13)" }, { "formula_coordinates": [ 4, 135.05, 391.37, 154.82, 15.47 ], "formula_id": "formula_16", "formula_text": "z pt-ri i,j = Pool(H pt-ri i,j ),(14)" }, { "formula_coordinates": [ 4, 135.05, 411.05, 154.82, 15.47 ], "formula_id": "formula_17", "formula_text": "z pi-rt i,j = Pool(H pi-rt i,j ),(15)" }, { "formula_coordinates": [ 4, 129.05, 432.39, 160.82, 15.47 ], "formula_id": "formula_18", "formula_text": "z interM i,j = z pt-ri i,j , z pi-rt i,j .(16)" }, { "formula_coordinates": [ 4, 112.68, 537.56, 177.18, 15.47 ], "formula_id": "formula_19", "formula_text": "H pt-pi i = SelfAttn([H p i , V p i ]),(17)" }, { "formula_coordinates": [ 4, 108.78, 556.24, 181.08, 14.19 ], "formula_id": "formula_20", "formula_text": "H rt-ri i,j = SelfAttn([H r i,j , V r i,j ]),(18)" }, { "formula_coordinates": [ 4, 133.72, 574.63, 156.15, 
15.47 ], "formula_id": "formula_21", "formula_text": "z pt-pi i = Pool(H pt-pi i ),(19)" }, { "formula_coordinates": [ 4, 136.38, 593.57, 153.49, 13.94 ], "formula_id": "formula_22", "formula_text": "z rt-ri i,j = Pool(H rt-ri i,j ),(20)" }, { "formula_coordinates": [ 4, 129.93, 613.37, 159.93, 15.47 ], "formula_id": "formula_23", "formula_text": "z intraR i,j = z pt-pi i , z rt-ri i,j .(21)" }, { "formula_coordinates": [ 4, 112.55, 692.3, 172.78, 13.94 ], "formula_id": "formula_24", "formula_text": "z i,j = z intraM i,j , z interM i,j , z intraR i,j . (22" }, { "formula_coordinates": [ 4, 285.32, 694.89, 4.54, 9.46 ], "formula_id": "formula_25", "formula_text": ")" }, { "formula_coordinates": [ 4, 342.14, 307.9, 183, 21.67 ], "formula_id": "formula_26", "formula_text": "{z list i,j } |R i | j=1 = SelfAttn({z i,j } |R i | j=1 ),(23)" }, { "formula_coordinates": [ 4, 364.24, 600.11, 160.9, 13.94 ], "formula_id": "formula_27", "formula_text": "p left n = σ(Linear(z list )),(24)" }, { "formula_coordinates": [ 4, 378.27, 618.07, 146.87, 20.17 ], "formula_id": "formula_28", "formula_text": "p right n = 1 -p left n ,(25)" }, { "formula_coordinates": [ 4, 343.04, 707.74, 182.1, 27.4 ], "formula_id": "formula_29", "formula_text": "µ l = n∈P(l) (p left n ) 1 ln • (p right n ) 1 rn ,(26)" }, { "formula_coordinates": [ 5, 133.41, 160.79, 151.91, 13.94 ], "formula_id": "formula_30", "formula_text": "s l,i,j = Linear l (z list i,j ). (27" }, { "formula_coordinates": [ 5, 285.32, 163.38, 4.54, 9.46 ], "formula_id": "formula_31", "formula_text": ")" }, { "formula_coordinates": [ 5, 106.5, 259.3, 178.83, 23.35 ], "formula_id": "formula_32", "formula_text": "f i,j = f (p i , r i,j ) = l∈L s l,i,j • µ l . (28" }, { "formula_coordinates": [ 5, 285.32, 260.65, 4.54, 9.46 ], "formula_id": "formula_33", "formula_text": ")" }, { "formula_coordinates": [ 5, 70.87, 374.13, 219, 79.56 ], "formula_id": "formula_34", "formula_text": "{f i,j } |R i | j=1 and groundtruth labels {y i,j } |R i | j=1 into two probability distributions. f ′ i,j = exp(f i,j ) |R i | t=1 exp(f i,t ) , y ′ i,j = exp(y i,j ) |R i | t=1 exp(y i,t ) .(29)" }, { "formula_coordinates": [ 5, 70.87, 584.57, 218.27, 42.19 ], "formula_id": "formula_35", "formula_text": "L list = - |P | i=1 |R i | j=1 y ′ i,j log(f ′ i,j" }, { "formula_coordinates": [ 5, 70.87, 672.5, 218.27, 42.19 ], "formula_id": "formula_36", "formula_text": "L pair = |P | i=1 -f i,r + + f i,r -+ α" }, { "formula_coordinates": [ 5, 125.33, 727.78, 163.8, 18.93 ], "formula_id": "formula_37", "formula_text": "1≤j≤|R i | (y i,j ) -min 1≤j≤|R i | (y i,j ), then" }, { "formula_coordinates": [ 5, 70.87, 762.84, 50.18, 12.06 ], "formula_id": "formula_38", "formula_text": "f i,r + , f i,r -." }, { "formula_coordinates": [ 5, 388.67, 153.5, 131.93, 20.17 ], "formula_id": "formula_39", "formula_text": "γ list ≤ γ pair . (30" }, { "formula_coordinates": [ 5, 520.6, 156.09, 4.54, 9.46 ], "formula_id": "formula_40", "formula_text": ")" }, { "formula_coordinates": [ 5, 387.5, 211.36, 133.1, 20.17 ], "formula_id": "formula_41", "formula_text": "L list ≤ L pair . 
(31" }, { "formula_coordinates": [ 5, 520.6, 213.95, 4.54, 9.46 ], "formula_id": "formula_42", "formula_text": ")" }, { "formula_coordinates": [ 5, 306.14, 293.89, 218.27, 32.88 ], "formula_id": "formula_43", "formula_text": "= {p i , {r i,j } |R i | j=1 } |P |" }, { "formula_coordinates": [ 5, 371.24, 342.69, 153.9, 21.14 ], "formula_id": "formula_44", "formula_text": "E(f list D ) ≤ E(f pair D ),(32)" }, { "formula_coordinates": [ 5, 335.13, 368.43, 146.29, 18.93 ], "formula_id": "formula_45", "formula_text": "E(f D ) = R true (f D ) -R emp (f D )." }, { "formula_coordinates": [ 5, 347.72, 528.73, 177.42, 34.74 ], "formula_id": "formula_46", "formula_text": "L list = - |P | i=1 |R i | j=1 y ′ i,j log(f ′ i,j ).(33)" }, { "formula_coordinates": [ 12, 318.02, 91.4, 123.78, 28.19 ], "formula_id": "formula_47", "formula_text": "L list = - |P | i=1 |R i | j=1 y ′ i,j log(f ′ i,j" }, { "formula_coordinates": [ 12, 225.86, 160.06, 299.28, 34.74 ], "formula_id": "formula_48", "formula_text": "∇ 2 f ′ i,j L list = |P | i=1 |R i | j=1 y ′ i,j (f ′ i,j ) 2 > 0,(34)" }, { "formula_coordinates": [ 12, 128.27, 249.94, 392.32, 26.03 ], "formula_id": "formula_49", "formula_text": "| log(u) -log(v)| = log(1 + u v -1) ≤ u v -1 = 1 v (u -v) ≤ γ|u -v|, (35" }, { "formula_coordinates": [ 12, 520.6, 258.4, 4.54, 9.46 ], "formula_id": "formula_50", "formula_text": ")" }, { "formula_coordinates": [ 12, 81.78, 284.17, 389.46, 31.55 ], "formula_id": "formula_51", "formula_text": "t. |v| ≥ 1 γ . Let x = u i,j y i,j , z = v i,j" }, { "formula_coordinates": [ 12, 140.79, 324.53, 384.35, 26.29 ], "formula_id": "formula_52", "formula_text": "|log(u i,j ) -log(v i,j )| = log u i,j y i,j -log v i,j y i,j ≤ γ u i,j y i,j - v i,j y i,j ,(36)" }, { "formula_coordinates": [ 12, 160.26, 394.12, 364.88, 34.74 ], "formula_id": "formula_53", "formula_text": "|P | i=1 |R i | j=1 |y i,j log(u i,j ) -y i,j log(v i,j )| ≤ γ |P | i=1 |R i | j=1 |u i,j -v i,j | .(37)" }, { "formula_coordinates": [ 12, 209.54, 449.82, 315.6, 20.17 ], "formula_id": "formula_54", "formula_text": "|L list (u, y) -L list (v, y)| ≤ γ list |u -v|,(38)" }, { "formula_coordinates": [ 12, 338.31, 485.71, 186.1, 28.19 ], "formula_id": "formula_55", "formula_text": "L pair = |P | i=1 -f i,r + + f i,r -+ α + , where" }, { "formula_coordinates": [ 12, 369.18, 514.76, 155.23, 18.93 ], "formula_id": "formula_56", "formula_text": "1≤j≤|R i | (y i,j ) -min 1≤j≤|R i | (y i,j ), then" }, { "formula_coordinates": [ 12, 70.53, 536.28, 454.61, 168.73 ], "formula_id": "formula_57", "formula_text": "f i,r + , f i,r -. Proof. Let h pair i (⟨f i,r + , f i,r -⟩), y i ) = [-f i,r + + f i,r -+ α] + , u i = ⟨f i,u + , f i,u -⟩, v i = ⟨f i,r v + , f i,r v -⟩ be two inputs of h pair i . For θ ∈ [0, 1], we have h pair i (θu i + (1 -θ)v i , y i ) = h pair i (θ⟨f i,u + , f i,u -⟩ + (1 -θ)⟨f i,v + , f i,v -⟩, y i ) = h pair i (⟨θf i,u + + (1 -θ)f i,v + , θf i,u -+ (1 -θ)f i,v -⟩, y i ) = -(θf i,u + + (1 -θ)f i,v + ) + (θf i,u -+ (1 -θ)f i,v -) + α + = θ(-f i,u + + f i,u -+ α) + (1 -θ)(-f i,v + + f i,v -+ α) + ≤ θ[-f i,u + + f i,u -+ α] + + (1 -θ)[-f i,v + + f i,v -+ α] + = θh pair i (u i , y i ) + (1 -θ)h pair i (v, y i ). 
(39)" }, { "formula_coordinates": [ 12, 70.87, 728.35, 454.27, 45.03 ], "formula_id": "formula_58", "formula_text": "L pair (θu+(1-θ)v, y) ≤ θ |P | i=1 h pair i (u i , y i )+(1-θ) |P | i=1 h pair i (v i , y i ) = θL pair (u, y)+(1-θ)L pair (v, y),(40)" }, { "formula_coordinates": [ 13, 70.87, 108.09, 454.28, 22.16 ], "formula_id": "formula_59", "formula_text": "|h pair i (u i , y i )-h pair i (v i , y i )| = (-u + i + u - i + α) -(-v + i + v - i + α) + = -u + i + u - i -v - i + u - i + ." }, { "formula_coordinates": [ 13, 193.92, 160.44, 331.22, 21.14 ], "formula_id": "formula_60", "formula_text": "|h pair i (u i , y i ) -h pair i (v i , y i )| ≤ 2(y max -y min ).(42)" }, { "formula_coordinates": [ 13, 175.79, 206.36, 349.35, 20.5 ], "formula_id": "formula_61", "formula_text": "|u i -v i | = u + i -v + i + u - i -v - i ≥ 2(y max -y min ).(43)" }, { "formula_coordinates": [ 13, 197.02, 250.99, 328.12, 21.14 ], "formula_id": "formula_62", "formula_text": "|h pair i (u i , y i ) -h pair i (v i , y i )| ≤ γ pair |u i -v i |,(44)" }, { "formula_coordinates": [ 13, 71.32, 299.38, 452.64, 34.74 ], "formula_id": "formula_63", "formula_text": "|L pair (u, y) -L pair (v, y)| = |P | i=1 h pair i (u i , y i ) - |P | i=1 h pair i (v i , y i ) ≤ γ pair |P | i=1 |u i -v i | = γ pair |u -v|." }, { "formula_coordinates": [ 13, 271.03, 392.65, 254.11, 20.17 ], "formula_id": "formula_64", "formula_text": "γ list ≤ γ pair .(46)" }, { "formula_coordinates": [ 13, 251.76, 481.49, 273.38, 21.8 ], "formula_id": "formula_65", "formula_text": "γ = sup f i,j L ′ i,j (f i,j ) .(47)" }, { "formula_coordinates": [ 13, 78.94, 545.57, 446.2, 78.61 ], "formula_id": "formula_66", "formula_text": "(L list i,j (f i,j )) ′ =      y ′ i,j log |R i | t=1 exp(f i,t ) exp(f i,j )      ′ = y ′ i,j      exp(f i,j ) |R i | t=1 exp(f i,t ) -1      = -y ′ i,j       |R i | k=1,k̸ =j exp(f i,k ) |R i | t=1 exp(f i,t )       ,(48)" }, { "formula_coordinates": [ 13, 254.31, 630.08, 270.83, 21.14 ], "formula_id": "formula_67", "formula_text": "(L pair i,j (f i,j )) ′ = ±1.(49)" }, { "formula_coordinates": [ 13, 209.07, 667.74, 316.07, 25.31 ], "formula_id": "formula_68", "formula_text": "L list i,j (f i,j ) ′ ≤ y ′ i,j ≤ 1 = L pair i,j (f i,j ) ′ .(50)" }, { "formula_coordinates": [ 13, 269.86, 738.69, 255.28, 20.17 ], "formula_id": "formula_69", "formula_text": "L list ≤ L pair .(51)" }, { "formula_coordinates": [ 14, 160.33, 97.77, 364.82, 34.74 ], "formula_id": "formula_70", "formula_text": "L list = - |P | i=1 |R i | j=1 y ′ i,j log f ′ i,j(52)" }, { "formula_coordinates": [ 14, 163.36, 139.21, 361.79, 46.49 ], "formula_id": "formula_71", "formula_text": "= |P | i=1 |R i | j=1 y ′ i,j log |R i | t=1 exp(f i,t ) exp(f i,j )(53)" }, { "formula_coordinates": [ 14, 163.36, 190.78, 361.79, 83.58 ], "formula_id": "formula_72", "formula_text": "= |P | i=1 |R i | j=1 y ′ i,j   log |R i | t=1 exp f i,t -f i,j   (54) = |P | i=1 |R i | j=1 y ′ i,j   log   1 |R i | |R i | t=1 exp(f i,t )   -f i,j + log |R i |  (55)" }, { "formula_coordinates": [ 14, 163.36, 277.29, 361.79, 40.32 ], "formula_id": "formula_73", "formula_text": "≤ |P | i=1 |R i | j=1 y ′ i,j   1 |R i | |R i | t=1 f i,t -f i,j + log |R i |  (56)" }, { "formula_coordinates": [ 14, 163.36, 322.18, 361.79, 60.2 ], "formula_id": "formula_74", "formula_text": "≤ |P | i=1 |R i | j=1 y ′ i,j (f max -f min + log |R i |) (57) = |P |(f max -f min ) + |P | log |R i |,(58)" }, { "formula_coordinates": [ 14, 129.67, 413.18, 395.48, 34.74 ], 
"formula_id": "formula_75", "formula_text": "L pair = |P | i=1 -f i,r + + f i,r -+ α + ≤ |P |(f max -f min ) + |P |(y max -y min ),(59)" }, { "formula_coordinates": [ 14, 253.6, 587.25, 267, 21.14 ], "formula_id": "formula_76", "formula_text": "E(f list D ) ≤ E(f pair D ). (61" }, { "formula_coordinates": [ 14, 520.6, 590.82, 4.54, 9.46 ], "formula_id": "formula_77", "formula_text": ")" }, { "formula_coordinates": [ 14, 99.85, 609.5, 146.29, 18.93 ], "formula_id": "formula_78", "formula_text": "E(f D ) = R true (f D ) -R emp (f D )." } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b32", "b26", "b6", "b15", "b46", "b21", "b17", "b8" ], "table_ref": [], "text": "Social recommendation is a widely-used technique to improve the quality of recommender systems by incorporating social information into user preference learning [Yu et al., 2021a]. To accomplish this, various neural network techniques have been developed to encode social-aware user preferences for recommendation. Currently, the most advanced social recommendation methods are built using Graph Neural Networks (GNNs) for recursive message passing, which enables the capture of high-order correlations [Fan et al., 2019;Wu et al., 2019;Song et al., 2019]. In these architectures, * Chao Huang is the Corresponding Author user representations are refined by integrating information from both social and interaction neighbors.\nWhile supervised GNN-enhanced models have achieved remarkable performance in social recommendation, they require a large amount of supervised labels to generate accurate user representations. In practical social recommendation scenarios, however, user-item interaction data is often very sparse [Wei et al., 2022a;Chen et al., 2023]. This label sparsity severely limits the representation power of deep social recommenders and hinders their ability to reach their full potential. Recently, Self-Supervised Learning (SSL) has gained success due to its ability to avoid heavy reliance on observed label data in various domains, e.g., computer vision [He et al., 2020a], natural language processing [Liu and Liu, 2021], and graph representation learning [Zhu et al., 2021].\nMotivated by the limitations of supervised GNN-enhanced models, recent attempts have adopted the self-supervised learning framework [Yu et al., 2021a]. These approach introduce an auxiliary learning task to supplement the supervised main task for data augmentation. For example, MHCN [Yu et al., 2021b] uses a hypergraph-enhanced self-supervised learning framework to improve global relation learning in social recommender systems. Additionally, SMIN [Long et al., 2021] constructs metapath-guided node connections to explore the isomorphic transformation property of graph topology with augmented self-supervision signals.\nDespite the decent performance of self-supervised learning, we argue that the SSL-based augmentation is severely hindered by noisy social relations when enhancing the representation learning of complex user preferences. While observed user-user social ties have the potential to capture social influence on user-item interaction behaviors, the model's performance can significantly degrade when trained on socialaware collaborative graphs with noisy social information. For instance, people may establish social connections with colleagues, classmates, or family members, but they may not share many common interests with each other [Liu et al., 2019;Epasto and Perozzi, 2019]. Therefore, these noisy social influences may not align with user preferences in real-life recommendation scenarios. In most existing solutions, information aggregated from noisy social neighbors may mislead graph message passing and self-supervised learning, resulting in sub-optimal recommendation performance.\nTo address the limitations mentioned earlier, we propose the Denoised Self-Augmented Learning (DSL) paradigm for social recommender systems. 
Our approach leverages social information to better characterize user preferences with noise-resistant self-supervised learning, aimed at pursuing cross-view alignment. Firstly, we develop a dual-view graph neural network to encode latent representations over both user social and interaction graphs. Then, to mitigate the bias of social relations for recommendation, we design a denoising module to enhance the integrated social-aware selfsupervised learning task. This module identifies unreliable user-wise connections with respect to their interaction patterns. Our DSL is aware of the interaction commonality between users and can automate the social effect denoising process with adaptive user representation alignment. By doing so, the social-aware uniformity is well preserved in the learned user embeddings by alleviating the impact of noisy social information in our recommender. Key contributions of this work are summarized as follows:\n• In this work, we investigate denoised self-augmented learning for social recommendation, effectively reducing the impact of noisy social relations on the representation of socially-aware collaborative signals.\n• We propose DSL, which enables denoised cross-view alignment between the encoded embeddings from social and interaction views. The denoising module assigns reliability weights to useful social relations to encode user preference, endowing our DSL with the capability of generating adaptive self-supervised signals.\n• We instantiate DSL for social recommendation on three real-world datasets. Experimental results show that our method provides more accurate recommendations and superior performance in dealing with noisy and sparse data, compared with various state-of-the-art solutions.\n2 Preliminaries and Related Work" }, { "figure_ref": [], "heading": "Social-aware Recommendation", "publication_ref": [ "b28", "b9", "b32", "b19", "b21" ], "table_ref": [], "text": "We denote the sets of users and items as U = {u 1 , ..., u I } and V = {v 1 , ..., v J }, respectively, where I and J represent the number of users and items. The user-item interaction data is represented by an interaction graph G r = {U, V, E r }, where user-item connection edges are generated when user u i interacts with item v j . To incorporate social context into the recommender system, we define a user social graph G s = {U, E s } to contain user-wise social connections. Social recommender systems aim to learn a model from both the user-item interaction graph G r = {U, V, E r } and the useruser social graph G s = {U, E s }, encoding user interests to make accurate recommendations.\nIn recent years, several neural network techniques have been proposed to solve the social recommendation problem. For instance, attention mechanisms have been used to differentiate social influence among users with learned attentional weights, as seen in EATNN [Chen et al., 2019b] and SAMN [Chen et al., 2019a]. Many GNN-based social recommender systems have been developed to jointly model user-user and user-item graphs via message passing, leveraging the effectiveness of high-order relation encoding with graph neural networks [Wang et al., 2019]. Examples include GraphRec [Fan et al., 2019], DiffNet [Wu et al., 2019], and FeSoG [Liu et al., 2022]. Some recent attempts have leveraged self-supervised learning to enhance social recommendation with auxiliary self-supervision signals, such as MHCN [Yu et al., 2021b] and SMIN [Long et al., 2021]. 
However, their representation learning abilities are limited by social relation noise, which leads to biased models." }, { "figure_ref": [], "heading": "Self-Supervised Recommender Systems", "publication_ref": [ "b34", "b0", "b44", "b40", "b42" ], "table_ref": [], "text": "Self-supervised learning has recently gained attention in various recommendation tasks. Supervised contrastive loss has been shown to benefit graph collaborative filtering with effective data augmentation [Wu et al., 2021;Cai et al., 2023]. For sequential recommender systems, self-supervised pretraining [Zhou et al., 2020] and imitation [Yuan et al., 2022] have been introduced to enhance sequence modeling. Researchers have also brought the benefits of self-supervised learning to multi-interest/multi-behavior recommender systems, as seen in Re4 [Zhang et al., 2022] and CML [Wei et al., 2022b]. Our method advances this research by proposing a novel unbiased self-supervised learning paradigm to denoise social relation modeling in social recommender systems." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we present our DSL model with technical details. The model architecture is illustrated in Figure 1." }, { "figure_ref": [], "heading": "Dual-View Graph Neural Relation Learning", "publication_ref": [ "b4", "b28" ], "table_ref": [], "text": "With the initialized id-corresponding embeddings, our DSL first employs a dual-view graph neural network to capture high-order collaborative relations for both user-item interactions and user-user social ties. Inspired by the effectiveness of lightweight GCN-enhanced collaborative filtering paradigms [He et al., 2020b;Chen et al., 2020], DSL is configured with a simplified graph neural network, which is:\nE (l) r = (L r + I) • E (l-1) r (1)\nThe above equation shows the iterative information propagation scheme of our GCN over the user-item interaction graph. Here, E (l) r , E (l-1) r ∈ R (I+J)×d denote the embeddings of users and items after l iterations of user-item relation modeling. E (0) r is initialized by stacking the initial user embedding matrix E u and the item embedding matrix E v . I ∈ R (I+J)×(I+J) denotes the identity matrix for enabling self-loop. L r ∈ R (I+J)×(I+J) denotes the Laplacian matrix of the user-item interaction graph [Wang et al., 2019].\nL r = D -1 2 r A r D -1 2 r , A r = 0 R R ⊤ 0 (2)\nR ∈ R I×J denotes the user-item interaction matrix, and 0 refers to all-zero matrices. The bidirectional adjacent matrix A r of the user-item interaction view is multiplied by its corresponding diagonal degree matrix D r for normalization.\nTo encode user-wise social relations in our recommender, we also apply the lightweight GCN to the user social graph G s . Specifically, our social view GNN takes the initial users' s = E u . The user embeddings are generated by cross-layer passing:\nE (l) s = (L s + I) • E (l-1) s , L s = D -1 2 s S D -1 2 s (3)\nHere, S ∈ R I×I encodes the user-wise social relations, and D s , L s ∈ R I×I denote the corresponding diagonal degree matrix and the normalized Laplacian matrix for the social view.\nE (l) s , E (l-1)\ns ∈ R I×d are the users' social embeddings in the l-th and (l -1)-th graph neural iteration, respectively.\nEmbedding Aggregation. 
To aggregate the embeddings encoded from different orders in G r and G s , DSL adopts mean-pooling operators for both the interaction and social views.

Ēr = L l=0 E (l) r , Ēs = L l=0 E (l) s (4)

Here, L is the maximum number of graph iterations. With our dual-view GNN, we encode view-specific relations for user interaction and social influence in our model." }, { "figure_ref": [], "heading": "Cross-View Denoised Self-Supervision", "publication_ref": [], "table_ref": [], "text": "In our recommender, the learned user-item relations and user-wise dependencies are complementary to each other. To integrate both relational contextual signals, we design a cross-view denoised self-supervised learning paradigm that can alleviate the noisy effects of transferring social knowledge into user-item interaction modeling. In real-life scenarios, passively-built social relations, such as colleagues or classmates, may not exert much influence on user interaction preferences due to their diverse shopping tastes. Blindly relying on such irrelevant social ties to infer users' interests could damage the performance of social recommendation models.
To address this issue, we filter out the noisy social influence between dissimilar users with respect to their interaction preference for unbiased self-supervision.
Adaptive Cross-View Alignment. In our DSL, we incorporate the cross-view denoising task to supplement the main learning task with auxiliary self-supervision signals. The learned user interaction patterns guide the social relation denoising module to filter out misleading embedding propagation based on observed social connections. Specifically, the interaction similarity z i,i ′ between the user pair (i, i ′ ) is generated by z i,i ′ = [ē (r) i ; ē(r) i ′ ], given the user embeddings (ē (r) i , ē(r) i ′ ) learned from our interaction GNN. Similarly, the user social similarity ẑ i,i ′ can be obtained by ẑ i,i ′ = [ē (s) i ; ē(s) i ′ ], based on user representations (ē (s) i , ē(s) i ′ ) encoded from our social GNN. To alleviate the semantic gap between the interaction view and the social view, we design a learnable similarity projection function to map interaction semantics into a latent embedding space for cross-view alignment, as follows:
z i,i ′ = sigm(d ⊤ • σ(T • [ē (r) i ; ē(r) i ′ ] + ē(r) i + ē(r) i ′ + c))
(5) where sigm(•) and σ(•) denote the sigmoid and LeakyReLU activation functions, respectively. Our designed parameterized projection function consists of d ∈ R d , T ∈ R d×2d , c ∈ R d as learnable parameters, enabling adaptive alignment between the social and interaction views.
Denoised Self-Supervised Augmentation. To incorporate denoised social influence to improve recommendation quality, we design a self-supervised learning task for cross-view alignment with augmented embedding regularization. Specifically, the cross-view alignment loss function is:
L ssl = (ui,u i ′ ) max(0, 1 -z i,i ′ ẑi,i ′ ) (6)
The user pair (u i , u i ′ ) is individually sampled from the user set U. With the above self-supervised learning objective, the integrated user relation prediction task will be guided by the self-supervised signals for social influence denoising. By doing so, the noisy social connections between users with dissimilar preferences, which contradict the target recommendation task, will result in distinguishable user representations for recommendation enhancement."
}, { "figure_ref": [], "heading": "Multi-Task Model Optimization", "publication_ref": [ "b24" ], "table_ref": [], "text": "The learning process of our DSL involves multi-task training for model optimization. The augmented self-supervised learning tasks are integrated with the main recommendation optimized loss to model denoised social-aware user preferences. Given the encoded user and item embeddings, we predict user-item (ŷ (r) ui,vj ) and user-user (ŷ\n(s)\nui,u i ′ ) relations as:\nŷ(r) ui,vj = ē(r)⊤ i ē(r) j ; ŷ(s) ui,u i ′ = ē(s)⊤ i ē(s) i ′(7)\nwhere ŷ(r) ui,vj ∈ R represents the likelihood of user u i interacting with item v j from the interaction view, while ŷ(s) ui,u i ′ indicates the probability of u i and u i ′ being socially connected. Given these definitions, we minimize the following BPR loss functions [Rendle et al., 2009] for optimization:\nL rec = (ui,v j + ,v j -) -ln sigm(ŷ (r) ui,v j + -ŷ(r) ui,v j -) L soc = (ui,u i + ,u i -) -ln sigm(ŷ (s) ui,u i + -ŷ(s) ui,u i -)(8)\nwhere v j + and v j -denote the sampled positive and negative item for user u i . u i + and u i -are sampled from u i 's sociallyconnected and unconnected users, respectively. By integrating self-supervised learning objectives with weight parameter λ 1 , λ 2 , λ 3 , the joint optimized loss is given as:\nL = L rec + λ 1 L soc + λ 2 L ssl + λ 3 (∥E u ∥ 2 F + ∥E v ∥ 2 F ) (9)" }, { "figure_ref": [], "heading": "In-Depth Analysis of DSL", "publication_ref": [], "table_ref": [], "text": "In this section, our aim is to answer the question: How does our model enable adaptive and efficient self-supervised learning? We provide analysis to further understand our model.\nAdaptive Self-Supervised Learning. In most existing contrastive learning (CL) approaches, auxiliary SSL signals are generated to address the issue of sparse supervision labels.\nFollowing the mutual information maximization (Infomax) principle, these approaches maximize the agreement between positive samples while pushing negative pairs away in the embedding space, as shown below with derived gradients.\n∂L cl ∂ē i = -ē i + + u i - ēi - exp ē⊤ i ēi - u i -exp ē⊤ i ēi -(10)\nThe first term in the equation maximizes the similarity between positive pairs (ē i and ēi + ) with the same strength. Negative pairs (ē i and ēi -) with higher similarity are pushed away with greater strength, while negative pairs with lower similarity are pushed away with lesser strength. For simplicity, we have omitted the vector normalization and the temperature coefficient in the above-presented InfoNCE loss.\nIn comparison, our cross-view denoising SSL schema aims to maximize the similarity between sampled user pairs (ē i , ēi ′ ) adaptively, based on the labels z i,i ′ . The non-zero gradients of our denoising SSL over ēi are shown below: \n∂L ssl ∂ē i = u i ′ -z i,i ′ ēi ′(11)" }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "We conduct extensive experiments to evaluate the effectiveness of our DSL by answering the following research questions: RQ1: Does DSL outperform state-of-the-art recommender systems? RQ2: How do different components affect the performance of DSL? RQ3: Is DSL robust enough to handle noisy and sparse data in social recommendation? RQ4: How efficient is DSL compared to alternative methods?" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b21" ], "table_ref": [ "tab_0" ], "text": "Dataset. 
We conduct experiments on three benchmark datasets collected from the Ciao, Epinions, and Yelp online platforms, where social connections can be established among users in addition to their observed implicit feedback (e.g., rating, click) over different items. Table 1 lists the detailed statistical information of the experimented datasets.\nMetrics. We use Hit Ratio (HR)@N and Normalized Discounted Cumulative Gain (NDCG)@N as evaluation metrics, where N is set to 10 by default. We adopt a leave-one-out strategy, following similar settings as in [Long et al., 2021]." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b23", "b36", "b32", "b26", "b28", "b13", "b21" ], "table_ref": [], "text": "We evaluate the performance of DSL by comparing it with 10 baselines from different research lines for comprehensive evaluation, including: i) MF-based recommendation approaches (i.e., PMF, TrustMF); ii) attentional social recommenders (i.e., EATNN); iii) GNN-enhanced social recommendation methods (i.e., DiffNet, DGRec, NGCF+); and iv) self-supervised social recommendation models (i.e., MHCN, KCGN, SMIN, DcRec). Details are provided as follows:\n• PMF [Mnih and Salakhutdinov, 2007]: is a probabilistic approach that uses matrix factorization technique to factorize users and items into latent vectors for representations. • TrustMF [Yang et al., 2016]: This method incorporates trust relations between users into matrix factorization as social information to improve recommendation performance.\n• EATNN [Chen et al., 2019b]: It is an adaptive transfer learning model built upon attention mechanisms to aggregate information from both user interactions and social ties.\n• DiffNet [Wu et al., 2019]: This is a deep influence propagation architecture to recursively update users' embeddings with social influence diffusion components.\n• DGRec [Song et al., 2019]: This social recommender leverages a graph attention network to jointly model the dynamic behavioral patterns of users and social influence.\n• NGCF+ [Wang et al., 2019]: This GNN-enhanced collaborative filtering approach performs message passing over a social-aware user-item relation graph.\n• MHCN [Yu et al., 2021b]: This model proposes a multichannel hypergraph convolutional network to enhance social recommendation by considering high-order relations.\n• KCGN [Huang et al., 2021]: It improves social recommendation by integrating item inter-dependent knowledge with social influence through a multi-task learning framework.\n• SMIN [Long et al., 2021]: This model incorporates a metapath-guided heterogeneous graph learning task into social recommendation, utilizing self-supervised signals based on mutual information maximization. Implementation Details. We implement our DSL using PyTorch and optimize parameter inference with Adam. During training, we use a learning rate range of [5e -4 , 1e -3 , 5e -3 ] and a decay ratio of 0.96 per epoch. The batch size is selected from [1024,2048,4096,8192] and the hidden dimensionality is tuned from [64,128,256,512]. We search for the optimal number of information propagation layers in our graph neural architecture from [1,2,3,4]. The regularization weights λ 1 and λ 2 are selected from [1e -3 , 1e -2 , 1e -1 , 1e 0 , 1e 1 ] and [1e -6 , 1e -5 , 1e -4 , 1e -3 ], respectively. The weight for weight-decay regularization λ 3 is tuned from [1e -7 , 1e -6 , 1e -5 , 1e -4 ]. Epinions datasets, DSL achieves an average improvement of 26.0% and 76.0% over baselines, respectively. 
This validates the importance of addressing noise issues in social information incorporation, which can effectively debias user representations and boost recommendation performance." }, { "figure_ref": [], "heading": "Overall Performance Comparison (RQ1)", "publication_ref": [], "table_ref": [], "text": "• Methods incorporating self-supervised augmentation consistently outperform other baselines, highlighting the importance of exploring self-supervision signals from unlabeled data to alleviate sparsity issues in social recommendation. Our DSL outperforms other methods, suggesting that denoising social relation modeling in socially-aware recommender systems can benefit the design of more helpful self-supervision information. In MHCN and SMIN, mutual information is maximized to reach agreement between socially-connected users under a self-supervised learning framework. However, blindly generating augmented selfsupervision labels from noisy social connections can align embeddings of connected users, diluting their true preferences. In contrast, our DSL can mitigate the effects of false positives for socially-dependent users.\n• GNN-enhanced social recommenders, e.g., DiffNet and DGRec, outperform vanilla attentive methods like EATNN, highlighting the effectiveness of modeling high-order connectivity in social-aware collaborative relationships. This observation aligns with the conclusion that incorporating high-hop information fusion is beneficial for embedding learning in CF signals. However, aggregating irrelevant social information via GNNs can lead to unwanted embedding propagation and weaken model representation ability." }, { "figure_ref": [ "fig_1" ], "heading": "Impact Study of Different Components (RQ2)", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In this section, we examine the effects of component-wise ablation and how hyperparameters influence performance.\nModel Ablation Study. To investigate the essential role of our denoised self-supervised learning paradigm in improving performance, we perform an ablation study of key model components. Specifically, we compare our DSL with the following ablated variants: (1) \"DSL-d\": disabling Table 4: Ranking performance on Ciao dataset with varying Top-N values in terms of HR@N and NDCG@N cross-view denoised self-supervised learning for mitigating the noisy effects of social-aware collaborative filtering, (2) \"DSL-s\": removing social-aware self-supervised augmentation and directly incorporating social user embeddings into user-item interaction prediction, and (3) \"DSL-c\": replacing denoised cross-view alignment with contrastive learning to reach agreement for integrating social and interaction views.\nResults are reported in Table 3. From our results, we observe that our DSL outperforms other variants in most evaluation cases. Based on this, we draw the following key conclusions:\n• Comparing DSL with \"DSL-d\", the significant performance improvement suggests that social-aware collaborative relation learning is affected by the noise problem when directly incorporating social information. • The recommendation performance further drops in variant \"DSL-s\" without the auxiliary learning task of socialaware relation prediction, indicating that incorporating selfsupervised signals from user-wise social influence is helpful for enhancing collaborative relational learning. 
• The contrastive learning used in variant \"DSL-c\" attempts to align the social and interaction views, but the irrelevant social connections can mislead the contrastive selfsupervision for data augmentation. This observation supports our assumption that social information is inherently noisy for characterizing user preferences. Parameter Effect Study. Exploring the influence of key hyperparameters on DSL's performance, including SSL loss weight, batch size, and the number of graph propagation layers, would be interesting. The results are shown in Figure 2, where the y-axis represents the performance variation ratio compared to the default parameter settings. size helps alleviate the over-fitting issue during model training. However, worse performance is observed with further increasing batch size, possibly due to local optima. • Effect of propagation layers #. In GNNs, the number of propagation layers balances the trade-off between informativeness and over-smoothing. When tuning L from 1 to 4, other parameters are kept at their default settings. A deeper graph neural network is effective in modeling high-order connectivity through cross-layer message passing to generate informative user and item representations. However, stacking too many propagation layers reduces the model capacity by making many different users identical. The resulting over-smoothed embeddings cannot preserve user uniformity with discriminative representations." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Model Robustness Evaluation (RQ3)", "publication_ref": [], "table_ref": [], "text": "In this section, we investigate the robustness of our DSL against data sparsity and noise for recommendation.\nData Sparsity. To evaluate the model's performance on less active users with fewer item interactions, we partitioned the user set into four groups based on their node degrees in the user-item interaction graph G r , namely (0,5), [5,10), [10,15), and [20, +∞). We separately measured the recommendation accuracy for each user group, and the evaluation results are reported in Figure 3. We observed that DSL outperformed the best-performing baseline MHCN in most cases, further validating the effectiveness of our incorporated self-supervision signals for data augmentation under interaction label scarcity.\nData Noise. To investigate the influence of noisy effects on model performance, we randomly generated different percentages of fake edges (i.e., 10%, 20%, 30%) to create a corrupted interaction graph as noise perturbation. The relative performance degradation with different noise ratios is shown in Figure 3. Our DSL demonstrates great potential in addressing data noise issues compared to competitors. We attribute this superiority to two reasons: 1) Graph structure learning with social relationships as self-supervision signals, which " }, { "figure_ref": [], "heading": "Efficiency Analysis (RQ4)", "publication_ref": [], "table_ref": [], "text": "We conduct additional experiments to evaluate the efficiency of our method for model training when DSL competes with baselines. We measure the computational costs (running time) of different methods on an NVIDIA GeForce RTX 3090 and present the training time for each model in Table 5. The training cost of DSL is significantly lower than most of the compared baselines, demonstrating its potential scalability in handling large-scale datasets in real-life recommendation scenarios. 
While existing social recommenders (e.g., MHCN and SMIN) leverage SSL for data augmentation, blindly maximizing the mutual information between user embeddings may lead to additional computational costs. In DSL, we enhance the social-aware self-supervised learning paradigm with an adaptive denoising module. This simple yet effective framework not only improves the recommendation quality but also shows an advantage in training efficiency." }, { "figure_ref": [ "fig_3" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct a case study in Figure 4 to qualitatively investigate the effects of our cross-view self-augmented learning framework in denoising social connections for user preference learning. Specifically, we sample two user-user pairs from the Ciao dataset and show the top-3 frequently interacted item categories for each user. From the figure, we observe that the social influence between user 239 and user 227 is identified as weak (i.e., learned lower user relevance weight) with respect to their interaction preference. This is manifested by the fact that they mainly interacted with different item categories, i.e., user 239: Books, Music, and Travel; user 227: Food & Drink, Games, and Household Appliances.\nOn the other hand, the social influence between user 1412 and user 1429 is learned to be strong (i.e., learned higher user relevance weight). Most of their interacted items come from the same categories, i.e., Books, Food & Drink, and Beauty, indicating their similar preferences. This observation aligns with our expectation that DSL can denoise social connections and encode social-aware user interests through meaningful SSLenhanced representations for recommendation." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a universal denoised self-augmented learning framework that not only incorporates social influence to help understand user preferences but also mitigates noisy effects by identifying social relation bias and denoising cross-view self-supervision. To bridge the gap between social and interaction semantic views, the framework introduces a learnable cross-view alignment to achieve adaptive selfsupervised augmentation. Experimental results show that our new DSL leads to significant improvements in recommendation accuracy and robustness compared to existing baselines. Additionally, the component-wise effects are evaluated with ablation study. In future work, we aim to investigate the incorporation of interpretable learning over diverse relations to improve the explainability of denoised self-supervised learners for recommendation. Such incorporation can provide insights into the decision-making process of the social-aware recommender system, enabling users to understand how the system arrives at its recommendation results." } ]
Social recommendation is gaining increasing attention in various online applications, including ecommerce and online streaming, where social information is leveraged to improve user-item interaction modeling. Recently, Self-Supervised Learning (SSL) has proven to be remarkably effective in addressing data sparsity through augmented learning tasks. Inspired by this, researchers have attempted to incorporate SSL into social recommendation by supplementing the primary supervised task with social-aware self-supervised signals. However, social information can be unavoidably noisy in characterizing user preferences due to the ubiquitous presence of interest-irrelevant social connections, such as colleagues or classmates who do not share many common interests. To address this challenge, we propose a novel social recommender called the Denoised Self-Augmented Learning paradigm (DSL). Our model not only preserves helpful social relations to enhance user-item interaction modeling but also enables personalized cross-view knowledge transfer through adaptive semantic alignment in embedding space. Our experimental results on various recommendation benchmarks confirm the superiority of our DSL over state-of-the-art methods. We release our model implementation at: https://github.com/HKUDS/DSL.
Denoised Self-Augmented Learning for Social Recommendation
[ { "figure_caption": "Figure 1 :1Figure 1: Overall framework of the proposed denoised self-augmented learning (DSL) model.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: We conduct a hyperparameter study of the DSL with respect to i) SSL loss weight for regularization, ii) batch size for training, and iii) # of propagation layers for message passing.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Model robustness study w.r.t data noise and data sparsity, in terms of HR@N and NDCG@N.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Case study on denoising social relations for modeling useritem interaction patterns with sampled socially-connected user pairs.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Statistical information of evaluated datasets.The learnable z i,i ′ reflects the common preference between user u i and u i ′ , which adaptively controls the strength of our self-supervised regularization. This enables us to filter out noisy signals in the observed social connections, and supercharge our SSL paradigm with adaptive data augmentation by transferring knowledge across different semantic views. Efficient SSL. Our DSL model adopts a lightweight graph convolutional network (GCN) as the graph relation encoder, with a complexity of O((|E r |+|E s |)×d). The cross-view denoised self-supervised learning conducts pairwise node-wise relationships, which takes O(B × d) time complexity, where B represents the batch size. In contrast, most existing vanilla InfoNCE-based contrastive learning methods calculate relations between a batch of nodes and all other nodes, resulting in an operation complexity of O(B × I × d).", "figure_data": "DataCiaoEpinionsYelp# Users6,67211,111161,305# Items98,875190,774114,852# Interactions198,181247,591 1,118,645Interaction Density 0.0300% 0.0117%0.0060%# Social Ties109,503203,989 2,142,242", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Recommendation performance of different methods. %Imp denotes relative improvements over all baselines on average.", "figure_data": "DatasetMetricsPMFTrustMF DiffNet DGRec EATNN NGCF+ MHCN KCGN SMINDSL%ImpCiaoHR NDCG 0.2464 0.42230.4492 0.25200.5544 0.31670.4658 0.24010.4255 0.25250.5629 0.34290.5950 0.5785 0.5852 0.6374 0.3805 0.3552 0.3687 0.406526.0 37.2EpinionsHR NDCG 0.0968 0.16860.1769 0.08420.2182 0.11620.2055 0.09080.1576 0.07940.2969 0.15820.3507 0.3122 0.3159 0.3983 0.1926 0.1721 0.1867 0.229076.9 96.2YelpHR NDCG 0.5165 0.75540.7791 0.54240.8031 0.56700.7950 0.55930.8031 0.55600.8265 0.58540.8571 0.8484 0.8478 0.8923 0.6310 0.6028 0.5993 0.659910.1 15.5", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Table 4 demonstrate that our DSL framework consistently outperforms all baselines on various datasets, providing evidence of its effectiveness. 
Based on our results, we make the following observations.", "figure_data": "DataCiaoEpinionsYelpMetricsHRNDCGHRNDCGHRNDCGDSL-d 0.615 0.399 0.354 0.207 0.887 0.658DSL-s0.594 0.374 0.327 0.169 0.839 0.621DSL-c0.603 0.388 0.336 0.199 0.889 0.662DSL0.637 0.406 0.398 0.229 0.892 0.659", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Component-wise ablation study of DSL.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Effect of SSL regularization weight. The SSL loss weight controls the regularization strength of self-supervision signals. It is clear that a proper weight of SSL loss regularization is beneficial for improving model learning on highlyskewed distributed data. However, the SSL regularization does not always improve model representation. As the SSL loss weight increases, the performance worsens. This is because the model gradient learning is biased towards the strongly regularized SSL signals, which has negative effects on the main optimized objective for recommendation.• Effect of batch size. The best model performance is achieved with a batch size of around 2048. The observed performance differences between Epinions and Yelp stem from their diverse social data densities. Epinions data is more sensitive to batch size due to its sparse social connections. Results on Epinions data indicate that a larger batch", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Model computational cost measured by running time (s).", "figure_data": "DataDiffNet NGCF+ MHCN KCGN SMIN OurCiao8.18.24.9226.97.83.2Epinions39.116.39.3449.419.76.1Yelp692.9124.656.2132.575.358.6", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Tianle Wang; Lianghao Xia; Chao Huang
[ { "authors": " Cai", "journal": "", "ref_id": "b0", "title": "", "year": "2023" }, { "authors": "Xuheng Cai; Chao Huang; Lianghao Xia; Xubin Ren", "journal": "", "ref_id": "b1", "title": "Lightgcl: Simple yet effective graph contrastive learning for recommendation", "year": "2023" }, { "authors": "Chen ", "journal": "", "ref_id": "b2", "title": "Social attentional memory network: Modeling aspect-and friend-level differences in recommendation", "year": "2019" }, { "authors": "Chen ", "journal": "", "ref_id": "b3", "title": "An efficient adaptive transfer neural network for social-aware recommendation", "year": "2019" }, { "authors": "Chen ", "journal": "", "ref_id": "b4", "title": "", "year": "2020" }, { "authors": "Lei Chen; Le Wu; Richang Hong; Kun Zhang; Meng Wang", "journal": "", "ref_id": "b5", "title": "Revisiting graph based collaborative filtering: A linear residual graph convolutional network approach", "year": "2020" }, { "authors": "Chen ", "journal": "", "ref_id": "b6", "title": "", "year": "2023" }, { "authors": "Mengru Chen; Chao Huang; Lianghao Xia; Wei Wei; Yong Xu; Ronghua Luo", "journal": "", "ref_id": "b7", "title": "Heterogeneous graph contrastive learning for recommendation", "year": "2023" }, { "authors": "Perozzi Epasto; Alessandro Epasto; Bryan Perozzi", "journal": "", "ref_id": "b8", "title": "Is a single embedding enough? learning node representations that capture multiple social contexts", "year": "2019" }, { "authors": "Fan ", "journal": "", "ref_id": "b9", "title": "", "year": "2019" }, { "authors": "Wenqi Fan; Yao Ma; Qing Li; Yuan He; Eric Zhao", "journal": "WWW", "ref_id": "b10", "title": "Graph neural networks for social recommendation", "year": "2019" }, { "authors": " He", "journal": "", "ref_id": "b11", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": " He", "journal": "", "ref_id": "b12", "title": "Lightgcn: Simplifying and powering graph convolution network for recommendation", "year": "2020" }, { "authors": " Huang", "journal": "", "ref_id": "b13", "title": "", "year": "2021" }, { "authors": "Chao Huang; Huance Xu; Yong Xu; Peng Dai; Lianghao Xia; Mengyin Lu", "journal": "", "ref_id": "b14", "title": "Knowledgeaware coupled graph neural network for social recommendation", "year": "2021" }, { "authors": "Liu Liu", "journal": "", "ref_id": "b15", "title": "", "year": "2021" }, { "authors": "Yixin Liu; Pengfei Liu", "journal": "", "ref_id": "b16", "title": "Simcls: A simple framework for contrastive learning of abstractive summarization", "year": "2021" }, { "authors": " Liu", "journal": "", "ref_id": "b17", "title": "", "year": "2019" }, { "authors": "Ninghao Liu; Qiaoyu Tan; Yuening Li; Hongxia Yang; Jingren Zhou; Xia Hu", "journal": "", "ref_id": "b18", "title": "Is a single vector enough? 
exploring node polysemy for network embedding", "year": "2019" }, { "authors": " Liu", "journal": "", "ref_id": "b19", "title": "", "year": "2022" }, { "authors": "Zhiwei Liu; Liangwei Yang; Ziwei Fan; Hao Peng; Philip S Yu", "journal": "TIST", "ref_id": "b20", "title": "Federated social recommendation with graph neural network", "year": "2022" }, { "authors": " Long", "journal": "", "ref_id": "b21", "title": "", "year": "2021" }, { "authors": "Xiaoling Long; Chao Huang; Yong Xu; Huance Xu; Peng Dai; Lianghao Xia; Liefeng Bo", "journal": "", "ref_id": "b22", "title": "Social recommendation with self-supervised metagraph informax network", "year": "2021" }, { "authors": "Salakhutdinov Mnih; Andriy Mnih; Russ R Salakhutdinov", "journal": "NeurIPS", "ref_id": "b23", "title": "Probabilistic matrix factorization", "year": "2007" }, { "authors": " Rendle", "journal": "", "ref_id": "b24", "title": "", "year": "2009" }, { "authors": "Steffen Rendle; Christoph Freudenthaler", "journal": "", "ref_id": "b25", "title": "Bpr: Bayesian personalized ranking from implicit feedback", "year": "2009" }, { "authors": " Song", "journal": "", "ref_id": "b26", "title": "", "year": "2019" }, { "authors": "Weiping Song; Zhiping Xiao", "journal": "", "ref_id": "b27", "title": "Session-based social recommendation via dynamic graph attention networks", "year": "2019" }, { "authors": " Wang", "journal": "", "ref_id": "b28", "title": "", "year": "2019" }, { "authors": "Xiang Wang; Xiangnan He; Meng Wang; Fuli Feng; Tat-Seng Chua", "journal": "", "ref_id": "b29", "title": "Neural graph collaborative filtering", "year": "2019" }, { "authors": " Wei", "journal": "NeurIPS", "ref_id": "b30", "title": "Contrastive graph structure learning via information bottleneck for recommendation", "year": "2022" }, { "authors": " Wei", "journal": "", "ref_id": "b31", "title": "Contrastive meta learning with behavior multiplicity for recommendation", "year": "2022" }, { "authors": " Wu", "journal": "", "ref_id": "b32", "title": "", "year": "2019" }, { "authors": "Le Wu; Peijie Sun; Yanjie Fu; Richang Hong; Xiting Wang; Meng Wang", "journal": "", "ref_id": "b33", "title": "A neural influence diffusion model for social recommendation", "year": "2019" }, { "authors": " Wu", "journal": "", "ref_id": "b34", "title": "", "year": "2021" }, { "authors": "Jiancan Wu; Xiang Wang; Fuli Feng; Xiangnan He; Liang Chen; Jianxun Lian; Xing Xie", "journal": "", "ref_id": "b35", "title": "Selfsupervised graph learning for recommendation", "year": "2021" }, { "authors": "Yang ", "journal": "", "ref_id": "b36", "title": "", "year": "2016" }, { "authors": "Bo Yang; Yu Lei; Jiming Liu; Wenjie Li", "journal": "TPAMI", "ref_id": "b37", "title": "Social collaborative filtering by trust", "year": "2016" }, { "authors": " Yu", "journal": "", "ref_id": "b38", "title": "Socially-aware self-supervised tri-training for recommendation", "year": "2021" }, { "authors": " Yu", "journal": "WWW", "ref_id": "b39", "title": "Self-supervised multi-channel hypergraph convolutional network for social recommendation", "year": "2021" }, { "authors": " Yuan", "journal": "", "ref_id": "b40", "title": "", "year": "2022" }, { "authors": "Hongshen Xu Yuan; Yonghao Chen; Xiaofang Song; Zhuoye Zhao; Zhen Ding; Bo He; Long", "journal": "", "ref_id": "b41", "title": "Improving sequential recommendation consistency with self-supervised imitation", "year": "2022" }, { "authors": " Zhang", "journal": "", "ref_id": "b42", "title": "", "year": "2022" }, { "authors": "Shengyu Zhang; Lingxiao 
Yang; Dong Yao; Yujie Lu", "journal": "", "ref_id": "b43", "title": "Re4: Learning to re-contrast, reattend, re-construct for multi-interest recommendation", "year": "2022" }, { "authors": " Zhou", "journal": "", "ref_id": "b44", "title": "", "year": "2020" }, { "authors": "Kun Zhou; Hui Wang; Wayne Xin Zhao; Yutao Zhu; Sirui Wang; Fuzheng Zhang; Zhongyuan Wang; Ji-Rong Wen", "journal": "", "ref_id": "b45", "title": "S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization", "year": "2020" }, { "authors": " Zhu", "journal": "", "ref_id": "b46", "title": "", "year": "2021" }, { "authors": "Yanqiao Zhu; Yichen Xu; Feng Yu; Qiang Liu; Shu Wu", "journal": "", "ref_id": "b47", "title": "Graph contrastive learning with adaptive augmentation", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 387.69, 465.97, 170.31, 12.82 ], "formula_id": "formula_0", "formula_text": "E (l) r = (L r + I) • E (l-1) r (1)" }, { "formula_coordinates": [ 2, 353.3, 597.11, 204.7, 21.32 ], "formula_id": "formula_1", "formula_text": "L r = D -1 2 r A r D -1 2 r , A r = 0 R R ⊤ 0 (2)" }, { "formula_coordinates": [ 3, 83.65, 301.27, 213.35, 15.71 ], "formula_id": "formula_2", "formula_text": "E (l) s = (L s + I) • E (l-1) s , L s = D -1 2 s S D -1 2 s (3)" }, { "formula_coordinates": [ 3, 79.63, 360.48, 45.99, 12.82 ], "formula_id": "formula_3", "formula_text": "E (l) s , E (l-1)" }, { "formula_coordinates": [ 3, 112.89, 446.47, 184.11, 30.55 ], "formula_id": "formula_4", "formula_text": "Ēr = L l=0 E (l) r , Ēs = L l=0 E (l) s (4)" }, { "formula_coordinates": [ 3, 315, 335.6, 243, 26.91 ], "formula_id": "formula_5", "formula_text": "i ′ ) is gen- erated by z i,i ′ = [ē (r) i ; ē(r) i ′ ]," }, { "formula_coordinates": [ 3, 315, 376.94, 243, 28.39 ], "formula_id": "formula_7", "formula_text": ",i ′ = [ē (s) i ; ē(s) i ′ ], based on user representations (ē (s) i , ē(s)" }, { "formula_coordinates": [ 3, 323.44, 452.03, 214.5, 14.21 ], "formula_id": "formula_8", "formula_text": "z i,i ′ = sigm(d ⊤ • σ(T • [ē (r) i ; ē(r) i ′ ] + ē(r) i + ē(r) i ′ + c))" }, { "formula_coordinates": [ 3, 364.58, 592.13, 193.42, 21.97 ], "formula_id": "formula_9", "formula_text": "L ssl = (ui,u i ′ ) max(0, 1 -z i,i ′ ẑi,i ′ ) (6)" }, { "formula_coordinates": [ 4, 208.04, 125.13, 9.99, 6.12 ], "formula_id": "formula_10", "formula_text": "(s)" }, { "formula_coordinates": [ 4, 94.76, 148.19, 202.24, 12.49 ], "formula_id": "formula_11", "formula_text": "ŷ(r) ui,vj = ē(r)⊤ i ē(r) j ; ŷ(s) ui,u i ′ = ē(s)⊤ i ē(s) i ′(7)" }, { "formula_coordinates": [ 4, 71.34, 235.01, 225.66, 55.77 ], "formula_id": "formula_12", "formula_text": "L rec = (ui,v j + ,v j -) -ln sigm(ŷ (r) ui,v j + -ŷ(r) ui,v j -) L soc = (ui,u i + ,u i -) -ln sigm(ŷ (s) ui,u i + -ŷ(s) ui,u i -)(8)" }, { "formula_coordinates": [ 4, 60.6, 356.14, 236.4, 12.69 ], "formula_id": "formula_13", "formula_text": "L = L rec + λ 1 L soc + λ 2 L ssl + λ 3 (∥E u ∥ 2 F + ∥E v ∥ 2 F ) (9)" }, { "formula_coordinates": [ 4, 92.51, 513.09, 204.49, 28.78 ], "formula_id": "formula_14", "formula_text": "∂L cl ∂ē i = -ē i + + u i - ēi - exp ē⊤ i ēi - u i -exp ē⊤ i ēi -(10)" }, { "formula_coordinates": [ 4, 132.23, 674.17, 164.77, 27.8 ], "formula_id": "formula_15", "formula_text": "∂L ssl ∂ē i = u i ′ -z i,i ′ ēi ′(11)" } ]
2023-05-23
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b14", "b15", "b17", "b18", "b20", "b21", "b22", "b13", "b23", "b27", "b28", "b31", "b32", "b24", "b26", "b33", "b34", "b35", "b37", "b38", "b40", "b42", "b43", "b44", "b45", "b47", "b48", "b49", "b50", "b51" ], "table_ref": [], "text": "I N the geomatics community, high-resolution remote sens- ing (HRS) images have become increasingly important due to advancements in imaging technologies and the growing demand for detailed and accurate data. These images are characterized by the exceptional spatial resolution, which presents finer details and features within the observed scenes. The advent of high-resolution satellite and aerial imagery has revolutionized remote sensing, urban planning [1], [2], envi- ronmental monitoring [3], and disaster management [4], [5], among other applications.\nLong-distance shooting by satellites and aircraft brings distinctive characteristics to HRS images [6], [7]. Unlike conventional street-level images, objects of the same category in HRS images are typically situated in distinct geographical landscapes, which leads to more scale differences [8], [9]. These disparities are not only evident between rural and urban land features but also between cities, where the differences are equally significant (e.g., differences among British architectural style, North American architectural style, modern architectural style, and Chinese architectural style) [10]. Therefore, the ability to obtain multi-scale image features is crucial for HRS image segmentation networks. Moreover, objects from the same category normally present different shapes layouts, and class distribution at different locations of the HRS image [11]. For example, Rural scenes typically contain more natural elements such as forests and barren, while urban scenes contain buildings and vehicles. In contrast, urban areas with dense populations often have orderly and diverse shapes, and contain more small object categories such as cars and airports. Whereas, rural building structures present disordered and simpler building structures, and roads and rivers are relatively narrow compared to those in urban areas [12]. These introduce the emergence of intra-class variability. Another problem related to HRS is that due to the complex background environment, objects belonging to different categories can have similar appearances. Although the high-resolution imagery and diverse complex scenes contribute to a richer level of detail, inter-class similarities severely impact the performance of semantic segmentation networks [13]- [15]. These factors present unique challenges in effectively handling and analyzing the diverse landscape elements within the segmentation process.\nTraditional segmentation methods typically use edge-based segmentation [16]- [18], threshold-based segmentation [19]- [21], and region-based segmentation [22] [23] to extract key information from HRS images. However, with the increasing resolution of HRS images in recent years, traditional methods have gradually become insufficient for complex and diverse image segmentation tasks. To achieve high-precision HRS image segmentation results, currently, the commonly used methods are neural convolutional [14], [24]- [27] and transformer [28]- [30]. 
Since the introduction of the full convolutional network (FCN) [31] in 2015, semantic segmentation models have mushroomed, promoting the development of [25] [26] in the field of HRS image. Classic semantic segmentation networks like U-Net restore spatial details of features through a symmetrical decoder-encoder structure [32]. On the other hand, DeepLabv3+ [33] introduces the Atrous Spatial Pyramid Pooling module to extract context information at different scales. Unlike the above work, this work introduces a novel convolutional neural network (CNN) backbone designed to address all the questions mentioned before.\nOwing to the increased complexity of diverse spatial scales in high-resolution segmentation tasks, some work [34]- [36] increases the size of the receptive field by introducing the adaptive spatial pooling module to capture features at different scales. Unfortunately, performing a one-to-two feature aggregation at the end of each block often loses spatial information about the features. [37] obtains the feature maps of the low, medium and high scales in the first convolution of the network, and features the dense connection modules along the diagonal. However, this approach of consecutive downsampling may result in feature loss, and the process of dense connection also has the possibility of network structure redundancy and information blocking. In contrast to the aforementioned methods, this paper proposes the funnel module and multibranch module. The original image passes through the inverted bottleneck block in the funnel module to obtain reliable highresolution information. In the multi-branch module, new scale information is obtained through downsampling in a progressive manner, and features at different scales are extracted in parallel, forming an efficient and direct feature extraction convolutional stream. At the end of each feature extraction, the feature information from the previous branch is fused with the newly generated branch's information to achieve multiscale information interaction, allowing the entire network to maintain high resolution while obtaining sufficiently complete and reliable low-resolution information.\nIt is also essential to recognize that the shape distribution variations of object of the same class across diverse geo-graphical regions. To alleviate the issue of class distribution disparities, a common approach is to incorporate attention mechanisms into the network. For instance, [38]- [40] utilize spatial attention to optimize class weights and address class imbalance problems. Additionally, [41] and [42] employ parallel channel attention and spatial attention to enhance local features simultaneously. This paper argues that introducing attention mechanisms in individual modules of the network may lead to network focus bias. Therefore, we have proposed the Information Aggregation (IA) block. This block aggregates the features and extracts the location, channel and spatial information of the features through the Coordinate Attention mechanism (CA) [43]. To prevent neuronal death, the GELU [44] is used as an activation function in the module, ensuring that the information of the attention mechanism flows smoothly through the network. We applied IA block to the multi-branch module, thereby aggregating different shapes of the same class to reduce the intra-class distribution distances. To evaluate the validity of the IA block, we visualized the feature maps of Hi-ResNet base and Hi-ResNet, and the results are shown in Figure 1. 
Hi-ResNet base stacks basic block to construct HRS module while Hi-ResNet stacks IA block. Obviously, the image features extracted and fused by IA block are far beyond the image features extracted by using the traditional convolutional network basic module.\nAs previously mentioned, due to the diversification of highresolution satellite images and the increased background of false detection, we propose a feature refine module. This module upsamples the three feature maps with different resolutions obtained by muti-branch module to the same size, and concat into a feature map. By convolution and Object-Contextual Representations (OCR) [45], we obtained the results of coarse and fine segmentation of Hi-ResNet, respectively, and calculated the loss according to a ratio of 0.4:1. Some work [46], [47] applied dice loss in road extraction by increasing the weights of the key road regions, FactsegNet [48] utilizes collaborative probability loss to merge the outputs of the dualbranch decoders at the probability level, aiming to enhance the utilization of information. However, the proposed classagnostic edge aware (CEA) loss in this paper focuses more on the edge information of class objects. CEA loss randomly selects one class from the segmentation results and treats the other classes as background. It computes the Hausdorff distance matrix between the background and the selected class, and then performs a Hadamard product between the matrix and the ground truth. This correction at the edge level helps improve the model's perception of boundaries and shapes, enhancing its ability to capture accurate object edges. Finally, we evaluate the proposed method on widely used datasets. This study contributes three main points:\n(1) We propose a funnel module to reduce computing costs, efficiently extract high-resolution information and avoid feature loss from the input image. (2) We propose a multi-branch module with stacks of information aggregation blocks with a balance of both attention and convolution mechanisms. (3) We develop a class-agnostic edge aware (CEA) loss, which emphasizes edge information while taking into account multiple classes. (4) Our Hi-ResNet is validated on several benchmarks with performance better than existing state-of-the-art methods. The paper is organized as follows: Section II provides an overview of related work, including Semantic Segmentation in Remote Sensing, Attention, and Model Pre-training. In Section III, we describe the proposed method, which includes the Hi-ResNet model, the design of loss functions, a series of training strategies, and the use of unsupervised and supervised pretraining in RS tasks. Section IV presents a series of ablation experiments and experimental results and analyses on different datasets. Finally, in Section V, we conclude the paper and provide a summary." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Semantic Segmentation in Remote Sensing", "publication_ref": [ "b52", "b48", "b27", "b35", "b53", "b56", "b43", "b57", "b59", "b60", "b63", "b64", "b65" ], "table_ref": [], "text": "Parallel multi-resolution architectures primarily focus on high-level semantic information, resulting in a semantically richer and spatially more accurate representation, providing an advanced technical reference for location-sensitive vision problems. Among them, HRNet [49] was well known as a parallel semantic segmentation model that could maintain high resolution. 
It passed through four stages of gradually decreasing resolution and performs multi-scale fusion to enhance high-resolution representations. Subsequently, researchers attempted to combine HRNet with OCR [45] which distinguishes contextual information for the same target category from different target categories and optimizes feature pixels. This architecture was widely applied in the field of HRS segmentation [27], [34], [50]- [53]. However, HRNet primarily focused on high-resolution semantic features of images, while OCR was more concerned with the relationships between image objects and their pixels, both of which ignore semantic information about the target location. Unfortunately, reliable and comprehensive high-level semantic information is undoubtedly crucial, as HRS images often contain a large amount of complex and unrelated background, and small target objects usually occupy only a few pixels. To obtain richer high-level semantic information, some studies [41], [54]- [56] applied different dilated convolutions to multiple features of traditional CNN networks to construct distinctive local semantic representation modules by expanding the receptive field of the convolution kernel, thereby effectively utilizing multi-scale features. Furthermore, some researchers [57]- [60] applied graph convolution on multi-layer features, where each pixel treated as a node, and then the extracted graph features were connected with the final global visual features. Despite this, locally aggregating features in the spatial direction might overlook channel and positional information of high-level semantics. An effective solution is to establish an information connection between space and channels in convolutional networks. Recently, MBFANet [61] combined the pooling channel attention module and convolutional coordinate attention module to complement each other, which helped the models focus on more complex background categories. SAPNet [62] joint models both spatial and channel affinity, which allows for preserving spatial details and extracting accurate channel information. Inspired by the parallel architecture of HRNet, we propose Hi-ResNet in this work. The Hi-ResNet proposed in this article utilizes a funnel module to obtain rich highresolution semantic information. Inspired by HRNet, we parallel extract multiresolution features in a muti-branch module and use a feature refine module to obtain high-level semantic information in HRS images." }, { "figure_ref": [], "heading": "B. Attention Mechanisms", "publication_ref": [ "b24", "b40", "b41", "b69", "b71", "b23", "b50", "b72", "b74", "b75", "b77", "b28", "b30", "b78", "b45" ], "table_ref": [], "text": "Attention mechanisms could help the network to locate the information of interest and inhibit useless information, which has been widely used in convolutional neural networks [25], [38], [39], [65]- [67]. For HRS tasks, some studies utilized the popular attention mechanism Squeeze-and-Excitation to automatically process the features of various scenes and extract more effective features [24], [47], [68]- [70]. However, this attention mechanism only focused on inter-channel information while neglecting spatial and positional information about the features. In order to simultaneously capture channel and positional information, researchers explored the use of Convolutional Block Attention Module or Bottleneck Attention Module in network architectures [71]- [73]. 
These modules used spatial attention to obtain the location information and reduce the input channel dimension to save the calculation cost. Due to the limited receptive field of the sliding window in convolutional operations, only local relationships were captured, it could not maintain long-range dependencies between different positions in the image. Currently, some works [28], [29], [74] built spatial attention in self-attention networks to overcome the local nature of convolutions and capture diverse spatial information. Nevertheless, the substantial computational complexity introduced by self-attention made the training cost of the network expensive, which posed challenges for its application in lightweight convolutional networks. Recently, Hou et al. [43] proposed an efficient attention mechanism called Coordinate Attention (CA) to capture both channel and positional information. This attention mechanism decomposed channel attention into one-dimensional features in two spatial directions through pooling operations, maintaining reliable positional information while establishing long-range dependencies. This paper suggests that the complementary characteristics of the CA mechanism in space and location can effectively alleviate the difference in the class distribution of HRS images in remote sensing segmentation tasks, so we apply the CA mechanism in the IA block." }, { "figure_ref": [], "heading": "C. Model Pre-training", "publication_ref": [ "b31", "b80", "b83", "b84", "b85", "b86", "b66", "b68", "b88", "b93", "b91", "b94" ], "table_ref": [], "text": "In addition to the design of the network itself, excellent pretraining programs are also indispensable. Numerous studies showed that applying pre-training can make models more stable and extract more commonalities [30], [75]- [78]. Therefore, we pre-train the Hi-ResNet to enhance the fine-tuning ability for HRS segmentation tasks. Recently, for HRS tasks, some studies [79], [80] used labeled semantic segmentation datasets such as Mapillary [81] for pre-training to improve model performance. However, these large-scale labeled datasets mostly [63] and GD loss [64], which are computed in direct relation to the ground truth and predictions. Concurrently, the CEA randomly elects a category, designating all others as background, before computing the loss between the two categories. come from natural images, and pre-training on them for HRS tasks often yields unsatisfactory results. It is worth noting that recent works on unsupervised pre-training [82]- [86] showed that unsupervised pre-training outperforms the supervised way in downstream tasks such as segmentation. MoCo [85], as a mechanism for building dynamic dictionaries for contrastive learning, surpassed its supervised counterpart in seven downstream tasks. [87] illustrated that MoCo mainly transferred low-level and middle-level semantic features, and when performing image reconstruction, the reconstructed images without supervision were closer to the original data distribution. Based on the previous work, we argue that employing supervised pre-training will provide richer and more comprehensive prior information for HRS tasks. At the same time, using the pre-training mechanism of MoCo can effectively compensate for the loss of precise localization information in the network and reduce the emphasis on local object information. Therefore, in this study, we apply both fully supervised and unsupervised pre-training strategies on Hi-ResNet and evaluate the performance of the two methods." 
}, { "figure_ref": [], "heading": "III. PROPOSED METHODS", "publication_ref": [], "table_ref": [], "text": "In this section, we present the framework of Hi-ResNet, including the funnel module, the multi-branch module with information aggregation blocks, and feature refinement module. Then we introduce the class-agnostic edge aware loss for HRS image feature extraction. Finally, we present how to transfer the various pre-training strategies to the HRS segmentation task." }, { "figure_ref": [ "fig_1", "fig_2", "fig_3", "fig_4", "fig_5" ], "heading": "A. Hi-ResNet Framework", "publication_ref": [ "b95", "b32", "b38", "b96", "b33", "b97", "b45", "b48" ], "table_ref": [ "tab_0" ], "text": "The Hi-ResNet proposed in this paper is shown in Figure 2. In the following sections, we will present the funnel module, multi-branch module, and feature refinement module, and the implementation details of each module in turn.\n1) Funnel Module: In the funnel module, we start by passing the input image through two stride-2, 3×3 convolutions, which reduce the image resolution to 1/4 of its original size. During the network downsampling, the BN layer was placed before the convolution operation. It could improve the generalization and stability of the model by applying the BN layer where the spatial resolution changes, which makes the preand post-samples to different Gaussian distributions. Then, the image goes through a funnel stem with four inverted bottleneck (IB) blocks to obtain high-resolution semantic features. The traditional bottleneck layer structure uses a structure with long heads and a short middle. With consideration of the distribution characteristics of HRS data, to prevent the collapse of activation space and loss of channel information caused by non-linear activation functions in network layers [88], our work adapts the IB block with thin heads and a thick middle. This block is used to extract richer semantic features by performing high-dimensional upsampling on HRS images, followed by downsampling and linear activation functions to avoid information loss, thereby preserving more complete information of HRS images. The design of the funnel module is illustrated in Figure 3. We use a 3×3 convolution for downsampling in the first layer of the bottleneck, and a 1×1 convolution for both fourfold downsampling and upsampling in the middle layer, thus obtaining richer high-resolution semantic information. 2) Multi-branch Module: Multi-branch module consists of multi-resolution convolutions streams and repeated fusions. First of all, to address the issue of unstable segmentation accuracy caused by differences in image scales, our network maintains the high-resolution representation of the input image throughout the entire process, while generating a new lowresolution branch at each layer, which together with the existing branches forms the input for the next layer. We adopt the parallel approach to connect the convolutions of multiresolution branches, forming the multi-resolution convolutions stream. Notably, the minimum resolution of the image in the parallel branch of the second layer is only 1/16 of the original image, indicating that this layer focuses more on the high-level semantic information of the image. Due to the significant layout differences among buildings and objects in different areas of HRS images, the shape and contour features in high-level semantic information are crucial. 
This paper argues that it cannot extract rich high-level semantic features that contain target locations if merely stacking the same number of blocks as in other layers and using the same sliding window sampling. Therefore, we stack 4 IA blocks in the first layer as high-resolution module 4 (HRM 4), and 12 IA blocks as high-resolution module 12 (HRM 12). Figure 4 illustrates the semantic information extracted from the multibranch module of the original Hi-ResNet and the extended multi-branch module. More abundant and reliable high-level semantic information can be obtained through the second layer after elongation, which not only effectively alleviates the problem of class distribution inconsistency and reduces intraclass variance but also avoids the loss of positional information of small target objects that occupy only a few pixels in the image, thereby enhancing the weak features of small target objects.\n(b) (a) Numerous studies propose methods for multi-scale feature fusion [31], [37], [89]. Classic semantic segmentation networks like UNET [32] and SegNet [90] extract low-resolution feature maps during downsampling and combine them with feature maps of the same resolution, and upsampling them to prevent feature loss during the upsampling process. In contrast, our approach performs cross-layer fusion between parallel branches with different resolutions, capturing features of different sizes by repeatedly exchanging information on different scales at each layer. Figure 5 illustrates the fusion process for layer2, where the input consists of three images with different resolutions. Different sampling methods are used depending on the resolution of the input and output. The upsampling stage includes bilinear upsampling, batch normalization, and a 1×1 convolution, while the downsampling stage includes batch normalization and a stride-2, 3×3 convolution. The multi-branch module process ultimately outputs three feature maps with different resolutions. 3) Information Aggregation Block: HRS images provide rich details and features but also bring more irrelevant background objects. To suppress the impacts brought by the irrelevant background information and to enhance the spatial and positional feature representations, we propose an IA block with CA [43]. The residual connection of this block consists of three parts: two downsampling convolutions and one CA attention module, both of the downsampling convolutions use 3×3 convolutional kernels. The fundamental principle of the CA mechanism is to exploit the spatial coordinate information of feature maps. This is achieved by passing the x and y coordinates of each spatial position through separate neural network branches, allowing attention weights to be computed for each position. This approach captures spatial correlations between different positions within the feature map, which enhances the representational power of the feature map. Furthermore, CA is a lightweight and efficient attention mechanism since it can selectively attend to specific positions within the feature map, rather than computing attention weights across all positions. We hypothesize that more abstract and refined feature information is better for attention modules to extract contextual semantic information in HRS. Therefore, the CA module is placed at the end of the entire block. In the IA block, GELU is used instead of RELU, along with batchnorm (BN). 
The proposed block is considered lightweight and efficient compared to other attention mechanisms and can enhance the representational power of the feature map. The IA block with CA is illustrated in Figure 6. 4) Feature Refinement Module: We use the three different resolution feature maps output by the multi-branch module as inputs to the feature refinement module. In the feature refinement module, we combine the three output images to the same size using bilinear upsampling, which serves as the coarse segmentation of the network. By leveraging OCR [45], we first treat a category in the coarse segmentation result as a region and estimate the comprehensive feature representation within that region by aggregating the representations of each pixel. Then, we compute the pixel-region relationships to obtain corresponding weights, which are used to enhance the representation of each pixel by weighting all the regions. The weighted feature representation serves as the refined segmentation result of the model. Lastly, Hi-ResNet outputs both coarse segmentation and refined segmentation. We calculate the losses for both segmentation results separately and weigh them with a ratio of 0.4:1.0. The final network block numbers and module repeated times in each stage are shown in Table I." }, { "figure_ref": [ "fig_6" ], "heading": "B. Loss Design", "publication_ref": [ "b68", "b66", "b66", "b98", "b98", "b86", "b99", "b38", "b91" ], "table_ref": [], "text": "In HRS tasks, semantic segmentation typically involves more than two labels, with significant differences in the number and pixel range of objects for different categories, leading to sample imbalance and sub-optimal performance. Therefore, an appropriate loss function is crucial. In this work, we propose a new class-agnostic edge aware (CEA) loss, which is combined with the Generalised Dice loss (GD) [64]and Label Smoothing Cross-Entropy loss (LSCE) [63] as the training loss.\n1) Generalised Dice Loss: Weighted cross-entropy and Sensitivity-Specificity approaches are designed to address imbalanced problems only in binary classification tasks. In contrast, the GD loss method can weight various pixel classes, allowing for a more comprehensive approach to imbalanced sample issues. The loss calculation for GD loss can be expressed as:\nGD = 1 -2 2 l=1 w l n r ln p ln 2 l=1 w l n r ln + p ln (1)\nThe equation for GD loss involves using r l to represent the label of each pixel in the reference foreground segmentation for class l, and p ln to denote the predicted probabilistic map for the foreground label of class l over N image elements p n . The weighting factor w l is used to provide invariance to different label set properties. Its calculation is expressed as:\nw l = 1 N i=1 r 2 ln (2\n)\nDuring the calculation process, overlapping r n and p n are added according to their weights and then divided by the weighted sum of the union part. This effectively suppresses the interference of complex background classes, enhances the features of small targets, and alleviates the problem of imbalanced image samples.\n2) Label Smoothing Cross-Entropy Loss: The label smooth technique proposed in [63] as a training strategy can adjust the extreme values of the loss and improve the model's generalization ability when combined with Cross-Entropy loss. 
The formulation of Label Smoothing Cross-Entropy loss is as follows:\nL ce = - 1 N N n=1 K k=1 y (n) k log ŷ(n) k(3)\nq i = 1 -ε if i = y, ε/(K -1) otherwise(4)\nIn the above formula, y k represents the sample label following the label smoothing operation, where ϵ is the smoothing factor, and y k is the corresponding softmax output of the network. Considering that HRS image datasets usually have a small amount of data, we argue that using this loss can prevent overfitting of the network and provide the correct optimization direction for the model.\n3) Class-agnostic Edge Aware Loss: The Hausdorff loss [91] was originally developed for boundary calculation in medical images. However, for HRS image segmentation, we modified the original HD loss calculation method to make it suitable for multi-class boundary calculation. It is shown below:\nLMHD = 1 N N i=0 Ω (g(pi) -s θ (pi)) 2 (DG(pi) β + DS(pi) β )dpi,(5)\nwhere p i refers to a distinct category of the input image, and Ω denotes the spatial domain of the training images. The distance function from the predicted boundary S, after applying thresholds s θ , is represented by D S . The hyperparameter β, which is set to 2 by the authors of [91] through a grid search, is also a part of this process. Additionally, we use N to represent the number of classes.\nWe have observed that the computation time for HD loss is quite high in the case of multi-class segmentation and it increases proportionally with the number of classes. In light of the specific characteristics of HRS imagery, such as the prevalence of background and the low incidence of class intersections, we have proposed a new class-agnostic edge aware (CEA) loss function in Equation 6 as an improved alternative. The CEA loss function randomly picks one of the categories and treats the rest of the classes as background, then computes the loss with respect to the two classes. It has a fixed computational cost and is tailored to address the challenges of HRS segmentation tasks. The CEA loss function improves the segmentation edge while mitigating the negative impact of complex backgrounds.\nLCEA = Ω (g(p) -s θ (p)) 2 (DG(p ∨ 0p) β + DS(p ∼ 0p) β )dp (6)\nHere, p i refers to the input image, while the symbol Ω denotes the spatial domain of the training images. The distance function from the predicted boundary S, after applying thresholds s θ , is represented by D S . The hyper-parameter β, which is set to 2 through a grid search, is also a part of this process. 0 p represents a zero matrix with the same shape as P , while ∨ and ∼ represent the XOR operation and the N OT operation, respectively.\nC. Remote Sensing Pre-training 1) Dataset:\n• The Mapillary dataset [81] is presently the largest publicly available dataset at the street level with specific instance annotations and a high degree of diversity. This dataset encompasses 25,000 high-resolution RGB images, captured by a variety of imaging devices, and includes fine-grained labels for 66 categories. • Million-AID [92] is a comprehensive benchmark dataset designed for remote sensing scene classification. This dataset obtains images with resolutions ranging from 0.5m to 153m from multiple satellites of Google Earth. The scene labels are obtained through the geographical coordinate information, resulting in over one million images labeled with 51 semantic scene categories. 2) Pre-training Details: The third section of this paper introduces two distinct methods for pre-training models. 
For the supervised training, the Mapillary dataset is selected as it shares the same downstream task as this paper, and it is also used by HRNet [37] for pre-training. After clipping the Mapillary images, 2 million 256×256 images are obtained. To address the issue of imbalanced data, a filter strategy is employed, considering the ratio of pixel classes. This results in a training set of 400,000 images. The supervised pre-training process is conducted similarly to that of our own model, and the hyper-parameters are specified in II.\nThe unsupervised training utilizes the Million-AID dataset. Since the images in Million-AID have varying resolutions, they are partitioned into 400×400, while images size less than 400 are dropped from the dataset. We use contrast learning MoCoV2 [85] as the unsupervised pre-training method. The process of MoCoV2 is shown in Figure 7, and the primary training settings for both pre-training approaches are presented below. MoCoV2 pre-training commences with the augmentation of a batch of images twice, to generate the positive samples, denoted as q, and the batch negative samples, denoted as k + . Following this, the logits of q and k + are procured by introducing q and k + into the standard encoder and the momentum encoder, respectively. In addition, the input negative samples are amalgamated with k + and k -, which are retrieved from the queue. Subsequently, the logits of the positive and negative samples are concatenated, following which the InfoNCE loss function is computed to update the standard encoder:\nL q,k + ,k -= -lg exp (q • k + /τ ) exp (k • k + /τ ) + k - exp (q • k -/τ ) (7\n)\nwhere q is a query representation, k + is a representation of the positive (similar) key sample, and k -are representations of the negative (dissimilar) key samples. τ is a temperature hyper-parameter. During training, only the normal encoder updates while the momentum encoder is updated with the function below:\nθ k = mθ k + (1 -m)θq(8)\nHere m ∈ [0, 1) is a momentum coefficient. Only the parameters θ q are updated by back-propagation. The momentum update in 8 makes θ k evolve more smoothly than θ q . Finally, k + will be added to the queue and features earlier in the queue will be dequeued." }, { "figure_ref": [], "heading": "IV. EXPERIMENTAL RESULTS AND ANALYSIS", "publication_ref": [ "b11", "b78", "b78", "b3", "b5", "b7", "b9", "b11", "b13", "b15", "b19", "b21", "b23", "b27", "b30", "b32", "b34", "b36", "b40", "b100" ], "table_ref": [], "text": "In this section, we evaluate the performance of our proposed model on multiple remote sensing datasets, including LoveDA, Potsdam, and Vaihingen. We first conduct a series of ablation studies on the Vaihingen dataset to analyze and identify a suitable framework for our proposed model. Next, we compare our Hi-ResNet with current state-of-the-art (SOTA) methods on public benchmarks, utilizing existing popular frameworks. Additionally, we demonstrate the superiority of our proposed model in terms of computational complexity, such as inference speed and memory footprint, as well as data transfer efficiency.\nA. Datasets 1) LoveDA: The LoveDA dataset [12] comprises 5987 HSR images (GSD 0.3 m) from three different cities, each containing 166768 annotated objects. Each image is 1024×1024 pixels and includes 7 land cover categories, namely building, road, water, barren, forest, agriculture, and background. The dataset provides 2522 images for training, 1669 images for validation, and 1796 official images for testing. 
The dataset consists of two scenes, urban and rural, from three Chinese cities, namely Nanjing, Changzhou, and Wuhan. Consequently, the dataset presents a significant research challenge due to the presence of multi-scale objects, complex backgrounds, and inconsistent class distributions.\n2) Potsdam: Potsdam is an example of a historic city with significant building complexes, narrow streets, and dense settlement structures. The Potsdam dataset is composed of 38 patches, each measuring 6000×6000 pixels, and containing a true orthophoto (TOP) extracted from a larger TOP mosaic. The dataset has been manually classified into the six most common land cover categories, and the ground sampling distance of both the TOP and the DSM is 5 cm. In this paper, we follow the approach used in [74] and use 23 images (excluding image 7 10 with error annotations) for training and 14 images for testing.\n3) Vaihingen: The village of Vaihingen comprises many individual buildings and small multi-story buildings, and like the Potsdam dataset, it has been classified into six common land cover categories. The dataset includes 3-band remote sensing TIFF files (near-infrared, red, green) and singleband DSM, with 33 HRS images of varying sizes. For the experiment, we have followed [74] to select the remote sensing images with ID 2,4,6,8,10,12,14,16,20,22,24,27,29,31,33,35,and 38 In the experiment, learning rate warmup combined with cosine annealing was used to adjust the learning rate, where warmup was set to 3 epochs. Moreover, AdamW [93] optimizer was selected to accelerate model convergence, with the learning rate and weight decay set to 1 × 10 -4 and 1 × 10 -8 respectively. The training was conducted on 4 NVIDIA GTX 3090 GPUs and implemented based on the PyTorch framework. " }, { "figure_ref": [ "fig_9", "fig_10", "fig_11", "fig_11", "fig_11" ], "heading": "C. Ablation Study and Comparison Experiments", "publication_ref": [ "b86", "b99", "b91", "b34", "b34", "b52", "b52", "b51", "b52", "b12", "b51", "b106", "b114", "b28", "b78", "b24", "b28", "b32", "b52", "b104", "b119", "b28", "b34" ], "table_ref": [ "tab_0", "tab_0", "tab_0", "tab_0", "tab_9", "tab_0", "tab_10" ], "text": "-ResNet v1 ✔ ✔ Hi-ResNet v2 ✔ ✔ 2 ✔ Hi-ResNet v3 ✔ ✔ ✔ ours ✔ ✔ ✔ 3 ✘ 1 extend\nthe current layer 2 black check mark means the current layer is extended compared to the base 3 black cross mark means the current layers are deleted As shown in Table IV, we configured the multi-branch module of the Hi-ResNet base to consist of three layers.\nEach layer is stacked with 4, 16, and 12 basic blocks, where each basic block consists of three 3×3 convolutions. Next, we define Hi-ResNet V1 by stacking 1, 4, and 3 HRM 4, where each HRM 4 is composed of four IA blocks. For Hi-ResNet V2 and Hi-ResNet V3, we increased the number of IA blocks in the HRM 4 of the second and third layers to 12 for ablative experiments. Table V displays the results of the ablative experiments. Surprisingly, despite the decision to lengthen layer3 leading to more than double the number of network parameters, the improvement in accuracy is smaller than that achieved by lengthening layer2. Through sampling analysis of the feature output from the two layers, we argue that there is some feature loss when extracting information in layer2, resulting in a reduction of the semantic features of medium and low resolution that layer3 can obtain. 
Lengthening layer2 effectively solves this problem, allowing for the extraction of richer and more accurate spatial information and better fitting of the features of HRS images in tasks. Therefore, we choose to lengthen layer2 by three times. After determining the module size, we attempt to remove redundant modules in the framework. We speculate that lengthening layer2 means that the information extracted by the module can fully encompass the information extracted by layer3, so we attempt to remove layer3. This decision significantly reduces the number of model parameters, speeds up the training process, and further improves the efficiency of the model, while having little impact on the final performance of the model. Ultimately, we set Hi-ResNet) to include two layers, and named HRM 4 stacked 12 IA blocks HRM 12. Compared to the initial assumption, the final model achieves a 10% increase in mIoU on the Vaihingen dataset and reduces the number of parameters by 30%. 2) Stability: To validate the stability of the proposed model, this study conducted experiments on the Vaihingen dataset using various input sizes during training, including square sizes of 256×256, 512×512, and 1024× 024, as well as rectangular sizes of 256×512 and 512×1024. As shown in Table VI, the Hi-ResNet presented in this study exhibited a mIoU deviation of less than 0.8% for inputs of different sizes, with the best performance observed for input sizes of 512×512. When training with HRS images, the proposed network yielded improved segmentation results for buildings and no significant loss in accuracy for the \"car\" class, indicating its effectiveness in segmenting small objects in HRS images. In addition, Moreover, the small difference between the mIoU achieved for large objects with an input image size of 256×256 and the best mIoU demonstrates that the proposed model has a larger receptive field. 3) Pre-training Comparison: To evaluate the impact of pre-training strategies on downstream HRS tasks, this study conducts an ablation study on Hi-ResNet using different pretraining schemes. For supervised pre-training, the original Mapillary dataset [81] is randomly cropped into 256×256 pixels, and 2 million category-balanced HRS images are selected as the pre-training dataset. The batch size is set to 80, and the base learning rate is set to 5 × 10 -5 . The maximum iteration number of this pre-training is 3,125,000, achieving 51.8 on the Mapillary validation set. For unsupervised pre-training, the MillionAID dataset [92] with randomly cropped 512×512 pixels is used. The learning rate is set to 0.015, the batch size is 64, and the maximum iteration number is 3,125,000. The final top-1 accuracy on MillionAID is 78.9, and the top-5 accuracy is 94.1. We conduct a comprehensive evaluation of HRS pretraining models on LoveDA, Potsdam, and Vaihingen datasets, and the detailed information is presented in Table VII. Two pre-trained models are loaded onto the original network, and the results showed that unsupervised pre-training of HRS images using MoCov2 [85] can provide a 15% increase in mIoU, while supervised pre-training only increases mIoU by 8% under the same number of iterations. The experiment demonstrates that unsupervised remote sensing pre-training can significantly improve the performance of the model on a small data set and make the model converge faster. Furthermore, using MoCov2 provides a deeper feature representation for HRS downstream tasks. 
In this sense, pretrained models using contrastive learning methods can offer competitive backbones for future research in the field of HRS.\n4) Model Size: In addition to accuracy and precision, the complexity and speed of a model are equally important for HRS tasks. A lightweight network architecture is undoubtedly advantageous for predicting large-scale HRS images. Therefore, we use the test dataset of the LoveDA dataset to compare the proposed network with the advanced model in terms of parameter amount, GPU memory occupation and complexity, and the comparison results are shown in Table VIII. Compared to the less complex DeepLabV3+ network [33], our network performed 11% percent higher on the mIoU, and achieves a 2% improvement on mIoU while using only 1/50th of the memory compared to the advanced model using the vision transformer backbone. It's also worth noting that the proposed model achieves a performance improvement of 6% on mIoU while having fewer parameters than some CNN-based models such as DeepLabV3+ [33] and HRNet [49]. These results suggest that the proposed model strikes a good balance between accuracy and efficiency, making it a promising approach for practical remote sensing applications. D. Results on The Dataset 1) LoveDA: The LoveDA dataset is recognized as a challenging HSR dataset for Land Cover Domain Adaptive Semantic Segmentation. This dataset presents three significant challenges for large-scale remote sensing mapping, namely multi-scale targets, complex background samples, and inconsistent class distributions. As a result, achieving high scores on this dataset is quite difficult.\nTable IX demonstrates the excellent performance of the network proposed in this paper on the LoveDA dataset. Thanks to the precise loss strategy, we can handle complex samples of different backgrounds well on LoveDA and achieve a mIoU of 52.5 on the official test set. Our network outperforms the HRNet [49], loaded with officially provided pre-trained weights, by 5% on mIoU and FactSegNet [48], an excellent small-object semantic segmentation network, by 6%. It is worth noting that for the \"barren\" class, where most networks underperform, our mIoU is 3% higher than other methods. Whether in urban or rural scenarios, sparse or dense distribution, our network can accurately segment objects with high confidence. We also provide visual comparison results with other methods in Figure 8.\nTo be more specific, due to limited computing resources, we perform only 3-4 complete pre-training processes (150 epochs), which makes pre-training the depth provided by Hi-ResNet compared with the pre-training weights provided by other officials, there is a certain gap between the hierarchical representation information. However, the ablation experiments in Table IV have fully demonstrated the excellent performance of Hi-ResNet in terms of performance and accuracy.\n2) Potsdam: As a widely-used dataset for segmentation tasks, Potsdam can comprehensively demonstrate the improvement of the accuracy of HRS images by the model proposed , results of HRNet [49], results of FarSeg [13], results of FactSeg [48], results of RSSFormer [99], and results of our Hi-ResNet.\nin this paper. Table X shows the scores achieved on the Potsdam dataset. The network proposed in this paper achieves an F1 of 92.4 and a mIoU of 86.1 on the Potsdam dataset. Hi-ResNet outperforms the lightweight convolutional network FANet [107] and the lightweight transformer-based network Segmenter [28]. 
Notably, Hi-ResNet performs best among all methods for the \"lowveg\" class, achieving a score of 87.9.\nThe car class also obtains a higher score second only to UnetFormer [74], with a mean F1 of 96.1. This result fully demonstrates that the Hi-ResNet has better performance for small target segmentation in HRS images.\nWe present Potsdam segmentation results to showcase the effectiveness of Hi-ResNet for small object segmentation. As displayed in Figure 9, most networks perform poorly in the segmentation of object edges. To overcome this limitation, Hi-ResNet employs CEA loss in the loss calculation to maximize the distance between the two boundaries, ensuring good connectivity of the extracted edge features during loss calculation, thus avoiding category boundary blur. At the same time, small cars covered by shadows or branches on the ground are difficult to identify, while vehicles on narrow roads are easily misidentified as categories close to vehicles. In Hi-ResNet, the design of the IB block can extract richer lowlevel semantic information such as contours and shapes in the results of DANet [25], results of Segmenter [28], results of FCN [31], and results of our Hi-ResNet.\nfunnel module, thus increasing inter-class differences in HRS images. Additionally, Hi-ResNet extracts more accurate small target location information through the muti-branch module while expanding the effective global receptive field through the CA module to capture global context information effectively. This enhances the network's ability to identify and segment occluded and covered cars more clearly.\n3) Vaihingen: The Vaihingen dataset has a large number of houses obscured by tree branches and multi-story small villages, so the dataset requires the network to identify and segment small targets more accurately. To test the network's accuracy, we selected 17 images from the Vaihingen dataset and present the prediction results of the Hi-ResNet in Table XI. Our proposed network achieved an OA of 90.7 and a mIoU of 79.8 on the Vaihingen dataset. In the low vegetation category, Hi-ResNet secured first place with the same performance on the Potsdam dataset, with a six percent improvement over the results obtained by the HRNet network for both the \"building\" and \"car\" classes. This is because Hi-ResNet effectively solves the sample imbalance problem caused by small targets occupying small pixels in HRS images by using the CEA loss and GD loss to weigh each category. We show [49], results of PSPNet [97], results of DANet [112], results of Segmenter [28], results of DeepLabv3+ [33], and results of our Hi-ResNet. some typical segmentation results in vaihingen in Figure 10. In Figure 10, the \"Tree\" class and the \"low vegetable\" class have serious misclassification, and at the same time, the within-class distance segmentation redundancy of the dense small target object \"car\" class, and the edge segmentation of the small cars is not clear. Hi-ResNet can obtain the position and edge information of small cars more accurately when the global receptive field is increased, thereby avoiding misclassification in complex scenes. The accurate segmentation of opaque ground in Figure 10 also demonstrates the effectiveness of the network proposed in this paper." }, { "figure_ref": [], "heading": "V. 
CONCLUSION", "publication_ref": [], "table_ref": [], "text": "Our study centers on the semantic segmentation of HRS, specifically focusing on addressing the inherent challenges of object scale and shape variance, and complex background environments. These issues often lead to object misclassification and sub-optimal outcomes with current learning algo-rithms. We respond by developing Hi-ResNet, which stands out due to an efficient network structure that includes a funnel module, a multi-branch module embedded with IA blocks, and a feature refinement module. Additionally, we introduce the CEA loss function. In our approach, the funnel module functions to downsample and extract high-resolution semantic information from the input image. The process then moves to the multi-branch module with stacks of IA blocks, enabling the capture of image features at different scales and distinguishing variant scales and shapes within the same class. Our study concludes with the integration of the CEA loss function within our feature refinement module. This innovative step effectively disambiguates inter-class objects with similar shapes and increases the data distribution distance for accurate predictions. The superiority of Hi-ResNet is proven through a comparative evaluation with leading methodologies across LoveDA benchmarks. The results underscore the value of our contributions to advancing HRS semantic segmentation." } ]
High-resolution remote sensing (HRS) semantic segmentation extracts key objects from high-resolution coverage areas. However, objects of the same category within HRS images generally show significant differences in scale and shape across diverse geographical environments, making it difficult to fit the data distribution. Additionally, a complex background environment causes similar appearances of objects of different categories, which precipitates a substantial number of objects into misclassification as background. These issues make existing learning algorithms sub-optimal. In this work, we solve the abovementioned problems by proposing a High-resolution remote sensing network (Hi-ResNet) with efficient network structure designs, which consists of a funnel module, a multi-branch module with stacks of information aggregation (IA) blocks, and a feature refinement module, sequentially, and Class-agnostic Edge Aware (CEA) loss. Specifically, we propose a funnel module to downsample, which reduces the computational cost, and extract high-resolution semantic information from the initial input image. Secondly, we downsample the processed feature images into multi-resolution branches incrementally to capture image features at different scales and apply IA blocks, which capture key latent information by leveraging attention mechanisms, for effective feature aggregation, distinguishing image features of the same class with variant scales and shapes. Finally, our feature refinement module integrate the CEA loss function, which disambiguates inter-class objects with similar shapes and increases the data distribution distance for correct predictions. With effective pre-training strategies, we demonstrated the superiority of Hi-ResNet over state-of-the-art methods on three HRS segmentation benchmarks.
Hi-ResNet: A High-Resolution Remote Sensing Network for Semantic Segmentation
[ { "figure_caption": "Fig. 1 .1Fig. 1. Comparisons of our model behavior by heatmaps with different images illustrate the feature information obtained by upsampling and merging at the end of each layer for Hi-ResNet base and Hi-ResNet. The three rows (a)(b)(c) show the original image, and the features of Hi-ResNet base and Hi-ResNet separately. It is evident from the results that compared to the Hi-ResNet base, the Hi-ResNet extracts richer and superior feature information.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. The comprehensive architecture of Hi-ResNet is partitioned into three components. (a) The funnel module, composed of a downsample part and a funnel stem, is purposed for downsampling input imagery and facilitating feature extraction. (b) The multi-branch module further hones these features via the amalgamation of a multi-resolution convolutions stream. (c) Coarse features are computed directly via a convolution layer, with fine-grained features managed through the utilization of OCR [45]. (d) Multiple loss functions are employed, including LSCE loss[63] and GD loss[64], which are computed in direct relation to the ground truth and predictions. Concurrently, the CEA randomly elects a category, designating all others as background, before computing the loss between the two categories.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The structure of the funnel module where IB refers to inverted bottleneck. The number in each block refers to the kernel size, stride, and channel numbers respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. (a) the output features of the multi-branch module of Hi-ResNet base. (b) the output features of the multi-branch module of Hi-ResNet.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig.5. This figure illustrates the process of feature information aggregation across various resolutions in the transition layer of the network. Furthermore, the figure indicates a deviation from the original process of normalization followed by sampling to sampling followed by normalization using BN, as employed in this study.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig.6. The IA block of Hi-ResNet comprises two convolutions with a kernel size of 3x3 and a stride of 1, coordinate attention (CA)[43] module, and a residual branch. CA module is an attention mechanism that enhances the model's ability to model relationships among channels.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. The figure outlines the entire process of MoCoV2 pre-training. q and k + are the positive sample and the negative respectively, augmented from the same image. while k -refers to the past negative features stored in the queue.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Forthe LoveDA dataset, the training and validation sets are both used for training. Images are cropped into patches with 512×512 resolution for input. 
During training, various enhancement techniques such as random vertical flip, random horizontal flip, and random scaling with ratios of [0.5, 0.75, 1.0, 1.25, 1.5] were employed. The training process lasted for 200 epochs with a batch size of 16. During the testing phase, 1796 images provided by the official were used, and multi-scale and random flip enhancements were applied for prediction. As for the Potsdam and Vaihingen datasets, this paper utilized techniques including color transformation, random vertical flip, and random horizontal flip for data augmentation, and cropped them into 512×512 patches for model input. The training epoch was set to 200 with a batch size of 16.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 )1Determining the Layer Size: This paper conducts ablation experiments on the Vaihingen dataset to prevent information loss in Hi-ResNet and reduce the number of model parameters. Table IV compares the accuracy and training cost of the model under different decisions, including the number of parameters (Params), calculation amount (FLOPs), GPU memory usage, and training time.", "figure_data": "", "figure_id": "fig_8", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Visual results of different methods on the LoveDA dataset. From left to right: original image, ground truth, results of SegFormer[29], results of HRNet[49], results of FarSeg[13], results of FactSeg[48], results of RSSFormer[99], and results of our Hi-ResNet.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Visualization results for the Potsdam validation set. From left to right: original image, ground truth, results of HRNet[49], results of ERFNet[104], results of DANet[25], results of Segmenter[28], results of FCN[31], and results of our Hi-ResNet.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig.10. Visualization results for the Vaihingen validation set. From left to right: original image, ground truth, results of HRNet[49], results of PSPNet[97], results of DANet[112], results of Segmenter[28], results of DeepLabv3+[33], and results of our Hi-ResNet.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "CONFIGURATION TABLE. ", "figure_data": "Module PartMulti-scaleBranchesModuleModuleNumsfunnel1/41Conv, IB block2, 41 Mb layer11/4, 1/82HRM 41Mb layer21/4, 1/8, 1/163HRM 124", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "for testing, while the remaining 16 images are used for training. 
TableIIIprovides detailed information about each dataset.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "OF THE REFERENCE METHODS AND THE PROPOSED HI-RESNET METHOD ON THE LOVEDA DATASET The top three scores in each metric are marked by red , blue and green.", "figure_data": "MethodBackboneBackgroundBuildingRoadWaterBarrenForestAgriculturemIoUComplexityPSPNet [97]ResNet5044.452.153.576.59.744.157.948.3105.7DeepLabV3+ [33]ResNet5043.050.952.074.410.444.258.547.695.8SemanticFPN [98]ResNet5042.951.553.474.711.244.658.748.2103.3FarSeg [13]ResNet5043.151.553.976.69.843.358.948.2-RSSFormer [99]RSS-B52.360.755.276.218.745.358.31 52.3-FactSeg [48]ResNet5042.653.652.876.916.242.957.548.9-BANet [100]ResT-Lite43.751.551.176.916.644.962.549.652.6TransUNet [94]ViT-R5043.056.153.778.09.344.956.948.9803.4Segmenter [28]ViT-Tiny38.050.748.777.413.343.558.247.126.8SegFormer [29]MIT-B543.152.355.070.710.743.256.847.4-SwinUperNet [101]Swin-Tiny43.354.354.378.714.945.359.650.0349.1DC-Swin [102]Swin-Tiny41.354.556.278.114.547.262.450.6183.8UNetFormer [74]ResNet1844.758.854.979.520.146.062.552.446.9UNet [95]ResNet5043.152.752.873.010.343.159.947.8-UNet++ [103]ResNet5042.952.652.874.511.444.458.848.2-HRNet [49]W3244.655.357.478.011.045.360.949.8-Hi-ResNetHi-ResNet46.758.355.980.117.046.762.752.549.7BuildingWaterForestRoadBarrenBackgroundAgricultureImageGround truthSegFormerHRNetFarSegFactSegRSSFormerOurs", "figure_id": "tab_9", "figure_label": "IX", "figure_type": "table" }, { "figure_caption": "OF THE REFERENCE METHODS AND THE PROPOSED HI-RESNET METHOD ON THE POTSDAM DATASET", "figure_data": "MethodBackbone1 Imp.surfBuildingLowvegTreeCarMeanF1OAmIoUERFNet [104]-88.793.081.175.890.585.884.576.2BiSeNet [105]ResNet1890.294.685.586.292.789.888.281.7DANet [25]-89.993.283.682.392.688.386.779.6ShelfNet [106]ResNet1892.595.886.687.194.691.389.984.4FANet [107]ResNet1892.096.186.087.894.591.389.884.2EaNet [108]ResNet1892.095.784.385.795.190.688.783.4BANet [100]ResT-Lite93.396.787.489.196.092.591.086.3Segmenter [28]ViT-Tiny91.595.385.485.088.589.288.780.7BotNet [109]ResNet5092.396.387.388.794.191.790.4-SwiftNet [110]ResNet1891.895.985.786.894.591.089.383.8MAResU-Net [111]ResNet1891.495.685.886.693.390.589.083.9LANet [24]ResNet5093.097.187.388.094.191.990.8-HRNet [49]HRNet-W4888.793.483.081.591.187.586.178.1UNetFormer [74]ResNet1893.697.787.788.996.592.891.386.8SwinUperNet [101]Swin-Tiny93.296.487.688.695.492.290.985.8FCN [31]ResNet5091.496.685.986.982.288.685.6-FCN [31]VGG1688.693.383.379.893.087.6185.59-Hi-ResNetHi-ResNet93.296.587.988.696.192.491.186.1ImSurfBuildingLowVeg.Tree.CarIgnoreImageGround TruthHRNetERFNetDANetSegmenterFCNOurs", "figure_id": "tab_10", "figure_label": "X", "figure_type": "table" }, { "figure_caption": "OF THE REFERENCE METHODS AND THE PROPOSED HI-RESNET METHOD ON THE VAIHINGEN DATASET", "figure_data": "MethodBackboneImp.surfBuildingLowvegTreeCarMeanF1OAmIoUERFNet [104]-88.590.276.485.853.678.985.869.1PSPNet [97]-89.093.281.587.743.979.087.768.6HRNet [49]HRNet-W4889.892.881.086.879.586.087.675.8BiSeNet [105]ResNet1889.191.380.986.973.184.387.175.8DABNet [112]-90.088.874.384.960.279.284.370.2DANet [25]ResNet1890.093.982.287.344.579.688.269.4DeppLabV3+ [33]ResNet1889.993.980.689.483.387.489.0-ABCNet [113]ResNet1892.795.284.589.785.389.590.781.3[107]ViT-Tiny90.793.882.688.671.685.488.975.6EaNet [108]ResNet5091.794.583.189.280.087.789.778.7BoTNet [109]ResNet1889.992.181.888.771.384.888.074.3MAResU-Net 
[111]ResNet1892.095.083.789.378.387.790.178.6UNetFormer [74]ResNet1892.795.384.990.688.590.491.082.7ShelfNet [106]ResT-Lite91.894.683.889.377.987.589.878.3Segmenter [28]ViT-Tiny89.893.081.288.967.684.188.173.6SwiftNet [110]ResNet1892.294.884.189.381.288.390.279.6Hi-ResNetHi-ResNet92.395.184.988.583.589.190.779.8ImSurfBuildingLowVeg.Tree.CarIgnoreImageGround TruthHRNetPSPNetDANetSegmenterDeepLabv3+Ours", "figure_id": "tab_11", "figure_label": "XI", "figure_type": "table" } ]
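The Fig. 6 caption above describes the IA block as two 3×3, stride-1 convolutions, a coordinate attention (CA) module, and a residual branch. The PyTorch sketch below is one plausible reading of that description; the channel widths, normalization and activation choices, and the reduction ratio inside the CA module are assumptions loosely following Hou et al.'s coordinate attention, not details taken from the Hi-ResNet paper.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Simplified CA: pool along each spatial axis, mix, then gate the input."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                            # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)        # (B, C, W, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w

class IABlock(nn.Module):
    """Two 3x3 stride-1 convolutions, a CA module, and a residual branch."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.ca = CoordinateAttention(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.ca(self.body(x)))
```

Because the block is residual and channel-preserving, it can be stacked repeatedly inside each multi-resolution branch, consistent with the module configuration table above.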
Yuxia Chen; Pengcheng Fang; Jianhui Yu; Xiaoling Zhong; Xiaoming Zhang; Tianrui Li
[ { "authors": "R Alshehhi; P R Marpu; W L Woon; M Dalla Mura", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b0", "title": "Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks", "year": "2017" }, { "authors": "X Gao; M Wang; Y Yang; G Li", "journal": "Ieee Access", "ref_id": "b1", "title": "Building extraction from rgb vhr images using shifted shadow algorithm", "year": "2018" }, { "authors": "P Qin; Y Cai; J Liu; P Fan; M Sun", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b2", "title": "Multilayer feature extraction network for military ship detection from high-resolution optical remote sensing images", "year": "2021" }, { "authors": "A J Cooner; Y Shao; J B Campbell", "journal": "Remote Sensing", "ref_id": "b3", "title": "Detection of urban damage using remote sensing and machine learning algorithms: Revisiting the 2010 haiti earthquake", "year": "2016" }, { "authors": "C Xiong; Q Li; X Lu", "journal": "Automation in Construction", "ref_id": "b4", "title": "Automated regional seismic damage assessment of buildings using an unmanned aerial vehicle and a convolutional neural network", "year": "2020" }, { "authors": "I Demir; K Koperski; D Lindenbaum; G Pang; J Huang; S Basu; F Hughes; D Tuia; R D Raskar", "journal": "", "ref_id": "b5", "title": "A challenge to parse the earth through satellite images", "year": "2018" }, { "authors": "D Marcos; M Volpi; B Kellenberger; D Tuia", "journal": "ISPRS journal of photogrammetry and remote sensing", "ref_id": "b6", "title": "Land cover mapping at very high resolution with rotation equivariant cnns: Towards small yet accurate models", "year": "2018" }, { "authors": "R Kemker; C Salvaggio; C Kanan", "journal": "ISPRS journal of photogrammetry and remote sensing", "ref_id": "b7", "title": "Algorithms for semantic segmentation of multispectral remote sensing imagery using deep learning", "year": "2018" }, { "authors": "M Volpi; V Ferrari", "journal": "", "ref_id": "b8", "title": "Semantic segmentation of urban scenes by learning local class interactions", "year": "2015" }, { "authors": "G.-S Xia; X Bai; J Ding; Z Zhu; S Belongie; J Luo; M Datcu; M Pelillo; L Zhang", "journal": "", "ref_id": "b9", "title": "Dota: A large-scale dataset for object detection in aerial images", "year": "2018" }, { "authors": "A Boguszewski; D Batorski; N Ziemba-Jankowska; T Dziedzic; A Zambrzycka", "journal": "", "ref_id": "b10", "title": "Landcover. 
ai: Dataset for automatic mapping of buildings, woodlands, water and roads from aerial imagery", "year": "2021" }, { "authors": "J Wang; Z Zheng; A Ma; X Lu; Y Zhong", "journal": "", "ref_id": "b11", "title": "Loveda: A remote sensing land-cover dataset for domain adaptive semantic segmentation", "year": "2021" }, { "authors": "Z Zheng; Y Zhong; J Wang; A Ma", "journal": "", "ref_id": "b12", "title": "Foreground-aware relation network for geospatial object segmentation in high spatial resolution remote sensing imagery", "year": "2020" }, { "authors": "M Kampffmeyer; A.-B Salberg; R Jenssen", "journal": "", "ref_id": "b13", "title": "Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks", "year": "2016" }, { "authors": "J Bai; J Ren; Y Yang; Z Xiao; W Yu; V Havyarimana; L Jiao", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b14", "title": "Object detection in large-scale remote-sensing images based on timefrequency analysis and feature optimization", "year": "2021" }, { "authors": "A Wanto; S D Rizki; S Andini; S Surmayanti; N Ginantra; H Aspan", "journal": "Journal of Physics: Conference Series", "ref_id": "b15", "title": "Combination of sobel+ prewitt edge detection method with roberts+ canny on passion flower image identification", "year": "2021" }, { "authors": "R Tian; G Sun; X Liu; B Zheng", "journal": "Electronics", "ref_id": "b16", "title": "Sobel edge detection based on weighted nuclear norm minimization image denoising", "year": "2021" }, { "authors": "K Li; Y Tian; B Wang; Z Qi; Q Wang", "journal": "Electronics", "ref_id": "b17", "title": "Bi-directional pyramid network for edge detection", "year": "2021" }, { "authors": "P A Rogerson", "journal": "Journal of Geographical Systems", "ref_id": "b18", "title": "Change detection thresholds for remotely sensed images", "year": "2002" }, { "authors": "S Lei; M Lu; J Lin; X Zhou; X Yang", "journal": "Signal, Image And Video Processing", "ref_id": "b19", "title": "Remote sensing image denoising based on improved semi-soft threshold", "year": "2021" }, { "authors": "J Yang; Y He; J Caspersen", "journal": "Remote sensing of environment", "ref_id": "b20", "title": "Region merging using local spectral angle thresholds: A more accurate method for hybrid segmentation of remote sensing images", "year": "2017" }, { "authors": "Z Wang; J R Jensen; J Im", "journal": "Environmental Modelling & Software", "ref_id": "b21", "title": "An automatic region-based image segmentation algorithm for remote sensing applications", "year": "2010" }, { "authors": "X Zhang; X Feng; P Xiao; G He; L Zhu", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b22", "title": "Segmentation quality evaluation using region-based precision and recall measures for remote sensing images", "year": "2015" }, { "authors": "L Ding; H Tang; L Bruzzone", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b23", "title": "Lanet: Local attention embedding to improve the semantic segmentation of remote sensing images", "year": "2020" }, { "authors": "J Fu; J Liu; H Tian; Y Li; Y Bao; Z Fang; H Lu", "journal": "", "ref_id": "b24", "title": "Dual attention network for scene segmentation", "year": "2019" }, { "authors": "I ; Ii-B ", "journal": "X", "ref_id": "b25", "title": "", "year": "" }, { "authors": "H Li; K Qiu; L Chen; X Mei; L Hong; C Tao", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b26", 
"title": "Scattnet: Semantic segmentation network with spatial and channel attention mechanism for high-resolution remote sensing images", "year": "2020" }, { "authors": "R Niu; X Sun; Y Tian; W Diao; K Chen; K Fu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b27", "title": "Hybrid multiple attention network for semantic segmentation in aerial images", "year": "2021" }, { "authors": "R Strudel; R Garcia; I Laptev; C Schmid", "journal": "", "ref_id": "b28", "title": "Segmenter: Transformer for semantic segmentation", "year": "2021" }, { "authors": "", "journal": "I, II-B, IX, IV-D2", "ref_id": "b29", "title": "", "year": "" }, { "authors": "E Xie; W Wang; Z Yu; A Anandkumar; J M Alvarez; P Luo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "D Wang; Q Zhang; Y Xu; J Zhang; B Du; D Tao; L Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b31", "title": "Advancing plain vision transformer towards remote sensing foundation model", "year": "2022" }, { "authors": "J Long; E Shelhamer; T Darrell", "journal": "", "ref_id": "b32", "title": "Fully convolutional networks for semantic segmentation", "year": "2015" }, { "authors": "O Ronneberger; P Fischer; T Brox", "journal": "Springer", "ref_id": "b33", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "L.-C Chen; Y Zhu; G Papandreou; F Schroff; H Adam", "journal": "", "ref_id": "b34", "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "S Yin; H Li; L Teng; M Jiang; S Karim", "journal": "International Journal of Image and Data Fusion", "ref_id": "b35", "title": "An optimised multiscale fusion method for airport detection in large-scale optical remote sensing images", "year": "2020" }, { "authors": "L Li; Z Zhou; B Wang; L Miao; H Zong", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b36", "title": "A novel cnnbased method for accurate ship detection in hr optical remote sensing images via rotated bounding box", "year": "2020" }, { "authors": "X Wang; M Kang; Y Chen; W Jiang; M Wang; T Weise; M Tan; L Xu; X Li; L Zou", "journal": "Remote Sensing", "ref_id": "b37", "title": "Adaptive local cross-channel vector pooling attention module for semantic segmentation of remote sensing imagery", "year": "1980" }, { "authors": "C Zhang; G Li; S Du", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b38", "title": "Multi-scale dense networks for hyperspectral remote sensing image classification", "year": "2019" }, { "authors": "I ", "journal": "", "ref_id": "b39", "title": "III-A2", "year": "" }, { "authors": "S Woo; J Park; J.-Y Lee; I S Kweon", "journal": "", "ref_id": "b40", "title": "Cbam: Convolutional block attention module", "year": "2018" }, { "authors": "S.-B Chen; Q.-S Wei; W.-Z Wang; J Tang; B Luo; Z.-Y Wang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b41", "title": "Remote sensing scene classification via multi-branch local attention network", "year": "2021" }, { "authors": "W Chen; S Ouyang; W Tong; X Li; X Zheng; L Wang", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b42", "title": "Gcsanet: A global context spatial attention deep learning network for remote 
sensing scene classification", "year": "2022" }, { "authors": "Q Bi; K Qin; H Zhang; G.-S Xia", "journal": "IEEE Transactions on Image Processing", "ref_id": "b43", "title": "Local semantic enhanced convnet for aerial scene recognition", "year": "2021" }, { "authors": "X Zhao; J Zhang; J Tian; L Zhuo; J Zhang", "journal": "Remote Sensing", "ref_id": "b44", "title": "Residual dense network based on channel-spatial attention for the scene classification of a high-resolution remote sensing image", "year": "2020" }, { "authors": "Q Hou; D Zhou; J Feng", "journal": "", "ref_id": "b45", "title": "Coordinate attention for efficient mobile network design", "year": "2021" }, { "authors": "I ; Ii-B ", "journal": "III-A", "ref_id": "b46", "title": "", "year": "" }, { "authors": "D Hendrycks; K Gimpel", "journal": "", "ref_id": "b47", "title": "Gaussian error linear units (gelus)", "year": "2016" }, { "authors": "Y Yuan; X Chen; X Chen; J Wang", "journal": "", "ref_id": "b48", "title": "Segmentation transformer: Object-contextual representations for semantic segmentation", "year": "2019" }, { "authors": "L Zhou; C Zhang; M Wu", "journal": "", "ref_id": "b49", "title": "D-linknet: Linknet with pretrained encoder and dilated convolution for high resolution satellite imagery road extraction", "year": "2018" }, { "authors": "Y Lin; D Xu; N Wang; Z Shi; Q Chen", "journal": "Remote sensing", "ref_id": "b50", "title": "Road extraction from very-high-resolution remote sensing images via a nested se-deeplab model", "year": "2020" }, { "authors": "A Ma; J Wang; Y Zhong; Z Zheng", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b51", "title": "Factseg: Foreground activation-driven small object semantic segmentation in large-scale remote sensing imagery", "year": "2021" }, { "authors": "J Wang; K Sun; T Cheng; B Jiang; C Deng; Y Zhao; D Liu; Y Mu; M Tan; X Wang", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b52", "title": "Deep high-resolution representation learning for visual recognition", "year": "2020" }, { "authors": "Z Cheng; D Fu", "journal": "IEEE", "ref_id": "b53", "title": "Remote sensing image segmentation method based on hrnet", "year": "2020" }, { "authors": "J Zhang; S Lin; L Ding; L Bruzzone", "journal": "Remote Sensing", "ref_id": "b54", "title": "Multi-scale context aggregation for semantic segmentation of remote sensing images", "year": "2020" }, { "authors": "X Chen; Z Liu; S Zhou; H Yu; J Chen; Y Liu", "journal": "SPIE", "ref_id": "b55", "title": "Litehrnet-ocr: a lightweight high-resolution network and object-contextual representation for road extraction on remote sensing images", "year": "2023" }, { "authors": "X Wang; Z Zhang; H Dai", "journal": "Computers and Electrical Engineering", "ref_id": "b56", "title": "Detection of remote sensing targets with angles via modified centernet", "year": "2022" }, { "authors": "R Hamaguchi; A Fujita; K Nemoto; T Imaizumi; S Hikosaka", "journal": "IEEE", "ref_id": "b57", "title": "Effective use of dilated convolutions for segmenting small object instances in remote sensing imagery", "year": "2018" }, { "authors": "W Li; X Zhang; Y Peng; M Dong", "journal": "IEEE Sensors Journal", "ref_id": "b58", "title": "Dmnet: A network architecture using dilated convolution and multiscale mechanisms for spatiotemporal fusion of remote sensing images", "year": "2020" }, { "authors": "Q Liu; M Kampffmeyer; R Jenssen; A.-B Salberg", "journal": "IEEE Transactions on Geoscience and Remote Sensing", 
"ref_id": "b59", "title": "Dense dilated convolutions' merging network for land cover classification", "year": "2020" }, { "authors": "W Cai; Z Wei", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b60", "title": "Remote sensing image classification based on a cross-attention mechanism and graph convolution", "year": "2020" }, { "authors": "U Chaudhuri; B Banerjee; A Bhattacharya", "journal": "Computer vision and image understanding", "ref_id": "b61", "title": "Siamese graph convolutional network for content based remote sensing image retrieval", "year": "2019" }, { "authors": "G Zhou; W Chen; Q Gui; X Li; L Wang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b62", "title": "Split depthwise separable graph-convolution network for road extraction in complex environments from high-resolution remote-sensing images", "year": "2021" }, { "authors": "K Xu; H Huang; P Deng; Y Li", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b63", "title": "Deep feature aggregation framework driven by graph convolutional network for scene classification in remote sensing", "year": "2021" }, { "authors": "J Shi; W Liu; H Shan; E Li; X Li; L Zhang", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b64", "title": "Remote sensing scene classification based on multibranch fusion attention network", "year": "2023" }, { "authors": "S Zheng; C Lu; Y Wu; G Gupta", "journal": "", "ref_id": "b65", "title": "Sapnet: Segmentationaware progressive network for perceptual contrastive deraining", "year": "2022" }, { "authors": "R Müller; S Kornblith; G E Hinton", "journal": "Advances in neural information processing systems", "ref_id": "b66", "title": "When does label smoothing help?", "year": "2019" }, { "authors": "", "journal": "", "ref_id": "b67", "title": "III-B, III-B2", "year": "" }, { "authors": "C H Sudre; W Li; T Vercauteren; S Ourselin; M Jorge Cardoso", "journal": "Springer", "ref_id": "b68", "title": "Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations", "year": "2017-09-14" }, { "authors": "J Hu; L Shen; G Sun", "journal": "", "ref_id": "b69", "title": "Squeeze-and-excitation networks", "year": "2018" }, { "authors": "Y Cao; J Xu; S Lin; F Wei; H Hu", "journal": "", "ref_id": "b70", "title": "Gcnet: Non-local networks meet squeeze-excitation networks and beyond", "year": "2019" }, { "authors": "J.-J Liu; Q Hou; M.-M Cheng; C Wang; J Feng", "journal": "", "ref_id": "b71", "title": "Improving convolutional networks with self-calibrated convolutions", "year": "2020" }, { "authors": "T Tian; L Li; W Chen; H Zhou", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b72", "title": "Semsdnet: A multiscale dense network with attention for remote sensing scene classification", "year": "2021" }, { "authors": "X Zhang; J Li; Z Hua", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b73", "title": "Mrse-net: multiscale residuals and seattention network for water body segmentation from satellite images", "year": "2022" }, { "authors": "C Zhang; W Jiang; Y Zhang; W Wang; Q Zhao; C Wang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b74", "title": "Transformer and cnn hybrid deep neural network for semantic segmentation of very-high-resolution remote sensing imagery", "year": "2022" }, { "authors": "W Wang; X Tan; P Zhang; X Wang", "journal": 
"IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b75", "title": "A cbam based multiscale transformer fusion approach for remote sensing image change detection", "year": "2022" }, { "authors": "L Zhu; X Geng; Z Li; C Liu", "journal": "Remote Sensing", "ref_id": "b76", "title": "Improving yolov5 with attention mechanism for detecting boulders from planetary images", "year": "2021" }, { "authors": "Q Shi; M Liu; S Li; X Liu; F Wang; L Zhang", "journal": "IEEE transactions on geoscience and remote sensing", "ref_id": "b77", "title": "A deeply supervised attention metric-based network and an open aerial image dataset for remote sensing change detection", "year": "2021" }, { "authors": "L Wang; R Li; C Zhang; S Fang; C Duan; X Meng; P M Atkinson", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b78", "title": "Unetformer: A unet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b79", "title": "II-B, IV-A2, IV-A3", "year": "" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b80", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b81", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "D Wang; J Zhang; B Du; G.-S Xia; D Tao", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b82", "title": "An empirical study of remote sensing pretraining", "year": "2022" }, { "authors": "K Ayush; B Uzkent; C Meng; K Tanmay; M Burke; D Lobell; S Ermon", "journal": "", "ref_id": "b83", "title": "Geography-aware self-supervised learning", "year": "2021" }, { "authors": "S Workman; A Hadzic; M U Rafique", "journal": "", "ref_id": "b84", "title": "Handling image and label resolution mismatch in remote sensing", "year": "2023" }, { "authors": "G Kong; H Fan", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b85", "title": "Enhanced facade parsing for street-level images using convolutional neural networks", "year": "2020" }, { "authors": "G Neuhold; T Ollmann; S Rota; P Bulo; Kontschieder", "journal": "", "ref_id": "b86", "title": "The mapillary vistas dataset for semantic understanding of street scenes", "year": "2017" }, { "authors": "", "journal": "", "ref_id": "b87", "title": "-C3", "year": "" }, { "authors": "S Khan; M Naseer; M Hayat; S W Zamir; F S Khan; M Shah", "journal": "ACM computing surveys (CSUR)", "ref_id": "b88", "title": "Transformers in vision: A survey", "year": "2022" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "PMLR", "ref_id": "b89", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "X Chen; H Fan; R Girshick; K He", "journal": "", "ref_id": "b90", "title": "Improved baselines with momentum contrastive learning", "year": "2020" }, { "authors": "K He; H Fan; Y Wu; S Xie; R Girshick", "journal": "", "ref_id": "b91", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b92", "title": "-C3", "year": "" }, { "authors": "Y Xu; Q Zhang; J Zhang; D Tao", "journal": "", "ref_id": "b93", "title": "Regioncl: Can simple region swapping contribute to contrastive learning?", "year": "2021" }, { "authors": "N Zhao; Z Wu; R W Lau; S Lin", 
"journal": "", "ref_id": "b94", "title": "What makes instance discrimination good for transfer learning?", "year": "2020" }, { "authors": "M Sandler; A Howard; M Zhu; A Zhmoginov; L.-C Chen", "journal": "", "ref_id": "b95", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "2018" }, { "authors": "B Cheng; B Xiao; J Wang; H Shi; T S Huang; L Zhang", "journal": "", "ref_id": "b96", "title": "Higherhrnet: Scale-aware representation learning for bottom-up human pose estimation", "year": "2020" }, { "authors": "V Badrinarayanan; A Kendall; R Cipolla", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b97", "title": "Segnet: A deep convolutional encoder-decoder architecture for image segmentation", "year": "2017" }, { "authors": "D Karimi; S E Salcudean", "journal": "IEEE Transactions on medical imaging", "ref_id": "b98", "title": "Reducing the hausdorff distance in medical image segmentation with convolutional neural networks", "year": "2019" }, { "authors": "Y Long; G.-S Xia; S Li; W Yang; M Y Yang; X X Zhu; L Zhang; D Li", "journal": "IEEE Journal of selected topics in applied earth observations and remote sensing", "ref_id": "b99", "title": "On creating benchmark dataset for aerial image interpretation: Reviews, guidances, and million-aid", "year": "2021" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b100", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "J Chen; Y Lu; Q Yu; X Luo; E Adeli; Y Wang; L Lu; A L Yuille; Y Zhou", "journal": "", "ref_id": "b101", "title": "Transunet: Transformers make strong encoders for medical image segmentation", "year": "2021" }, { "authors": "T Xiao; Y Liu; B Zhou; Y Jiang; J Sun", "journal": "", "ref_id": "b102", "title": "Unified perceptual parsing for scene understanding", "year": "2018" }, { "authors": "H Cao; Y Wang; J Chen; D Jiang; X Zhang; Q Tian; M Wang", "journal": "Springer", "ref_id": "b103", "title": "Swin-unet: Unet-like pure transformer for medical image segmentation", "year": "2022" }, { "authors": "H Zhao; J Shi; X Qi; X Wang; J Jia", "journal": "", "ref_id": "b104", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "A Kirillov; R Girshick; K He; P Dollár", "journal": "", "ref_id": "b105", "title": "Panoptic feature pyramid networks", "year": "2019" }, { "authors": "R Xu; C Wang; J Zhang; S Xu; W Meng; X Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b106", "title": "Rssformer: Foreground saliency enhancement for remote sensing land-cover segmentation", "year": "2023" }, { "authors": "L Wang; R Li; D Wang; C Duan; T Wang; X Meng", "journal": "Remote Sensing", "ref_id": "b107", "title": "Transformer meets convolution: A bilateral awareness network for semantic segmentation of very fine resolution urban scene images", "year": "2021" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b108", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "L Wang; R Li; C Duan; C Zhang; X Meng; S Fang", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b109", "title": "A novel transformer based semantic segmentation scheme for fine-resolution remote sensing images", "year": "2022" }, { "authors": "Z Zhou; M M Rahman Siddiquee; N Tajbakhsh; J Liang", "journal": "Springer", "ref_id": "b110", "title": "Unet++: A nested u-net architecture for medical image segmentation", 
"year": "2018-09-20" }, { "authors": "E Romera; J M Alvarez; L M Bergasa; R Arroyo", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b111", "title": "Erfnet: Efficient residual factorized convnet for real-time semantic segmentation", "year": "2017" }, { "authors": "C Yu; J Wang; C Peng; C Gao; G Yu; N Sang", "journal": "", "ref_id": "b112", "title": "Bisenet: Bilateral segmentation network for real-time semantic segmentation", "year": "2018" }, { "authors": "J Zhuang; J Yang; L Gu; N Dvornek", "journal": "", "ref_id": "b113", "title": "Shelfnet for fast semantic segmentation", "year": "2019" }, { "authors": "P Hu; F Perazzi; F C Heilbron; O Wang; Z Lin; K Saenko; S Sclaroff", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b114", "title": "Real-time semantic segmentation with fast attention", "year": "2020" }, { "authors": "X Zheng; L Huan; G.-S Xia; J Gong", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b115", "title": "Parsing very high resolution urban scene images by learning deep convnets with edgeaware loss", "year": "2020" }, { "authors": "A Srinivas; T.-Y Lin; N Parmar; J Shlens; P Abbeel; A Vaswani", "journal": "", "ref_id": "b116", "title": "Bottleneck transformers for visual recognition", "year": "2021" }, { "authors": "M Oršić; S Šegvić", "journal": "Pattern Recognition", "ref_id": "b117", "title": "Efficient semantic segmentation with pyramidal fusion", "year": "2021" }, { "authors": "R Li; S Zheng; C Duan; J Su; C Zhang", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b118", "title": "Multistage attention resu-net for semantic segmentation of fine-resolution remote sensing images", "year": "2021" }, { "authors": "G Li; I Yun; J Kim; J Kim", "journal": "", "ref_id": "b119", "title": "Dabnet: Depth-wise asymmetric bottleneck for real-time semantic segmentation", "year": "2019" }, { "authors": "R Li; S Zheng; C Zhang; C Duan; L Wang; P M Atkinson", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b120", "title": "Abcnet: Attentive bilateral contextual network for efficient semantic segmentation of fine-resolution remotely sensed imagery", "year": "2021" } ]
[ { "formula_coordinates": [ 6, 364.21, 614.98, 198.83, 29.39 ], "formula_id": "formula_0", "formula_text": "GD = 1 -2 2 l=1 w l n r ln p ln 2 l=1 w l n r ln + p ln (1)" }, { "formula_coordinates": [ 6, 405.55, 726.63, 153.62, 26.58 ], "formula_id": "formula_1", "formula_text": "w l = 1 N i=1 r 2 ln (2" }, { "formula_coordinates": [ 6, 559.16, 733.69, 3.87, 8.64 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 7, 108.65, 197.11, 191.37, 30.55 ], "formula_id": "formula_3", "formula_text": "L ce = - 1 N N n=1 K k=1 y (n) k log ŷ(n) k(3)" }, { "formula_coordinates": [ 7, 113.16, 237.92, 186.87, 23.3 ], "formula_id": "formula_4", "formula_text": "q i = 1 -ε if i = y, ε/(K -1) otherwise(4)" }, { "formula_coordinates": [ 7, 52.77, 428.88, 247.26, 34.72 ], "formula_id": "formula_5", "formula_text": "LMHD = 1 N N i=0 Ω (g(pi) -s θ (pi)) 2 (DG(pi) β + DS(pi) β )dpi,(5)" }, { "formula_coordinates": [ 7, 53.57, 735.25, 246.45, 17.77 ], "formula_id": "formula_6", "formula_text": "LCEA = Ω (g(p) -s θ (p)) 2 (DG(p ∨ 0p) β + DS(p ∼ 0p) β )dp (6)" }, { "formula_coordinates": [ 8, 74.55, 393.19, 221.99, 29.71 ], "formula_id": "formula_7", "formula_text": "L q,k + ,k -= -lg exp (q • k + /τ ) exp (k • k + /τ ) + k - exp (q • k -/τ ) (7" }, { "formula_coordinates": [ 8, 296.54, 401.05, 3.48, 7.77 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 8, 129.97, 513.3, 170.05, 8.35 ], "formula_id": "formula_9", "formula_text": "θ k = mθ k + (1 -m)θq(8)" }, { "formula_coordinates": [ 9, 59.37, 634.75, 202.19, 46.32 ], "formula_id": "formula_10", "formula_text": "-ResNet v1 ✔ ✔ Hi-ResNet v2 ✔ ✔ 2 ✔ Hi-ResNet v3 ✔ ✔ ✔ ours ✔ ✔ ✔ 3 ✘ 1 extend" } ]
10.18653/v1/2020.acl-main.424
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b33", "b20", "b48", "b53", "b28", "b52", "b14", "b24", "b10", "b29", "b63", "b67", "b6", "b18", "b66", "b60", "b55", "b7", "b56", "b15", "b22", "b21" ], "table_ref": [], "text": "Style representation learning aims to represent the stylistic attributes of an authored text. Prior work has treated the style of a text as separable from the content. Stylistic attributes have included, but are not limited to, linguistic choices in syntax, grammar, spelling, vocabulary, and punctuation (Jafaritazehjani et al., 2020). Style representations should represent two texts with similar stylistic attributes more closely than texts with different attributes independent of what content is present in the texts.\nStylometry, the analysis of style, applies forensic linguistics to tasks like authorship attribution. Stylometry often relies on semi-manual analysis by forensic linguistic experts (Mosteller and Wallace, 1963;Holmes, 1994;Rosso et al., 2016). Computational stylometry often uses rule-based approaches utilizing count-based features like the frequencies of function words (Stamatatos, 2009;Koppel et al., Figure 1: An example of a 768-dimensional interpretable style vector produced by LISA, trained using a GPT-3 annotated synthetic stylometery dataset.\n2009; Tausczik and Pennebaker, 2010). More modern, neural approaches attempt to learn style representations in an unsupervised fashion through a proxy task like style transfer (Shen et al., 2017;Fu et al., 2018;John et al., 2019;Dai et al., 2019;Li et al., 2019;Yi et al., 2021;Zhu et al., 2022) or authorship verification (Boenninghoff et al., 2019;Hay et al., 2020;Zhu and Jurgens, 2021;Wegmann et al., 2022). These stronger neural approaches, unlike simpler frequency-based techniques, are uninterpretable. This makes it difficult to effectively analyze their representations and their failure modes, and precludes their usage in real-world authorship attribution scenarios because interpretability and verification is critical for legal admissibility (Tiersma and Solan, 2002).\nWith this motivation, we propose a humaninterpretable style representation model M which, for a given text t, produces a D-dimensional vector M(t) ∈ [0.0, 1.0] D . Each dimension corresponds to one of D style attributes {a 0 , a 1 , . . . , a D }. Each element at dimension d of this vector is constrained in the range [0.0, 1.0] to represent the probability of the corresponding style attribute a d being present in the text t. See Figure 1 for a visualization of a result from our final trained model with D = 768 dimensions. An immediate obstacle to train such a model is that no large dataset of texts with stylometric annotations currently exists; annotating a large number of texts on a wide variety (D = 768) of stylistic attributes would likely require annotators with linguistic expertise and be prohibitively expensive. Given this, we use GPT-3 (Brown et al., 2020), a large language model (LLM), and zeroshot prompts to generate a synthetic dataset we call STYLEGENOME of human-interpretable stylometric annotations for various texts. Our approach is motivated by recent works showing models trained on synthetic datasets annotated by prompting LLMs can match and sometimes even outperform models trained on human-labeled datasets (Wang et al., 2022;Gilardi et al., 2023;Huang et al., 2022;Honovich et al., 2022). Training on STYLEGENOME, we develop the Linguistically-Interpretable Style Attribute (LISA) embedding model. 
We summarize our primary contributions:\n1. We outline an unsupervised method for producing interpretable style embeddings using zero-shot prompting and distillation.\n2. We generate and release STYLEGENOME, a synthetic stylometry dataset with ~5.5M examples, the first large-scale dataset with texts paired with wide range of stylometric annotations.\n3. We train, evaluate, and release LISA, the first-ever interpretable style embedding model with a wide variety of linguistic dimensions (D = 768). We find LISA matches the performance of existing style representations, while allowing for explainability and transparency." }, { "figure_ref": [], "heading": "Generating STYLEGENOME", "publication_ref": [ "b26", "b2", "b60", "b7", "b36" ], "table_ref": [], "text": "To create STYLEGENOME for training LISA, we select Reddit data from the Million User Dataset (MUD) (Khan et al., 2021;Andrews and Bishop, 2019) to stylometrically annotate following prior work that uses Reddit to source a diversity of styles from different authors (Wegmann et al., 2022). We sample 10 random posts per author for 1,000 random authors, resulting in 10,000 total posts selected for annotation. We display some of the diversity of styles captured in the following examples from our Reddit authors. They vary in formality, punctuation, emoji usage, etc.: We use GPT-31 (Brown et al., 2020;Ouyang et al., 2022) to annotate each post in two stages.\nStage 1 We elicit the model with a zero-shot prompt to generate a description of the style of a given Reddit post:\nPrompt: Write a long paragraph describing the unique grammar style of the following passage without referring to specifics about the topic.\nPassage: Wow that's really cool. I'll have to watch for that next time I watch it.\nDescription:" }, { "figure_ref": [], "heading": "↓", "publication_ref": [], "table_ref": [], "text": "Generation: The passage reflects a conversational style in its grammar, with short, simple sentences that are easy to understand. The language is informal and direct, with the speaker expressing their enthusiasm for the topic in a straightforward manner. The speaker also uses contractions, such as \"I'll,\" to emphasize their excitement. The speaker also uses a casual tone, with the phrase \"really cool\" to emphasize their interest in the topic. The speaker also uses the present tense to express their anticipation for the future, with the phrase \"next time I watch it.\" This style of grammar is effective in conveying the speaker's enthusiasm and interest in the topic.\nStage 2 We use another zero-shot prompt to standardize the generations from Stage 1 into short, declarative sentences with a uniform structure:\nPrompt: Here's a description of an author's writing style for a passage: The passage utilizes a conversational style of grammar, with short...\nRewrite this description as a long list of short sentences describing the author's writing style where each sentence is in the format of \"The author is X.\" or \"The author uses X.\".\nOutput:" }, { "figure_ref": [], "heading": "↓", "publication_ref": [], "table_ref": [], "text": "Generation: The author is using a conversational style of grammar. The author is using short, simple sentences. The author is using language that is informal and direct. The author is expressing enthusiasm for the topic in a straightforward manner. The author is using contractions, such as \"I'll\". The author is using a casual tone.\nThe author is emphasizing their interest in the topic with the phrase \"really cool\". 
The author is using the present tense to express anticipation for the future.\nWe run both stages with 93 different Stage 1 prompts for all 10,000 posts. The remaining 87 prompts target narrow and specific dimensions of style:\n1. Write a description of whether the author of the following passage has any figurative language ..." }, { "figure_ref": [], "heading": "2.", "publication_ref": [], "table_ref": [], "text": "Write a description of whether the author of the following passage has any swear words ..." }, { "figure_ref": [], "heading": "3.", "publication_ref": [ "b56" ], "table_ref": [ "tab_2" ], "text": "Write a description of whether the author of the following passage has any repeated words ... The 87 targeted prompts are derived from surveys of stylometry literature, and they cover all of (Tausczik and Pennebaker, 2010)'s linguistic and psychological categories. See Appendix A.2 for more details. We report the results of an ablation experiment between the two Stage 1 prompt categories in Appendix C. Appendix D details dataset annotation costs.\nSTYLEGENOME The output of Stage 2 is sentence tokenized3 and filtered to keep only sentences beginning with \"The author\". We refer to these sentences as human-interpretable style attributes. Our method annotates the texts with nearly 1.3M style attributes. These style attributes are represented in natural language so \"The author creates a conversational tone\" and \"The author has a conversational tone\" are counted separately in the raw dataset. Our training procedure in Section 3.1 is able to train directly on these natural language style attributes, obviating a normalization step. Some annotations may be hallucinated resulting in a noisy dataset, but we choose to train on the full synthetic dataset, without manual intervention, to maintain an unsupervised procedure following prior work (Wang et al., 2022). We hypothesize our model will find signal in the noise, which we evaluate in Section 4. The final dataset statistics can be found in Table 1. \n# of" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We first distill stylometric annotation knowledge from GPT-3 into a Style Feature Agreement Model (SFAM). Given a text t and a style attribute a as input, SFAM(t, a) produces an agreement score between 0.0 and 1.0 representing the probability of the style attribute being present in the text. By selecting a set of D style attributes {a 0 , a 1 , . . . , a D }, we can use SFAM to construct LISA, our interpretable style representation model that produces D-dimensional vectors: MLISA(t) = SFAM(t, a0), SFAM(t, a1), . . . , SFAM(t, aD) The Euclidean distance between style vectors for two texts ∥M LISA (t 2 ) -M LISA (t 1 )∥ 2 would not be particularly meaningful. We can multiply a trained weight vector w or a weight matrix W to the style vectors, that act as simple interpretable embedding layers. This operation would make the Euclidean distance more meaningful, for example ∥M LISA (t 2 ) * w -M LISA (t 1 ) * w∥ 2 . We call the result of a LISA style vector multiplied by w or W a LISA style embedding. We discuss training in detail next, leaving hyperparameter and implementation specifics in Appendix E." }, { "figure_ref": [], "heading": "SFAM", "publication_ref": [ "b4", "b30", "b43" ], "table_ref": [], "text": "We use distillation (Ba and Caruana, 2014) to teach the stylometric annotation capabilities of GPT-3 to EncT54 (Liu et al., 2021;Raffel et al., 2020), a smaller, more efficient student model. 
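As a sketch of the construction just described: a LISA vector is simply the stack of SFAM agreement scores over the chosen attribute set (D = 768 in the paper), and Euclidean distances become meaningful only after the learned re-weighting. The sfam stand-in, the three example attributes, and the unit weights below are illustrative placeholders rather than the trained models.

```python
import numpy as np

def sfam(text, attribute):
    # Toy stand-in for SFAM(t, a); the real scorer is a fine-tuned EncT5 classifier.
    return 1.0 if "!" in text and "exclamation" in attribute else 0.5

def lisa_vector(score_fn, text, attributes):
    # M_LISA(t) = (SFAM(t, a_0), ..., SFAM(t, a_D)), each entry in [0, 1].
    return np.array([score_fn(text, a) for a in attributes])

def lisa_distance(v1, v2, w):
    # Distance after the interpretable re-weighting by a weight vector w;
    # a weight matrix W would use v @ W instead of v * w.
    return np.linalg.norm(v2 * w - v1 * w)

attributes = [  # the real attribute set has 768 entries
    "The author is using a formal tone.",
    "The author uses two exclamation marks.",
    "The author is using swear words.",
]
w = np.ones(len(attributes))
d = lisa_distance(lisa_vector(sfam, "Wow!!", attributes),
                  lisa_vector(sfam, "Dear Sir or Madam,", attributes), w)
```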
Table 2: Correlation of agreement scores produced by SFAM against human judgments on texts over a wide variety linguistic and authorship dimensions. The natural language style attributes used as input to SFAM when producing the agreement scores for each dataset are also provided." }, { "figure_ref": [], "heading": "Sampling Batches", "publication_ref": [], "table_ref": [], "text": "We train EncT5 with a binary classifier head on randomly sampled batches of examples (x i , y i ) where each batch contains an equal number of positive (y i = 1) and negative (y i = 0) examples. The input x i consists of a style attribute a and an author's text t concatenated in a string \"{{a}}|||{{t}}\", for example x i = \"The author is using a positive tone.|||You got this ;)\". Labeled pairs from STYLEGENOME are sampled as positive examples such that each style attribute is sampled with equal probability. For each positive example, we perform negative sampling and retrieve a negative example text where the positive example's style attribute is likely not present. To do this, we find the 10,000 most dissimilar style attributes to the positive example's style attribute with SBERT5 similarity. We select a text that is positively labeled with a randomly selected dissimilar style attribute as the negative example text." }, { "figure_ref": [], "heading": "Training and Inference", "publication_ref": [ "b43" ], "table_ref": [], "text": "Training over the ~1.3M unique style attributes in STYLEGENOME, our training dataset for SFAM is effectively a multitask mixture. Style attributes presented in natural language to the model allows the pre-trained T5 encoder to jointly learn between style attributes and generalize to unseen style attributes using the semantic information in those natural language descriptions. This is especially desirable since some style attributes only have a handful of text examples, while others may have thousands. This setup resembles the multitask mixture trained on in Raffel et al. (2020). To validate training, we hold-out 50 random style attributes that have between 30-50 examples each as a validation set. We validate learning during training by measuring the ability of SFAM to generalize and produce accurate agreement scores for the unseen style attributes. At inference, we softmax the binary class logits to interpret them as probabilities and we take the probability of y i = 1 as the agreement score. We also study the effect of the size of STYLEGENOME on performance and find that as the synthetic dataset grows, validation performance improves and SFAM generalizes to better predict agreement scores for unseen style attribute and text pairs (see Appendix B)." }, { "figure_ref": [], "heading": "LISA Style Vectors", "publication_ref": [ "b13" ], "table_ref": [], "text": "As discussed earlier, SFAM is directly used to produce the LISA interpretable style vectors. We arbitrarily choose D = 768 in this work, following the dimensionality of prior style vectors and BERT (Devlin et al., 2019). We now detail how we select the style attributes associated with each dimension {a 0 , a 1 , . . . , a 768 } with little manual in- 13 POT MORDE BABY WOOOOOOOOOOOOOOOO 1. 1.00 -The author is using an elongated word.\n2. 1.00 -The author is using only English words.\n3. 1.00 -The author is using a single word. 4. 1.00 -The author is using swear words. 5. 1.00 -The author uses two exclamation marks.\nNo wonder everyone resorts to performing a murder spree eventually.\n1. 1.00 -The author is scornful. 2. 
1.00 -The author is ungenerous. 3. 0.99 -The author is expressing antisocial behaviors. 4. 0.99 -The author is uncaring. 5. 0.99 -The author is dramatic.\nPodcast originally refers to an iPod, and before that there was definitely TWiT, which still calls itself a Netcast 1. 0.97 -The author uses a variety of words to describe the same concept. 2. 0.97 -The author is simply describing a product. * 3. 0.95 -The author uses specific terms related to the topic. 4. 0.94 -The author has a deep understanding of the topic. 5. 0.94 -The author is using words focusing on the past.\nTable 3: The five highest scoring dimensions from the 768-dimensional LISA vector produced on various Reddit texts. The interpretable style attribute corresponding to each dimension is displayed along with the score. We manually inspect the top style attributes and annotate them as reasonable, plausible, or incorrect. Attributes annotated with * blur the line between style and content. Error analysis can be found in Section 4.1.\ntervention. The first 87 attributes {a 0 , a 1 , . . . , a 86 } directly correspond to the features of our 87 targeted prompts in the form of \"The author is using {{targeted_feature}}\". The remaining 681 are downselected from the ~1.3M style attributes with filtering heuristics, choosing those that appear for at least 10 authors, but no more than 600 authors (attempting to select for frequent, yet discriminative attributes). Once a style attribute is selected to be part of the 768, we do not select another style attribute with SBERT cosine similarity > 0.8 to largely avoid near-duplicates. We also reject style attributes for selection that are undesirable for interpretability. 6 Examples of LISA can be found in Figure 1 and Table 3. With 768 dimensions, producing a single LISA vector would require 768 inferences of SFAM, a computationally expensive operation. To address this, we produce the LISA representations for 1,000,000 random Reddit posts from MUD. We then distill into a new EncT5 model with 768 regression labels. We hold-out 10,000 examples as a validation set. After distillation to the dedicated model, the 768-dimensional style vector can be produced in a single forward pass with minimal degradation (validation MSE = 0.005)." }, { "figure_ref": [], "heading": "LISA Style Embeddings", "publication_ref": [ "b27", "b51", "b60", "b47" ], "table_ref": [], "text": "We experiment with two different simple and interpretable embedding layers, a weight vector (w 768 ) and a weight matrix (W 768×64 ). We attach these on top of the LISA model and train just the layer using a contrastive learning objective and triplet loss (Khosla et al., 2020;Schroff et al., 2015). We also experiment with two different authorship datasets from prior works to train the embedding layer; we refer to these datasets as the Wegmann dataset (Wegmann et al., 2022) and the LUAR dataset (Rivera-Soto et al., 2021). Like the prior work, we assume an author has consistent style between their different texts. Given some anchor text by an author, we use another text by the same author as a positive example, and text by a different author as a negative example for our triplets. This objective minimizes the distance between two texts by the same author and maximizes the distance between texts by different authors, learning a meaningful metric." 
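A minimal sketch of how such an interpretable embedding layer could be fit with a triplet objective on top of frozen LISA vectors; the margin, optimizer, learning rate, batch size, and the random tensors standing in for real author triplets are assumptions made for illustration. The matrix variant (W, 768×64) would simply replace the element-wise weights with nn.Linear(768, 64, bias=False).

```python
import torch
import torch.nn as nn

class WeightVectorLayer(nn.Module):
    # Element-wise interpretable re-weighting w (768,) applied to LISA vectors.
    def __init__(self, dim=768):
        super().__init__()
        self.w = nn.Parameter(torch.ones(dim))

    def forward(self, v):            # v: (batch, 768) LISA style vectors
        return v * self.w

layer = WeightVectorLayer()
criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(layer.parameters(), lr=1e-3)

# One step: anchor and positive are texts by the same author, negative by another.
anchor, positive, negative = (torch.rand(32, 768) for _ in range(3))
loss = criterion(layer(anchor), layer(positive), layer(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```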
}, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b59", "b57" ], "table_ref": [ "tab_5" ], "text": "We first evaluate to what degree SFAM, which is ultimately used to build LISA representations, learns useful stylometric annotation capabilities that align with human reviewers. We then evaluate LISA itself on STEL, a framework purpose-built for evaluating the quality of style measures (Wegmann and Nguyen, 2021). All evaluations were completed after the collection of the STYLEGENOME dataset.\nCorrelation to Human Judgments We conduct a broad set of 55 studies across 21 datasets in 7 distinct categories of linguistic style and authorship dimensions in Table 2. We measure the correlation of SFAM's agreement scores to human judgments. SFAM performs stronger on dimensions like formality, sentiment, and emotion than dimensions like linguistic acceptability. This is likely an artifact of the effectiveness of GPT-3 in annotating these categories, an expected result given prior work has shown language models struggle with identifying these features (Warstadt et al., 2020). Interestingly, SFAM demonstrates some limited ability to perform authorship profiling, a task adjacent to stylometry. The ability to probe SFAM in an interpretable manner helps identify which categories of features it can reliably represent, whereas prior approaches were more opaque. Overall, the Table 2 results demonstrate SFAM's annotations do correlate with human judgments on some important dimensions of style. We hypothesize future research with larger datasets (> 10,000 posts), more diverse sources of texts, and larger and more performant LLMs may further broaden and improve learned stylometric annotation capabilities.\nSTEL In Table 4, we provide the results of evaluating LISA using STEL. The STEL task evaluates whether two texts with similar styles can be matched using the distance/similarity metric defined by a style representation. We compare with other content-independent style representations, or methods that explicitly limit representation of content in favor of style. LISA explicitly limits the representation of content through the 768 stylefocused attributes that act as a bottleneck. Contentaware representations like SBERT, on the other hand, have direct access to the text and may be able to represent the content in the text to an extreme degree, representing the usage of a specific rare" }, { "figure_ref": [], "heading": "Text Style Attribute", "publication_ref": [ "b47", "b60" ], "table_ref": [], "text": "Subscribed. Interesting idea. I would like to see some advanced stats on hitting percentages to different locations on the court. For example, having the court broken up into maybe 12 zones and then hitting percentages from each position to those zones. I remember seeing an article that did this years ago and I have never been able to find anything online. I said recently on this sub that the deep angle shot from the left side or right side was the highest percentage shot in volleyball, but I was not able to back up my claim with any sources or anything. Anyways, I am a VB nerd, no doubt. Interested to see what direction you take this. Cheers!\nThe author is being polite.\nYeah I also work in QA, and seeing this kind of stuff get released is maddening. About a year ago working on a new platform we were seeing bugs in the hundreds each week, we pushed back the release of the product 3 months because basically it didn't work. 
If it was up to the devs, they'd have released it on time, because the stuff they'd written code for worked. Thorough doesn't even cover the work we go through every 3 months, and Niantic's approach seems completely amateur from this side. They're putting bandaids on problems and hiding things like the 3 step problem behind curtains without seemingly fixing anything, although I do have to say their balance tweaks to battling have been a big step in the right direction.\nThe author is using a personal anecdote to illustrate their point.\nThank you. I'd be interested in reading more about your experiences, in addition to the \"American Wedding\" story. Are you watching the stream? I wish there was a way to find out how many people in the world are watching it. The music is lovely, huh? God damn. He's got his bunny Fair Isle sweater on, drinking Dunkin' Donuts coffee. I would have thought him a Starbucks man. :-)\nThe author is using an emoji.\nTable 5: Sentence-level LISA vectors over each sentence from a longer passage of text can help identify and quantify which sentences contribute to overall style attributes scored on the longer passage providing granular interpretability.\nword or discussion of a specific concept. We provide the results of content-aware representations simply for reference. We find LISA embeddings are able to closely match (and on average slightly outperform) prior style representations on STEL while providing interpretability.\nSentence-Level Interpretability In Table 5, we demonstrate how visualizing a dimension of sentence-level LISA vectors can help explain which sentences contribute to a dimension activated on a passage-level LISA vector.\nForensic Interpretability Typically for authorship attribution tasks, content-aware representations that capture both content and style are used to make a determination. Author style, however, is still an important component in determining attribution (Rivera-Soto et al., 2021). Offering a clear explanation and presenting supporting evidence is crucial, particularly in the context of forensic analysis, such as when presenting evidence in a court trial. Explainability has often been overlooked in neural approaches to authorship attribution tasks.\nTo motivate this as a future research direction using our interpretable stylometric representations and our general approach, we provide an example of explanations on the Contrastive Authorship Verifi-cation task from Wegmann et al. (2022) in Table 6 with LISA (LUAR + W ). Further examples and discussion on how the top common and distinct style attributes are ranked can be found in Appendix G." }, { "figure_ref": [], "heading": "Error Analysis", "publication_ref": [ "b23", "b5", "b38" ], "table_ref": [], "text": "We highlight insights and observations around common failure modes of our technique in this section. We annotate the common failure modes with their percentage rate of occurrence.7 \nContent vs. Style Attributes (3%) It is unclear whether style and content can truly be separated as some content features are important for style or profiling an author (Jafaritazehjani et al., 2020;Bischoff et al., 2020;Patel et al., 2022). Even after filtering, 3% of dimensions of LISA still represent content. For example, \"The author is using words related to the game they are discussing\". 
However, while LISA may have the ability to represent that two texts are both discussing the topic of video games, it does not have the direct ability that a content-aware approach would have to represent which specific video game is being discussed, due to the limited set of 768 features that act as a bottleneck. Our approach also allows visibility into understanding how much of the representation derives from content-related features, while other neural representations are opaque and may use content-related features in a way that cannot be easily assessed.

Texts Top 3 Common/Distinct Style Attributes
Anchor: Devices that use two pronged instead of three pronged plugs are required to meet certain safe design requirements. Among other things, if a device has a switch, the switched line MUST BE hot, not neutral. The polarized plugs make sure that the right prong/wire is hot. This is why devices that have no switches (primarily wall warts) need not have polarized plugs. Same Author: Your diaphragm would be trying to contract against the air pressure in your lungs. That's why deep sea diving requires regulators, to match the pressure of the air supply to the pressure surrounding your rib cage. You can breathe against a maximum of about 1/2 PSI, which is not enough pressure to adequately oxygenate your blood.
(0.89, 1.00) -The author is using a scientific approach. (0.96, 0.98) -The author is using a combination of technical terms and everyday language. (0.91, 0.84) -The author is using formal and professional language.
Different Author: That's great! I'm glad it seems to be finding its' niche. Now if they could just make a Star Wars version of this game, I'd happily swallow that fat learning curve and overcome my frustrations with the combat system. ;) (0.06, 0.99) -The author is using words related to the game they are discussing. * (0.00, 0.88) -The author is using an emoji. (0.02, 0.87) -The author uses an emoticon at the end.
Table 6: Example interpretable explanations on the Contrastive Authorship Verification task. The top style attributes in common between the Anchor text and a text by the Same Author are shown. The top distinct style attributes between the Anchor text and a text by a Different Author are also shown. The scores of each style attribute against the texts are shown in (•, •/•). Attributes annotated with * blur the line between style and content. Error analysis can be found in Section 4.1. Further examples and details on style attribute ranking can be found in Appendix G." }, { "figure_ref": [], "heading": "Conflating Style Attributes with Content (2%)", "publication_ref": [ "b19" ], "table_ref": [], "text": "For some style attributes, LISA conflates the content of a text with the presence of the style attribute. For example, \"The author is cautious\" may have a high agreement score on any text containing the word \"caution\" even if the author is not actually expressing caution in the text.

Spurious Correlations (6%) For other style attributes, LISA has learned spurious correlations. For example, \"The author uses two exclamation marks\" often has a high agreement score on any text that is exclamatory in nature but does not actually use exclamation marks. An example can be found in Table 3.

Fundamental Errors (10%) LISA sometimes produces a high agreement score for text displaying the polar opposite of a style attribute, or produces a high agreement score for an attribute that simply is not present in the text. Table 3 demonstrates some of these incorrect examples.
Inspecting our dataset, this error happens both due to EncT5's internal representations likely aligning on relatedness instead of similarity (Hill et al., 2015) and due to hallucination and annotation errors by GPT-3. Hallucinated generations is a common issue with any LLM-guided approach and we discuss it further in Limitations along with potential future mitigations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a promising novel approach to learning interpretable style representations. To overcome a lack of stylometrically annotated training data, we use a LLM to generate STYLEGENOME, a synthetic stylometry dataset. Our approach distills the stylometric knowledge from STYLEGENOME into two models, SFAM and LISA. We find that these models learn style representations that match the performance of recent direct neural approaches and introduce interpretability grounded in explanations that correlate with human judgments. Our approach builds towards a research direction focused on making style representations more useful for downstream applications where such properties are desirable such as in a forensic analysis context. Future directions that introduce human-in-the-loop supervised annotations or newer, larger, and better aligned LLMs for annotation have the potential to yield further gains in both performance and interpretability.\nModel and Data Release We release our dataset (STYLEGENOME) and our two models (SFAM and LISA) to further research in author style." }, { "figure_ref": [ "fig_1" ], "heading": "Limitations and Broader Impacts", "publication_ref": [ "b33", "b25", "b46", "b35", "b42" ], "table_ref": [], "text": "Limitations Handcrafted features by forensic linguists typically rely on frequency counts of word usage, usage of unique words or phrases, etc. (Mosteller and Wallace, 1963). The space of these kinds of features is non-enumerable and would not be well-represented with our technique that scores a fixed set of 768 interpretable features. Pure neural approaches may capture these kinds of features, but are non-interpretable and may capture undesirable content-related features. We explicitly trade-off the use of these kinds of features in this work to achieve interpretability. While we demonstrate our synthetic annotations are enough for a model to learn to identify stylistic properties in text in Table 2, they cannot be fully relied on yet for the reasons we discuss in Section 4.1. As large language models scale and improve, however, we believe this work could benefit from increasing coherency and decreasing hallucination in the annotations (Kaplan et al., 2020). STYLEGENOME is collected only on 10,000 English Reddit posts, however, larger datasets may improve performance as we show in Figure 2 and future research in multilingual LLMs may make it feasible to replicate this procedure for other languages.\nEthical considerations Style representations are useful for text style transfer (Riley et al., 2021) and in manipulating the output of machine generated text to match a user's style, for example, in machine translation (Niu et al., 2017;Rabinovich et al., 2017). While style transfer can be a useful benign commercial application of this work, superior style representations may aid the impersonation of authors. We demonstrate how style representations may aid legitimate cases of authorship attribution, a task that is typically done by forensic linguist experts. 
Our work introduces an interpretable approach, an important step in legitimizing the use of computational models for authorship attribution by providing explanations for predictions that can be audited and verified." }, { "figure_ref": [], "heading": "Diversity and inclusion", "publication_ref": [], "table_ref": [], "text": "We believe style representations that capture wider dimensions of style can help aid in analyzing and representing minority writing styles in downstream applications like style transfer. To give GPT-3 more context, we also substitute {{target_feature_definition}} with a definition of the target feature, also generated by GPT-3. The full set of targeted prompts can be found in the released source package for this paper." }, { "figure_ref": [], "heading": "A.3 Standardization Prompt Templates", "publication_ref": [], "table_ref": [], "text": "The descriptions of style generated from the prompts in Appendix A.1 and Appendix A.2 are substituted into the following standardization prompt:\nHere's a description of an author's writing style for a passage: {{description}} Rewrite this description as a long list of short sentences describing the author's writing style where each sentence is in the format of \"The author is X.\" or \"The author uses X.\"." }, { "figure_ref": [], "heading": "Output:", "publication_ref": [], "table_ref": [], "text": "which transforms the verbose descriptions into short, declarative, uniform sentences beginning with \"The author...,\" which are the final style attributes used in building the STYLEGENOME dataset SFAM is trained on." }, { "figure_ref": [ "fig_1" ], "heading": "B Effect of STYLEGENOME Dataset Size", "publication_ref": [], "table_ref": [], "text": "When training SFAM, we experiment with artificially limiting the size of the synthetic dataset, by limiting the number of authors in the dataset, to determine the effect of dataset size on the validation performance. In Figure 2, we find that as the synthetic dataset grows, validation performance improves and SFAM generalizes to better predict agreement scores for unseen style attribute and text pairs. " }, { "figure_ref": [], "heading": "C Annotation Prompts Ablation", "publication_ref": [], "table_ref": [], "text": "Prompts Used to Generate STYLEGENOME Validation F1\nOpen-ended Prompts 0.865 Targeted Prompts 0.898 Open-ended Prompts & Targeted Prompts 0.920 " }, { "figure_ref": [], "heading": "D STYLEGENOME Annotation Cost", "publication_ref": [], "table_ref": [], "text": "Our inference cost with the OpenAI API was priced at $0.02 / 1K tokens, with a cost of ~$8 to annotate 10 Reddit posts by a single author with all of our prompts. Our full dataset of 1,000 authors cost ~$8,000 to annotate. " }, { "figure_ref": [], "heading": "F Full SFAM Evaluation Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), the IARPA HIATUS Program (contract 2022-22072200005), and the NSF (Award 1928631). Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, IARPA, NSF, or the U.S. Government." 
}, { "figure_ref": [], "heading": "A Prompt Templates", "publication_ref": [], "table_ref": [], "text": "for targeted prompts, substituting {{target_feature}} with each of the following targeted features: Table 8: Correlation of agreement scores produced by SFAM against human judgments on texts over a wide variety linguistic and authorship dimensions. The natural language style attributes used as input to SFAM when producing the agreement scores for each dataset are also provided." }, { "figure_ref": [], "heading": "G Interpretable Authorship Verification", "publication_ref": [], "table_ref": [], "text": "Texts Top 3 Common/Distinct Style Attributes Anchor: Devices that use two pronged instead of three pronged plugs are required to meet certain safe design requirements. Among other things, if a device has a switch, the switched line MUST BE hot, not neutral. The polarized plugs make sure that the right prong/wire is hot. This is why devices that have no switches (primarily wall warts) need not have polarized plugs. Same Author: Your diaphragm would be trying to contract against the air pressure in your lungs. That's why deep sea diving requires regulators, to match the pressure of the air supply to the pressure surrounding your rib cage. You can breathe against a maximum of about 1/2 PSI, which is not enough pressure to adequately oxygenate your blood.\n(0.89, 1.00) -The author is using a scientific approach. (0.96, 0.98) -The author is using a combination of technical terms and everyday language. (0.91, 0.84) -The author is using formal and professional language.\nDifferent Author: That's great! I'm glad it seems to be finding its' niche. Now if they could just make a Star Wars version of this game, I'd happily swallow that fat learning curve and overcome my frustrations with the combat system. ;) (0.06, 0.99) -The author is using words related to the game they are discussing. * (0.00, 0.88) -The author is using an emoji. (0.02, 0.87) -The author uses an emoticon at the end.\nAnchor: Not sure what the income tax is in Germany, but in the Netherlands the income can be up 50% for the higher income classes. Same Author: The salaries in the US alway blow my mind. A software developer in Amsterdam gets like C40.000/year, maybe C50.000/year if your good, and maybe C60.000/year if you're some kind of manager. Anything position over C100.000/year is basically running the entire company.\n(0.84, 0.90) -The author is using words indicating poverty. (0.87, 1.00) -The author is using words indicating wealth. (0.80, 0.93) -The author is using words related to money.\nDifferent Author: How would you even test this software? The setup would be just insane.\n(0.00, 1.00) -The author is comfortable with technology. (0.00, 0.85) -The author is discussing a product. * (0.13, 0.92) -The author is using formal and professional language.\nAnchor: If only there was something he could have done to avoid this backlash. Like maybe not acting like a complete d**khead. Same Author: I take issue with a faster landing being marked as less skilled. By that logic the slowest, smoothest possible landing would be the most skilled and that is plain wrong. Maybe war machine intentionally does faster and harder landings.\n(1.00, 0.96) -The author is emphasizing the contrast between the two ideas. (0.78, 0.89) -The author is able to draw conclusions. 
(0.97, 0.86) -The author is using an all-or-none thinking style.\nDifferent Author: She was the Ronald Reagan of the UK in the same time period.\n(0.83, 0.05) -The author is describing sexual content. * (0.38, 0.94) -The author is using words related to politics. * (0.74, 0.38) -The author is using parentheticals.\nTable 9: Example interpretable explanations on the Contrastive Authorship Verification task. The top style attributes in common between the Anchor text and a text by the Same Author are shown. The top distinct style attributes between the Anchor text and a text by a Different Author are also shown. The scores of each style attribute against the texts is shown in (•, •/•). We manually inspect the style attributes and annotate them as reasonable, plausible, or incorrect explanations. Attributes annotated with * blur the line between style and content. Error analysis can be found in Section 4.1.\nWe perform this task with LISA (LUAR + W ) and demonstrate interpretability on a few task instances. To rank the top common or distinct style attributes between two style vectors ⃗ v 1 and ⃗ v 2 , we perform a simple calculation. We first calculate the contribution of each dimension d to the Euclidean distance as a measure of the general importance of each dimension. The importance score is defined as:\nTo retrieve the top common style attributes, we rank the dimensions, and the corresponding style attributes, in descending order by the following score function:\nTo retrieve the top distinct style attributes, we rank the dimensions, and the corresponding style attributes, in descending order by the following score function:" }, { "figure_ref": [], "heading": "H Resources", "publication_ref": [ "b7", "b36", "b13", "b31", "b49", "b31", "b13", "b45", "b47", "b60", "b26", "b2", "b59", "b60", "b39", "b44", "b64", "b32", "b37", "b16", "b50", "b12", "b17", "b62", "b41", "b11", "b3", "b9", "b0", "b58", "b58", "b61", "b45" ], "table_ref": [], "text": "We provide links and citations to resources used in this paper which provide license information, documentation, and their intended use. Our usage follows the intended usage of all resources.\nWe utilize the following models:\n• GPT-3175B (text-davinci-003) (Brown et al., 2020;Ouyang et al., 2022) • EncT5 (t5-base) (Devlin et al., 2019;Liu et al., 2019) • DistilRoBERTa (nli-distilroberta-base-v2) (Sanh et al., 2019;Liu et al., 2019;Devlin et al., 2019;Reimers and Gurevych, 2019) • Learning Universal Authorship Representations (LUAR) Embedding model (Rivera-Soto et al., 2021) • Style embedding model from Wegmann et al. 
(2022)

We utilize the following datasets:

• Reddit Million User Dataset (Khan et al., 2021; Andrews and Bishop, 2019)
• STEL dataset (Wegmann and Nguyen, 2021)
• Contrastive Authorship Verification dataset (Wegmann et al., 2022)
• Formality in Online Communication (Pavlick and Tetreault, 2016)
• Grammarly's Yahoo Answers Formality Corpus (Rao and Tetreault, 2018)
• Yelp Reviews Dataset (Zhang et al., 2015)
• IMDB Large Movie Review Dataset (Maas et al., 2011)
• Amazon Customer Reviews Dataset (Amazon.com, 2018): https://s3.amazonaws.com/amazon-reviews-pds/readme.html
• Rotten Tomatoes Movie Review Data (Pang and Lee, 2005)
• App Reviews Dataset (Grano et al., 2017)
• Twitter Sentiment Analysis Training Corpus (Naji, 2012)
• DAIR.AI Emotion Dataset (Saravia et al., 2018)
• GoEmotions Dataset (Demszky et al., 2020)
• Political Slant Dataset (Prabhumoye et al., 2018)
• Twitter User Gender Classification Dataset (CrowdFlower, 2017): https://www.kaggle.com/datasets/crowdflower/twitter-user-gender-classification
• African-American Vernacular English Dataset (Groenwold et al., 2020)
• Shakespeare Dataset (Xu, 2017)
• Wikipedia Bias Dataset (Pryzant et al., 2020)
• HateSpeech18 (de Gibert et al., 2018)
• Offensive Social Media Dataset (Atwell et al., 2022)
• Simple Wikipedia Dataset (Coster and Kauchak, 2011)
• ASSET (Alva-Manchego et al., 2020)
• CoLA (Warstadt et al., 2019)
• BLiMP (Warstadt et al., 2019)

We utilize the following software:

• Transformers (Wolf et al., 2019)
• Sentence-Transformers (Reimers and Gurevych, 2019)
• emoji: https://pypi.org/project/emoji/
• sentence-splitter: https://pypi.org/project/sentence-splitter/

We estimate the total compute budget and detail computing infrastructure used to run the computational experiments found in this paper below:

• 1x NVIDIA RTX A6000 / 30GB RAM / 4x CPU - 230 hours" } ]
Style representation learning builds contentindependent representations of author style in text. To date, no large dataset of texts with stylometric annotations on a wide range of style dimensions has been compiled, perhaps because the linguistic expertise to perform such annotation would be prohibitively expensive. Therefore, current style representation approaches make use of unsupervised neural methods to disentangle style from content to create style vectors. These approaches, however, result in uninterpretable representations, complicating their usage in downstream applications like authorship attribution where auditing and explainability is critical. In this work, we use prompting to perform stylometry on a large number of texts to generate a synthetic stylometry dataset. We use this synthetic data to then train humaninterpretable style representations we call LISA embeddings. We release our synthetic dataset (STYLEGENOME) and our interpretable style embedding model (LISA) as resources.
Learning Interpretable Style Embeddings via Prompting LLMs
[ { "figure_caption": "4. ... see all 87 targeted prompts in Appendix A.2", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Best F1 achieved by SFAM on a held-out validation set of examples at various dataset sizes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Statistics for the STYLEGENOME dataset.", "figure_data": "Reddit Authors1,000# of Reddit Posts10,000# of Interpretable Style Attributes1,255,874# of (Text, Style Attribute) labeled pairs 5,490,847", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Accuracy scores on STEL/STEL-or-Content, an evaluation framework for style measures proposed byWegmann and Nguyen (2021) andWegmann et al. (2022). \"LIWC\" results are fromWegmann and Nguyen (2021). \"LISA\" is the 768-dimensional style vector. \"LISA (...)\" uses LISA embeddings with the training dataset and embedding layer type denoted in (...). Gray indicates worse than random baseline performance on the adversarially challenging STEL-or-Content task. All approaches underperform on STEL-or-Content, but LISA approaches outperform or closely match existing style representation choices on STEL, while providing interpretability.", "figure_data": "ModelFormalComplex Numb3rC'tionAvgInterpretableRandom Baseline0.50/0.50 0.50/0.50 0.50/0.50 0.50/0.50 0.50/0.50Content-Aware RepresentationsSBERT0.78/0.00 0.54/0.01 0.81/0.04 0.86/0.00 0.75/0.01✗LUAR0.80/0.14 0.67/0.00 0.74/0.03 0.77/0.00 0.75/0.04✗Content-Independent Style RepresentationsLIWC0.52/ -0.52/ -0.50/ -0.99/ -0.63/ -✓Wegmann et al. (2022) 0.84/0.69 0.59/0.26 0.56/0.03 0.96/0.02 0.74/0.25✗LISA0.69/0.07 0.57/0.01 0.80/0.03 0.77/0.00 0.71/0.03✓LISA (Wegmann + w)0.72/0.07 0.61/0.03 0.81/0.08 0.68/0.00 0.71/0.05✓LISA (Wegmann + W ) 0.66/0.03 0.56/0.01 0.70/0.01 0.87/0.00 0.70/0.01✓LISA (LUAR + w)0.73/0.05 0.65/0.00 0.85/0.03 0.92/0.00 0.79/0.02✓LISA (LUAR + W )0.81/0.07 0.56/0.01 0.74/0.03 0.82/0.00 0.73/0.03✓", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Best F1 achieved by SFAM on a held-out validation set of examples with different sets of Stage 1 prompts used to annotate Reddit posts and generate the synthetic training data used during distillation.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
Ajay Patel; Delip Rao; Ansh Kothary; Kathleen Mckeown; Chris Callison-Burch
[ { "authors": "Fernando Alva-Manchego; Louis Martin; Antoine Bordes; Carolina Scarton; Benoît Sagot; Lucia Specia", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations", "year": "2020" }, { "authors": "", "journal": "Inc. Amazon.com", "ref_id": "b1", "title": "Amazon Customer Reviews Dataset -s3.amazonaws", "year": "2018" }, { "authors": "Nicholas Andrews; Marcus Bishop", "journal": "", "ref_id": "b2", "title": "Learning invariant representations of social media users", "year": "2019" }, { "authors": "Katherine Atwell; Sabit Hassan; Malihe Alikhani", "journal": "", "ref_id": "b3", "title": "APPDIA: A discourse-aware transformerbased style transfer model for offensive social media conversations", "year": "2022" }, { "authors": "Jimmy Ba; Rich Caruana", "journal": "Advances in neural information processing systems", "ref_id": "b4", "title": "Do deep nets really need to be deep?", "year": "2014" }, { "authors": "Sebastian Bischoff; Niklas Deckers; Marcel Schliebs; Ben Thies; Matthias Hagen; Efstathios Stamatatos; Benno Stein; Martin Potthast", "journal": "", "ref_id": "b5", "title": "The importance of suppressing domain style in authorship analysis", "year": "2020" }, { "authors": "Benedikt Boenninghoff; Robert M Nickel; Steffen Zeiler; Dorothea Kolossa", "journal": "IEEE", "ref_id": "b6", "title": "Similarity learning for authorship verification in social media", "year": "2019" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b7", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b8", "title": "", "year": "" }, { "authors": "William Coster; David Kauchak", "journal": "Association for Computational Linguistics. 
CrowdFlower", "ref_id": "b9", "title": "Simple English Wikipedia: A new text simplification task", "year": "2011" }, { "authors": "Ning Dai; Jianze Liang; Xipeng Qiu; Xuan-Jing Huang", "journal": "", "ref_id": "b10", "title": "Style transformer: Unpaired text style transfer without disentangled latent representation", "year": "2019" }, { "authors": "Ona De Gibert; Naiara Perez; Aitor García-Pablos; Montse Cuadros", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Hate Speech Dataset from a White Supremacy Forum", "year": "2018" }, { "authors": "Dorottya Demszky; Dana Movshovitz-Attias; Jeongwoo Ko; Alan Cowen; Gaurav Nemade; Sujith Ravi", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "GoEmotions: A dataset of fine-grained emotions", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b13", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Zhenxin Fu; Xiaoye Tan; Nanyun Peng; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b14", "title": "Style transfer in text: Exploration and evaluation", "year": "2018" }, { "authors": "Fabrizio Gilardi; Meysam Alizadeh; Maël Kubli", "journal": "", "ref_id": "b15", "title": "Chatgpt outperforms crowd-workers for textannotation tasks", "year": "2023" }, { "authors": "Giovanni Grano; Andrea Di Sorbo; Francesco Mercaldo; Corrado A Visaggio; Gerardo Canfora; Sebastiano Panichella", "journal": "Association for Computing Machinery", "ref_id": "b16", "title": "Android apps and user feedback: A dataset for software evolution and quality improvement", "year": "2017" }, { "authors": "Sophie Groenwold; Lily Ou; Aesha Parekh; Samhita Honnavalli; Sharon Levy; Diba Mirza; William Yang; Wang ", "journal": "", "ref_id": "b17", "title": "Investigating African-American Vernacular English in transformer-based text generation", "year": "2020" }, { "authors": "Julien Hay; Bich-Lien Doan; Fabrice Popineau; Ouassim Ait; Elhara ", "journal": "", "ref_id": "b18", "title": "Representation learning of writing style", "year": "2020" }, { "authors": "Felix Hill; Roi Reichart; Anna Korhonen", "journal": "Computational Linguistics", "ref_id": "b19", "title": "SimLex-999: Evaluating semantic models with (genuine) similarity estimation", "year": "2015" }, { "authors": "I David; Holmes", "journal": "Computers and the Humanities", "ref_id": "b20", "title": "Authorship attribution", "year": "1994" }, { "authors": "Or Honovich; Thomas Scialom; Omer Levy; Timo Schick", "journal": "", "ref_id": "b21", "title": "Unnatural instructions: Tuning language models with (almost) no human labor", "year": "2022" }, { "authors": "Jiaxin Huang; Shixiang Shane Gu; Le Hou; Yuexin Wu; Xuezhi Wang; Hongkun Yu; Jiawei Han", "journal": "", "ref_id": "b22", "title": "Large language models can self-improve", "year": "2022" }, { "authors": "Somayeh Jafaritazehjani; Gwénolé Lecorvé; Damien Lolive; John Kelleher", "journal": "", "ref_id": "b23", "title": "Style versus content: A distinction without a (learnable) difference", "year": "2020" }, { "authors": "Vineet John; Lili Mou; Hareesh Bahuleyan; Olga Vechtomova", "journal": "", "ref_id": "b24", "title": "Disentangled representation learning for non-parallel text style transfer", "year": "2019" }, { "authors": "Jared Kaplan; Sam Mccandlish; T J Henighan; Tom B Brown; Benjamin Chess; Rewon Child; Scott Gray; Alec Radford; Jeff Wu; Dario Amodei", 
"journal": "", "ref_id": "b25", "title": "Scaling laws for neural language models", "year": "2020" }, { "authors": "Aleem Khan; Elizabeth Fleming; Noah Schofield; Marcus Bishop; Nicholas Andrews", "journal": "", "ref_id": "b26", "title": "A deep metric learning approach to account linking", "year": "2021" }, { "authors": "Prannay Khosla; Piotr Teterwak; Chen Wang; Aaron Sarna; Yonglong Tian; Phillip Isola; Aaron Maschinot; Ce Liu; Dilip Krishnan", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "Supervised contrastive learning", "year": "2020" }, { "authors": "Moshe Koppel; Jonathan Schler; Shlomo Argamon", "journal": "Journal of the American Society for Information Science and Technology", "ref_id": "b28", "title": "Computational methods in authorship attribution", "year": "2009" }, { "authors": "Dianqi Li; Yizhe Zhang; Zhe Gan; Yu Cheng; Chris Brockett; William B Dolan; Ming-Ting Sun", "journal": "", "ref_id": "b29", "title": "Domain adaptive text style transfer", "year": "2019" }, { "authors": "Frederick Liu; Siamak Shakeri; Hongkun Yu; Jing Li", "journal": "", "ref_id": "b30", "title": "Enct5: Fine-tuning t5 encoder for non-autoregressive tasks", "year": "2021" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b31", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Andrew L Maas; Raymond E Daly; Peter T Pham; Dan Huang; Andrew Y Ng; Christopher Potts", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Learning word vectors for sentiment analysis", "year": "2011" }, { "authors": "Frederick Mosteller; David L Wallace", "journal": "Journal of the American Statistical Association", "ref_id": "b33", "title": "Inference in an authorship problem", "year": "1963" }, { "authors": "Ibrahim Naji", "journal": "", "ref_id": "b34", "title": "TSATC: Twitter Sentiment Analysis Training Corpus", "year": "2012" }, { "authors": "Xing Niu; Marianna Martindale; Marine Carpuat", "journal": "", "ref_id": "b35", "title": "A study of style in machine translation: Controlling the formality of machine translation output", "year": "2017" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "", "ref_id": "b36", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Bo Pang; Lillian Lee", "journal": "USA. 
Association for Computational Linguistics", "ref_id": "b37", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "year": "2005" }, { "authors": "Ajay Patel; Nicholas Andrews; Chris Callison-Burch", "journal": "", "ref_id": "b38", "title": "Low-resource authorship style transfer with in-context learning", "year": "2022" }, { "authors": "Ellie Pavlick; Joel Tetreault", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b39", "title": "An empirical analysis of formality in online communication", "year": "2016" }, { "authors": "Yulia Shrimai Prabhumoye; Ruslan Tsvetkov; Alan W Salakhutdinov; Black", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Style transfer through back-translation", "year": "2018" }, { "authors": "Reid Pryzant; Richard Diehl Martinez; Nathan Dass; Sadao Kurohashi; Dan Jurafsky; Diyi Yang", "journal": "", "ref_id": "b41", "title": "Automatically neutralizing subjective bias in text", "year": "2020" }, { "authors": "Ella Rabinovich; Raj Nath Patel; Shachar Mirkin; Lucia Specia; Shuly Wintner", "journal": "", "ref_id": "b42", "title": "Personalized machine translation: Preserving original author traits", "year": "2017" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b43", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Sudha Rao; Joel Tetreault", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer", "year": "2018" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b45", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Parker Riley; Noah Constant; Mandy Guo; Girish Kumar; David C Uthus; Zarana Parekh", "journal": "", "ref_id": "b46", "title": "Textsettr: Few-shot text style extraction and tunable targeted restyling", "year": "2021" }, { "authors": "Olivia Elizabeth Rafael A Rivera-Soto; Juanita Miano; Ordonez; Aleem Barry Y Chen; Marcus Khan; Nicholas Bishop; Andrews", "journal": "", "ref_id": "b47", "title": "Learning universal authorship representations", "year": "2021" }, { "authors": "Paolo Rosso; Francisco Rangel; Martin Potthast; Efstathios Stamatatos; Michael Tschuggnall; Benno Stein", "journal": "Springer", "ref_id": "b48", "title": "Overview of pan'16: new challenges for authorship analysis: cross-genre profiling, clustering, diarization, and obfuscation", "year": "2016-09-05" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b49", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Elvis Saravia; Hsien-Chi Toby Liu; Yen-Hao Huang; Junlin Wu; Yi-Shin Chen", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "CARER: Contextualized affect representations for emotion recognition", "year": "2018" }, { "authors": "Florian Schroff; Dmitry Kalenichenko; James Philbin", "journal": "", "ref_id": "b51", "title": "Facenet: A unified embedding for face recognition and clustering", "year": "2015" }, { "authors": "Tianxiao Shen; Tao Lei; Regina 
Barzilay; Tommi Jaakkola", "journal": "Advances in neural information processing systems", "ref_id": "b52", "title": "Style transfer from non-parallel text by cross-alignment", "year": "2017" }, { "authors": "Efstathios Stamatatos", "journal": "Journal of the American Society for Information Science and Technology", "ref_id": "b53", "title": "A survey of modern authorship attribution methods", "year": "2009" }, { "authors": "R Yla; James W Tausczik; Pennebaker", "journal": "Journal of language and social psychology", "ref_id": "b54", "title": "The psychological meaning of words: Liwc and computerized text analysis methods", "year": "2010" }, { "authors": "Peter Tiersma; Lawrence M Solan", "journal": "Language", "ref_id": "b55", "title": "The linguist on the witness stand: Forensic linguistics in american courts", "year": "2002" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b56", "title": "Self-instruct: Aligning language model with self generated instructions", "year": "2022" }, { "authors": "Alex Warstadt; Alicia Parrish; Haokun Liu; Anhad Mohananey; Wei Peng; Sheng-Fu Wang; Samuel R Bowman", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b57", "title": "BLiMP: The benchmark of linguistic minimal pairs for English", "year": "2020" }, { "authors": "Alex Warstadt; Amanpreet Singh; Samuel R Bowman", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b58", "title": "Neural network acceptability judgments", "year": "2019" }, { "authors": "Anna Wegmann; Dong Nguyen", "journal": "", "ref_id": "b59", "title": "Does it capture stel? a modular, similarity-based linguistic style evaluation framework", "year": "2021" }, { "authors": "Anna Wegmann; Marijn Schraagen; Dong Nguyen", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "Same author or just same topic? 
towards content-independent style representations", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Jamie Brew", "journal": "", "ref_id": "b61", "title": "Huggingface's transformers: State-of-the-art natural language processing", "year": "2019" }, { "authors": "Wei Xu", "journal": "Association for Computational Linguistics", "ref_id": "b62", "title": "From shakespeare to Twitter: What are language styles all about?", "year": "2017" }, { "authors": "Xiaoyuan Yi; Zhenghao Liu; Wenhao Li; Maosong Sun", "journal": "", "ref_id": "b63", "title": "Text style transfer via learning style instance supported latent space", "year": "2021" }, { "authors": "Xiang Zhang; Junbo Zhao; Yann Lecun", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b64", "title": "Character-level convolutional networks for text classification", "year": "2015" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b65", "title": "", "year": "" }, { "authors": "Jian Zhu; David Jurgens", "journal": "", "ref_id": "b66", "title": "Idiosyncratic but not arbitrary: Learning idiolects in online registers reveals distinctive yet consistent individual styles", "year": "2021" }, { "authors": "Kangchen Zhu; Zhiliang Tian; Ruifeng Luo; Xiaoguang Mao", "journal": "", "ref_id": "b67", "title": "Styleflow: Disentangle latent representations via normalizing flow for unsupervised text style transfer", "year": "2022" }, { "authors": "Details E Training", "journal": "", "ref_id": "b68", "title": "1 SFAM We use the EncT5 architecture", "year": "2019" }, { "authors": "E ", "journal": "", "ref_id": "b69", "title": "LISA We use the EncT5 architecture", "year": "2019" }, { "authors": "E ", "journal": "", "ref_id": "b70", "title": "3 LISA Embedding Layers We experiment with two types of embedding layers w and W . We also experiment with two training datasets", "year": "2021" }, { "authors": "Luar For", "journal": "", "ref_id": "b71", "title": "we use the train split with 5% held out as validation and we sample random authors as negative examples", "year": "" } ]
[ { "formula_coordinates": [ 3, 315.54, 235.1, 14.19, 8.06 ], "formula_id": "formula_0", "formula_text": "# of" } ]
2023-11-15
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b16", "b57", "b20", "b17", "b25", "b34", "b56", "b62", "b1", "b42", "b44", "b58", "b41", "b56" ], "table_ref": [], "text": "In anticipation of various applications such as robotics and accessibility, gaze estimation techniques have long been the subject of active research [1,4,17,58]. Appearancebased methods take the machine learning approach to directly estimate 3D gaze directions from input eye or face images, and have shown great potential for robust realworld gaze estimation [11,21]. One of the key challenges in appearance-based gaze estimation is its limited generalization performance to unseen conditions. The performance of gaze estimation models is often affected by various factors, including individuality, illumination, and the distributions of gaze and head pose. Although various datasets have been proposed [18,19,26,35,57,63], creating a generic model that can handle arbitrary conditions is still not a trivial task.\nMost of the existing appearance-based methods take monocular images as input and formulate gaze estimation as a task to estimate the gaze direction vector defined in the input image coordinate system. For this reason, generalization difficulties in appearance-based gaze estimation vary greatly depending on the factors. Specifically, unseen people and lighting conditions affect the facial appearance but do not fundamentally change the pattern of faces in the image. In contrast, unseen head poses and their associated unseen gaze directions lead to entirely new patterns and have a more direct impact on the input-output relationship. Therefore, it is usually more difficult for existing gaze estimation models to generalize to unseen head poses.\nIf not limited to appearance-based methods, multicamera geometry-based eye tracking systems have long been the subject of active research [2,39,43,45,50]. Such a multi-view approach may solve the above problems in appearance-based methods. For many application scenarios, such as driver monitoring and public displays, using multiple synchronized machine vision cameras is a sufficiently realistic assumption for appearance-based gaze estimation. The expected effect of multi-view input is not limited to simply increasing information and improving accuracy. The model could acquire a head pose-independent feature representation by training a gaze estimation model considering the geometric positional relationship between input images. However, considering the cost of training data acquisition, training a model specialized for cameras in a particular positional relationship is not practical. The key challenge is to train a model that can perform accurate gaze estimation even if the camera's positional relationship changes between inference and training.\nThis work proposes a multi-view appearance-based gaze estimation method that utilizes the relative rotation between cameras as additional input information (Fig. 1). Assuming the normalization process used in appearance-based gaze estimation [59], a relative rotation matrix can always express the interrelationship of camera positions. The main idea of the proposed method is to use the rotation matrix as a constraint for feature fusion between images. The proposed method consists of stacked rotation-constrained feature fusion blocks that can be combined with arbitrary feature extraction backbones. In each block, one of the features is multiplied by the rotation matrix to transfer to the other image. 
Although the physical rotation is not originally applicable to the feature space, the model is expected to learn to extract rotatable features through the explicit training process incorporating the rotation operation. We demonstrate that our method acquires rotatable feature representation through experimental analyses on multiple datasets [42,57]. The proposed method achieves better generalizability than baseline approaches, including state-of-the-art domain generalization methods.\nOur key contributions are threefold. First, this paper addresses the camera-independent multi-view appearancebased gaze estimation task for the first time in the literature. Second, we propose a novel cross-view feature fusion approach incorporating the relative rotation matrix into multiview gaze estimation. Our method uses the rotation matrix as a constraint to transfer features between images, and we provide a thorough analysis of the internal feature representation. Third, we demonstrate that multi-view gaze estimation improves generalization performance for unseen head poses. Through experiments, we show that the accuracy gains from multi-view training are superior to state-of-theart methods. We also provide thorough analyses and visualizations of the internal feature representation obtained through the rotation constraint." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b8", "b28", "b41", "b56", "b58", "b61", "b9", "b11", "b40", "b47", "b62", "b5", "b7", "b8", "b56", "b61", "b39", "b54", "b4", "b31", "b45", "b34", "b62", "b56", "b25", "b56", "b41", "b43", "b53", "b63" ], "table_ref": [], "text": "Appearance-based Gaze Estimation. Appearancebased gaze estimation is a task to regress 3D gaze directions from full-face [7, 9,29,42,57,59,62] or eyeregion images [10,12,19,41,48,56,61,63]. In recent years, the advances in deep neural networks have enabled gaze estimation techniques with decent within-dataset performance [6,8,9,57,62]. However, noticeable performance degradation can be observed when deployed in real-world applications. Consequently, the performance of state-of-the-art approaches would be limited under unconstrained conditions, primarily when the gap arising from the aforementioned factors is significantly large during training and testing. While there are ongoing attempts on personalization [40,55], and domain adaptation [5,32,46], training models that can generalize to unknown environments is a significant challenge. This study addresses this domain gap issue by improving the generalization ability to unseen head poses.\nGiven the various factors to consider for generalizing appearance-based gaze estimation, previous studies have proposed many datasets. Because it is difficult to cover all factors concurrently, many datasets were constructed focusing on a certain diversity [19,35,63]. Covering a variety of head poses is challenging, and each recent dataset still has limitations, such as the lighting conditions diversity [57] and the gaze labels accuracy [26]. Although intended for a monocular estimation task, the ETH-XGaze dataset [57] was created using multiple synchronized cameras and can be used for multi-view purposes. Recent work on synthesizing face images with ground-truth gaze directions further indicates the possibility of acquiring training data for multiview estimation [42,44,54,64]. This study examines the novel task of multi-view gaze estimation on these datasets." 
}, { "figure_ref": [], "heading": "Domain Generalization for Gaze Estimation.", "publication_ref": [ "b15", "b27", "b39", "b1", "b2", "b42", "b44", "b59", "b32", "b52", "b14", "b19", "b30", "b3", "b35", "b36", "b43", "b53" ], "table_ref": [], "text": "There are some prior attempts to alleviate the gap between training and testing environments through domain generalization. Some prior work [16,28] use large-scale unlabeled face images for pretraining or as an additional training signal to generalize gaze estimator. However, these approaches still require extra samples from either the target domain or the Internet, which is often nontrivial to be prepared in practice even without ground-truth gaze labels. The proposed method differs from these approaches because it does not use extra data for achieving generalization.\nAnother line of work on domain generalization directly improves model robustness on unseen domains by removing person-dependent factors during training and is thus closer to our objective [7,40]. We note that the goal of multi-view gaze estimation is not only domain generalization, and the direction of this work is not strictly consistent with them. Nevertheless, our approach improves robustness on unseen head poses, which none of the above-mentioned methods have explicitly proven.\nMulti-view Feature Fusion. Most previous works on multi-view eye tracking take the model-based approach [2,3,39,43,45], which requires a more complex setup with external light sources. Although some image-only multi-view methods exist [50], they still rely on geometric eyeball models. Prior research has shown that such geometry or shapebased approaches are inferior to appearance-based methods in terms of performance [60]. In contrast, appearance-based multi-view gaze estimation [22, 30] has been understudied. Lian et al. [30] proposed directly concatenating features from stereo images to predict 2D on-screen gaze positions. Gideon et al. [22] proposed disentangling image features through feature swapping between multi-view videos, different from our frame-by-frame setting. However, these methods require fixed cameras during training and testing, and their effectiveness in unknown camera configurations remains unproven. The proposed method uses the relative rotation matrix between camera pairs as additional input to overcome this drawback, achieving multi-view gaze estimation generalizable to unseen camera pairs.\nMulti-view input has also been explored in many computer vision tasks, but the direct use of these methods for gaze estimation is not straightforward. Recent work on multi-view stereo constructs and uses 3D voxel representation from multi-view features [23, 33,52,53]. While such a voxel representation is suited to geometry-related tasks, it is not directly applicable to gaze estimation, which is rather an attribute regression task. NeRF [15,20,31,34,36,37] can also be applied to gaze redirection and training data synthesis [44,54], but it is not yet directly related to estimation tasks. Unlike these approaches strongly based on physical geometry, our method uses relative camera rotation as a soft constraint for learnable feature extraction and fusion blocks." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "This work aims to design a network to efficiently perform feature transitions and fusions between images according to the input rotation matrix. 
Since rotation has an extremely low dimensionality compared to image features, it is not optimal to feed it into the network as another feature. The proposed method achieves this goal via stacked rotation-constrained feature fusion blocks." }, { "figure_ref": [ "fig_2" ], "heading": "Overview", "publication_ref": [ "b58" ], "table_ref": [], "text": "Fig. 2 shows the overview of the proposed network, which consists of stacked rotation-constrained feature fusion blocks. The inputs are the target image I tgt and the reference image I ref . As described earlier, these face images are assumed to be normalized for appearance-based gaze estimation [59], and their mutual relationship can be fully described by the rotation matrix R. R indicates the rotation from the reference camera to the target camera coordinate system and can be obtained either from the extrinsic camera calibration or from head poses estimated through the normalization process. While the goal is to estimate the gaze direction g tgt of the target image I tgt , the roles of the target I tgt and the reference I ref are symmetrical in our method. The network components on both sides thus share weights, but the stacked fusion blocks do not share weights.

Given the input images, the proposed method first extracts two kinds of features. The network first extracts the backbone feature vectors f tgt and f ref using the Backbone Extractor module. The Rotatable Feature Extractor module then takes these backbone features as inputs and outputs D three-dimensional vectors. These vectors are stacked to form the rotatable feature tensors $\mathbf{F}^{(0)}_{tgt}, \mathbf{F}^{(0)}_{ref} \in \mathbb{R}^{3 \times D}$. While the backbone features are used unaltered throughout the process, the rotatable features are updated through the rotation-constrained feature fusion blocks. The output gaze vectors g tgt and g ref are defined as 3D unit vectors in each normalized camera coordinate system." }, { "figure_ref": [], "heading": "Rotation-Constrained Feature Fusion", "publication_ref": [], "table_ref": [], "text": "As discussed earlier, the basic idea behind the rotation-constrained feature fusion block is to directly apply the rotation (i.e., multiply the rotation matrix) in the feature space. By introducing this rotation-constrained fusion mechanism, we expect the network to learn to extract a rotatable feature representation. The rotated and transferred features are expected to complement the information in each image, and therefore the optimal features cannot be obtained simply by observing individual images. The proposed method is designed to achieve optimal feature transfer by repeating the process of fusing the rotated features with the backbone features of the destination view.

In the i-th block, the model first multiplies the rotation matrix R with the reference rotatable feature $\mathbf{F}^{(i-1)}_{ref}$ (and, symmetrically, $\mathbf{R}^{\top}$ with the target rotatable feature $\mathbf{F}^{(i-1)}_{tgt}$) to transfer it to the coordinate system of the other view. The Fuser module then fuses each rotated feature with the backbone feature of the destination view, yielding the updated rotatable features $\mathbf{F}^{(i)}_{tgt}$ and $\mathbf{F}^{(i)}_{ref}$, from which the block's Gaze Estimator predicts the gaze directions $\mathbf{g}^{(i)}_{tgt}$ and $\mathbf{g}^{(i)}_{ref}$. The total loss is computed from the gaze directions $\mathbf{g}^{(i)}$ that the Gaze Estimator outputs at every block, not just the final output $\mathbf{g}^{(l)}$, as

$$L_{\mathrm{total}} = \sum_{i=1}^{l} \alpha^{l-i} \cdot L_i\left(\mathbf{g}^{(i)}_{tgt}, \mathbf{g}^{(i)}_{ref}\right). \tag{1}$$

$L_i$ is the angular loss for the i-th block, defined as

$$L_i = \arccos\left(\mathbf{g}^{(i)\top}_{tgt} \hat{\mathbf{g}}_{tgt}\right) + \arccos\left(\mathbf{g}^{(i)\top}_{ref} \hat{\mathbf{g}}_{ref}\right), \tag{2}$$

where $\hat{\mathbf{g}}_{tgt}$ and $\hat{\mathbf{g}}_{ref}$ indicate the ground-truth 3D gaze directions corresponding to the target and reference images, and α is a hyperparameter controlling the decay applied to gaze estimated from earlier blocks."
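To make the block structure above concrete, the following is a minimal PyTorch-style sketch of one rotation-constrained feature fusion block together with the stacked loss of Eqs. (1)-(2). It follows the prose description (3×D rotatable features, rotation applied by matrix multiplication, Fuser and Gaze Estimator implemented as MLPs over flattened features), but the exact layer widths, concatenation order, and module wiring are illustrative assumptions rather than the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def mlp(in_dim, out_dim, hidden=512, layers=2):
    # Small helper: an MLP with ReLU activations, in the spirit of the
    # Rotatable Feature Extractor, Fuser, and Gaze Estimator modules.
    mods, d = [], in_dim
    for _ in range(layers - 1):
        mods += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)


class RotationConstrainedFusionBlock(nn.Module):
    """One fusion block: rotate the other view's 3xD rotatable feature,
    concatenate it with the destination view's backbone feature, and fuse.
    backbone_dim=2048 assumes pooled ResNet-50 features."""

    def __init__(self, backbone_dim=2048, d=512):
        super().__init__()
        self.d = d
        self.fuser = mlp(3 * d + backbone_dim, 3 * d, layers=3)
        self.gaze_head = mlp(3 * d + backbone_dim, 3, layers=2)

    def _fuse(self, rotated_other, f_dst):
        x = torch.cat([rotated_other.flatten(1), f_dst], dim=1)
        return self.fuser(x).view(-1, 3, self.d)

    def forward(self, F_tgt, F_ref, f_tgt, f_ref, R):
        # R: (B, 3, 3) rotation from reference to target normalized camera.
        F_tgt_new = self._fuse(torch.bmm(R, F_ref), f_tgt)                   # R F_ref fused on the target side
        F_ref_new = self._fuse(torch.bmm(R.transpose(1, 2), F_tgt), f_ref)   # R^T F_tgt fused on the reference side
        g_tgt = F.normalize(self.gaze_head(torch.cat([F_tgt_new.flatten(1), f_tgt], dim=1)), dim=1)
        g_ref = F.normalize(self.gaze_head(torch.cat([F_ref_new.flatten(1), f_ref], dim=1)), dim=1)
        return F_tgt_new, F_ref_new, g_tgt, g_ref


def stacked_gaze_loss(gaze_outputs, g_hat_tgt, g_hat_ref, alpha=0.5):
    """Eqs. (1)-(2): batch-averaged angular loss summed over all l blocks
    with decay alpha^(l-i)."""
    l = len(gaze_outputs)
    total = 0.0
    for i, (g_tgt, g_ref) in enumerate(gaze_outputs, start=1):
        cos_t = (g_tgt * g_hat_tgt).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
        cos_r = (g_ref * g_hat_ref).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
        L_i = torch.acos(cos_t) + torch.acos(cos_r)
        total = total + (alpha ** (l - i)) * L_i.mean()
    return total
```

A full model would stack l = 3 such blocks with unshared weights, feed them the shared ResNet-50 backbone features of both views, and apply stacked_gaze_loss to the per-block gaze outputs.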
}, { "figure_ref": [], "heading": "Rotation Matrix", "publication_ref": [ "b58", "b58" ], "table_ref": [], "text": "As discussed earlier, the relative rotation matrix R can be obtained either from camera calibration or head pose estimation. R describes the rotation term between the normalized cameras. The meaning of R is the same in either approach, and the results are the same if there is no head pose estimation error. Although a translation vector t is also needed to describe the relationship between two cameras completely, it is uniquely determined by R and can be ignored under the assumption of data normalization [59].\nFor the first option based on camera calibration, the rotation matrix can be calculated using the camera extrinsic parameters and the normalization matrices [59] \nas R = N tgt RN ⊤ ref .\nR is the rotation matrix obtained via camera calibration and therefore corresponds to the coordinate systems of the original camera before normalization. N tgt and N ref are the normalization matrices, which indicate the transformation from the original to the normalized camera coordinate systems. For the second option, the rotation is calculated using head poses estimated through the normalization process. If we denote the rotation from the head coordinate system to the normalized camera coordinate system as H tgt and H ref , the rotation matrix is R = H tgt H ⊤ ref ." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b23", "b37", "b46" ], "table_ref": [], "text": "Unless otherwise noted, we set the number of blocks l = 3 in all experiments. We used ResNet-50 [24] as the Backbone Extractor module, which was initialized with pretrained weights on ImageNet [13] and fine-tuned through the training process. We used two-layered MLPs for the Rotatable Feature Extractor and the Gaze Estimator, and a three-layered MLP for the Fuser, with D = 512. The Rotatable Feature Extractor receives the backbone feature vector from the Backbone Extractor and outputs three Ddimensional vectors. These feature vectors are stacked to form a 3×D rotatable feature matrices. The Fuser and Gaze Estimator first flatten the 3 × D shaped rotatable feature matrices. The flattened matrices are then concatenated with the backbone feature vectors from the other view. Like the Rotatable Feature Extractor, the Fuser also outputs three D-dimensional vectors stacked to form a 3 × D matrix. All MLPs use ReLU [38] as the activation layer.\nDuring the training, we applied random mask data augmentation to the input images to force the model to exploit features from another view. Inspired by random erasing [65], we masked the input image with multiple randomly sized (5-30% of the image width) small squares with a 50% probability. The quantity of squares was determined randomly so that the proportion of the total area covered by the squares was restricted to 50-60% of the face image. We also applied color jitter, translation, and scaling. We set the saturation, brightness, and contrast range to 0.1. The intensities of translation and scaling were set to be small (0.01 for translation, 0.99-1.01 for scaling) to represent possible face alignment errors during the normalization process. We apply the same data augmentation to all baseline methods for fair comparisons.\nWe set the training batch size to 256. We used Adam [27] optimizer with a weight decay of 1 × 10 -6 . 
We used Cycli-cLR [47] as the learning rate scheduler, with the base and maximum learning rate of 1 × 10 -6 and 1 × 10 -3 , decaying 0.5 per cycle. The cycle steps were determined so that one cycle was completed in one epoch. We used mixed precision training and set the block decay α to 0.5." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [ "b56", "b41", "b62", "b56", "b23", "b8", "b23", "b13", "b50", "b23", "b39", "b24" ], "table_ref": [], "text": "We use two datasets that provide synchronized multiple views of participants and corresponding 3D gaze directions. ETH-XGaze [57] contains 110 participants, each captured with 18 cameras simultaneously. Since one of our goals is to evaluate the generalization performance against unseen head poses, we split the training subset of 80 participants instead of using their official test data. We directly use the camera extrinsic parameters and head pose estimation results provided with the dataset. MPII-NV is a synthetic dataset created by following Qin et al.'s approach [42]. The dataset is based on the MPIIFaceGaze dataset [63] that consists of monocular images of 15 participants. 3D face meshes are first reconstructed from the original MPI-IFaceGaze images and then rotated with their ground-truth gaze vector to generate face images of new head poses. We synthesized the data so that the head pose distribution is the same as that of the ETH-XGaze training set. Since it is a purely synthetic dataset, we can use the camera position for image rendering to compute the relative rotation matrix. After data normalization, the input image resolution for both datasets is 224 × 224. Both datasets were collected with IRB approval or consent from participants.\nWe used k-fold cross-validation regarding participant IDs (k = 4 for ETH-XGaze and k = 3 for MPII-NV), and all methods were trained for 15 epochs without validation data. Given the practical scenario where the camera positions at the deployment are not necessarily known at the training time, generalization performance for unseen camera positions is an important metric. Therefore, we further split the cameras into training and test sets in each fold to evaluate the generalization performance. Specifically, we split the 18 cameras of both ETH-XGaze and MPII-NV into 12 for training and 6 for testing, three of which are within the head-pose range of the training camera set (interpolation), and the other three are outside (extrapolation). Please note that head poses with respect to the camera position are nearly fixed under the ETH-XGaze setup, and there is a strong correlation between the head poses and camera positions. Training and testing image pairs are constructed by randomly selecting two cameras from each set.\nConcat represents the approach of Lian et al.\n[30] as the baseline multi-view appearance-based method. It extracts features from two images using weight-shared ResNet-50 and then estimates the gaze direction using concatenated features. Single is the single-image baseline method corresponding to the one reported in ETH-XGaze paper [57]. It extracts features from the monocular input image using ResNet-50 [24] followed by a fully-connected layer to output gaze direction. Gaze-TR [9] is one of the state-of-theart methods for single-image gaze estimation. We adopted the hybrid version containing a ResNet-50 [24] extractor and a transformer encoder [14,51]. 
It extracts features from ResNet-50 [24] and feeds the feature maps to the transformer encoder, followed by an MLP to output the gaze directions. Frontal Selection is fundamentally a single-view model, but multi-view information is utilized naively during inference. It predicts the gaze based on the more frontal image from the reference and target images.\nSince our goal is generalizable gaze estimation, we also include some single-image domain generalization approaches as baselines. Since our method requires no prior knowledge of the target domain, we excluded domain adaptation methods which require target domain data. Please note that unsupervised domain adaptation still requires target domain images. PureGaze [7] introduced an extra CNN-based image reconstruction module to the ResNet-18 backbone and MLP. We followed the official implementation for MLP and the reconstruction module while replacing the backbone with ResNet-50. DT-ED [40] first extracts the latent codes of appearance, gaze, and head pose from the source image, then a decoder is used to reconstruct the target image from the rotated head pose and gaze features, and an MLP is used to predict gaze directions from the gaze features only. We follow the original structure of 4-block DenseNet [25] and a growth rate of 32, and the target image was randomly chosen from the same subject." }, { "figure_ref": [ "fig_4" ], "heading": "Performance Comparison", "publication_ref": [], "table_ref": [], "text": "Within-Dataset Evaluation. We first conduct a withindataset evaluation on ETH-XGaze and MPII-NV. For both datasets, we present the cases where the relative rotation matrix R is from the camera extrinsic calibration. We report the angular error averaged over the k folds in Table 1a. The Head pose column corresponds to the split of the cameras as described in Sec. 4.1. For ETH-XGaze under the seen head pose condition, multi-view approaches (Concat and Proposed) consistently outperform single-image methods. However, the Concat model shows lower accuracy in the unseen head pose condition than the single-image Table 1. Within-and cross-dataset evaluation using rotation from camera calibration. Each number indicates the mean angular error of the proposed and baseline methods.\nbaselines. This shows the difficulty of learning a generic head pose-independent gaze feature under the multi-view condition. Our proposed method with feature rotation and repetitive fusion achieves the best performance. In particular, it improves 0.57°(14.0%) over the Single baseline in the seen head pose condition, and more prominently, improves 1.54°(29.0%) in the unseen head pose condition. This demonstrated the significance of our proposed method for novel-view-generalizable gaze estimation.\nFor MPII-NV, the proposed method outperforms all single-image and multi-view baselines under unseen head pose conditions. The performance improvement is 8.6% and 17.9% over the Single baseline in the seen and unseen head pose conditions, respectively. It is also worth noting that while Concat and our proposed method perform almost equivalently well in the seen condition, our proposed method outperforms Concat by 16.1% in the unseen condition. This proves the advantage of the proposed feature rotation and repetitive feature fusion in fusing head-poseindependent representations. Fig. 3 further visualizes the difference of mean gaze error between the proposed and Single model in the ETH-XGaze unseen head pose condition. 
We can see that the gaze estimation errors drastically decrease when the target head pose corresponds to extrapolation (cameras 11, 14, 17) and the reference head pose corresponds to interpolation (cameras 2, 5, 8). We can also confirm a decent error reduction even when the head poses correspond to extrapolation.\nCross-Dataset Evaluation. We further evaluate the performance of the proposed method in the cross-dataset setting. We train on one of ETH-XGaze and MPII-NV and evaluate angular errors on the other dataset. Since the two datasets contain different participants, we use all participants in one dataset for training and all participants in the other for the test. Table 1b shows the results of the crossdataset evaluation in the unseen and seen head pose conditions. We can observe the same tendencies as within-dataset evaluation in Table 1a. In both conditions, the accuracy of our proposed method is much higher than any of the baseline methods. From these results, it can be seen that the proposed method is also effective in reducing the inter-domain gap. The proposed method performs better than the Concat baseline, indicating that the reduction of the inter-domain gap owes to the rotation-constrained feature fusion rather than multi-view estimation." }, { "figure_ref": [], "heading": "Detailed Performance Analyses", "publication_ref": [ "b48" ], "table_ref": [], "text": "Ablation Studies. In Table 2, we compare different usage of the rotation matrix. The second row (w/o Rotation matrix corresponds to the model that uses the stacked architecture but concatenates the features without rotation. The third row (MLP Encoding) is the case where the rotation matrix is used as an additional feature instead of multiplication at each block. In this case, we concatenated the flattened rotation matrices with the features and then fed them to an MLP encoder which is almost the same as Fuser except for the input feature dimension. As with the proposed model, the learnable weights are shared for target and reference but not shared across blocks for both cases. Although feeding flattened rotation matrices (MLP Encoding) improves the accuracy from the Single, it is still inferior to the proposed method. We also change the number of fusion blocks from the fourth to the last row in Table 2. The impact of the stacked fusion blocks is different under unseen head pose conditions. However, the three-block model has the best performance for unseen head poses and is almost the best for the seen head pose condition.\nThe rightmost column in Table 2 shows the inference times of our method on NVidia V100. Single model's inference time is 10.7 ms and Gaze-TR is 35.0 ms. We can observe that the additional inference cost from additional fusion blocks is relatively minor, and the inference time is almost double that of the Single baseline. Although the increase in computational cost is one limitation of the proposed method, it is still faster than more complex singleimage methods such as Gaze-TR and is considered to be well within the practical range.\nAccuracy of Rotation Matrix. In Table 3, we compare the performance of the proposed method using a rotation matrix obtained from estimated head poses without camera calibration. Since the head pose is expected to be perfectly accurate on synthetic MPII-NV, we only evaluate the cases using real images from ETH-XGaze. Proposed (Calib.) 
and Proposed (Pose) correspond to the cases where the rotation matrices for training data are obtained from calibration and head pose, respectively. As a reference, we also show the performance of the Concat model. Please note that all baseline methods, including Concat, do not use a rotation matrix as input. Thus, the numbers are the same as Table 1a.\nIt can be seen that the proposed model trained with calibration is sensitive to the noise of the rotation matrix at inference times. However, Proposed (Calib.) method still performs best for the unseen head pose condition. If the model is trained with rotation matrices from head pose (Proposed (Pose)), unseen head pose performance degrades while the performance is improved for the seen head pose condition.\nWhile the Frontal Selection in Table 1 shows that simply utilizing multi-view information can improve performance from the Single, our proposed approach performs best in most cases, demonstrating the significance of our rotationconstrained feature fusion. [49]. We use the XGaze dataset under the unseen head pose condition. Isomap embedding was generated from the initial rotatable features F (0) obtained from 1000 test samples with a neighborhood size of 30. Marker shape indicates different feature types, and color indicates the sample ID. The arrows indicate the feature position before and after rotation. Since the rotation is symmetric, we only visualize the rotation from the reference to the target. We can clearly observe that the rotation operation brings the feature closer to the other. This indicates that, as intended, the 3D Feature Extractor module learns to extract rotatable features through our rotation constraint." }, { "figure_ref": [ "fig_8" ], "heading": "Rotatable Feature Representation", "publication_ref": [], "table_ref": [], "text": "Fig. 5 shows an example of the rotatable features. In this plot, we interpret the rotatable features as a set of D 3D vectors and transform each 3D vector into the pitch-yaw coordinate system. The size and color of the dots represent the magnitude of the norm of the 3D vectors, and large yellow dots indicate vectors with larger norms. The distributions of RF Each row corresponds to the rotatable features at different fusion stages from F (0) to F (3) . Larger and yellower dots represent elements with a larger norm." }, { "figure_ref": [], "heading": "MLP Encoding Proposed", "publication_ref": [], "table_ref": [], "text": "Target camera index Reference camera index Reference camera index tgt . We hypothesize that rotatable features adaptively evolve through stacked fusion blocks into complementary representations of the backbone features." }, { "figure_ref": [ "fig_9", "fig_9", "fig_10", "fig_9", "fig_9" ], "heading": "Contribution of Reference Images", "publication_ref": [], "table_ref": [], "text": "Fig. 6 illustrates the contribution ratio of the reference features for each camera pair. As a metric for feature contribution, we calculated the sum of the gradient of the backbone features. We use the XGaze dataset under the unseen head pose condition as the experimental setup. A larger number represents more contribution of the reference image to the estimation result. Fig. 6 shows the visualization results of the MLP Encoding baseline (left) and our proposed method (right). Comparing the two visualization results, we can confirm that our method adaptively uses the reference images. 
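As a concrete reading of the contribution metric used for Fig. 6, the sketch below computes the share of gradient magnitude that flows into the reference view's backbone feature. The `backbone` and `fuse_and_estimate` callables are hypothetical stand-ins for the corresponding parts of the model, and the absolute-sum reduction is our assumption about how the gradients were aggregated.

```python
import torch

def reference_contribution(backbone, fuse_and_estimate, img_tgt, img_ref, R):
    """Share of gradient magnitude attributed to the reference backbone feature.
    `backbone` and `fuse_and_estimate` are hypothetical hooks into the model:
    the former maps an image batch to backbone features, the latter maps
    (f_tgt, f_ref, R) to the final gaze prediction for the target view."""
    f_tgt = backbone(img_tgt).detach().requires_grad_(True)
    f_ref = backbone(img_ref).detach().requires_grad_(True)
    gaze = fuse_and_estimate(f_tgt, f_ref, R)
    gaze.sum().backward()                    # scalarise the (B, 2) gaze output
    c_tgt = f_tgt.grad.abs().sum()
    c_ref = f_ref.grad.abs().sum()
    return (c_ref / (c_tgt + c_ref)).item()  # around 0.5 means both views contribute equally
```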
While MLP Encoding model always ignores the reference images, our method uses the reference information mainly depending on its head poses. Fig. 7 further shows sample images with their corresponding contribution ratios. The edge color of each image represents its contribution as in Fig. 6. Overall, images with a view that captures the face from below have a higher contribution. This is consistent with Fig. 6 where, e.g., camera 14 shows a more significant contribution. Occlusion caused by eyelids is possibly a significant factor in the minor contribution of images captured from the top view." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b63", "b23", "b56", "b41" ], "table_ref": [], "text": "In this paper, we presented a novel multi-view appearance-based gaze estimation task. We propose a crossview feature fusion approach using the relative rotation matrix between input images as a constraint when transferring the features to the other image. In addition to its practical significance, the proposed method has the advantage of improving generalization performance for unseen head poses. Through experiments, we demonstrated the advantage of our method over state-of-the-art baseline, including single-image domain generalization methods.\nThe limitation of our approach compared to a singleimage baseline is the slightly increased hardware requirements. The requirements of our method are not particularly unrealistic compared to existing eye trackers, and this is ultimately a matter of trade-offs. The same can be said about the effect of camera calibration. It is also essential for future work to develop lightweight models that are robust to errors in the rotation matrix and time synchronization.\n𝐅 !\"! 𝐑𝐅 #$% 𝐅 #$% Yaw Pitch Block 0 Block 1 Block 2 Block 3 𝐈 !\"! 𝐈 #$% 𝐅 !\"! 𝐑𝐅 #$% 𝐅 #$% Yaw Pitch Block 0 Block 1 Block 2 Block 3 𝐅 !\"! 𝐑𝐅 #$% 𝐅 #$% Yaw Pitch Block 0 Block 1 Block 2 Block 3 𝐈 !\"! 𝐈 #$% 𝐈 !\"! 𝐈 #$% 𝐅 !\"! 𝐑𝐅 #$% 𝐅 #$% Yaw Pitch Block 0 Block 1 Block 2 Block 3 𝐈 !\"! 𝐈 #$% Figure 2\n. Scatter plot visualization of the rotatable features. Each of the D 3D vectors is represented in a pitch-yaw coordinate system. Each row corresponds to the rotatable features at different fusion stages. Larger and yellower dots represent elements with a larger norm.\nDT-ED Since we use a richer full-face patch instead of an eye-region patch as input of DT-ED, we modified the appearance and gaze latent code sizes from 64 and 2 to 512 and 16. Following the original setting, we used angular loss for gaze estimation and ℓ 1 for reconstruction. For the learning rate, we found that the scaling and ramp-up settings in the original paper make it difficult for the model to reconstruct the target image. Therefore, we trained the model with a base learning rate of 5 × 10 -4 decaying by 0.8 every 1 epoch, similar to another gaze redirection work [64]. Unlike other baselines, the batch size is set to 60.\nGaze-TR In our implementation, we used ResNet-50 [24] to extract feature maps from the images. The size of the feature map was 7 × 7 × 32, which is then fed to a sixlayer transformer. Finally, an MLP takes the feature vector as input and estimates the gaze direction.\nPureGaze When training models on both ETH-XGaze [57] and MPII-NV [42] dataset, we used the default mask image in the official PureGaze repository1 generated for normalized ETH-XGaze face images to compute the adversarial reconstruction loss. 
For the extra hyperparameters controlling the relative contribution of the adversarial loss to the total loss, we followed the official implementation." }, { "figure_ref": [], "heading": "D. Definition of the Rotation Matrix", "publication_ref": [], "table_ref": [], "text": "As discussed in the paper, there are two approaches to computing the relative rotation matrix R using either cam-era calibration or head poses estimation. In the following, we provide detailed explanations of two claims: 1) the final R becomes the same in either approach, and 2) the relative translation t is uniquely determined by R and can be ignored.\nFirst, we show that the two definitions R = N tgt RN ⊤ ref and R = H tgt H ⊤ ref are interconvertible. Let us denote the camera extrinsic parameters, i.e., the transformation from the reference to the target camera coordinate systems, as C ∈ R 4×4 . If we denote head poses in the original camera coordinate systems before normalization as Ĥ ∈ R 4×4 , their relationship can be defined as Ĥtgt = C Ĥref .\n(\n)1\nIf we further denote an extended 4 × 4 normalization matrix as N, the head poses H after normalization can also be obtained from the normalization matrix as\nH = N Ĥ.(2)\nFrom Eq. 1 and Eq. 2, we can derive that\nN tgt CN ⊤ ref = N tgt Ĥtgt Ĥ⊤ ref N ⊤ ref = (N tgt Ĥtgt )(N ref Ĥref ) ⊤ = H tgt H ⊤ ref .\nTherefore, we can conclude that the two definitions are interconvertible and have the same meaning. Note that this applies not only to the rotation component R but also to the translation component t.\nNext, we show that the translation component t is uniquely determined by the rotation R under the assumption of data normalization. One of the key properties of the normalization process is that the origin of the gaze vector is located at a fixed distance d on the z-axis of the camera coordinate system. Therefore, this origin o = (0, 0, d, 1) ⊤ does not move when the above transformation matrix is applied:\no tgt = R t 0 1 o ref = o ref .(3)\nIf we denote R = (r x , r y , r z ) where r x , r y , r z ∈ R 3 are the column vectors of the rotation matrix, substituting this into Eq. 3 yields\n    0 0 d 1     = r x r y r z t 0 1     0 0 d 1     = dr z + t 1 .\nTherefore, the translation component t is uniquely defined by the fixed distance d and the rotation vector r z as\nt =   0 0 d   -dr z ,(4)\nand can be ignored in our problem setting." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was supported by JSPS KAKENHI Grant Number JP21K11932." }, { "figure_ref": [], "heading": "Head pose", "publication_ref": [], "table_ref": [], "text": "Seen Unseen w/o Separate Fusers 3.49°5.10°w /o Backbone Features 3.46°5.20°P roposed 3.50°4.95°T able 1. Ablation studies of learnable modules. We ablated the separate weight of the Fusers and 3D Feature Extractor." }, { "figure_ref": [], "heading": "A. Detailed Ablation Studies", "publication_ref": [], "table_ref": [], "text": "We perform ablation studies on several learnable modules of the proposed method to validate our design choice on Fuser. The first row (w/o Separate Fusers) in Table 1 corresponds to a variant of the proposed method where the Fusers in each block share the same weights. The model in the second row (w/o Backbone Features) uses the initial rotatable feature F (0) as input to Fusers instead of the backbone feature f . 
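Returning to the rotation-matrix definitions above, both claims in this appendix can be checked numerically on the rotation parts, for which transpose equals inverse, so the identities hold exactly. The snippet below is a small self-contained sanity check and not code from the paper; the distance d is an arbitrary example value.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rand_R = lambda: Rotation.random().as_matrix()  # random 3x3 rotation

C = rand_R()                      # rotation part of the camera extrinsics
H_ref_raw = rand_R()              # head pose in the original reference camera
H_tgt_raw = C @ H_ref_raw         # Eq. 1 (rotation part)
N_ref, N_tgt = rand_R(), rand_R() # normalization rotations for each view

H_ref = N_ref @ H_ref_raw         # Eq. 2: head poses after normalization
H_tgt = N_tgt @ H_tgt_raw

# Claim 1: the calibration-based and head-pose-based definitions of R coincide.
R_calib = N_tgt @ C @ N_ref.T
R_pose = H_tgt @ H_ref.T
assert np.allclose(R_calib, R_pose)

# Claim 2: t is fixed by R (Eq. 4) and keeps the gaze origin o = (0, 0, d) in place.
d = 0.6
t = np.array([0.0, 0.0, d]) - d * R_calib[:, 2]  # r_z is the third column of R
assert np.allclose(R_calib @ np.array([0.0, 0.0, d]) + t, [0.0, 0.0, d])
```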
This model, therefore, does not distinguish between rotatable and backbone features.\nWhile both methods perform on par with the proposed method under the seen setting, the proposed method shows superiority in the unseen setting. One possible explanation is that stacking different fusion blocks allows the model to focus on different patterns depending on the depth of the block and that the original backbone feature still contains valuable information for appearance-based gaze estimation." }, { "figure_ref": [], "heading": "B. Visualization of Rotatable Features", "publication_ref": [], "table_ref": [], "text": "In Fig. 1, we depict more Isomap embedding of the initial rotatable features from test subjects. Each Isomap embedding was generated from the features obtained from each target participant, and all other visualization details are consistent with the main paper. The visualization results confirm that the proposed method acquires personindependent rotatable feature representations.\nIn Fig. 2, we also show more scatter plot visualizations of the rotatable features from test subjects in the yaw-pitch coordinate system. We can consistently observe the tendency for feature distributions to converge before the first fusion block and then diverge in later blocks across different subjects. It can be seen that the proposed method dynamically updates rotatable features even with a slight rotation (the upper right example in Fig. 2)." }, { "figure_ref": [], "heading": "C. Baseline Implementation Details", "publication_ref": [], "table_ref": [], "text": "Unless otherwise noted, all baseline methods follow the same training hyperparameters as used for the proposed method in the main paper. We note that we did not tune the hyperparameters in favor of the proposed method. Instead, we used common choices, most of which already comply with ResNet and PureGaze. With the Cyclic LR scheduler, " } ]
Appearance-based gaze estimation has been actively studied in recent years. However, its generalization performance for unseen head poses is still a significant limitation for existing methods. This work proposes a generalizable multi-view gaze estimation task and a cross-view feature fusion method to address this issue. In addition to paired images, our method takes the relative rotation matrix between two cameras as additional input. The proposed network learns to extract rotatable feature representation by using relative rotation as a constraint and adaptively fuses the rotatable features via stacked fusion modules. This simple yet efficient approach significantly improves generalization performance under unseen head poses without significantly increasing computational cost. The model can be trained with random combinations of cameras without fixing the positioning and can generalize to unseen camera pairs during inference. Through experiments using multiple datasets, we demonstrate the advantage of the proposed method over baseline methods, including state-of-the-art domain generalization approaches.
Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation
[ { "figure_caption": "Figure 1 .1Figure 1. Overview of the proposed multi-view gaze estimation task. We estimate the 3D gaze direction from multiple synchronized images. The model can be generalized to unseen camera combinations unavailable during training by leveraging the relative rotation between cameras.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "module then fuses the rotated feature with the backbone feature f tgt of the target image to obtain an updated rotatable feature F (i) tgt of the target image. The network is symmetric and applies the same operation to update the rotatable feature of the reference image using the back-rotated reference feature R ⊤ F (i) tgt . The Gaze Estimator modules then output intermediate estimates of gaze directions g (i) tgt and g (i) ref using both backbone and rotatable features. The model repeats the above fusion process until l blocks, and the l-th estimated gaze g (l) tgt becomes the final output. The loss function L total is defined for all intermediate", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. The overview of the proposed network which consists of stacked rotation-constrained feature fusion blocks. The subscripts ref and tgt indicate reference and target images, respectively, and the superscripts denote block id. While network components share weights between the target and reference sides, they do not share weights across stacked blocks.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Visualization of the performance gain from multi-view input. The numbers on the x and y axis indicate the camera index in ETH-XGaze. The numbers and colors in the matrix indicate the mean gaze error between the gaze estimation errors of the proposed method and Single model.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "2 .2Ablation studies of the rotation encoding approaches and the number of fusion blocks. Inference time is benchmarked on a single NVIDIA V100 GPU. roposed (Calib.) 4.63°5.68°P roposed (Pose.) 3.63°7.12°T able 3. ETH-XGaze within-dataset evaluation using rotation matrices obtained from head pose without calibration. Each number indicates the mean angular error.", "figure_data": "", "figure_id": "fig_5", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Isomap embedding of the initial rotatable features. The right side shows the example input samples. F (0) ref , F (0) tgt , and RF (0) ref of the same sample are represented in the same color on the left side plot.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "FigFig. 4 shows the features F (0) ref , F (0) tgt , and RF (0)ref embedded in Isomap[49]. We use the XGaze dataset under the unseen head pose condition. Isomap embedding was generated from the initial rotatable features F (0) obtained from 1000 test samples with a neighborhood size of 30. Marker shape indicates different feature types, and color indicates the sample ID. The arrows indicate the feature position before and after rotation. Since the rotation is symmetric, we only visualize the rotation from the reference to the target. We can clearly observe that the rotation operation brings the feature closer to the other. 
This indicates that, as intended, the 3D Feature Extractor module learns to extract rotatable features through our rotation constraint.Fig.5shows an example of the rotatable features. In this plot, we interpret the rotatable features as a set of D 3D vectors and transform each 3D vector into the pitch-yaw coordinate system. The size and color of the dots represent the magnitude of the norm of the 3D vectors, and large yellow dots indicate vectors with larger norms. The distributions of RF", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Scatter plot visualization of the rotatable features. Each of the D 3D vectors is represented in a pitch-yaw coordinate system. Each row corresponds to the rotatable features at different fusion stages from F (0) to F(3) . Larger and yellower dots represent elements with a larger norm.", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Analysis of contributions of features from each view. The values represent the contribution ratio of the feature from the reference images, which is calculated as the sum of the gradient of the backbone features.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Example of paired images with their contribution to gaze estimation. The left and right images correspond to the target and the reference. The edge color shows the contribution ratio, where red indicates a higher contribution.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" } ]
Yoichiro Hisadome; Tianyi Wu; Jiawei Qin; Yusuke Sugano
[ { "authors": "Yasmeen Abdrabou; Ahmed Shams; Mohamed Omar Mantawy; Anam Ahmad Khan; Mohamed Khamis; Florian Alt; Yomna Abdelrahman", "journal": "", "ref_id": "b0", "title": "Gazemeter: Exploring the usage of gaze behaviour to enhance password assessments", "year": "2021" }, { "authors": "Nuri Murat Arar; Hua Gao; Jean-Philippe Thiran", "journal": "FG", "ref_id": "b1", "title": "Robust gaze estimation based on adaptive fusion of multiple cameras", "year": "2015" }, { "authors": "Nuri Murat; Arar ; Jean-Philippe Thiran", "journal": "", "ref_id": "b2", "title": "Robust real-time multi-view eye tracking", "year": "2017" }, { "authors": "Mihai Bace; Vincent Becker; Chenyang Wang; Andreas Bulling", "journal": "", "ref_id": "b3", "title": "Combining gaze estimation and optical flow for pursuits interaction", "year": "2020" }, { "authors": "Yiwei Bao; Yunfei Liu; Haofei Wang; Feng Lu", "journal": "", "ref_id": "b4", "title": "Generalizing gaze estimation with rotation consistency", "year": "2022" }, { "authors": "Zhaokang Chen; Bertram E Shi", "journal": "", "ref_id": "b5", "title": "Appearance-based gaze estimation using dilated-convolutions", "year": "2018" }, { "authors": "Yihua Cheng; Yiwei Bao; Feng Lu", "journal": "", "ref_id": "b6", "title": "Puregaze: Purifying gaze feature for generalizable gaze estimation", "year": "2022" }, { "authors": "Yihua Cheng; Shiyao Huang; Fei Wang; Chen Qian; Feng Lu", "journal": "", "ref_id": "b7", "title": "A coarse-to-fine adaptive network for appearancebased gaze estimation", "year": "2020" }, { "authors": "Yihua Cheng; Feng Lu", "journal": "", "ref_id": "b8", "title": "Gaze estimation using transformer", "year": "2022" }, { "authors": "Yihua Cheng; Feng Lu; Xucong Zhang", "journal": "", "ref_id": "b9", "title": "Appearancebased gaze estimation via evaluation-guided asymmetric regression", "year": "2018" }, { "authors": "Yihua Cheng; Haofei Wang; Yiwei Bao; Feng Lu", "journal": "", "ref_id": "b10", "title": "Appearance-based gaze estimation with deep learning: A review and benchmark", "year": "2021" }, { "authors": "Yihua Cheng; Xucong Zhang; Feng Lu; Yoichi Sato", "journal": "IEEE Transactions on Image Processing", "ref_id": "b11", "title": "Gaze estimation by exploring two-eye asymmetry", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b12", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b13", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "S M Ali Eslami; Danilo Jimenez Rezende; Frederic Besse; Fabio Viola; Ari S Morcos; Marta Garnelo; Avraham Ruderman; Andrei A Rusu; Ivo Danihelka; Karol Gregor; David P Reichert; Lars Buesing; Theophane Weber; Oriol Vinyals; Dan Rosenbaum; Neil Rabinowitz; Helen King; Chloe Hillier; Matt Botvinick; Daan Wierstra; Koray Kavukcuoglu; Demis Hassabis", "journal": "Science", "ref_id": "b14", "title": "Neural scene representation and rendering", "year": "2018" }, { "authors": "Arya Farkhondeh; Cristina Palmero; Simone Scardapane; Sergio Escalera", "journal": "", "ref_id": "b15", "title": "Towards self-supervised gaze estimation", "year": "2022" }, { "authors": "Wenxin Feng; Jiangnan Zou; Andrew 
Kurauchi; Carlos H Morimoto; Margrit Betke", "journal": "", "ref_id": "b16", "title": "Hgaze typing: Head-gesture assisted gaze typing", "year": "2021" }, { "authors": "Tobias Fischer; Hyung ; Jin Chang; Yiannis Demiris", "journal": "", "ref_id": "b17", "title": "RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments", "year": "2018" }, { "authors": "Kenneth Alberto; Funes Mora; Florent Monay; Jean-Marc Odobez", "journal": "ETRA", "ref_id": "b18", "title": "Eyediap: A database for the development and evaluation of gaze estimation algorithms from rgb and rgb-d cameras", "year": "2014" }, { "authors": "Guy Gafni; Justus Thies; Michael Zollhöfer; Matthias Nießner", "journal": "", "ref_id": "b19", "title": "Dynamic neural radiance fields for monocular 4d facial avatar reconstruction", "year": "2021" }, { "authors": "Shreya Ghosh; Abhinav Dhall; Munawar Hayat; Jarrod Knibbe; Qiang Ji", "journal": "", "ref_id": "b20", "title": "Automatic gaze analysis: A survey of deep learning based approaches", "year": "2021" }, { "authors": "John Gideon; Shan Su; Simon Stent", "journal": "", "ref_id": "b21", "title": "Unsupervised multi-view gaze representation learning", "year": "2022" }, { "authors": "Xiaodong Gu; Zhiwen Fan; Siyu Zhu; Zuozhuo Dai; Feitong Tan; Ping Tan", "journal": "", "ref_id": "b22", "title": "Cascade cost volume for high-resolution multi-view stereo and stereo matching", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b23", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Gao Huang; Zhuang Liu; Laurens Van Der Maaten; Kilian Q Weinberger", "journal": "", "ref_id": "b24", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "Petr Kellnhofer; Adria Recasens; Simon Stent; Wojciech Matusik; Antonio Torralba", "journal": "", "ref_id": "b25", "title": "Gaze360: Physically unconstrained gaze estimation in the wild", "year": "2019-10" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b26", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Rakshit Kothari; Shalini De Mello; Umar Iqbal; Wonmin Byeon; Seonwook Park; Jan Kautz", "journal": "", "ref_id": "b27", "title": "Weakly-supervised physically unconstrained gaze estimation", "year": "2021" }, { "authors": "Kyle Krafka; Aditya Khosla; Petr Kellnhofer; Harini Kannan; Suchendra Bhandarkar; Wojciech Matusik; Antonio Torralba", "journal": "", "ref_id": "b28", "title": "Eye tracking for everyone", "year": "2016" }, { "authors": "Dongze Lian; Lina Hu; Weixin Luo; Yanyu Xu; Lixin Duan; Jingyi Yu; Shenghua Gao", "journal": "TNNLS", "ref_id": "b29", "title": "Multiview multitask gaze estimation with deep convolutional neural networks", "year": "2019" }, { "authors": "Lingjie Liu; Jiatao Gu; Kyaw Zaw Lin; Tat-Seng Chua; Christian Theobalt", "journal": "", "ref_id": "b30", "title": "Neural sparse voxel fields", "year": "2020" }, { "authors": "Yunfei Liu; Ruicong Liu; Haofei Wang; Feng Lu", "journal": "", "ref_id": "b31", "title": "Generalizing gaze estimation with outlier-guided collaborative adaptation", "year": "2021" }, { "authors": "Xinjun Ma; Yue Gong; Qirui Wang; Jingwei Huang; Lei Chen; Fan Yu", "journal": "", "ref_id": "b32", "title": "Epp-mvsnet: Epipolar-assembling based depth prediction for multi-view stereo", "year": "2021" }, { "authors": "Ricardo Martin-Brualla; Noha Radwan; S M Mehdi; Jonathan T Sajjadi; Alexey Barron; Daniel Dosovitskiy; 
Duckworth", "journal": "", "ref_id": "b33", "title": "NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections", "year": "2021" }, { "authors": "Christopher D Mcmurrough; Vangelis Metsis; Jonathan Rich; Fillia Makedon", "journal": "ETRA", "ref_id": "b34", "title": "An eye tracking dataset for point of gaze detection", "year": "2012" }, { "authors": "Marko Mihajlovic; Aayush Bansal; Michael Zollhoefer; Siyu Tang; Shunsuke Saito", "journal": "", "ref_id": "b35", "title": "KeypointNeRF: Generalizing image-based volumetric avatars using relative spatial encoding of keypoints", "year": "2022" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b36", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Vinod Nair; Geoffrey E Hinton", "journal": "", "ref_id": "b37", "title": "Rectified linear units improve restricted boltzmann machines", "year": "2010" }, { "authors": "Takehiko Ohno; Naoki Mukawa", "journal": "", "ref_id": "b38", "title": "A free-head, simple calibration, gaze tracking system that enables gaze-based interaction", "year": "2004" }, { "authors": "Seonwook Park; Shalini De Mello; Pavlo Molchanov; Umar Iqbal; Otmar Hilliges; Jan Kautz", "journal": "", "ref_id": "b39", "title": "Few-shot adaptive gaze estimation", "year": "2019" }, { "authors": "Seonwook Park; Adrian Spurr; Otmar Hilliges", "journal": "", "ref_id": "b40", "title": "Deep pictorial gaze estimation", "year": "2018" }, { "authors": "Jiawei Qin; Takuru Shimoyama; Yusuke Sugano", "journal": "", "ref_id": "b41", "title": "Learning-by-novel-view-synthesis for full-face appearancebased 3d gaze estimation", "year": "2022" }, { "authors": "Ravikrishna Ruddarraju; Antonio Haro; Kris Nagel; Irfan A Quan T Tran; Gregory Essa; Elizabeth D Abowd; Mynatt", "journal": "", "ref_id": "b42", "title": "Perceptual user interfaces using vision-based eye tracking", "year": "2003" }, { "authors": "Alessandro Ruzzi; Xiangwei Shi; Xi Wang; Gengyan Li; Shalini De Mello; Hyung ; Jin Chang; Xucong Zhang; Otmar Hilliges", "journal": "", "ref_id": "b43", "title": "Gazenerf: 3d-aware gaze redirection with neural radiance fields", "year": "2022" }, { "authors": "Sheng- ; Wen Shih; Jin Liu", "journal": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)", "ref_id": "b44", "title": "A novel approach to 3-d gaze tracking using stereo cameras", "year": "2004" }, { "authors": "Ashish Shrivastava; Tomas Pfister; Oncel Tuzel; Joshua Susskind; Wenda Wang; Russell Webb", "journal": "", "ref_id": "b45", "title": "Learning from simulated and unsupervised images through adversarial training", "year": "2017" }, { "authors": "Leslie N Smith", "journal": "", "ref_id": "b46", "title": "Cyclical learning rates for training neural networks", "year": "2017" }, { "authors": "Yusuke Sugano; Yasuyuki Matsushita; Yoichi Sato", "journal": "", "ref_id": "b47", "title": "Learning-by-synthesis for appearance-based 3d gaze estimation", "year": "2014" }, { "authors": "Joshua B Tenenbaum; Vin De Silva; John C Langford", "journal": "Science", "ref_id": "b48", "title": "A global geometric framework for nonlinear dimensionality reduction", "year": "2000" }, { "authors": "Akira Utsumi; Kotaro Okamoto; Norihiro Hagita; Kazuhiro Takahashi", "journal": "", "ref_id": "b49", "title": "Gaze tracking in wide area using multiple camera observations", "year": "2012" }, { "authors": "Ashish Vaswani; Noam 
Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b50", "title": "Attention is all you need", "year": "2017" }, { "authors": "Jiayu Yang; Wei Mao; Jose M Alvarez; Miaomiao Liu", "journal": "", "ref_id": "b51", "title": "Cost volume pyramid based depth inference for multi-view stereo", "year": "2020" }, { "authors": "Yao Yao; Zixin Luo; Shiwei Li; Tian Fang; Long Quan", "journal": "", "ref_id": "b52", "title": "Mvsnet: Depth inference for unstructured multi-view stereo", "year": "2018" }, { "authors": "Jiawu Pengwei Yin; Jingjing Dai; Di Wang; Shiliang Xie; Pu", "journal": "", "ref_id": "b53", "title": "Nerf-gaze: A head-eye redirection parametric model for gaze estimation", "year": "2022" }, { "authors": "Yu Yu; Gang Liu; Jean-Marc Odobez", "journal": "", "ref_id": "b54", "title": "Improving fewshot user-specific gaze adaptation via gaze redirection synthesis", "year": "2019" }, { "authors": "Yu Yu; Jean-Marc Odobez", "journal": "", "ref_id": "b55", "title": "Unsupervised representation learning for gaze estimation", "year": "2020-06" }, { "authors": "Xucong Zhang; Seonwook Park; Thabo Beeler; Derek Bradley; Siyu Tang; Otmar Hilliges", "journal": "", "ref_id": "b56", "title": "Eth-xgaze: large scale dataset for gaze estimation under extreme head pose and gaze variation", "year": "2020" }, { "authors": "Seonwook Zhang; Anna Maria Park; Feit", "journal": "Springer International Publishing", "ref_id": "b57", "title": "Eye Gaze Estimation and Its Applications", "year": "2021" }, { "authors": "Xucong Zhang; Yusuke Sugano; Andreas Bulling", "journal": "", "ref_id": "b58", "title": "Revisiting data normalization for appearance-based gaze estimation", "year": "2018" }, { "authors": "Xucong Zhang; Yusuke Sugano; Andreas Bulling", "journal": "", "ref_id": "b59", "title": "Evaluation of appearance-based methods and implications for gaze-based applications", "year": "2019" }, { "authors": "Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling", "journal": "", "ref_id": "b60", "title": "Appearance-based gaze estimation in the wild", "year": "2015" }, { "authors": "Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling", "journal": "", "ref_id": "b61", "title": "It's written all over your face: Full-face appearancebased gaze estimation", "year": "2017" }, { "authors": "Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling", "journal": "TPAMI", "ref_id": "b62", "title": "Mpiigaze: Real-world dataset and deep appearancebased gaze estimation", "year": "2019" }, { "authors": "Yufeng Zheng; Seonwook Park; Xucong Zhang; Shalini De Mello; Otmar Hilliges", "journal": "", "ref_id": "b63", "title": "Self-learning transformations for improving gaze and head redirection", "year": "2020" }, { "authors": "Zhun Zhong; Liang Zheng; Guoliang Kang; Shaozi Li; Yi Yang", "journal": "", "ref_id": "b64", "title": "Random erasing data augmentation", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 402.91, 279.21, 39.74, 14.3 ], "formula_id": "formula_0", "formula_text": "F (0) ref , F(0)" }, { "formula_coordinates": [ 4, 160.47, 93.08, 143.99, 155.21 ], "formula_id": "formula_1", "formula_text": "𝐠 #$% (') 𝐑 ) 𝐅 !\"* (+) 𝐑𝐅 #$% (+) 𝐟 !\"! 𝐟 #$% 𝐅 !\"! (') 𝐅 #$% (') 𝐅 !\"! (+) 𝐅 #$% (+)" }, { "formula_coordinates": [ 4, 488.47, 119.63, 9.8, 9.28 ], "formula_id": "formula_2", "formula_text": "𝐠 !\"! (,)" }, { "formula_coordinates": [ 4, 398.68, 93.01, 99.53, 155.21 ], "formula_id": "formula_3", "formula_text": "𝐠 #$% (,) 𝐑 ) 𝐅 !\"! (,-') 𝐑𝐅 #$% (,-') 𝐟 !\"! 𝐟 #$% 𝐅 !\"!(" }, { "formula_coordinates": [ 4, 100.09, 343.88, 186.28, 30.32 ], "formula_id": "formula_4", "formula_text": "L total = l i=1 α l-i • L i (g (i) tgt , g (i) ref ).(1)" }, { "formula_coordinates": [ 4, 75.15, 403.42, 211.22, 14.3 ], "formula_id": "formula_5", "formula_text": "L i = arccos (g (i)⊤ tgt ĝtgt ) + arccos (g (i)⊤ ref ĝref ),(2)" }, { "formula_coordinates": [ 4, 50.11, 632.43, 236.25, 22.96 ], "formula_id": "formula_6", "formula_text": "as R = N tgt RN ⊤ ref ." }, { "formula_coordinates": [ 12, 50.11, 74.09, 464.58, 405.37 ], "formula_id": "formula_7", "formula_text": "𝐅 !\"! 𝐑𝐅 #$% 𝐅 #$% Yaw Pitch Block 0 Block 1 Block 2 Block 3 𝐈 !\"! 𝐈 #$% 𝐅 !\"! 𝐑𝐅 #$% 𝐅 #$% Yaw Pitch Block 0 Block 1 Block 2 Block 3 𝐅 !\"! 𝐑𝐅 #$% 𝐅 #$% Yaw Pitch Block 0 Block 1 Block 2 Block 3 𝐈 !\"! 𝐈 #$% 𝐈 !\"! 𝐈 #$% 𝐅 !\"! 𝐑𝐅 #$% 𝐅 #$% Yaw Pitch Block 0 Block 1 Block 2 Block 3 𝐈 !\"! 𝐈 #$% Figure 2" }, { "formula_coordinates": [ 13, 278.62, 231.12, 7.74, 8.64 ], "formula_id": "formula_8", "formula_text": ")1" }, { "formula_coordinates": [ 13, 146.76, 298.95, 139.6, 11.5 ], "formula_id": "formula_9", "formula_text": "H = N Ĥ.(2)" }, { "formula_coordinates": [ 13, 80.94, 345.74, 174.1, 44.41 ], "formula_id": "formula_10", "formula_text": "N tgt CN ⊤ ref = N tgt Ĥtgt Ĥ⊤ ref N ⊤ ref = (N tgt Ĥtgt )(N ref Ĥref ) ⊤ = H tgt H ⊤ ref ." }, { "formula_coordinates": [ 13, 99.55, 558.1, 186.81, 20.72 ], "formula_id": "formula_11", "formula_text": "o tgt = R t 0 1 o ref = o ref .(3)" }, { "formula_coordinates": [ 13, 91.08, 640.26, 152.66, 74.06 ], "formula_id": "formula_12", "formula_text": "    0 0 d 1     = r x r y r z t 0 1     0 0 d 1     = dr z + t 1 ." }, { "formula_coordinates": [ 13, 392.09, 107.03, 153.02, 34.21 ], "formula_id": "formula_13", "formula_text": "t =   0 0 d   -dr z ,(4)" } ]
10.18653/v1/2020.acl-main.485
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Sentiment analysis (SA) has many practical applications, leading to widespread interest in using it for many languages. SA is naturally framed as a supervised learning problem, but substantial amounts of supervised training data exist in only a handful of languages. Since creating supervised training data in a new language is costly, two transfer learning strategies are commonly used to reduce its cost, or even to avoid it altogether. The first, which reduces cost, is monolingual transfer: we pre-train an unsupervised model on a large corpus in the target language, fine-tune on a small amount of supervision data in that language, and apply the model 1 https://github.com/seraphinatarrant/ multilingual_sentiment_analysis" }, { "figure_ref": [], "heading": "その人との会話はむかつかた。", "publication_ref": [ "b14", "b6", "b29", "b37", "b36", "b10", "b23", "b38", "b34", "b22", "b25", "b17", "b10" ], "table_ref": [], "text": "The conversation with that person is annoying.\nThe conversation with that Korean person is annoying. in that language (Gururangan et al., 2020). The second, which avoids annotation cost altogether, is zero-shot cross-lingual transfer: we pre-train an unsupervised model on a large corpus in many languages, fine-tune on already available supervision data in a high-resource language, and use the model directly in the target language (Eisenschlos et al., 2019;Ranasinghe and Zampieri, 2020). While transfer learning strategies can be used to avoid annotation costs, we hypothesised that they may incur other costs in the form of bias. It is well-known that high-resource SA models exhibit gender and racial biases (Kiritchenko and Moham-mad, 2018;Thelwall, 2018;Sweeney and Najafian, 2020). Less is known about bias in other languages. A recent study found that SA models trained with monolingual transfer were less biased than those trained without any transfer learning (Goldfarb-Tarrant et al., 2023). As far as we are aware, there is no work that studies the effect of cross-lingual transfer on bias.\nBut there is good reason to hypothesise that cross-lingual transfer may introduce new biases. Specific cultural meanings, multiple word senses, and dialect differences often contribute to errors in multilingual SA systems (Mohammad et al., 2016;Troiano et al., 2020), and are also sources of bias (Sap et al., 2019). For example, the English word foreigner translates to the Japanese word gaijin (外 人) which has approximately the same meaning, but more negative sentiment. Bias may also arise from differences in what is explicitly expressed. For example, there is evidence that syntactic gender agreement increases gender information in representations (Gonen et al., 2019a;McCurdy and Serbetci, 2017), and there is also evidence that gender information in representations correlates with gender bias (Orgad et al., 2022). From these facts, we hypothesise that multilingual pre-training on languages with gender agreement will produce more gender bias in target languages without gender agreement, while producing less bias in target languages with gender agreement.\nIn this paper, we conduct the first investigation of biases imported by cross-lingual transfer, answering the following research questions: (RQ1) What biases are imported via cross-lingual transfer, compared to those found in monolingual transfer? 
When using cross-lingual transfer, are observed biases explained by the pre-training data, or by the cross-lingual supervision data? Since practical systems often use distilled models, we also ask: (RQ2) Do distilled transfer models show the same trends as standard ones?\nWe investigate these questions via counterfactual evaluation, in which test examples are edited to change a single variable of interest-such as the race of the subject-so that any change in model behaviour can be attributed to that edit. We use the counterfactual evaluation benchmarks of Kiritchenko and Mohammad (2018) and an extension of it (Goldfarb-Tarrant et al., 2023) to test for gender, racial, and immigrant bias in five languages: Japanese (ja), simplified Chinese (zh), Spanish (es), German (de), and English (en). The first four languages cover three different language families, that all have fewer sentiment analysis resources then English; including English in the study enables us to compare to previous work. We find that:\n1. Zero-shot multilingual transfer generally increases bias compared to monolingual models. Racial bias in particular changes dramatically.\n2. The increase in bias in cross-lingual transfer is largely, but not entirely attributable to the multilingual pre-training data, rather than crosslingual supervision data.\n3. As hypothesised, gender bias is influenced by multilingual pre-training in directions that are predictable by the presence or absence of syntactic gender agreement in the target language.\n4. Compressing models via distillation often reduces bias, but not always.\nWe conclude with a set of recommendations to test for bias in zero-shot cross-lingual transfer learning, to create more resources to allow testing, and to expand bias research outside of English. We release all models and code used for our experiments, to facilitate further research. 1 2 Background" }, { "figure_ref": [], "heading": "Cross-lingual Transfer", "publication_ref": [ "b30", "b27", "b43" ], "table_ref": [], "text": "The aim of transfer learning is to leverage a plentiful resource to bootstrap learning for a task with few resources. Cross-lingual transfer learning (Ruder et al., 2019;Pires et al., 2019;Wu and Dredze, 2019) extends this idea to transferring across languages. It works by pre-training a model on text in many languages, including both the target language and one or more additional languages with substantial resources in the target task. For example, we pre-train a model on a multilingual web crawl containing both English and Japanese, and fine-tune on many English reviews (plentiful resource). We then assume that since the model knows about both Japanese and polarity detection, it can be applied to the task even though it has never seen examples of polarity detection in Japanese. We call this zero-shot cross-lingual transfer (ZS-XLT). An alternative approach is few-shot transfer, where we also use a very small amount of targetlanguage supervision. We focus on zero-shot transfer because it makes clear any causal link between multilingual training and bias transfer." 
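To ground this setup, the sketch below shows a typical ZS-XLT recipe with Hugging Face Transformers: a multilingual encoder is fine-tuned on English star-rating supervision only and then applied unchanged to other languages. The model and dataset identifiers, column names, and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-multilingual-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)

# English review data with 1-5 star labels (illustrative dataset identifier).
reviews = load_dataset("amazon_reviews_multi", "en", split="train")

def preprocess(batch):
    enc = tokenizer(batch["review_body"], truncation=True, max_length=200)
    enc["labels"] = [s - 1 for s in batch["stars"]]  # 1-5 stars -> class ids 0-4
    return enc

train_set = reviews.map(preprocess, batched=True, remove_columns=reviews.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="zs_xlt", num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=train_set,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
# The resulting checkpoint is evaluated directly on Japanese, Chinese, Spanish and
# German counterfactual sentences, with no fine-tuning in those languages.
```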
}, { "figure_ref": [], "heading": "Counterfactual Evaluation", "publication_ref": [ "b26", "b18" ], "table_ref": [], "text": "Counterfactual evaluation is an approach that allows us to establish causal attribution: a single input variable is modified at a time, so that one can be sure that any changes in the output are due to that change (Pearl, 2009).\nBenchmarks for evaluating model fairness with this strategy are constructed so that model predictions should be invariant to changes in a demographic or protected variable such as race or gender (Kusner et al., 2017).2 For example, the sentiment scores of The conversation with that boy was irritating and The conversation with that girl was irritating should be equal. If there is a systematic difference in predicted sentiment scores between such pairs of sentences, we conclude that our model is biased. Biased models for sentiment analysis are likely to propagate representational harm (Crawford, 2017) by systematically associating minoritised groups with more negative sentiment. They also can propagate allocational harm by being less stable at sentiment prediction in the presence of certain demographic information. Sentiment analysis is often a component of another application, so the specific harm depends on the application." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [ "b28" ], "table_ref": [], "text": "We treat sentiment polarity detection as a five-way classification problem: very negative (1), negative (2), neutral (3), positive (4), or very positive (5). In figures, we refer to these classes to using symbols --, -, 0, + and ++. This ordinal labeling scheme is commonly used when systems are trained on user reviews with a star rating (Poria et al., 2020).\nWe train monolingual and cross-lingual models, then evaluate them on counterfactual corpora and compare their differences in bias measures. We look at both average bias using aggregate metrics; and granular bias using a contingency table of counterfactuals. This enables us to build an overall picture of model comparability and also to differentiate between models with identical aggregate bias but different behaviour -some models may make many small errors, and some may make few large errors, and this may matter for minimising real world harms." }, { "figure_ref": [], "heading": "Evaluation Benchmarks", "publication_ref": [ "b17", "b3", "b17", "b10", "b24", "b2", "b41", "b11", "b32" ], "table_ref": [ "tab_0", "tab_0" ], "text": "To evaluate social bias in our experiments, we use multiple different counterfactual benchmarks. Table 1 contains examples from all datasets. For English, we use the counterfactual corpus of Kiritchenko and Mohammad (2018), which covers binary gender bias, and racial bias. Gender is represented by common gender terms (he, she, sister, brother), and African American race is represented by African-American first names contrasted with European American ones, derived from Caliskan et al. (2017). For non-English language benchmarks, we use a corpus which follows the methodology of Kiritchenko and Mohammad (2018) to create the same kind of benchmark in German, Spanish, Japanese, and Chinese, carefully extended to respect linguistic and cultural specifics of those languages (Goldfarb-Tarrant et al., 2023). All languages have a test for gender bias, where gender is binary and is similarly represented by common gender terms. 
The German resource covers anti-immigrant bias, using identity terms identified by governmental and NGO resources as immigrant categories that are targets of hate (Muigai, 2010; FADA). The Japanese resource covers bias against racial minorities, using identity terms from sociology resources (Buckley, 2006;Weiner, 2009). The Spanish resource tests anti-immigrant bias via name proxies of immigrant first names, taken from Goldfarb-Tarrant et al. (2021) based on the social science research of Salamanca and Pereira (2013). The benchmark provides only gender bias tests for Chinese.\nIn all datasets, counterfactual pairs are generated from template sentences (Table 1) that vary both the counterfactual and the sentiment polarity, by using placeholders for demographic words and emotion words, respectively." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b17" ], "table_ref": [], "text": "We need an aggregate measure of overall bias and a way to look at results in more detail. For our aggregate metric, we measure the difference in sentiment score between each pair of counterfactual sentences, and then analyse the mean and variance over all pairs. Formally, each corpus consists of n sentences, S = {s_1, ..., s_n}, and a demographic variable A = {a, b}, where a is the privileged class (male, or privileged / unmarked race) and b is the minoritised class (female, or racial minority). The sentiment classifier produces a score R for each sentence, and our aggregate measure of bias is:\n$\frac{1}{N} \sum_{i=1}^{n} \bigl( R(s_i \mid A = a) - R(s_i \mid A = b) \bigr)$\nIn this formulation, values greater than zero indicate bias against the minoritised group, values less than zero indicate bias against the privileged group, and zero indicates no bias. Scores are discrete integers ranging from 1 to 5, so the range of possible values is -4 to 4. For example, if a sentence received a score of 4 with the male demographic term, and a score of 1 with the female demographic term, then the score gap for that example is 3.\nTo put our results in context, Kiritchenko and Mohammad (2018) found the average bias of a system to be ≤ 3% of the output score range, which corresponds to a gap of 0.12 on our scale. In practice, this is equivalent to reducing the sentiment score by one for twelve out of every hundred reviews mentioning a minoritised group, or to flipping the score from maximally positive to maximally negative for three out of every hundred.\nFor more granular analysis, we examine contingency tables of privileged vs. minoritised scores for each example. This enables us to distinguish between many minor changes in sentiment and fewer large changes, which are otherwise obscured by aggregate metrics as described above.3 " }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b42", "b16", "b5", "b33", "b16", "b35", "b9", "b0" ], "table_ref": [], "text": "Our goal is to simulate practical conditions as much as possible with the available resources and datasets, so we start with pre-trained models from huggingface (Wolf et al., 2020) which are commonly used in sentiment benchmarks and previous work on our data. 4 We then fine-tune these models on supervised training data for the polarity detection task and apply them to the counterfactual evaluation set in the target language. Both monolingual and multilingual models have parameter counts and fine-tuning procedures that are as similar as possible, to minimise confounds while being realistic (Appendix A). 
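Before turning to the training details, the aggregate metric and the contingency-table analysis above can be made concrete in a few lines of code; the function and variable names below are ours, not the paper's.

```python
import numpy as np

def aggregate_bias(scores_privileged, scores_minoritised):
    """Mean and variance of the score gap over counterfactual pairs: positive mean
    values indicate bias against the minoritised group; each gap lies in [-4, 4]."""
    gaps = np.asarray(scores_privileged) - np.asarray(scores_minoritised)
    return gaps.mean(), gaps.var()

def contingency_table(scores_privileged, scores_minoritised, n_labels=5):
    """n_labels x n_labels table of (privileged score, minoritised score) counts;
    an unbiased model would put all mass on the diagonal."""
    table = np.zeros((n_labels, n_labels), dtype=int)
    for a, b in zip(scores_privileged, scores_minoritised):
        table[a - 1, b - 1] += 1  # labels are 1-5
    return table

# Toy usage: three counterfactual pairs scored by a classifier.
priv = [4, 3, 5]
mino = [1, 3, 5]
print(aggregate_bias(priv, mino))   # (1.0, 2.0)
print(contingency_table(priv, mino))
```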
Models are fine-tuned until convergence using early stopping on the development set. All models (multilingual and monolingual) converge to equivalent performance as previous work (Keung et al., 2020). F1 scores and steps to convergence are included in Appendix B. Monolingual transfer (mono-T) models are based on pre-trained bert-base-uncased (Devlin et al., 2018) in the target language. We randomly initialise a linear classification layer, then simultaneously train it and fine-tune the language model on monolingual supervision data. Our distilled monolingual model (distilmono-T) is identical, except that it is based on distilbert-base-uncased (Sanh et al., 2019). Multilingual models are based on pre-trained mbert-base-uncased then fine-tuned on a large volume of data in English only, the standard approach to zero-shot cross-lingual transfer (ZS-XLT). We also fine-tune a distilled ZS-XLT model (distil-ZS-XLT), identical except that it is based on distilmbert-base-uncased. Since it is not trained on target language data, we apply the same ZS-XLT model to each target language. As an ablation, we also train mono-XLT models (one per language) based on mbert-base-uncased pre-training data and fine-tuned on target language supervision. Although this setup is atypical, it enables us to determine whether changes in behaviour between the mono-T and ZS-XLT models are attributable to multilingual pre-training data, English supervision data, or both.\nFine-tuning data. Each mono-T and mono-XLT model is fine-tuned on the target language subset of the Multilingual Amazon Reviews Corpus (MARC; Keung et al., 2020), which contains 200-word reviews in English, Japanese, German, French, Chinese and Spanish, with discrete polarity labels ranging from 1-5, balanced across labels. We use the provided train/dev/test splits of 200k, 5k, 5k examples in each language). The ZS-XLT model is fine-tuned on the US segment of the Amazon Customer reviews corpus.5 This dataset is not balanced across labels,6 so we balance it by downsampling overrepresented labels to match the maximum number of the least frequent label, in order to make the label distribution identical to that of the mono-T and mono-XLT fine-tuning data. After balancing we have a dataset of 2 million reviews (ten times more than monolingual training data), which we then concatenate with the English subset of MARC. We fix the random seed for the data shuffle to be the same across all fine-tuning runs. Since our pre-training data is from Wikipedia and Common-Crawl, Paracrawl, or the target language equivalent, there is a domain shift between pre-training and fine-tuning data, and between fine-tuning and evaluation data, which are more similar to the pretraining; domain mismatches are common in SA.\nWe train each model five times with different random seeds and then ensemble by taking their majority vote, a standard procedure to reduce variance. In our initial experiments, we observed that bias varied substantially across different random initialisations on our out-of-domain counterfactual corpora, despite stable performance on our in-domain training/eval/test data. Previous work has also found different seeds with identical in-domain performance to have wildly variable out-of-domain results (Mc-Coy et al., 2020) and bias (Sellam et al., 2022) and theorised that different local minima may have differing generalisation performance. 
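The label balancing and seed ensembling described above amount to the following sketch, assuming the reviews sit in a pandas DataFrame with a "stars" column; the function names are ours.

```python
import numpy as np
import pandas as pd

def balance_by_downsampling(df: pd.DataFrame, label_col: str = "stars",
                            seed: int = 0) -> pd.DataFrame:
    """Downsample every label to the count of the least frequent one, matching
    the balanced label distribution of the MARC fine-tuning data."""
    n_min = df[label_col].value_counts().min()
    balanced = (df.groupby(label_col, group_keys=False)
                  .apply(lambda g: g.sample(n=n_min, random_state=seed)))
    return balanced.sample(frac=1.0, random_state=seed)  # fixed-seed shuffle

def majority_vote(seed_predictions: np.ndarray) -> np.ndarray:
    """Ensemble predictions from several seeds (shape [n_seeds, n_examples],
    labels 1-5) by a per-example majority vote; ties resolve to the lowest label."""
    count = lambda col: np.bincount(col, minlength=6)[1:].argmax() + 1
    return np.apply_along_axis(count, axis=0, arr=seed_predictions)
```

Majority voting over the five seeds reduces, but does not remove, the seed-to-seed variance on out-of-domain data discussed above.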
To combat this generalisation problem, we use classifier dropout in all of our neural models, which is theoretically equivalent to a classifier ensembling approach (Gal and Ghahramani, 2016;Baldi and Sadowski, 2013)." }, { "figure_ref": [ "fig_1", "fig_3", "fig_3", "fig_3", "fig_3", "fig_1", "fig_1", "fig_4" ], "heading": "Results", "publication_ref": [ "b19", "b22", "b31" ], "table_ref": [], "text": "We examine whether system bias is affected by a decision to use zero-shot cross-lingual transfer (ZS-XLT) instead of monolingual transfer. There are two potential sources of bias in ZS-XLT: from the multilingual pre-training, or from the English supervision. Bias from pre-training is of most concern, since it could influence many other types of multilingual models. To tease them apart, we look at the mono-XLT system: if it has higher bias than the mono-T model, then we can conclude that bias is imported from the multilingual pre-training data. If the ZS-XLT model is more biased than the mono-XLT model, then we can conclude that bias is imported from the cross-lingual supervision.\n5.1 RQ1: How does bias compare between monolingual models and ZS-XLT models? Are observed changes from pre-training or from supervision?\nFigure 2 shows a comparison between mono-T, mono-XLT, and ZS-XLT models.\nWhich transfer learning strategy introduces more bias? Our results show that ZS-XLT models have equal or greater bias than monolingual models; bias often worsens, sometimes dramatically. This contrasts with a recent study showing that pre-trained models are less biased than models without pre-training (Goldfarb-Tarrant et al., 2023): our results show that cross-lingual zero-shot transfer exacerbates biases, even though these models are trained on much more data than the monolingual transfer models.\nAre biases imported from the multilingual pretraining data, or the English supervision data?\nThe pattern is unfortunately not consistent. More frequently, the multilingual model causes a large difference in bias, but not always. For Japanese, German, Spanish, and English gender bias, the multilingual model causes the most change, but for Chinese, the English data causes it. For German racial bias the multilingual model causes a huge jump in bias, but for Spanish, the English data does.\nOverall, it is the multilingual pre-training, rather than the supervision data, that causes the larger increase in bias. This is on the one hand not very surprising, as there is a great deal of discriminatory content in multilingual pre-training data (Luccioni and Viviano, 2021), likely much more than in sentiment analysis supervision data. However, it is a novel finding, since it means that either negative social biases can transfer between languages, or that some artifact of multilingual training increases bias.\nWhat different behaviours are behind these changes? To examine model differences in more detail, we create contingency tables to find the patterns in bias behaviour. An unbiased model would have all values on the diagonal. We display a subset of contingency tables in Figure 3, illuminating both differences in bias patterns underlying similar bias levels and the causes of extreme changes in aggregate bias, as we see with German. 
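A sketch of how such a contingency table can be built from paired predictions, using pandas; the function and variable names are ours, and the released analysis code may differ.

```python
import pandas as pd

LABELS = [1, 2, 3, 4, 5]

def bias_contingency(privileged_scores, minoritised_scores):
    """Cross-tabulate predicted sentiment for the privileged variant (rows)
    against the minoritised variant (columns) of each counterfactual pair.
    An unbiased model puts all counts on the diagonal; mass below the
    diagonal is bias against the minoritised group, above it is bias
    against the privileged group."""
    priv = pd.Categorical(privileged_scores, categories=LABELS)
    minor = pd.Categorical(minoritised_scores, categories=LABELS)
    return pd.crosstab(priv, minor, rownames=["privileged"],
                       colnames=["minoritised"], dropna=False)

# Toy example: one pair flips from maximally positive to maximally negative.
print(bias_contingency([5, 4, 3, 5], [5, 3, 3, 1]))
```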
The complete set appears in Appendix C. In the aggregate metric for Japanese gender bias, we can see that the model goes from nearly no bias in mono-T to significant anti-male bias in both mono-XLT and ZS-XLT models. Figure 3a shows three different patterns of behaviour, one for each of the three models. The leftmost matrix shows that the mono-T model displays equivalent bias in most areas and across most labels: there are few counterfactual errors in total, and those that do occur are evenly distributed. The introduction of multilingual training with the mono-XLT model increases aggregate bias, but not uniformly: it is largely accounted for by changes from neutral to positive or negative sentiment; it does not flip positive to negative sentiment or vice versa. The ZS-XLT model has less overall bias, but the source of it is different: the model overpredicts extremely positive sentiment for female examples (right vertical bar of matrix).\nFigure 3b shows the less frequent case of an increase in bias from the supervision data rather than the multilingual pre-training. The mono-T model has some bias, but in a way that is driven by minor changes, with the sentiment changing by only one ordinal label (blue clustered around the diagonal). The mono-XLT model, in the middle, is quite similar, but the failures are slightly more broadly distributed. The ZS-XLT model has extremely different behaviour from the mono-T model. The aggregate bias is similar (though of flipped polarity and higher variance), but the failures under the counterfactual frequently flip between extremes. Even for similar levels of aggregate bias, the mono-T Chinese model is likely to be better; the errors that it makes are more reasonable than the ZS-XLT ones, which are more concerningly wrong.\nFigure 3c presents an analysis of the unusual behaviour of the German cross-lingual models when evaluated for racial biases. We can see that the mono-XLT model inaccurately predicts maximally negative sentiment for racially minoritised groups (bottom row of matrix), and this underlies the huge increase in racial bias between the mono-T and mono-XLT models that we see in Figure 2. The ZS-XLT model ameliorates this behaviour, and brings the pattern closer to that of the mono-T model, but remains more biased overall than mono-T, since many of the errors are extreme flips from maximally positive to negative (lower left corner cell of matrix). As well as having less aggregate bias, the mono-T model is again the only one that shows reasonable behaviour under the counterfactual.\nThe Case of Gender The difference between mono-T and mono-XLT is generally small for race, except in German, and large for gender (Figure 2). This demonstrates that bias from a language included in pre-training can appear in a model targeted at a different language.\nThe larger effect on gender than on race is as we expected, both because gender biases are less culturally specific than racial biases, and because some languages have a stronger syntactic gender signal than others. We also hypothesised that the increase in gender information from grammatical agreement seen in previous work might manifest in changes in gender bias when introducing a multilingual model. Languages do not all encode gender similarly, and this has been found to be reflected in embedding spaces (McCurdy and Serbetci, 2017;Gonen et al., 2019b). 
Based on this, we expected gender bias to increase when using cross-lingual transfer for languages with weak gender agreement, and to decrease when using transfer for languages with strong gender agreement. For all languages, our hypothesis holds; this is the first time this effect has been shown on a downstream task rather than internally in a language model. For English, Chinese, and Japanese, monolingual models have less gender bias than their multilingual counterparts, while for Spanish and German, monolingual models have more gender bias.\nThe Case of Race For racial bias, the source of the bias is less systematic: sometimes the ZS-XLT model bias is unchanged (as with Japanese and English), and sometimes it increases (as with German and Spanish). The presence of cross-lingual racial bias is surprising. Racial bias tends to be culturally specific, so we did not expect it to transfer across language data the way gender bias might; we expected ZS-XLT to have either equivalent or less racial bias than mono-T. A possible factor in this may be whether the languages that share information have overlapping racial biases. For instance, racial bias categories in Japanese, like Okinawan or Korean, are unlikely to be affected by pre-training on English, whereas racial bias categories in German, though German-specific, may be shared with other high-resource Western languages, such as Arab. Future work could investigate whether differences in cross-lingual transfer for racial bias are related to the level of shared cultural context. It could also investigate whether language-specific implementation details like monolingual vs. multilingual tokenisation (Rust et al., 2021) could be driving any of these effects, since that would be more likely to affect morphologically rich languages like German. There is, importantly, one factor in race that is very systematic, which is that aggregate bias is never against the privileged group (values are at or above the x-axis of zero). So while sentiment models may vary across languages and models in whether they inaccurately associate negative or positive sentiment with male vs. female terms, they universally associate negative sentiment with racial terms, just to varying degrees.\n6 RQ2: Do distilled models show the same trends?\nFigure 4 shows a comparison of standard and distilled models for mono-T and ZS-XLT models. The patterns are still not consistent, but are striking. For cross-lingual transfer, distillation dampens racial biases. For gender bias, distillation always tends to dampen bias when applied to monolingual models, but frequently worsens bias when applied to cross-lingual models. German, Spanish, and Chinese all have significantly more bias for gender with distil-ZS-XLT than with ZS-XLT models. Perhaps this indicates that the sources of gender bias in Japanese and in English are different from those in German, Spanish, and Chinese, or that there are more language-specific characteristics that interact differently with distillation. This mirrors the answer to RQ1 in one way: the effects of cross-lingual transfer on gender bias (even with distilled models) vary greatly across different languages, whereas the effects for racial bias follow a clearer trend. We leave this investigation for future work, but consider these results at least promising: model distillation may be an effective approach to mitigate, or at least avoid exacerbating, racial biases in cases where cross-lingual transfer must be used."
}, { "figure_ref": [], "heading": "Recommendations and Conclusions", "publication_ref": [ "b20", "b8", "b15", "b39" ], "table_ref": [], "text": "This broad set of experiments has shown that bias can change drastically as a result of any of the standard engineering choices for making an SA system in a lower resourced language. In light of these results, we make the following recommendations: Do not assume that more data will improve biases Assess bias of all new model and data choices. Use granular bias by sentiment label, as well as aggregate bias, to make decisions that best suit the intended application. Don't rely solely on aggregate measures. Our results highlight how summary statistics can make different underlying distributions appear identical, a point made by Matejka and Fitzmaurice (2017) Be particularly aware of racial biases. Racial biases were both more pervasive and generally of higher magnitude than gender biases, across many languages and models. Racial biases are frequently overlooked in research (Field et al., 2021), and our results show that this can be quite dangerous.\nConsider compressing models. Distilled models had lower bias across most languages and demographics, with a few exceptions. This came at a very low penalty for performance of one F1 point on average. Previous work had contradictory conclusions regarding compression, with some vision models showing worse bias in compressed models (Hooker et al., 2020) and some NLP generation models showing less bias under compression (Vig et al., 2020). Our results support the latter, suggesting that it may be worth using compressed models even when not computationally required.\nWe have done the first study of the impact of cross-lingual transfer on social biases in sentiment analysis. We have also raised many open questions. What are the key mechanisms of cross-lingual transfer causing these changes? Have negative stereotypes been imported across languages and cultures, or is the increase in bias due to some other artifact of the transfer? Why do gender biases behave so differently from racial biases? An analysis of how the model learns the bias behaviour over the course of training could also help us understand the mechanisms better. Alternatively a causal analysis, or saliency and attribution methods, could enable us to understand, and perhaps control, when crosslingual transfer makes biases better and when it makes biases worse. We release our code, all models, and all intermediate checkpoints, to help expedite further analysis answering these and other questions." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b1" ], "table_ref": [], "text": "There are of course limitations to our study. We consider a range of models that achieve state-ofthe-art results on sentiment analysis tasks, but it is not feasible to test all models currently in use. Also, no resources exist across domains, so we cannot isolate the effect of domain shift. In addition, without a specific downstream application in mind, we can only measure the presence of bias but not estimate which specific harms (Blodgett et al., 2020) are likely to arise as a result.\nThe bias tests we use in this paper are only available in five languages. While this is a significant step forward compared to only testing for bias in English, it represents only a fraction of the world's languages. A study involving more languages would also allow testing the interactions between languages. 
For example, it is plausible that biases are more likely to be shared between languages that share the same alphabet.\nFinally, this paper contributes to understanding how cross-lingual transfer affects the presence of bias, but this is only one of the sources of bias. Moreover, measuring bias is only the first step, and our approach only allows us to make limited causal statements about why the biases are present. More research is needed for more detailed recommendations for how to reduce it." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b17" ], "table_ref": [], "text": "Our work is a direct response to the risks posed by biased AI. We hope that our work will help to reduce the risk of bias (in this case, gender and racial bias) affecting sentiment classification decisions. In doing so, we are releasing models that we know to be biased. These models could, in theory, be used by others for dubious purposes. However, since we are aware that the models are biased and which racial and gender biases they have, it is unlikely that someone else will use them unintentionally. After weighing up the risks and benefits, we therefore release them in the interest of reproducibility and of people who wish to build on our work.\nThe dataset we use, which ultimately derives from the templates collected by Kiritchenko and Mohammad (2018), does not contain any information that names or uniquely identifies individual people, nor any offensive content. Our use of this dataset is consistent with its intended use, to measure gender and racial bias in sentiment analysis systems." }, { "figure_ref": [], "heading": "A Model Implementation Details", "publication_ref": [ "b16" ], "table_ref": [], "text": "Monolingual transformer models have 110 million parameters (± 1 million) and vocabularies of 30-32k with 768D embeddings. Multilingual models have 179 million parameters, a vocabulary of 120k, with 768D embeddings. We train the monolingual models with the same training settings as used in Keung et al. (2020), and allow the pre-trained weights to fine-tune along with the newly initialised classification layer. The multilingual models are trained identically, save that they use a 100x larger learning rate and learning rate annealing.\nAll models were trained with 5 seeds; models trained on monolingual data (mono-T, mono-XLT, and distil-mono-T) were checkpointed 15 times, and ZS-XLT models were checkpointed 6 times. In total we train 1,525 models: 3 monolingual (non-baseline) model types with 5 seeds across 5 languages and 15 checkpoints (1,225 models), and 2 multilingual model types (ZS-XLT, distil-ZS-XLT) with 5 seeds, 5 languages, and 6 checkpoints (300 models).\nThis study was done on only the converged models, but all models are released for further study.\nComputational Resources. Each model was trained on 4 NVIDIA Tesla V100 GPUs with 16GB memory. mono-T and mono-XLT models took 6-8 hours to converge, ZS-XLT and distil-ZS-XLT took 15 hours. This is a total of 620 hours, or 2,480 GPU hours on our hardware. Appendix C contains the full set of contingency tables comparing baseline and monolingual models." }, { "figure_ref": [], "heading": "B Model Performance", "publication_ref": [], "table_ref": [], "text": "The contingency tables for all languages are shown in Figure 5. 
A subset of these is included in the main body of the paper.\nIt is worth noting that saturations are not normalised across all languages and models; the tables are not a proxy for aggregate comparative bias, but show the pattern across sentiment scores. The contingency tables also do not show actual (ground-truth) sentiment scores. We include baseline models (left column), not used in this work, for maximum visual comparability to previous work on these benchmarks. Figure 5: All confusion matrices for experiments in this paper. ++ to - are sentiment scores. Rows are predicted sentiment scores for the privileged group, columns predicted scores for the minoritised group. Higher colour saturation in the lower triangle is therefore bias against the minoritised group, in the upper triangle is bias against the privileged group." } ]
Sentiment analysis (SA) systems are widely deployed in many of the world's languages, and there is well-documented evidence of demographic bias in these systems. In languages beyond English, scarcer training data is often supplemented with transfer learning using pretrained models, including multilingual models trained on other languages. In some cases, even supervision data comes from other languages. Does cross-lingual transfer also import new biases? To answer this question, we use counterfactual evaluation to test whether gender or racial biases are imported when using cross-lingual transfer, compared to a monolingual transfer setting. Across five languages, we find that systems using cross-lingual transfer usually become more biased than their monolingual counterparts. We also find racial biases to be much more prevalent than gender biases. To spur further research on this topic, we release the sentiment models we used for this study, and the intermediate checkpoints throughout training, yielding 1,525 distinct models; we also release our evaluation code.
Cross-lingual Transfer Can Worsen Bias in Sentiment Analysis
[ { "figure_caption": "Figure 1 :1Figure1: We use counterfactual evaluation to evaluate how bias is differs in monolingual vs. cross-lingual systems. Counterfactual pairs (e.g. sentences a, b) vary a single demographic variable (e.g. race). We measure bias as the difference in scores for the pair. An unbiased model should be invariant to the counterfactual, with a difference of zero.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Aggregate bias metrics (RQ1): Comparison of mono-T (blue), and mono-XLT (orange), ZS-XLT (green).Mean and variance of differences in the sentiment label under each counterfactual pair, one graph per language and type of bias tested. Higher numbers indicate greater bias against the minoritized group. The dashed line at zero indicates no bias, the shaded region corresponds to 3% of total range (see 3.2).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Example confusion matrices for demographic counterfactual pairs for gender in Japanese and Chinese and race in German. From left to right: mono-T models, mono-XLT models, and ZS-XLT models. ++ to -are sentiment scores. Rows are predicted sentiment scores for the privileged group, columns predicted scores for the minoritised group. Higher colour saturation in the lower triangle is bias against the minoritised group, in the upper triangle is bias against the privileged group. Colour saturations are different scales for different models. Not visualised here: actual (ground-truth) sentiment scores.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Aggregate bias metrics (RQ2): Comparison of mono-T (blue), and distil-mono-T (orange), ZS-XLT (green), and distil-ZS-XLT (red). Mean and variance of differences in the sentiment label under each counterfactual pair, one graph per language and type of bias tested. mono-T and ZS-XLT models are repeated from Fig 2 to enable easier visual comparison to distilled models. Higher numbers indicate greater bias against the minoritized group.The dashed line at zero indicates no bias, the shaded region corresponds to 3% of total range (see 3.2).", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "The conversation with <person object> was <emotional situation word>. The conversation with [him\\her] was irritating. ja <person> との会は <emotion word passive>た[彼\\彼女] との会は イライラさた。 zh 跟 <person> 的谈话很 <emotional situation word>.跟 [他\\她] 的谈话很 令人生气. de Das Gespräch mit <person dat. object> war <emotional situation word>. Das Gespräch mit [ihm\\ihr] war irritierend. es La conversación con <person> fue <emotional situation word female>.La conversación con [él\\ella] fue irritante. Example sentence templates for each language and their counterfactual words that, when filled in, create a contrastive pair; in this case, for gender bias. For illustration, all five examples are translations of the same sentence.", "figure_data": "Template", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "F1 at convergence and steps at convergence for standard size and distilled models. 
Monolingual model performance is measured on the MARC data, ZS-XLT model performance on the US reviews data.", "figure_data": "Lang | Standard F1 | Standard Steps | Distilled F1 | Distilled Steps\nja | 0.62 | 44370 | 0.61 | 60436\nzh | 0.56 | 35190 | 0.53 | 43750\nde | 0.63 | 36720 | 0.63 | 52621\nes | 0.61 | 41310 | - | -\nen | 0.65 | 27050 | 0.65 | 44285\nZS-XLT | 0.69 | 75000 | 0.68 | 33336", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Seraphina Goldfarb-Tarrant; Björn Ross; Adam Lopez
[ { "authors": "Pierre Baldi; Peter J Sadowski", "journal": "", "ref_id": "b0", "title": "Understanding dropout", "year": "2013-12-05" }, { "authors": "Lin Su; Solon Blodgett; Hal Barocas; Iii Daumé; Hanna Wallach", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Language (technology) is power: A critical survey of \"bias\" in NLP", "year": "2020" }, { "authors": "Sandra Buckley", "journal": "Routledge", "ref_id": "b2", "title": "Encyclopedia of contemporary Japanese culture", "year": "2006" }, { "authors": "Aylin Caliskan; Joanna J Bryson; Arvind Narayanan", "journal": "Science", "ref_id": "b3", "title": "Semantics derived automatically from language corpora contain human-like biases", "year": "2017" }, { "authors": "Kate Crawford", "journal": "", "ref_id": "b4", "title": "The trouble with bias", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Julian Eisenschlos; Sebastian Ruder; Piotr Czapla; Marcin Kadras; Sylvain Gugger; Jeremy Howard", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "MultiFiT: Efficient multi-lingual language model fine-tuning", "year": "2019" }, { "authors": "", "journal": "The Federal Anti-Discrimination Agency (FADA)", "ref_id": "b7", "title": "Equal rights, equal opportunities: Annual report of the federal anti-discrimination agency", "year": "2020" }, { "authors": "Anjalie Field; Su Lin Blodgett; Zeerak Waseem; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "A survey of race, racism, and anti-racism in NLP", "year": "2021" }, { "authors": "Yarin Gal; Zoubin Ghahramani", "journal": "", "ref_id": "b9", "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "year": "2016-06-19" }, { "authors": "Seraphina Goldfarb-Tarrant; Adam Lopez; Roi Blanco; Diego Marcheggiani", "journal": "", "ref_id": "b10", "title": "Bias beyond english: Counterfactual tests for bias in sentiment analysis in four languages", "year": "2023" }, { "authors": "Seraphina Goldfarb-Tarrant; Rebecca Marchant; Ricardo Muñoz Sánchez; Mugdha Pandya; Adam Lopez", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Intrinsic bias metrics do not correlate with application bias", "year": "2021" }, { "authors": "Yova Hila Gonen; Yoav Kementchedjhieva; Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "a. 
How does grammatical gender affect noun representations in gender-marking languages", "year": "2019" }, { "authors": "Yova Hila Gonen; Yoav Kementchedjhieva; Goldberg", "journal": "", "ref_id": "b13", "title": "How does grammatical gender affect noun representations in gender-marking languages?", "year": "2019" }, { "authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "year": "2020" }, { "authors": "Sara Hooker; Nyalleng Moorosi; G Clark; S Bengio; Emily L Denton", "journal": "", "ref_id": "b15", "title": "Characterising bias in compressed models", "year": "2020" }, { "authors": "Phillip Keung; Yichao Lu; György Szarvas; Noah A Smith", "journal": "", "ref_id": "b16", "title": "The multilingual Amazon reviews corpus", "year": "2020" }, { "authors": "Svetlana Kiritchenko; Saif Mohammad", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Examining gender and race bias in two hundred sentiment analysis systems", "year": "2018" }, { "authors": "Matt J Kusner; Joshua R Loftus; Chris Russell; Ricardo Silva", "journal": "", "ref_id": "b18", "title": "Counterfactual fairness", "year": "2017-09" }, { "authors": "Alexandra Luccioni; Joseph Viviano", "journal": "", "ref_id": "b19", "title": "What's in the box? an analysis of undesirable content in the Common Crawl corpus", "year": "2021" }, { "authors": "Justin Matejka; George Fitzmaurice", "journal": "Association for Computing Machinery", "ref_id": "b20", "title": "Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17", "year": "2017" }, { "authors": "R Thomas Mccoy; Junghyun Min; Tal Linzen", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance", "year": "2020" }, { "authors": "K Mccurdy; Oguz Serbetci", "journal": "", "ref_id": "b22", "title": "Grammatical gender associations outweigh topical gender bias in crosslinguistic word embeddings", "year": "2017" }, { "authors": "M Saif; Mohammad Mohammad; Svetlana Salameh; Kiritchenko", "journal": "J. Artif. Intell. 
Res", "ref_id": "b23", "title": "How translation alters sentiment", "year": "2016" }, { "authors": "Githu Muigai", "journal": "", "ref_id": "b24", "title": "Report of the special rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, githu muigai, on his mission to germany", "year": "2009-07-01" }, { "authors": "Hadas Orgad; Seraphina Goldfarb-Tarrant; Yonatan Belinkov", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "How gender debiasing affects internal model representations, and why it matters", "year": "2022" }, { "authors": "Judea Pearl", "journal": "Statistics Surveys", "ref_id": "b26", "title": "Causal inference in statistics: An overview", "year": "2009" }, { "authors": "Telmo Pires; Eva Schlinger; Dan Garrette", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "How multilingual is multilingual BERT?", "year": "2019" }, { "authors": "Soujanya Poria; Devamanyu Hazarika; Navonil Majumder; Rada Mihalcea", "journal": "", "ref_id": "b28", "title": "Beneath the tip of the iceberg: Current challenges and new directions in sentiment analysis research", "year": "2020" }, { "authors": "Tharindu Ranasinghe; Marcos Zampieri", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Multilingual offensive language identification with cross-lingual embeddings", "year": "2020" }, { "authors": "Sebastian Ruder; Ivan Vulić; Anders Søgaard", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b30", "title": "A survey of cross-lingual word embedding models", "year": "2019" }, { "authors": "Phillip Rust; Jonas Pfeiffer; Ivan Vulić; Sebastian Ruder; Iryna Gurevych", "journal": "", "ref_id": "b31", "title": "How good is your tokenizer? on the monolingual performance of multilingual language models", "year": "2021" }, { "authors": "Gastã Salamanca; Lidia Pereira", "journal": "SUJETOS DE NIVEL EDUCA-CIONAL SUPERIOR. 
Universum (Talca)", "ref_id": "b32", "title": "PRESTI-GIO Y ESTIGMATIZACIÃ\"N DE 60 NOMBRES PROPIOS EN 40", "year": "2013" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b33", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Maarten Sap; D Card; Saadia Gabriel; Yejin Choi; Noah A Smith", "journal": "", "ref_id": "b34", "title": "The risk of racial bias in hate speech detection", "year": "2019" }, { "authors": "Thibault Sellam; Steve Yadlowsky; Ian Tenney; Jason Wei; Naomi Saphra; D' Alexander; Tal Amour; Jasmijn Linzen; Bastings; Raluca Iulia; Jacob Turc; Dipanjan Eisenstein; Ellie Das; Pavlick", "journal": "", "ref_id": "b35", "title": "The multiBERTs: BERT reproductions for robustness analysis", "year": "2022" }, { "authors": "Chris Sweeney; Maryam Najafian", "journal": "Association for Computing Machinery", "ref_id": "b36", "title": "Reducing sentiment polarity for demographic attributes in word embeddings using adversarial learning", "year": "2020" }, { "authors": "Mike Thelwall", "journal": "Online Information Review", "ref_id": "b37", "title": "Gender bias in sentiment analysis", "year": "2018" }, { "authors": "Enrica Troiano; Roman Klinger; Sebastian Padó", "journal": "International Committee on Computational Linguistics", "ref_id": "b38", "title": "Lost in back-translation: Emotion preservation in neural machine translation", "year": "2020" }, { "authors": "Jesse Vig; Sebastian Gehrmann; Yonatan Belinkov; Sharon Qian; Daniel Nevo; Yaron Singer; Stuart Shieber", "journal": "", "ref_id": "b39", "title": "Investigating gender bias in language models using causal mediation analysis", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b40", "title": "", "year": "" }, { "authors": "Michael Weiner", "journal": "Taylor & Francis", "ref_id": "b41", "title": "Japan's minorities: the illusion of homogeneity", "year": "2009" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Shijie Wu; Mark Dredze", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "year": "2019" }, { "authors": "Jieyu Zhao; Kai-Wei Chang", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "LOGAN: Local group bias detection by clustering", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 96.27, 249.72, 168.66, 33.71 ], "formula_id": "formula_0", "formula_text": "1 N n i=0 R(s i | A = a) -R(s i | A = b)" } ]
10.18653/v1/2021.acl-long.238
2023-10-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b35", "b59", "b39", "b15", "b22", "b63", "b3", "b54", "b26", "b14", "b66", "b68", "b42", "b28", "b64", "b37", "b7", "b6", "b62", "b5", "b38", "b0", "b19", "b9", "b30", "b49", "b46", "b45", "b51", "b0", "b30", "b49", "b0" ], "table_ref": [], "text": "State-of-the-art (SoTA) language models (Devlin et al., 2019;Radford et al., 2019;Winata et al., Figure 1: Our dual-model AL system architecture at every iteration: 1) the AL data selector chooses a few unlabeled examples; 2) human annotators provide an explanation and label for each data instance; 3) the annotated explanations are used to finetune the explanationgeneration model; 4) the annotated labels and generated explanations are used to finetune the prediction model. Then, humans can review the predicted labels and generated explanations for unlabeled data and start the next iteration. Green arrows indicate the training target.\n2021) demonstrate astonishing performance on various NLP tasks, including Question Answering (QA) and Question Generation (QG) (Rajpurkar et al., 2016;Duan et al., 2017;Kočiský et al., 2018;Yao et al., 2022), Natural Language Inference (NLI) (Bowman et al., 2015;Wang et al., 2018), etc. Despite the superior generative capabilities, the lack of faithful explainability within these \"black boxes\" may lead to mistrust of their predictions (Lipton, 2018), where humans, on the other hand, can develop intermediate rationales to facilitate the decision-making process.\nThe lack of explainability and untrustworthiness of models is magnified in the real world (Drozdal et al., 2020), where domain experts rarely only annotate a decision label in their daily workflow without providing explanations (i.e., clinical diagnoses by clinicians) (Zhang et al., 2023), and humans need explanations to understand and trust model predictions (Zhang et al., 2021). Therefore, a few approaches were proposed to retrospectively analyze the probability distribution within the model or ask models to generate explanations along with predictions (Ribeiro et al., 2016;Lundberg and Lee, 2017;Yu et al., 2019;Rajagopal et al., 2021;Chen et al., 2021), despite, the former is still very difficult for laymen to understand while the latter explanations are not faithful toward predictions.\nAs researchers looked into the quality (Carton et al., 2020;Yao et al., 2023) of humanannotated natural language explanations (Camburu et al., 2018;Rajani et al., 2019;Aggarwal et al., 2021), they discovered numerous issues in existing datasets (Geva et al., 2019;Chmielewski and Kucker, 2020;Narang et al., 2020;Sun et al., 2022), such that the human annotations are of low quality and significant inconsistency. Furthermore, the ever-increasing costs in terms of labor, finances, and time for large-scale, high-quality data annotations remain a persistent challenge for the research community. This challenge has given rise to various methodologies to reduce reliance on human annotations, such as Active Learning (AL) (Settles, 2009). AL is a human-in-the-loop framework that utilizes AL sampling strategies to iteratively select a small number of representative examples, request oracle annotations, and subsequently fine-tune the model using the annotated data. 
However, prior AL works predominantly focus on labels and overlook the fact that real-world scenarios often need both labels and natural language explanations.\nIn this work, we propose a dual-model AL architecture for human annotation of labels and explanations, drawing inspiration from the human decision-making process. Our system consists of:\n1) An explanation-generation model guided by human-provided explanations 2) A prediction model that accepts the data content and the generated explanations for prediction.\nWe integrate AL to reduce human annotation efforts and establish human trustworthiness by actively engaging humans in the training process. We design a novel data diversity-based AL sampling strategy to select the most representative examples by exploiting the explanation annotations, which is analogous to the prevalent core-set (Sener and Savarese, 2017) strategy. Our AL architecture aims to support low-resource model predictions and AI trustworthiness by explicitly generating natural language explanations. Specifically, we request label and free-form explanation annotations for a very limited number of examples (e.g., 3 or 10) selected by our AL sampling strategy at every AL iteration. Subsequently, the generated explanations serve as input for the final prediction, demonstrating the potential for these explanations to support the model's predictions faithfully.\nWe conduct two AL simulations with different amounts of samplings and iterations on a largescale NLI dataset with human-annotated explanations to justify incorporating explanations in AL data selection can consistently outperform random, traditional data diversity-based, and model probability-based sampling strategies. We make the code publically available1 .\nA human evaluation of perceived validity, explainability, and preference of the generated explanations among our system, a SoTA explanationgeneration system, and human-annotated explanations shows that, despite human explanations being ranked highest, explanations generated by our system are preferred over the SOTA system. Additionally, we conduct three ablation studies to explore the capability and potential of our proposed AL architecture in transfer learning, generalizability, and incorporating large language models (LLMs) for explanation generation to further reduce human efforts. LLMs demonstrate exceptional explanationgeneration capabilities on relatively simple tasks. However, their effectiveness in handling complex real-world tasks warrants in-depth study. (Talmor et al., 2019), including two variants of Cos-E dataset (CoS-E v1.0 and CoS-E v1.11 (Rajani et al., 2019)) and the ECQA (Aggarwal et al., 2021) dataset. Many recent works (Narang et al., 2020;Sun et al., 2022) have found explanations in CoS-E to be noisy and low-quality, and thus, Aggarwal et al. (2021) carefully designed and followed the explanation annotation protocols to created ECQA, which is of higher quality compared with CoS-E.\nIn this paper, we leverage the e-SNLI dataset as the benchmark dataset for our AL simulation experiment because 1) the classification task is popular and representative, 2) the massive data size ensures data diversity, and 3) explanations for a classification task may provide more effective help compared to CQA task where training and testing data may be unrelated. We additionally conduct an ablation study on the ECQA dataset to explore the generalizability of our proposed AL architecture." 
}, { "figure_ref": [], "heading": "Active Learning for Data Annotation", "publication_ref": [ "b47", "b48", "b1", "b53", "b21", "b67", "b46", "b32", "b17", "b44", "b41", "b61", "b31", "b45" ], "table_ref": [], "text": "Owning to the paucity of high-quality, large-scale benchmarks for a long tail of NLP tasks, learning better methods for low-resource learning is acquiring more attention, such as Active Learning (AL) (Sharma et al., 2015;Shen et al., 2017;Ash et al., 2019;Teso and Kersting, 2019;Kasai et al., 2019;Zhang et al., 2022). AL iteratively 1) selects samples from the unlabeled data pool (based on AL sampling strategies) and queries their annotation from human annotators, 2) fine-tunes the underlying model with newly annotated data, and 3) evaluates model performance.\nA few AL surveys (Settles, 2009;Olsson, 2009;Fu et al., 2013;Schröder and Niekler, 2020;Ren et al., 2021) of sampling strategies provide two high-level selection concepts: data diversity and model probability. We propose a novel data diversity-based strategy that leverages humanannotated explanations to select data. Our data selector shares a similar concept with the established data-based clustering strategies (Xu et al., 2003;Nguyen and Smeulders, 2004) and core-set (Sener and Savarese, 2017) 2021) provides guidance on unifying evaluation for few-shot settings." }, { "figure_ref": [], "heading": "Natural Language Explanation Generation", "publication_ref": [ "b52", "b50", "b24", "b12", "b34", "b27", "b8", "b20", "b29", "b65" ], "table_ref": [], "text": "Different approaches have been explored to enhance the model's explainability by asking them to generate natural language explanations. Some of them (Talmor et al., 2020;Tafjord et al., 2021;Latcinnik and Berant, 2020) propose systems to generate text explanations for specific tasks. Dalvi et al. (2022) propose a 3-fold reasoning system that generates a reasoning chain and asks users for correction. Other recent works (Paranjape et al., 2021;Liu et al., 2022;Chen et al., 2022) explore different prompt-based approaches to generate additional information for the task and examine the robustness and validity. We believe that our dual-model system provides and uses explanations explicitly towards prediction, while the self-rationalization setting falls short. Hase and Bansal (2022) argues that explanations are most suitable as input for predicting, and Kumar and Talukdar (2020) designed a system to generate label-wise explanations, which is aligned with our design hypothesis. Nevertheless, there exist other works (Wiegreffe et al., 2021;Marasovic et al., 2022;Zelikman et al., 2022) that explore the use of self-rationalization setting. We include the self-rationalization setting in our human evaluation of the explanation quality in Section 4.4.\n3 Dual-Model AL System" }, { "figure_ref": [], "heading": "System Architecture", "publication_ref": [ "b62", "b25", "b36", "b43", "b18", "b69", "b5", "b46", "b32", "b17", "b44", "b41" ], "table_ref": [], "text": "Figure 1 illustrates our proposed dual-model AL framework. 
The system comprises three primary modules: 1) an explanation-generation model that takes the data, fine-tunes on human-annotated explanations, and generates free-form explanations;\n2) a prediction model that accepts the data content and the generated explanations as input, fine-tunes on human-provided labels, and predicts the final label; 3) an AL data selector that selects a set of representative examples in terms of the semantic similarity between each unlabeled data text and labeled data's human explanations. The AL data selector plays a crucial role in finding a small, highly representative set of samples at every iteration, and further details of our AL selector are in Section 3.2.\nIn each AL iteration, after the data selector samples unlabeled examples for human annotations, we first fine-tune the explanation-generation model supervised by human-provided free-form explanations. Then, we instruct this model to generate explanations for the same set of data. Subsequently, we fine-tune a prediction model using the data content and explanations generated by the previous model as input, supervised by human-annotated labels. The fine-tuning process teaches the prediction model to rely on the explanations for predictions (Yao et al., 2023). Additionally, we fine-tune the prediction model with model-generated explanations instead of human-annotated ones for better alignment during inference, especially when no human annotations are available. After each AL iteration, we evaluate the framework on a standalone evaluation data split.\nBoth the explanation-generation model and the prediction model can be any SoTA sequence-tosequence models, such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020). In this work, we utilize T5 as the backbone for both models and design a prompt-based input template for both models, as shown in Table 1, inspired by a few existing works (Schick and Schütze, 2021;Gao et al., 2021;Zhou et al., 2023). To elucidate how each prompt addresses a different part of data content:\n1) \"explain:\" and \"question:\" are the leading prompts in the explanation-generation model and the prediction model, respectively, indicating different tasks for both models and are followed by Table 1: The prompt-based input templates for both models in our system, with the e-SNLI (Camburu et al., 2018) dataset as an example.\nthe original task content. For the e-SNLI dataset, the task content becomes \"what is the relationship between\" the hypothesis and premise sentences;\n2) \"choiceN\" is followed by candidate answers, where N ∈ [1, 3] for the e-SNLI dataset corresponds to entailment, neutral, and contradiction. We pass the choices to the explanation-generation model, expecting that it will learn to generate freetext explanations that may reflect potential relationships between the data content and the task;\n3) for the prediction model, an additional prompt \"because:\" is followed by the explanations generated by the explanation-generation model. We use a special token to separate the original task content and the explanation.\" According to recent surveys of AL (Settles, 2009;Olsson, 2009;Fu et al., 2013;Schröder and Niekler, 2020;Ren et al., 2021), there are two primary approaches for AL data selection: model probability-based and data diversity-based approaches. Model probability-based approaches, firstly, aim to select examples about which the models are least confident. 
These approaches involve conducting inference on unlabeled data at every iteration, which consumes more time and computing resources. Unlike data diversity-based approaches, they are not model-agnostic, which may affect the effectiveness of the sampling strategies depending on the model in use." }, { "figure_ref": [], "heading": "AL Data", "publication_ref": [ "b40" ], "table_ref": [], "text": "Secondly, data diversity-based approaches leverage various data features, such as data distribution and similarity, to select a representative set of examples from the candidate pool while maximizing diversity. This paper introduces a data diversitybased AL selection strategy that shares a concept similar to traditional data-based clustering strategy Nguyen and Smeulders ( 2004) and core-set strategy. However, our strategy differs from traditional strategies because ours incorporates humanannotated explanations for selection. More specifically, our data selector aims to choose examples that are representative of the unlabeled data pool in terms of average similarity to human-annotated explanations of all previously-labeled data while maximizing the diversity of newly-selected data.\nWe assume that human-annotated explanations contribute significantly to the model's prediction and convey more information than the original data content alone. These explanations can reveal underlying relationships between concepts in the data content and the relations between the data content and choices. For instance, in the e-SNLI dataset, the data content consists of the concatenation of hypothesis and premise sentences. Later, we construct a baseline selector in the AL simulation experiment (Sec. 4.2) with the same setup, except that it only compares the similarity between data content. Additionally, we include random baseline and probability-based baseline strategies. Our results demonstrate that using humanannotated explanations for data selection consistently leads to improved prediction performance compared to using data content alone.\nHere we delve into the details of our data-based AL data selector (shown in Algorithm 1). For each unlabeled data instance, we use sentencetransformers (Reimers and Gurevych, 2019) to calculate the semantic similarity between its data content and every previously annotated explanation. Then, we take the averaged similarity scores for each unlabeled example and rank all the unlabeled data in terms of the average similarity score. To select the most representative data in the candidate pool while maximizing diversity, we choose examples from the ranked data list with equal intervals. Note that in the first iteration, since no previously annotated explanations are available, we compare the similarity between the data content." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b5", "b36" ], "table_ref": [], "text": "We conduct the AL simulation experiment with the e-SNLI (Camburu et al., 2018) dataset. The primary objective is to justify that our proposed dualmodel framework, when combined with humanannotated explanations in AL data selection, can effectively identify more representative and helpful data from a reasonably large-scale dataset.\nGiven that e-SNLI dataset comprises a substantial 549, 367 examples in the train split, we performed a preliminary experiment to determine a reasonable number of candidate data for the AL simulation. This approach aims to save time and computing resources. 
Our goal is to identify an ideal candidate data size that would not introduce potentially biased feature distributions or significantly degrade model performance when compared to fine-tuning on the full dataset. We employ the pre-trained T5-base (Raffel et al., 2020) as the backbone for all the experiments and provide the hyperparameters in Appendix C." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Preliminary Experiment", "publication_ref": [], "table_ref": [], "text": "The expected outcome of the aforementioned preliminary experiment is 1) to determine the upper bound of performance and observe how the performance of our dual-model system gradually decreases as we reduce the amount of training data, and 2) to identify a suitable candidate data size for the AL simulation.\nWe also randomly sample the same amount of data for each category in the preliminary experiment to minimize potential bias introduced by uneven distribution, especially when the sampling size per iteration is very small. Specifically, we select eight different sampling amounts per category from the e-SNLI training split, ranging from [10,50,100,500,1500,3000,5000] and the complete data per category. Since the e-SNLI dataset consists of three categories: entailment, neutral, and contradiction, the total sampling size in each setting becomes [30, 150, 300, 1500, 4500, 9000, 15000, and 549, 367 (full train split)], respectively, as shown in Figure 2.\nFor each sampling setting, we conduct three trials to obtain an averaged result. In each trial, we fine-tune the explanation generation model and the prediction model once and conduct a hyperparameter search. The framework is then evaluated on the test split of e-SNLI (9, 824 examples).\nThe preliminary experiment results are shown in Figure 2, where the blue dot denotes the averaged prediction accuracy (in percentages) at each setting, and the red bar indicates the standard deviation of accuracy among three trials. Notably, with more than 1, 500 data per category, the performance drop compared to the full train split is inconspicuous (84.08% to 87.02%), while the standard deviation is below 0.5%. This observation indicates that using 1% of the original training data size only leads to a performance drop of merely 3%. Additionally, we found that even with only 10 data points per category (30 data in total), our system still achieves an average accuracy of 45%, although the deviation is relatively significant. Furthermore, when we extend the training data size from 100 to 500 data per category (300 to 1500 in total), a reasonably applicable setting in real-world scenarios, the accuracy can reach over 80% accuracy, showing promising results considering that the amount of training data is much smaller than the size of evaluation samples." }, { "figure_ref": [], "heading": "Simulation Experiment: Evaluation Setup", "publication_ref": [ "b4" ], "table_ref": [], "text": "Based on the findings from the preliminary experiment, we decide to use 3, 000 examples per category (9, 000 in total) as the candidate unlabeled data pool for the Active Learning simulation.\nInspired by the few-shot evaluation guidance (Bragg et al., 2021), we conduct 80 trials for each AL setting and calculate the averaged per- formance for ours and the baseline data selectors at every iteration. 
During each trial, we start by randomly selecting 3, 000 examples per category from the complete train dataset, then use the same data to conduct AL simulations with different data selectors in our dual-model framework. This way, we can ensure the performance differences during each trial are not due to different unlabeled data pools but to actual differences in the performance of the AL data selectors. For the evaluation, we randomly sample 300 examples per category (900 in total) from the test split of e-SNLI every trial and evaluate with the same test data after each iteration.\nThe AL simulation comprises two settings, where we simulate annotating 180 and 450 data instances, respectively. These two levels of data annotations reasonably mimic real-world scenarios where users have limited budgets, annotators, and data for annotation. Specifically, we experiment with the following two settings:\n1) For every iteration, select 3 examples per category (9 in total) with 20 iterations, which results in 180 examples altogether;\n2) For every iteration, select 10 examples per category (30 in total) with 15 iterations, which results in 450 examples altogether.\nOur AL simulation experiment involves our data selector, two baselines, and an additional model probability-based selector. Our data selector, described in Section 3.2, is a novel data diversitybased sampling strategy that leverages humanannotated explanations. For comparison, we use a random data selector as the basic benchmark and another traditional data diversity-based algorithm that shares the same procedures with ours, except that it only compares the similarity between each unlabeled data's content and the previously-labeled data's content, not using the human-annotated explanations. The probability-based selector conducts inference on unlabeled data and selects examples with the least probability at every iteration. We fix the same set of hyperparameters (Appendix C).\nWorth noting that our data selector does not use task content in previously labeled examples; instead, we exclusively rely on human-annotated explanations to demonstrate their greater utility compared to task content. In the first iteration, both ours and the data diversity baseline perform identically because no previously annotated data is available." }, { "figure_ref": [ "fig_2" ], "heading": "Simulation Experiment: Result", "publication_ref": [], "table_ref": [], "text": "The AL simulation results are presented in Figure 3. To explain the diagrams in detail, each dot is the average accuracy on 80 trials at every iteration for each data selector. The green/yellow/red/blue dots denote our data selector/data diversity-based baseline/random selector/model probability-based selector, respectively. We observe that our data selector consistently maintains an advantage over the traditional data-based sampling baseline, while the traditional one consistently beats the random baseline by a significant margin. Additionally, we observe that the model probability-based selector outperforms the random baseline in both settings.\nTo summarize, our data selector outperforms both baselines in every iteration for both AL settings, indicating that using human-annotated explanations in the data selector with our dualmodel AL framework is more beneficial than using the data content alone. Even with only 180 and 450 data to be annotated in each setting, our system can achieve 55% and 72% accuracy on average, respectively. 
We anticipate that our experiment will reach a similar performance around 85% as shown in Figure 2 but converge much faster than the random selector if we continue the AL process." }, { "figure_ref": [], "heading": "Human Evaluation Setup and Results", "publication_ref": [ "b29", "b60", "b16" ], "table_ref": [ "tab_3" ], "text": "To qualitatively evaluate the explainability of the generated explanations from our system against a SoTA few-shot explanation-generation system, the self-rationalization baseline (Marasovic et al., 2022), and the human ground-truth, we recruited three human participants to conduct a human evaluation following the prior literature (Xu et al., 2022). The self-rationalization baseline is a T5base model, which uses the same input template of our explanation-generation model shown in Table 1 but asks the model to generate both the label and explanation simultaneously. We leverage AL setting 1 described in Section 4.2 to fine-tune our system with a total of 180 examples over 20 epochs and use the same 180 examples to fine-tune the self-rationalization baseline. Both systems are used to infer the complete test split of e-SNLI after fine-tuning; then, we randomly sample 80 examples for the human study.\nFor each data instance, the rater is presented with the textual content of the premise and hypothesis of the original data paired with three sets of labels and explanations from our system, baseline system, and the human-annotated ground-truth from the e-SNLI dataset. Participants who are not aware of the source of each label-explanation pair are asked to answer four questions with [Yes/No]:\n1) Is the Prediction correct?\n2) Is the Explanation itself a correct statement?\n3) Regardless of whether the AI Prediction and Explanation is correct or not, can the Explanation help you to understand why AI has such Prediction? 4) Will you trust & use this AI in real-world decisionmaking?\nTo ensure inter-coder consistency, we first conduct a 30-min tutorial session to educate all three participants with ten examples to build a consensus among them. In the actual experiment, each of the three participants is then asked to rate 30 data instances (20 unique ones and 10 shared ones), which make up a total of 70 data instances, and 360 ratings (3 rater*30 instances*4 questions). We first calculated the Inter-Rater Reliability score (IRR) among them for each of the four questions. With the IRR score of (Q1: 1, Q2: 0.89, Q3: 0.98, Q4: 0.87), we are confident that the three coders have the same criteria for further result analysis.\nOur questions all have binary responses, and we rely on Chi-square analysis (Elliott and Woodward, 2007) to examine the statistical significance of the rating groups' differences. As shown in Table 2, the participants rated human ground-truth explanations highest across all four dimensions. Between our system and the few-shot self-rationalization system (baseline), participants believe our systems' predicted labels are more likely to be correct, with 64 'valid' ratings out of 90 for our system versus 42 out of 90 ratings for the baseline. Chi-square test indicates such a difference is statistically significant (χ 2 (1) = 21.61, p < 0.01). When asked whether they would trust the AI if there were such AI systems supporting their realworld decision-making, 35 out of 90 answered 'Yes' for our system, and it is significantly better than the baseline system (21 'Yes' out of 90) (χ 2 (1) = 12.17, p < 0.01). 
As for Question 2 (\"the validity of the generated explanation\") and Question 3 (\"whether the generated explanation supports its prediction\"), the human evaluation does not suggest statistically significant differences between our system and the baseline system (χ²(1) = 0.06, p = 0.89 for explanation validity, and χ²(1) = 0.41, p = 0.52 for explanation supporting prediction). In summary, human participants believe our system outperforms the baseline system on the quality of the label prediction and the trustworthiness of the AI. Still, there is large room for improvement, as human evaluators rate the ground-truth labels and explanations much higher than those of either AI system." }, { "figure_ref": [ "fig_8" ], "heading": "Ablation Study 1: Transfer to Multi-NLI", "publication_ref": [ "b58" ], "table_ref": [], "text": "We conduct an ablation study with transfer learning through AL simulation from e-SNLI to Multi-NLI (Williams et al., 2018). This study explores whether the explanation-generation model trained on e-SNLI is helpful for AL on a similar task.\nThe transfer-learning ablation study consists of the following steps: 1) fine-tune an explanation-generation model using our AL framework on the e-SNLI dataset; 2) freeze the explanation-generation model and use it to generate explanations in the AL simulation for Multi-NLI; 3) fine-tune the prediction model for Multi-NLI at every iteration. Unlike the e-SNLI experiment, our AL data selection algorithm uses model-generated explanations to select examples at every iteration in the transfer-learning AL simulation. We fine-tune the explanation-generation models on e-SNLI with the same two settings as in the previous experiment, average the results over 15 trials, and keep all other hyper-parameters consistent.\nThe ablation results are shown in Figure 6 of Appendix B. The blue/red lines denote the explanation-generation model fine-tuned on e-SNLI with Setting 1/2 of Section 4.2, respectively. We observe that the explanation-generation model consistently provides helpful explanations, leading to an improvement in the system's prediction performance, with accuracy reaching more than 65%. In addition, the explanation-generation model fine-tuned on more data performs better, suggesting that it has learned to generate more helpful explanations." }, { "figure_ref": [ "fig_6" ], "heading": "Ablation Study 2: Our AL Framework on ECQA", "publication_ref": [ "b0" ], "table_ref": [], "text": "We additionally conduct an AL simulation experiment on ECQA (Aggarwal et al., 2021), a recent dataset that extends the CommonsenseQA dataset with high-quality human-annotated explanations, with our data selector, the random baseline, and the similarity-based baseline that does not use explanations. We follow the same experimental settings as the e-SNLI AL simulations described in Section 4.2. The results are shown in Figure 4, where our proposed data selection strategy consistently outperforms both baselines in both simulation settings. Interestingly, the similarity-based baseline performs similarly to the random baseline, which could be because using the data content alone is not sufficient to select more helpful and representative examples, while using human-annotated explanations can consistently facilitate better data selection."
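Both the main system and the transfer-learning ablation above share the same two-stage inference: the (frozen) explanation-generation model first produces a free-form explanation, which is then appended to the prediction model's input following the templates in Figure 1. A minimal sketch is given below; the checkpoint paths are placeholders and the decoding settings are illustrative, not the authors' exact configuration.

```python
# Sketch of dual-model inference: generate an explanation, then condition the prediction
# model on it via the "... <sep> because [explanation]" template from Figure 1.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
exp_model = T5ForConditionalGeneration.from_pretrained("path/to/explanation-model")   # frozen
pred_model = T5ForConditionalGeneration.from_pretrained("path/to/prediction-model")

def predict(premise, hypothesis):
    exp_in = (f"explain: what is the relationship between {hypothesis} and {premise} "
              "choice1: entailment choice2: neutral choice3: contradiction")
    exp_ids = exp_model.generate(**tok(exp_in, return_tensors="pt"), max_new_tokens=64)
    explanation = tok.decode(exp_ids[0], skip_special_tokens=True)

    pred_in = (f"question: what is the relationship between {hypothesis} and {premise} "
               "choice1: entailment choice2: neutral choice3: contradiction "
               f"<sep> because {explanation}")
    pred_ids = pred_model.generate(**tok(pred_in, return_tensors="pt"), max_new_tokens=8)
    return tok.decode(pred_ids[0], skip_special_tokens=True), explanation
```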
}, { "figure_ref": [ "fig_7" ], "heading": "Ablation Study 3: LLM for Explanation Generation", "publication_ref": [ "b55", "b10", "b33" ], "table_ref": [], "text": "The recent prevalence of instructional-finetuned large language models (LLMs) (Wei et al., 2021;Chowdhery et al., 2022;Ouyang et al., 2022) with exceptional generation capabilities off-the-shelf enabled a straightforward idea upon our dual-model framework: can LLMs generate natural language explanations that are on par or even of higher quality than human-annotated ones, to facilitate the prediction model fine-tuning process? We conduct ablation experiments to leverage FLAN-T5-XL (Chung et al., 2022) for explanation generation in our framework to substitute the T5 model fine-tuned on human explanations (LLM-AL, hereinafter). We conduct the AL simulations on e-SNLI and ECQA datasets to explore whether we can further reduce human annotation efforts.\nThe results are presented in Figure 5, where a horizontal dotted line represents the benchmark of the explanation generation model fine-tuned on human-annotated explanations in Section 4.3 and 4.6. The LLM-AL framework significantly outperforms the explanation generation model guided by human annotation in both Active Learning settings. However, we hypothesize the LLM's explanation generation capability can vary from task to task. It may be highly efficient in relatively easy tasks, such as e-SNLI and ECQA datasets, both of which are training datasets for FLAN-T5. Yet, LLMs may struggle to provide helpful explanations in complex real-world domain-specific tasks, where human experts' feedback may still be necessary and preferred. This leads to another potential avenue for future work: exploring the capability and limitations of leveraging LLMs for explanation generation in real-world scenarios." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In summary, this paper introduces a novel dualmodel AL system designed to address the common real-world need for domain experts to provide both classification labels and natural language explanations. Our system comprises a purpose-built data diversity-based AL example selector and two sequence-to-sequence language models, one for explanation generation and the other for label prediction. Through an AL simulation evaluation and a human assessment of the e-SNLI dataset, our results demonstrate the effectiveness of explanations in AL sampling with our system. They consistently outperform both baselines, and the explanations generated by our system are preferred over a stateof-the-art explanation-generation system.\nOur work lays a step-stone towards a humancentered interactive AI solution (it can be easily implemented as an interactive system as illustrated in Fig 7 in Appendix D) that supports domain experts for their data annotation tasks. Many realworld tasks still require domain experts to review and annotate each data instance with a decision and an explanation for accountability purposes (e.g., a lawyer reviewing and signing off on a legal document). We invite fellow researchers to join us in advancing this research direction, essential for supporting this prevalent real-world requirement.\nIn this paper, we demonstrate the effectiveness of our framework on a representative large-scale classification dataset (e-SNLI), but there are many other NLP tasks, such as question answering and commonsense reasoning. The generalizability of our system on other NLP tasks remains unexplored. 
Another limitation is that this work proposed a data diversity-based AL selector design. We benchmark it with a traditional data diversity-based selector as well as a model probability-based design to demonstrate the usefulness of explanations. Prior literature has proposed other designs, such as ensemble approaches, which are not evaluated in this paper. " }, { "figure_ref": [], "heading": "B Transfer Learning Ablation Study Diagrams", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C System Environment and Hyper-Parameters", "publication_ref": [], "table_ref": [], "text": "The computing resource of all the experiments we conducted in this paper has 128 Gigabytes of RAM.\nIn addition, we use 2 NVIDIA Tesla V100 GPU for the preliminary experiment and 8 NVIDIA Tesla V100 GPU for the AL simulation experiment." }, { "figure_ref": [], "heading": "C.1 Preliminary Experiment", "publication_ref": [], "table_ref": [], "text": "For the Preliminary experiment described in Section 4.1, we leverage the same set of finetuning hyper-parameters other than the number of fine-tuning epochs for the explanationgeneration model (denotes as M EG ) and the prediction model (denotes as M P ). The same set " }, { "figure_ref": [], "heading": "C.2 AL Simulation Experiment", "publication_ref": [], "table_ref": [], "text": "For both of the AL Simulation settings we experimented in Section 4.2, we leverage the same set of hyper-parameters for fine-tuning our dualmodel AL system: batch_size_per_GPU = Our proposed dual-model system can be easily implemented as an interactive human-centered AI system for supporting domain experts and human annotators in labeling both labels and explanations.\nFigure 7: Our proposed dual-model system can be implemented as an interactive AL-based data annotation system to speed up users' annotation productivity. Such a system can simply have an interface with four output functions (i.e., display unlabeled data, display AL selected data, display generated-explanation, and display predicted labeled) and one input function (i.e., annotate label and explanation for the unlabeled data." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), which is part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons)." }, { "figure_ref": [], "heading": "Appendix A e-SNLI Examples", "publication_ref": [], "table_ref": [], "text": "Table 3 illustrates an example data of each category in the e-SNLI dataset. Every data instance contains a premise and hypothesis along with a human annotated label and free-form explanation.\nPremise: This church choir sings to the masses as they sing joyous songs from the book at a church. Hypothesis: The church is filled with song. Label: entailment Human-annotated explanation: \"Filled with song\" is a rephrasing of the \"choir sings to the masses.\nPremise: A man playing an electric guitar on stage. Hypothesis: A man is performing for cash. Label: neutral Human-annotated explanation: It is unknown if the man is performing for cash.\nPremise: A couple walk hand in hand down a street. Hypothesis: A couple is sitting on a bench. Label: contradiction Human-annotated explanation: The couple cannot be walking and sitting a the same time." } ]
Real-world domain experts (e.g., doctors) rarely annotate only a decision label in their day-to-day workflow without providing explanations. Yet, existing low-resource learning techniques, such as Active Learning (AL), that aim to support human annotators mostly focus on the label while neglecting the natural language explanation of a data point. This work proposes a novel AL architecture to support experts' real-world need for label and explanation annotations in low-resource scenarios. Our AL architecture leverages an explanation-generation model to produce explanations guided by human explanations, a prediction model that uses the generated explanations to make predictions faithfully, and a novel data diversity-based AL sampling strategy that benefits from the explanation annotations. Automated and human evaluations demonstrate the effectiveness of incorporating explanations into AL sampling, as well as the improved human annotation efficiency and trustworthiness achieved with our AL architecture. Additional ablation studies illustrate the potential of our AL architecture for transfer learning, generalizability, and integration with large language models (LLMs). While LLMs exhibit exceptional explanation-generation capabilities for relatively simple tasks, their effectiveness in complex real-world tasks warrants further in-depth study.
Beyond Labels: Empowering Human Annotators with Natural Language Explanations through a Novel Active-Learning Architecture
[ { "figure_caption": "Explanation-generation Model: Training Input explain: what is the relationship between [hypothesis] and [premise] choice1: entailment choice2: neutral choice3: contradiction Training Target [human annotated explanations] Model Generation [generated free-form explanation] Prediction Model: Training Input question: what is the relationship between [hypothesis] and [premise] choice1: entailment choice2: neutral choice3: contradiction <sep> because [generated free-form explanation] Training Target [human annotated label] Model Prediction [predicted category]", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "′train = rank D train by score D selected = select x data f rom D ′ train with equal intervals Human annotation on D selected D train -= D selected ; D prev + = D selected", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Preliminary experiment result of our dualmodel system on e-SNLI (Camburu et al., 2018) dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(a) Setting 1: 9 examples per iteration + 20 iterations (b) Setting 2: 30 examples per iteration + 15 iterations", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Results of AL Simulation experiment on our Dual-model system with different data selectors.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "(a) Setting 1: 9 examples per iteration + 20 iterations (b) Setting 2: 30 examples per iteration + 15 iterations", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Ablation study results of AL simulation experiment on our Dual-model system with different data selectors on ECQA dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Ablation study results of AL simulation experiment with FLAN-T5-XL for explanation (exp.) generation in our Dual-model framework compared with best human-annotated explanations on e-SNLI (top) and ECQA (bottom) datasets.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 66Figure6shows the results of our Ablation Study results described in Section 4.5. The explanationgeneration model is fine-tuned from AL on e-SNLI dataset with two different AL settings, then we freeze the explanation-generation model to train the prediction model in AL simulation for Multi-NLI dataset under two settings. Setting 1/2 refers to the settings for Active Learning Simulation in Section 4.2.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Results of Transfer Learning Ablation Study of AL Simulation experiment on our Dual-model system from e-SNLI to Multi-NLI. Setting 1/2 refers to the settings for Active Learning Simulation in Section 4.2. 
of hyper-parameters is: batch_size_per_GPU = 2; learning_rate = 1e -4 ; input_max_length = 512; target_max_length = 64 We conduct a hyper-parameter search for the number of fine-tuning epochs for each amount of sampled examples, details are shown in Table4.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "2; learning_rate = 1e -4 ; M EG _train_epoch = 20, M P _train_epoch = 250; input_max_length = 512; target_max_length = 64 D Proposal for an Interactive System", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Human evaluation results.", "figure_data": "Yes / No CountLabelExp.Exp. → Label Trustworthy AIGround-truth83 / 786 / 487 / 378 / 12Dual-model (ours)64 / 26 68 / 2248 / 4235 / 55Self-rationalization 42 / 48 67 / 2351 / 3921 / 69", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Sample data of each category in e-SNLI(Camburu et al., 2018) dataset.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "# of train data per category / total epoch for M RG epoch for M P", "figure_data": "10 / 302510050 / 15025250100 / 30010250500 / 15005501500 / 45005503000 / 90005255000 / 15000525Full11", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Fine-tuning epochs of each model in our dualmodel system with different data amount settings.", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" } ]
Bingsheng Yao; Ishan Jindal; Lucian Popa; Yannis Katsis; Sayan Ghosh; Lihong He; Yuxuan Lu; Shashank Srivastava; Yunyao Li; James Hendler; Dakuo Wang
[ { "authors": "Shourya Aggarwal; Divyanshu Mandowara; Vishwajeet Agrawal; Dinesh Khandelwal; Parag Singla; Dinesh Garg", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Explanations for Common-senseQA: New Dataset and Models", "year": "2021" }, { "authors": "Chicheng Jordan T Ash; Akshay Zhang; John Krishnamurthy; Alekh Langford; Agarwal", "journal": "", "ref_id": "b1", "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "year": "2019" }, { "authors": "Moorthy Meghana; Alessandro Bhat; Subhabrata Sordoni; Mukherjee", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Self-training with few-shot rationalization", "year": "2021" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "A large annotated corpus for learning natural language inference", "year": "2015" }, { "authors": "Jonathan Bragg; Arman Cohan; Kyle Lo; Iz Beltagy", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b4", "title": "Flex: Unifying evaluation for few-shot nlp", "year": "2021" }, { "authors": "Oana-Maria Camburu; Tim Rocktäschel; Thomas Lukasiewicz; Phil Blunsom", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "e-SNLI: Natural language inference with natural language explanations", "year": "2018" }, { "authors": "Anirudh Samuel Carton; Chenhao Rathore; Tan", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Evaluating and characterizing human rationales", "year": "2020" }, { "authors": "Hanxiong Chen; Xu Chen; Shaoyun Shi; Yongfeng Zhang", "journal": "", "ref_id": "b7", "title": "Generate natural language explanations for recommendation", "year": "2021" }, { "authors": "Howard Chen; Jacqueline He; Karthik Narasimhan; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Can rationalization improve robustness", "year": "2022" }, { "authors": "Michael Chmielewski; Sarah C Kucker", "journal": "Social Psychological and Personality Science", "ref_id": "b9", "title": "An mturk crisis? 
shifts in data quality and the impact on study results", "year": "2020" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b10", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b11", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Bhavana Dalvi; Oyvind Tafjord; Peter Clark", "journal": "", "ref_id": "b12", "title": "Towards teachable reasoning systems", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Jaimie Drozdal; Justin Weisz; Dakuo Wang; Gaurav Dass; Bingsheng Yao; Changruo Zhao; Michael Muller; Lin Ju; Hui Su", "journal": "", "ref_id": "b14", "title": "Trust in automl: exploring information needs for establishing trust in automated machine learning systems", "year": "2020" }, { "authors": "Nan Duan; Duyu Tang; Peng Chen; Ming Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Question generation for question answering", "year": "2017" }, { "authors": "C Alan; Wayne A Elliott; Woodward", "journal": "Sage", "ref_id": "b16", "title": "Statistical analysis quick reference guidebook: With SPSS examples", "year": "2007" }, { "authors": "Yifan Fu; Xingquan Zhu; Bin Li", "journal": "Knowledge and information systems", "ref_id": "b17", "title": "A survey on instance selection for active learning", "year": "2013" }, { "authors": "Tianyu Gao; Adam Fisch; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Making pre-trained language models better few-shot learners", "year": "2021" }, { "authors": "Mor Geva; Yoav Goldberg; Jonathan Berant", "journal": "", "ref_id": "b19", "title": "Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets", "year": "2019" }, { "authors": "Peter Hase; Mohit Bansal", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "When can models learn from explanations? 
a formal framework for understanding the roles of explanation data", "year": "2022" }, { "authors": "Jungo Kasai; Sairam Kun Qian; Yunyao Gurajada; Lucian Li; Popa", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Low-resource deep entity resolution with transfer and active learning", "year": "2019" }, { "authors": "Tomáš Kočiský; Jonathan Schwarz; Phil Blunsom; Chris Dyer; Karl Moritz Hermann; Gábor Melis; Edward Grefenstette", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b22", "title": "The NarrativeQA reading comprehension challenge", "year": "2018" }, { "authors": "Sawan Kumar; Partha Talukdar", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "NILE : Natural language inference with faithful natural language explanations", "year": "2020" }, { "authors": "Veronica Latcinnik; Jonathan Berant", "journal": "", "ref_id": "b24", "title": "Explaining question answering models through text generation", "year": "2020" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": " Zachary C Lipton", "journal": "Queue", "ref_id": "b26", "title": "The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery", "year": "2018" }, { "authors": "Jiacheng Liu; Alisa Liu; Ximing Lu; Sean Welleck; Peter West; Le Ronan; Yejin Bras; Hannaneh Choi; Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Generated knowledge prompting for commonsense reasoning", "year": "2022" }, { "authors": "M Scott; Su-In Lundberg; Lee", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "A unified approach to interpreting model predictions", "year": "2017" }, { "authors": "Ana Marasovic; Iz Beltagy; Doug Downey; Matthew Peters", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Few-shot self-rationalization with natural language prompts", "year": "2022" }, { "authors": "Sharan Narang; Colin Raffel; Katherine Lee; Adam Roberts; Noah Fiedel; Karishma Malkan", "journal": "", "ref_id": "b30", "title": "Wt5?! 
training text-to-text models to explain their predictions", "year": "2020" }, { "authors": "T Hieu; Arnold Nguyen; Smeulders", "journal": "", "ref_id": "b31", "title": "Active learning using pre-clustering", "year": "2004" }, { "authors": "Fredrik Olsson", "journal": "", "ref_id": "b32", "title": "A literature survey of active machine learning in the context of natural language processing", "year": "2009" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Bhargavi Paranjape; Julian Michael; Marjan Ghazvininejad; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Prompting contrastive explanations for commonsense reasoning tasks", "year": "2021" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b35", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b36", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Dheeraj Rajagopal; Vidhisha Balachandran; Eduard H Hovy; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "SELFEXPLAIN: A self-explaining architecture for neural text classifiers", "year": "2021" }, { "authors": "Nazneen Fatema Rajani; Bryan Mccann; Caiming Xiong; Richard Socher", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Explain yourself! 
leveraging language models for commonsense reasoning", "year": "2019" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "year": "2016" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b40", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Pengzhen Ren; Yun Xiao; Xiaojun Chang; Po-Yao Huang; Zhihui Li; B Brij; Xiaojiang Gupta; Xin Chen; Wang", "journal": "ACM computing surveys (CSUR)", "ref_id": "b41", "title": "A survey of deep active learning", "year": "2021" }, { "authors": "Marco Tulio Ribeiro; Sameer Singh; Carlos Guestrin", "journal": "", "ref_id": "b42", "title": "why should i trust you?\" explaining the predictions of any classifier", "year": "2016" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Exploiting cloze-questions for few-shot text classification and natural language inference", "year": "2021" }, { "authors": "Christopher Schröder; Andreas Niekler", "journal": "", "ref_id": "b44", "title": "A survey of active learning for text classification using deep neural networks", "year": "2020" }, { "authors": "Ozan Sener; Silvio Savarese", "journal": "", "ref_id": "b45", "title": "Active learning for convolutional neural networks: A core-set approach", "year": "2017" }, { "authors": "Burr Settles", "journal": "", "ref_id": "b46", "title": "Active learning literature survey", "year": "2009" }, { "authors": "Manali Sharma; Di Zhuang; Mustafa Bilgic", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Active learning with rationales for text classification", "year": "2015" }, { "authors": "Yanyao Shen; Hyokun Yun; Zachary Lipton; Yakov Kronrod; Animashree Anandkumar", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Deep active learning for named entity recognition", "year": "2017" }, { "authors": "Jiao Sun; Swabha Swayamdipta; Jonathan May; Xuezhe Ma", "journal": "", "ref_id": "b49", "title": "Investigating the benefits of freeform rationales", "year": "2022" }, { "authors": "Oyvind Tafjord; Bhavana Dalvi; Peter Clark", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "ProofWriter: Generating implications, proofs, and abductive statements over natural language", "year": "2021" }, { "authors": "Alon Talmor; Jonathan Herzig; Nicholas Lourie; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge", "year": "2019" }, { "authors": "Alon Talmor; Oyvind Tafjord; Peter Clark; Yoav Goldberg; Jonathan Berant", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b52", "title": "Leap-of-thought: Teaching pre-trained models to systematically reason over implicit knowledge", "year": "2020" }, { "authors": "Stefano Teso; Kristian Kersting", "journal": "", "ref_id": "b53", "title": "Explanatory interactive machine learning", "year": "2019" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, 
{ "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b55", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Sarah Wiegreffe; Ana Marasovic", "journal": "", "ref_id": "b56", "title": "Teach me to explain: A review of datasets for explainable natural language processing", "year": "2021" }, { "authors": "Sarah Wiegreffe; Ana Marasović; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b57", "title": "Measuring association between labels and free-text rationales", "year": "2021" }, { "authors": "Adina Williams; Nikita Nangia; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b58", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "year": "2018" }, { "authors": "Genta Indra Winata; Andrea Madotto; Zhaojiang Lin; Rosanne Liu; Jason Yosinski; Pascale Fung", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "Language models are few-shot multilingual learners", "year": "2021" }, { "authors": "Ying Xu; Dakuo Wang; Mo Yu; Daniel Ritchie; Bingsheng Yao; Tongshuang Wu; Zheng Zhang; Toby Jia-Jun Li; Nora Bradford; Branda Sun", "journal": "ACL", "ref_id": "b60", "title": "Fantastic questions and where to find them: Fairytaleqaan authentic dataset for narrative comprehension", "year": "2022" }, { "authors": "Zhao Xu; Kai Yu; Xiaowei Volker Tresp; Jizhi Xu; Wang", "journal": "Springer", "ref_id": "b61", "title": "Representative sampling for text classification using support vector machines", "year": "2003" }, { "authors": "Bingsheng Yao; Prithviraj Sen; Lucian Popa; James Hendler; Dakuo Wang", "journal": "", "ref_id": "b62", "title": "Are human explanations always helpful? 
towards objective evaluation of human natural language explanations", "year": "2023" }, { "authors": "Bingsheng Yao; Dakuo Wang; Tongshuang Wu; Zheng Zhang; Toby Li; Mo Yu; Ying Xu", "journal": "Association for Computational Linguistics", "ref_id": "b63", "title": "It is AI's turn to ask humans a question: Questionanswer pair generation for children's story books", "year": "2022" }, { "authors": "Mo Yu; Shiyu Chang; Yang Zhang; Tommi Jaakkola", "journal": "Association for Computational Linguistics", "ref_id": "b64", "title": "Rethinking cooperative rationalization: Introspective extraction and complement control", "year": "2019" }, { "authors": "Eric Zelikman; Jesse Mu; Yuhuai Tony Noah D Goodman; Wu", "journal": "", "ref_id": "b65", "title": "Star: Self-taught reasoner bootstrapping reasoning with reasoning", "year": "2022" }, { "authors": "Shao Zhang; Jianing Yu; Xuhai Xu; Changchang Yin; Yuxuan Lu; Bingsheng Yao; Melanie Tory; M Lace; Jeffrey Padilla; Ping Caterino; Zhang", "journal": "", "ref_id": "b66", "title": "Rethinking human-ai collaboration in complex medical decision making: A case study in sepsis diagnosis", "year": "2023" }, { "authors": "Shujian Zhang; Chengyue Gong; Xingchao Liu; Pengcheng He; Weizhu Chen; Mingyuan Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b67", "title": "ALLSH: Active learning guided by local sensitivity and hardness", "year": "2022" }, { "authors": "Zhan Zhang; Yegin Genc; Dakuo Wang; Mehmet Eren Ahsen; Xiangmin Fan", "journal": "Journal of Medical Systems", "ref_id": "b68", "title": "Effect of ai explanations on human perceptions of patient-facing ai-powered healthcare systems", "year": "2021" }, { "authors": "Yangqiaoyu Zhou; Yiming Zhang; Chenhao Tan", "journal": "Association for Computational Linguistics", "ref_id": "b69", "title": "FLamE: Few-shot learning from natural language explanations", "year": "2023" } ]
[]
2024-02-15
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b3", "b3", "b3", "b14", "b3" ], "table_ref": [], "text": "In recent years, there has been a surge of interest in vision-and-language research, particularly in the field of text-to-image generation. Prominent models in this domain include autoregression models like DALL-E [1] and Make-A-Scene [2], as well as diffusion models like DALL-E 2 [3] and Stable Diffusion [4]. These models have revolutionized the quality of generated images. They leverage text prompts to synthesize images depicting various objects and scenes that align with the given text. Among these models, Stable Diffusion [4] stands out as a significant open-source model. It serves as a foundation for many recent works, including image generation [5,6,7,8], image editing [9,10,11,12,13,14], and more. [4]. The bottom row sets attention weights of caption words to zero, only keeping the start-/end-token, so the caption maps of the bottom are black. Also notice the starttoken has strong weights so the map is all white.\nHowever, text prompts have limitations when it comes to incorporating unspeakable information from reference images. It becomes challenging to generate a perfect and detailed prompt when users want to synthesize images related to a picture they have seen. Image variation techniques aim to address this limitation by enabling users to generate multiple variations of an input image, without relying on complex prompts. As illustrated in Fig. 1, the generated variations closely resemble the reference image, often sharing the same scene or objects but with distinct details.\nStable Diffusion Reimagine (SD-R) [4] 3 is a recently proposed image variation algorithm. It achieves this goal by retraining Stable Diffusion [4], where the text encoder is replaced with an image encoder to adapt the model for image input. The model is trained using millions of images and over 200,000 GPU-hours, enabling it to effectively generate image variations based on reference images.\nIn this paper, we make a significant discovery that allows a more cost-effective image-to-prompt conversion approach. We find the CLIP model [15], as utilized in Stable Diffusion, can be repurposed as an effective image-to-prompt converter. This converter can be directly employed or served as a valuable initialization for a data-efficient fine-tuning process. As a result, the expenses associated with constructing or customizing an image-to-prompt converter can be substantially reduced.\nMore specifically, our method is built upon a surprising discovery: the control of image generation through text is primarily influenced by the embedding of the end-of-sentence (EOS) token. We found that masking all word tokens, except for the start and end tokens, does not adversely affect the quality of image generation, as illustrated in Figure 2. Simultaneously, during CLIP training, the projection of the end-token embedding is trained to align with the visual embedding. This inherent relationship enables us to derive a closed-form projection matrix that converts visual embedding into an embedding that is capable of controlling the generation of Stable Diffusion [4]. We call this method Stable Diffusion Image-to-Prompt Conversion (SD-IPC).\nIn addition, we introduce two methods to enhance the quality and flexibility of image-to-prompt conversion. 
The first approach involves parameter-efficient tuning using a small amount of data, consisting of only 100 images and requiring just 1 GPU-hour. This method encourages the model to better preserve image information and enables practitioners to control the specific content they want to retain when generating new images. The second approach involves customizing the model on reference images using a few iterations, ensuring that the generated images are closer to specific concepts. While this approach has been explored in previous research, we demonstrate that with the advantageous initialization provided by SD-IPC, the online fine-tuning requires significantly fewer iterations to achieve desirable results." }, { "figure_ref": [], "heading": "Background and Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Diffusion Model", "publication_ref": [ "b3", "b15", "b16", "b17", "b18", "b3", "b3", "b3", "b16" ], "table_ref": [], "text": "Firstly, we present a brief overview of the Stable Diffusion [4], which serves as our underlying model. Diffusion models (DMs) [16,17,18,19] belong to a class of latent variable models. In DMs, there exist two Markov chains known as the diffusion process and the reverse process, both having a fixed length T . The diffusion process progressively introduces Gaussian noise to the original data (x 0 ) until the signal becomes corrupted (x T ). During DMs training, the reverse process is learned, which operates in the opposite direction of the diffusion process. The reverse process can be viewed as a denoising procedure, moving from x t to x t-1 at each step. After multiple denoising steps, the model obtains instances that closely resemble the real data.\nStable Diffusion [4] is built on the Latent Diffusion Model (LDM) [4]. LDM [4] proposed to do diffusion process in a latent space rather than the usual pixel space, significantly reducing the training and inference cost of the diffusion model. The authors proposed to utilize a VAE compression to get the latent code z 0 , which is x 0 above. Diffusion process will build on the latents. A U-Net architecture [17] with timestep and text conditions would do the reverse. The text prompt is injected into the model with cross-attention layers. We denote ϵ θ (z t , c txt (p txt ), t) as the output of the U-Net, which is the predicted denoising result. p txt is the textual prompt and c txt (p txt ) is the prompt embedding from the text encoder. t is the timestep. The training objective of DMs is as followed:\nE ϵ,z,ptxt,t ∥ϵ -ϵ θ (z t , c txt (p txt ), t)∥ 2 2 ,(1)\nwhere ϵ ∼ N (0, I) is the noise used to corrupt clean latent variables. During the generation, the latent z t , which starts at a random Gaussian noise z T , will recursively go through a denoising operation until z 0 is sampled. Finally, z 0 is reconstructed to an image by the VAE." }, { "figure_ref": [], "heading": "CLIP Model", "publication_ref": [ "b14", "b14" ], "table_ref": [], "text": "The CLIP model [15] has garnered significant acclaim as a groundbreaking zero-shot model in recent years. Its training process demands optimizing a contrastive loss function using extensive 400-million pairs of images and corresponding text descriptions. 
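As a concrete illustration of this image–text alignment, the following minimal sketch computes the projected CLIP embeddings and their cosine similarity using the Hugging Face implementation; the model name is assumed to be the ViT-L/14 checkpoint used by Stable Diffusion v1, and the example inputs are arbitrary.

```python
# Sketch of CLIP's contrastive alignment: projected image and text embeddings whose cosine
# similarity is maximized for matched pairs during training.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("reference.jpg")                       # any reference image
texts = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    f_c_img = model.get_image_features(pixel_values=inputs["pixel_values"])      # projected image embedding
    f_c_txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])   # projected EOS embedding
sim = torch.nn.functional.cosine_similarity(f_c_img, f_c_txt)
print(sim)  # higher similarity for the matching caption
```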
Through the meticulous training, the model has been able to achieve unparalleled capabilities in zero-shot classification and image-text retrieval.\nThe model comprises an image encoder CLIP i (•), a text encoder CLIP t (•), a visual projection layer W i , and a textual projection layer W t . The image encoder encodes an input image x into a visual embedding f img derived from a special class-token. By applying the visual projection layer, the embedding is projected into the CLIP visual embedding f c img . Similarly, the text encoder processes the input text, yielding a sequence of output embeddings f txt for each text token and a start token and end-of-sentence (EOS) )token. The embedding of the EOS token f t,⟨eos⟩ txt , where t denotes the length of the sentence, is projected into the CLIP textual embedding f c txt through W t . Formally,\nf img = CLIP i (x) , f c img = W i • f img ,(2)\nf txt = CLIP t (s) , f c txt = W t • f t,⟨eos⟩ txt .(3)\nThe training objective of CLIP is to maximize the cosine similarity between f c txt and f c img for matched sentence-image pair while minimizing this similarity for unmatched pairs. For the simplicity of discussion, we denote the space spanned by f txt as T -space and the space spanned by f c * as C-space. The CLIP text encoder [15] is directly used in Stable Diffusion to encode text prompts. It encodes a text prompt as a sequence of embeddings: \nf txt := f 0,⟨sos⟩ txt , f 1,\nwhere f 0,⟨sos⟩ txt , f i,w txt and f t,⟨eos⟩ txt denote the embeddings corresponding to the start-token, the i-th word token and end-token, respectively. From f t+1,⟨eos⟩ txt to f 76,⟨eos⟩ txt are padded tokens." }, { "figure_ref": [], "heading": "Image Variation & Customized Generation", "publication_ref": [ "b3", "b3", "b3", "b10", "b13", "b12", "b10", "b12", "b12", "b3", "b8", "b9", "b19", "b20", "b21", "b3" ], "table_ref": [], "text": "Image Variation. Image variation aims to generate images similar to the reference image but not identical. SD-R [4] is proposed to address this problem, which builds upon the Stable-unCLIP model 4 . The authors fine-tuned the Stable Diffusion model [4] to align with the CLIP visual embedding. In SD-R [4], images can be directly input into the diffusion model through CLIP image encoder. Since the original Stable Diffusion is conditioned on text only, an expensive fine-tuning is required to accommodate this new input. The process took 200,000 GPU-hours on NVIDIA A100-40GB GPU while our approach only requires 1 GPU-hour on NVIDIA A5000-24GB GPU 5 .\nCustomized Generation. Recent works such as DreamBooth [11], Textual Inversion [14], and Custom Diffusion [13] focus on learning a special text prompt to feature specific objects or persons from the reference images. For instance, given several photos of a particular cat, these methods use a special-token \"⟨s⟩ cat\" to represent the concept and incorporate it with the text prompt. DreamBooth [11] and Custom Diffusion [13] also perform simultaneous fine-tuning of diffusion model parameters. However, the fine-tuning process is still somewhat time-consuming, with Custom Diffusion [13] requiring nearly 6 minutes on 2 NVIDIA A100 GPUs. In contrast, our fast update SD-IPC only needs 1 minute on 2 A5000 GPUs.\nImage Editing. Stable Diffusion [4] is commonly used for image editing tasks. Prompt-to-Prompt [9] and Plug-and-Play [10] utilize attention map as a bridge to enable concept and style manipulation. Null-Text Inversion [20] and Pix2Pix-Zero [21] relies on inversion-based methods. 
InstructPix2Pix [22] creates a dataset of paired edited images and fine-tunes Stable Diffusion [4] as an editing model. It's important to highlight that while our primary focus in developing this method was to enhance image variation, it can also be employed to generate images based on prompts that combine both textual instructions and accompanying images. Notably, unlike existing approaches that frequently reproduce the layout of the original image in the generated output, our method operates without being confined to replicating the exact layout of the source image." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Image-to-Prompt Conversion via Projecting CLIP embedding", "publication_ref": [ "b14", "b22", "b14" ], "table_ref": [], "text": "By design, the image generation process in the stable diffusion model should be influenced by embeddings of all tokens in a prompt, like Eq. ( 4). Interestingly, we have discovered that masking word tokens, by setting their attention weights to 0 except for the start-/end-token, does not have a negative impact on the quality of generated images. This finding is visually illustrated in Figure 2.\nOn another note, the training objective of CLIP [15] is to match the embeddings f c img and f c txt , with f c txt being essentially a projection of f t,⟨eos⟩ txt\n. This inherent relationship, coupled with the aforementioned observation, leads us to establish a connection between f c img and f t,⟨eos⟩ txt\n, effectively converting the visual embedding to a prompt embedding.\nFormally, we assume that after training, CLIP model can induce high cosine similarity between the f c img and f txt and we can further make the following approximation:\nf c img ∥f c img ∥ ≈ f c txt ∥f c txt ∥ , with f c txt = W t f t,⟨eos⟩ txt .(5)\nBy using Moore-Penrose pseudo-inverse [23] on W t6 , we obtain an estimate of f t,⟨eos⟩ txt from f c img :\nf t,⟨eos⟩ txt ≈ ∥f c txt ∥ ∥f c img ∥ W + t f c img := f cnvrt txt , where, W + t = W ⊤ t W t -1 W ⊤ t ,(6)\nwhere we empirically observe ∥f c txt ∥ can be well approximated by a constant, e.g., ∥f c txt ∥ = 27 and W t can be obtained from the pretrained CLIP model [15]. We denote the converted embedding as f cnvrt txt and use it to assemble a pseudo-prompt sequence with the following format:\nftxt := f 0,⟨sos⟩ txt , f 1,cnvrt txt , ..., f 76,cnvrt txt ,(7)\nwhere\nf 1,cnvrt txt = • • • = f 76,cnvrt txt = f cnvrt txt .\nIn other words, we replace all word-tokens, pad-tokens and end-token in Eq. ( 4) with the converted f cnvrt txt , based on the fact that f cnvrt txt is an approximation of f t,⟨eos⟩ txt and masking word-tokens does not influence the generation 7 ." }, { "figure_ref": [], "heading": "Input Images", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "SD-R SD-IPC (Ours) SD w/ Text", "publication_ref": [ "b23", "b3", "b3" ], "table_ref": [], "text": "Figure 3: Image variation results on MSCOCO [24]. SD w/ Text [4] is generation from the groundtruth text prompts that are not available for variation methods such as SD-R and SD-IPC. SD-IPC is our method, notice that SD-IPC does not need any training compared to SD-R [4]. This approximation allows immediate conversion of an image to a text prompt by directly mapping it to an (approximately) equivalent prompt. We refer to this method as Stable Diffusion Image-to-Prompt Conversion (SD-IPC). Experimental results in Fig. 
3 demonstrate that SD-IPC effectively captures the semantic information present in the reference image and enables image variation. Furthermore, we have identified a simple yet effective approach to combine both the text prompt and the converted image prompt within our framework. To achieve this, we perform a weighted average of the two embeddings. Formally, the process can be described as follows:\nf comb txt = f cnvrt txt + α • f t,⟨eos⟩ txt , f edit txt = f 0,⟨sos⟩ txt , f 1,w0 txt , ..., f t,comb txt , ..., f 76,comb txt ,(8)\nwhere f i,comb text = f comb text is the combined-token embedding and α is a hyperparameter to control the expression of editing text. Notice that the editing word-token f i,w txt are also in the embedding sequence. Conditioning on f edit text could generate images that match both the visual and textual conditions. We report some editing results in Appendix D.2." }, { "figure_ref": [], "heading": "Fine-tuning with Image-to-Prompt Conversion", "publication_ref": [ "b3", "b12", "b24", "b24" ], "table_ref": [], "text": "While the aforementioned SD-IPC method demonstrates reasonable performance, it still faces challenges when it comes to real-world applications due to two main reasons. Firstly, the conversion process in SD-IPC relies on approximations, which may not always yield optimal results. Secondly, determining the exact topic or theme of an image introduces ambiguity. As the saying goes, \"an image is worth a thousand words\", but precisely which words? The same reference image can be interpreted differently based on its objects, scenes, styles, or the identities of people depicted within. Therefore, it becomes crucial to have a method that allows control of the content we wish to preserve and convert into the prompt. To address these concerns and cater to the needs, we propose a partial fine-tuning approach for the CLIP converter derived from Sec. 3.1.\nIn proposed approach, we focus on fine-tuning two specific types of parameters. Firstly, we address the optimization of the projection matrix within the cross-attention layer of the U-Net in Stable Diffusion [4]. This aspect aligns with the methodology employed in Custom Diffusion [13]. Furthermore, we incorporate deep prompt tuning [25] into the transformer of the CLIP image encoder. Deep prompt tuning [25] introduces learnable tokens within all layers of the transformer while keeping the weights of other components fixed. More details can be found in Appendix A." }, { "figure_ref": [], "heading": "Input Images", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "SD-R SD-IPC-FT (Ours) SD-IPC (Ours)", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "ImageNet Finetuned", "publication_ref": [ "b0", "b25", "b26", "b27" ], "table_ref": [], "text": "CelebA-HQ Finetuned Places365 Finetuned The parameters can be learned by using the following loss:\nE ϵ,z,xref,t ∥ϵ -ϵ θ (z t , c img (x ref ), t)∥ 2 + E ϵ,z,ptxt,t ∥ϵ -ϵ θ (z t , c txt (p txt ), t)∥ 2 ,(9)\nhere the first term is L cnvrt , which is fine-tuning with the image-to-prompt input, and the second term is L text , which is the original text-to-image training loss, we utilize this as a regularization to keep the text-to-image generation. In the proposed approach, c img (•) refers to the image-to-prompt conversion derived from SD-IPC. 
It encompasses the CLIP image transformer augmented with the newly-introduced learnable prompts in deep prompting and the fixed inverse matrix derived from Eq. ( 6). During tuning, the inverse projection matrix remains unchanged. z t represents the latent representation of the target image x target at time step t. The objective function aims to encourage the image-to-prompt conversion to extract information from x ref that facilitates the recovery of x target .\nThere are two possible choices for x target : (1) x target can be selected to be the same as x ref .\n(2) x target and x ref can be different images, but with a shared visual concept that we intend to extract as the prompt. This usually poses stronger supervision to encourage the converter to extract information related to the shared theme. The schematic representation of this scheme is illustrated in Appendix C.2.\nWe use images randomly sampled from ImageNet [26], CelebA-HQ [27], and Places365 [28] dataset to encourage the model extract object, identity, and scene information, respectively. Experiments show that merely 100 images and 1 GPU-hour of training are sufficient for achieving satisfied results thanks to the good initialization provided by SD-IPC. We call this approach SD-IPC-FT, the results are shown in Fig. 4. Some editing examples are listed in Fig. 5, Fig. 6, and Appendix D.4." }, { "figure_ref": [ "fig_3" ], "heading": "Fast Update for Customized Generation", "publication_ref": [ "b10", "b12", "b12" ], "table_ref": [], "text": "Existing methods, such as DreamBooth [11] and Custom Diffusion [13], suggest that partially finetuning the model on given concept images before generation can be an effective way to synthesized images with customized visual concepts, e.g., people with the same identity. Our approach can also benefit from this scheme by performing such an online update with SD-IPC. This can be achieved by simply replacing the training images in SD-IPC-FT with reference images and use L convrt only. We call this method SD-IPC-CT (CT stands for customized concept tuning). Interestingly, we find that our method can generate customized images with much fewer updates. As a comparison, SD-IPC-CT only takes 30-iteration updates with around 1 minute on 2 A5000 GPUs while the Custom Diffusion [13] needs 250 iterations (6 minutes on 2 A100 GPUs). We report customized generation in Fig. 7." }, { "figure_ref": [], "heading": "Input Images", "publication_ref": [], "table_ref": [], "text": "\"On the beach\" \"Under the starry sky\"" }, { "figure_ref": [], "heading": "SD-R", "publication_ref": [], "table_ref": [], "text": "Without Editing" }, { "figure_ref": [], "heading": "SD-R SD-IPC-FT (Ours) SD-R SD-IPC-FT (Ours) SD-IPC-FT (Ours)", "publication_ref": [ "b25", "b3" ], "table_ref": [], "text": "Figure 5: Image editing result with SD-IPC-FT trained with 100 images sampled from ImageNet [26]. SD-IPC-FT shows better editing performance than that of SD-R [4]." }, { "figure_ref": [], "heading": "Input Images", "publication_ref": [], "table_ref": [], "text": "\"Wearing glasses\" \"Wearing a hat\"" }, { "figure_ref": [], "heading": "SD-R", "publication_ref": [], "table_ref": [], "text": "Without Editing" }, { "figure_ref": [], "heading": "SD-R SD-IPC-FT (Ours) SD-R SD-IPC-FT (Ours) SD-IPC-FT (Ours)", "publication_ref": [ "b26", "b3" ], "table_ref": [], "text": "Figure 6: Image editing result with SD-IPC-FT trained with 100 images sampled from CelebA-HQ [27]. SD-IPC-FT shows better editing performance than that of SD-R [4]." 
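Before turning to the experiments, the training-free conversion of Eqs. (5)–(7) can be summarized in a short sketch: pseudo-invert the CLIP text projection W_t, rescale the projected image embedding by the empirical norm constant (≈27), and tile the result into a 77-token conditioning sequence. The Hugging Face CLIP modules are used for illustration; how the returned tensor is passed to a specific Stable Diffusion pipeline (e.g., as prompt_embeds in diffusers) is an assumption rather than the authors' exact code path.

```python
# Minimal sketch of SD-IPC (Eqs. (5)-(7)): map a projected CLIP image embedding back into
# the text encoder's output space and assemble a pseudo-prompt sequence.
import torch
from transformers import CLIPModel, CLIPProcessor, CLIPTokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def image_to_prompt_embeddings(image, txt_norm=27.0, seq_len=77):
    # Projected (C-space) image embedding f^c_img.
    pixels = proc(images=image, return_tensors="pt")["pixel_values"]
    f_c_img = clip.get_image_features(pixel_values=pixels).float()
    # Eq. (6): f_cnvrt = (||f^c_txt|| / ||f^c_img||) * W_t^+ f^c_img, with W_t^+ the pseudo-inverse.
    W_t = clip.text_projection.weight.float()     # (proj_dim, hidden); computes f^c_txt = W_t f_eos
    W_t_pinv = torch.linalg.pinv(W_t)             # (hidden, proj_dim)
    f_cnvrt = (txt_norm / f_c_img.norm(dim=-1, keepdim=True)) * (f_c_img @ W_t_pinv.T)
    # Eq. (7): keep the contextual start-token embedding, repeat f_cnvrt for the remaining slots.
    empty = tok("", return_tensors="pt", padding="max_length", max_length=seq_len)
    sos = clip.text_model(**empty).last_hidden_state[:, :1, :].float()
    return torch.cat([sos, f_cnvrt[:, None, :].repeat(1, seq_len - 1, 1)], dim=1)  # (1, 77, 768)
```

In use, a PIL image is passed in and the returned tensor replaces the text conditioning of the U-Net (for instance via the prompt_embeds argument of a diffusers StableDiffusionPipeline), which is what enables the training-free variation shown in Figure 3.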
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Training Details", "publication_ref": [ "b25", "b26", "b27", "b28", "b29", "b18", "b30", "b25", "b26", "b27" ], "table_ref": [], "text": "Datasets & Evaluations. In previous discussion, we propose three different fine-tuning schemes, using ImageNet [26] for object understanding, CelebA-HQ [27] for portrait understanding, and Places365 [28] for scene understanding. The specific training classes or identities we have selected for each dataset can be found in Appendix B. Each dataset includes 100 images, the test images are non-overlap with the training classes. In order to enable customized generation, we choose two objects and two identities as examples, each accompanied by five images. To assess the quality and semantic consistency of our generated outputs, we measure FID-Score [29] and CLIP-Score [30].\nArchitecture & Hyperparameter. We utilize Stable Diffusion v1.4 8 and CLIP ViT-L/149 models in this paper. We compare our method with the larger Stable-unCLIP-small model10 using CLIP ViT-H/14 and a 1,024-dimensional attention feature. Our method uses DDIM [19] for sampling, while Stable-unCLIP uses PNDM [31], both with 50 sampling steps. SD-IPC-FT is trained for 100, 50, and 100 epochs on ImageNet [26], CelebA-HQ [27], and Places365 [28], respectively. The learning rates for all datasets are 1e-5 with cosine decay. Customized generation has a constant learning rate of 5e-6 for 30-iteration updates. Training is conducted on 2 A5000 GPUs. The editing α is set to 0.9." }, { "figure_ref": [], "heading": "Image Variation Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "SD-IPC.", "publication_ref": [ "b23", "b3", "b3", "b14", "b25", "b28", "b29", "b3", "b3", "b3", "b10", "b3", "b3" ], "table_ref": [], "text": "We evaluate image variation on MSCOCO [24] using all 5,000 images in the 2017-split validation set. Fig. 3 compares text-based generation, SD-R [4], and our SD-IPC. Both SD-R [4] Table 1: Accuracy and Retrieval recalls (%) of original CLIP [15] (C-space) and the inverse matrix transfer (Tspace). Acc@k is the top-k accuracy of ImageNet [26] val-set, TR@k and IR@k are top-k text and image retrieval recalls. This is a surprising result that there is almost no performance decline.\nEmb. Space Acc@1 Acc@5 TR@1 TR@5 IR@1 IR@5 and our method demonstrate image variation, but our approach utilizes an inverse matrix without training. The FID-Score [29] and CLIP-Score [30] are reported in Tab. 2. Our SD-IPC maintains image quality similar to text-based generation (FID-Score: 24.78 vs. 23.65, CLIP-Score: 73.57 vs. 70.15). Note that SD-R [4] achieves better results due to its advanced backbone model.\nFinetuning. In Fig. 4, it is observed that SD-IPC may exhibit inconsistency, for example, the input \"teddybear\" generates a picture of \"kid\". This inconsistency could be attributed to fact that SD-IPC fails to discern the semantically related concepts \"kid\" and \"teddybear\". However, this issue can be rectified through fine-tuning, as demonstrated in Fig. 4, SD-IPC-FT achieves improved generation quality. Moreover, we find that the editing capability of SD-IPC-FT, as illustrated in Fig. 5, Fig. 
6, and Appendix D.4, surpasses that of SD-R [4], and fine-tuning does not impact the editing performance.\nWe incorporated a quantitative experiment to validate the superior editing performance of SD-IPC-FT in comparison to SD-R [4]. We utilize images from DreamBooth [11] benchmark and randomly select an editing text for each test image. Editing performance is evaluated using CLIP-T score, the quantitative results are presented in Tab. 4. As seen, our method achieves a higher CLIP-T score than SD-R [4]. Furthermore, we include the training-free SD-IPC for comparison, revealing even SD-IPC slightly outperforms SD-R [4]. The role of different target images used in the fine-tuning stage is outlined in Appendix C.2, which showcases how the choice of target images influences the generation result. Additional generation results are presented in Appendix D." }, { "figure_ref": [ "fig_3", "fig_3", "fig_4" ], "heading": "Customized Generation Results", "publication_ref": [ "b12", "b12", "b10", "b10", "b13", "b12", "b10", "b13", "b12" ], "table_ref": [], "text": "In Fig. 7, we compare our SD-IPC-CT with Custom Diffusion [13] in terms of customized generation. We evaluate customization for two identities (\"Obama\" and \"Chastain\") and two objects (\"cat\" and \"tortoise\"). The training process only requires 5 images and 30 iterations. The results in Fig. 7 reveal that Custom Diffusion [13] struggles to learn the details of the concept with such limited updates, while our SD-IPC-CT demonstrates impressive performance, particularly in the \"young boy/girl\" editing. However, for rare instances like the \"tortoise\" example, both methods do not perform well.\nFor quantitative results, we followed DreamBooth [11]. We used DINO and CLIP-I for subject fidelity and CLIP-T for editing performance. Comparing with DreamBooth [11], Textual Inversion [14], and Custom Diffusion [13], the results are in Tab. 3, Fig. 8, and Appendix D.5. DreamBooth [11] excels in DINO/CLIP-I scores but lags in CLIP-T, indicating limited editing performance, evident in visually similar outputs to training images. Textual Inversion [14] and Custom Diffusion [13] have strong CLIP-T but weak DINO/CLIP-I scores, indicating challenges in preserving subject details. Our SD-IPC-CT method strikes a balance between subject identity preservation and editing performance. " }, { "figure_ref": [ "fig_0" ], "heading": "Ablation Study", "publication_ref": [ "b14" ], "table_ref": [], "text": "Effectiveness of Inverse Projection Matrix. To evaluate the effectiveness of the inverse projection matrix in SD-IPC, we introduce a fully-connected layer instead of the inverse matrix in the image-toprompt conversion, referred to as SD-IPC-FC and SD-IPC-FC(I), where (I) means to initialize with our inverse projection matrix. We train the FC models with the same training data as SD-IPC-FT. However, the results in Fig. 10 indicate SD-IPC-FC suffers from overfitting. SD-IPC-FC(I) slightly alleviates the overfitting but still gets inferior results, shown in Fig. 9. This highlights that our SD-IPC-FT benefits from the good initialization of SD-IPC and preserves knowledge in CLIP [15]." }, { "figure_ref": [], "heading": "Input Images SD-IPC SD-IPC -FT (C) SD-IPC -FT (U) SD-IPC-FC SD-IPC-FT SD-IPC -FC (I)", "publication_ref": [], "table_ref": [], "text": "Prompt Learning & U-Net Fine-tuning. We perform quantitative tests on (text-edited) image variation for the comprehensive ablation studies following the testing in Sec. 4.2. 
For text-edited variation, we use the editing text as the prompt, such as \"A [Class Name] with a mountain in the background.\". We present the results of individual fine-tuning for two components: SD-IPC-FT (C) for CLIP and SD-IPC-FT (U) for the U-Net. Qualitative results are available in Fig. 9, while quantitative results are provided in Tab. 5 and Tab. 6. It demonstrates that fine-tuning each component contributes to model adaptation, with the best performance achieved when simultaneously fine-tuning both two parts. Some editing comparisons are in Appendix C.3.\nAdditionally, we investigate the influence of the editing parameter α in Appendix C.1. " }, { "figure_ref": [], "heading": "Limitations & Feature Directions", "publication_ref": [ "b3" ], "table_ref": [], "text": "While SD-IPC offers an alternative to SD-R [4], there are remaining challenges. Firstly, the editing text must be contextually appropriate, as using \"on the beach\" to edit a portrait may result in a person being on the beach but lacking facial features. Secondly, SD-IPC currently does not support multiple image inputs. Another future study is to extend our method to generate a sequence of images with consistency. Appendix E shows some potential of our method in this direction." }, { "figure_ref": [ "fig_0" ], "heading": "Conclusion", "publication_ref": [ "b14", "b3", "b3", "b12", "b24" ], "table_ref": [], "text": "This paper reveals that the CLIP model [15] serves as an image-to-prompt converter, enabling image variation in text-to-image Stable Diffusion [4] without extensive training. This finding enhances our understanding of the CLIP embedding space, demonstrating that a simple inverse matrix can convert visual embeddings into textual prompts. Leveraging this image-to-prompt conversion, our SD-IPC methods achieve impressive image variation and editing capabilities, while also enabling fast adaptation for customized generation. Experimental results also show the potential of our method in more multi-modal tasks. We anticipate that this study will inspire future research exploring the image-to-prompt pathway in CLIP-based or LDM-based models. Figure 13: The cross-attention fine-tuning demonstration. W q , W k , W v are projections for query, key, and value, respectively. In Stable Diffusion [4], the query is U-Net feature, key and value are condition embedding (textual embedding or our converted embedding). We only fine-tune W k , W v in updating, this is the same as [13]. [25] for CLIP image transformer. The gray part is the added prompt in each layer. It will be optimized by our fine-tuning loss." }, { "figure_ref": [], "heading": "A Demonstration of Architectures", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "CLIP Image Transformer", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "B Fine-tuning Classes", "publication_ref": [ "b25", "b27", "b26" ], "table_ref": [], "text": "Table 7: We randomly select 100 samples for each fine-tuning. This is the label list of the selected classes. For ImageNet [26] and Places365 [28], we select 20 classes with 5 images in each class. For CelebA-HQ [27], we select 10 people, below are their id-number in dataset. The impact of the editing parameter, α, is examined in Fig. 15, focusing on the \"Obama\" customized model. A higher α value indicates a greater contribution of the editing text to the generation process. Different editing instructions may require different values of α. 
For example, simpler edits like \"wearing glasses\" may be expressed with a lower α, even 0.0, since the added word-tokens in Eq. (8) are also fed into the cross-attention layers. " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was partly supported by the China Scholarship Council under Grant 202006960047 and partly by the National Natural Science Foundation of China (No.62173265). Lingqiao Liu is supported by the Centre of Augmented Reasoning." }, { "figure_ref": [], "heading": "E Story Generation Example", "publication_ref": [], "table_ref": [], "text": "• A little robot named Rusty went on an adventure to a big city.\n• The robot found no other robot in the city but only people.\n• The robot went to the village to find other robots.\n• Then the robot went to the river.\n• Finally, the robot found his friends." }, { "figure_ref": [], "heading": "F Custom Diffusion [13] with More Updates", "publication_ref": [ "b12" ], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "SD-IPC-FT (Ours) SD w/ Text", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Custom Diffusion Custom Diffusion Custom Diffusion", "publication_ref": [], "table_ref": [], "text": "Without Editing Without Editing \"Rainbow color hair\" \"Young boy/girl\" \"On the bed\" \"Pink furry\"" }, { "figure_ref": [], "heading": "Training Images", "publication_ref": [ "b12", "b12" ], "table_ref": [], "text": "Figure 28: Following the training details in Custom Diffusion [13], we fine-tuned Custom Diffusion [13] for 250 iterations and show that it is effective for generation and editing. We report 2 examples for each edit." } ]
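As an illustration of the editing parameter α discussed above, here is a small NumPy sketch of the prompt construction in Eq. (8): the converted image embedding is mixed with the end-token embedding of the editing text, and the combined vector fills the slots after the last word token. The array shapes, the `eos_index` argument, and the random toy embeddings are assumptions made for illustration; in the real pipeline the embeddings would come from the CLIP text encoder and the image-to-prompt conversion.

```python
import numpy as np

def build_edited_prompt(text_tokens, f_cnvrt, eos_index, alpha=0.9):
    """text_tokens: (77, d) embeddings of the editing text from the CLIP text encoder.
    f_cnvrt:     (d,)   image-to-prompt embedding converted from the reference image.
    eos_index:   position of the end-token of the editing text.
    alpha:       editing strength; a larger alpha lets the text dominate the generation."""
    edited = text_tokens.copy()
    # f_comb = f_cnvrt + alpha * f_eos  (Eq. (8)): mix the converted image prompt with the
    # end-token embedding that summarizes the editing text.
    f_comb = f_cnvrt + alpha * text_tokens[eos_index]
    # keep the start token and the word tokens of the editing text, and replace the slots
    # from the end-token onwards with the combined embedding.
    edited[eos_index:] = f_comb
    return edited

# toy usage with random embeddings (d = 768 for the CLIP ViT-L/14 text encoder)
d = 768
tokens = np.random.randn(77, d).astype(np.float32)
f_cnvrt = np.random.randn(d).astype(np.float32)
prompt = build_edited_prompt(tokens, f_cnvrt, eos_index=6, alpha=0.9)
print(prompt.shape)  # (77, 768)
```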
The Stable Diffusion model is a prominent text-to-image generation model that relies on a text prompt as its input, which is encoded using the Contrastive Language-Image Pre-Training (CLIP). However, text prompts have limitations when it comes to incorporating implicit information from reference images. Existing methods have attempted to address this limitation by employing expensive training procedures involving millions of training samples for image-to-image generation. In contrast, this paper demonstrates that the CLIP model, as utilized in Stable Diffusion, inherently possesses the ability to instantaneously convert images into text prompts. Such an image-to-prompt conversion can be achieved by utilizing a linear projection matrix that is calculated in a closed form. Moreover, the paper showcases that this capability can be further enhanced by either utilizing a small amount of similar-domain training data (approximately 100 images) or incorporating several online training steps (around 30 iterations) on the reference images. By leveraging these approaches, the proposed method offers a simple and flexible solution to bridge the gap between images and text prompts. This methodology can be applied to various tasks such as image variation and image editing, facilitating more effective and seamless interaction between images and textual prompts.
The CLIP Model is Secretly an Image-to-Prompt Converter
[ { "figure_caption": "Figure 1 :1Figure 1: Demonstration of image variation. The image on the left is a real reference image, while the four on the right are generated from our method.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Attention map of Stable Diffusion[4]. The bottom row sets attention weights of caption words to zero, only keeping the start-/end-token, so the caption maps of the bottom are black. Also notice the starttoken has strong weights so the map is all white.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Fine-tuned SD-IPC, denoted as SD-IPC-FT, can enhance the image-to-prompt conversion quality.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Customized generation examples. The images at left are training images, they are all from one concept or one identity. We compared our SD-IPC-CT with Custom Diffusion [13], notice that both results are trained by 5 reference images with merely 30 iterations.", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Results of DreamBooth [11] benchmark, the training images are listed at top-left corner.", "figure_data": "", "figure_id": "fig_4", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :Figure 10 :910Figure 9: Image variation results of different fine-tuning settings. SD-IPC-FT (C) means only training CLIP prompts, SD-IPC-FT (U) means only training U-Net cross-attention layers, SD-IPC-FC (I) means initializing the FC-layer with the inverse matrix. SD-IPC-FC SD-IPC-FT (Ours) Input Images", "figure_data": "", "figure_id": "fig_5", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Deep prompt tuning[25] for CLIP image transformer. The gray part is the added prompt in each layer. It will be optimized by our fine-tuning loss.", "figure_data": "", "figure_id": "fig_6", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :Figure 16 :Figure 17 :151617Figure 15: Editing with different α value. Higher α expresses more editing.", "figure_data": "", "figure_id": "fig_7", "figure_label": "151617", "figure_type": "figure" }, { "figure_caption": "Figure 18 :Figure 19 :1819Figure18: MSCOCO[24] variation with SD-IPC. We report three results for each input image.", "figure_data": "", "figure_id": "fig_8", "figure_label": "1819", "figure_type": "figure" }, { "figure_caption": "Figure 20 :Figure 21 :Figure 22 :Figure 23 :Figure 24 :Figure 25 :202122232425Figure 20: Portrait editing with SD-IPC.", "figure_data": "", "figure_id": "fig_9", "figure_label": "202122232425", "figure_type": "figure" }, { "figure_caption": "Figure 26 :26Figure 26: More results of DreamBooth [11] benchmark.", "figure_data": "", "figure_id": "fig_10", "figure_label": "26", "figure_type": "figure" }, { "figure_caption": "FID-Score[29] and CLIP-Score[30] (%) of the generation results. 
SD w/ Text[4] means the generation from ground-truth text.", "figure_data": "MethodsFIDCLIP-ScoreSD w/ Text [4]23.6570.15SD-R [4]19.8682.59SD-IPC (Ours)24.7873.57", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of DreamBooth[11] benchmark and the comparison with common methods.", "figure_data": "MethodDNIOCLIP-ICLIP-TCommentsDreamBooth [11]60.1177.7825.81Good Identity, Weak EditingTextual Inversion [14]25.1162.4429.53Weak Identity, Good EditingCustom Diffusion [13]39.6768.3730.90Weak Identity, Good EditingSD-IPC-CT (Ours)50.2574.5928.14Good Identity, Good Editing", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Superiorediting performanceof SD-IPC-FT.MethodCLIP-TSD-IPC26.84SD-IPC-FT28.69SD-R [4]26.01", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of image variation with different fine-tuning settings.", "figure_data": "MethodDNIOCLIP-ICLIP-TSD-IPC44.6077.4425.47SD-IPC-FT (C)49.1176.5125.82SD-IPC-FT (U)48.5379.0626.17SD-IPC-FT52.0379.5925.90", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results of text-edited image variation with different fine-tuning settings.", "figure_data": "MethodDNIOCLIP-ICLIP-TSD-IPC31.0968.6626.84SD-IPC-FT (C)29.1067.0327.99SD-IPC-FT (U)35.2169.9928.56SD-IPC-FT40.2871.9728.69", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "The architecture of Stable Diffusion[4]. Image is compressed by VAE to get the latent z 0 , then doing the diffusion process to acquire z 1 ∼ z T . The U-Net learns to predict removing noise ϵ θ (z t , c, t) when input is z t . Notice that the text condition injects to the U-Net by cross-attention layers, and the blue dotted arrows present our reference image transfer.Figure12:The architecture of CLIP[15]. Class-token and end-token embeddings from I-space and T -space are projected into the C-space, where the paired visual and textual embeddings are The Stable Diffusion[4] only utilizes the textual embedding from T -space.", "figure_data": "-Space 1 Inverse Matrix Transfer  W -tCLIP Text Transformer -Space Cross-Attention Layersi W-Spacet W( ) t ⋅ CLIP Input Text Sequence 0 1 , ,..., ,..., t z z z z Diffusion Process Figure 11: CLIP Image Transformer U-Net Down-Sampling Blocks U-Net Blocks t z 1 t-z VAE Encoder VAE Decoder 0 z [ ] T Input Patch Sequence ( ) i ⋅ CLIP Up-Sampling -Space  <cls>Input Text Sequence CLIP Text Transformer ( ) t ⋅ CLIP -Space  <eos>Cross-Attention LayerU-Net FeaturesWqQuerySoftmaxedWeightsWkKeyInput ConditionEmbeddingWvTrainableValueParameters", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
Yuxuan Ding; Chunna Tian; Haoxuan Ding; Lingqiao Liu
[ { "authors": "A Ramesh; M Pavlov; G Goh", "journal": "PMLR", "ref_id": "b0", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "O Gafni; A Polyak; O Ashual", "journal": "Springer", "ref_id": "b1", "title": "Make-a-scene: Scene-based text-to-image generation with human priors", "year": "2022" }, { "authors": "A Ramesh; P Dhariwal; A Nichol", "journal": "", "ref_id": "b2", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "R Rombach; A Blattmann; D Lorenz", "journal": "", "ref_id": "b3", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "W Feng; X He; T.-J Fu", "journal": "", "ref_id": "b4", "title": "Training-free structured diffusion guidance for compositional text-to-image synthesis", "year": "2022" }, { "authors": "N Liu; S Li; Y Du", "journal": "Springer", "ref_id": "b5", "title": "Compositional visual generation with composable diffusion models", "year": "2022" }, { "authors": "H Chefer; Y Alaluf; Y Vinker", "journal": "", "ref_id": "b6", "title": "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models", "year": "2023" }, { "authors": "L Zhang; M Agrawala", "journal": "", "ref_id": "b7", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "A Hertz; R Mokady; J Tenenbaum", "journal": "", "ref_id": "b8", "title": "Prompt-to-prompt image editing with cross attention control", "year": "2022" }, { "authors": "N Tumanyan; M Geyer; S Bagon", "journal": "", "ref_id": "b9", "title": "Plug-and-play diffusion features for text-driven image-to-image translation", "year": "2022" }, { "authors": "N Ruiz; Y Li; V Jampani", "journal": "", "ref_id": "b10", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2022" }, { "authors": "B Kawar; S Zada; O Lang", "journal": "", "ref_id": "b11", "title": "Imagic: Text-based real image editing with diffusion models", "year": "2022" }, { "authors": "N Kumari; B Zhang; R Zhang", "journal": "", "ref_id": "b12", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "R Gal; Y Alaluf; Y Atzmon", "journal": "", "ref_id": "b13", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2022" }, { "authors": "A Radford; J W Kim; C Hallacy", "journal": "PMLR", "ref_id": "b14", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "J Sohl-Dickstein; E Weiss; N Maheswaranathan", "journal": "PMLR", "ref_id": "b15", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "P Dhariwal; A Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "J Song; C Meng; S Ermon", "journal": "", "ref_id": "b18", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "R Mokady; A Hertz; K Aberman", "journal": "", "ref_id": "b19", "title": "Null-text inversion for editing real images using guided diffusion models", "year": "2023" }, { "authors": "G Parmar; K Kumar Singh; R Zhang", 
"journal": "", "ref_id": "b20", "title": "Zero-shot image-to-image translation", "year": "2023" }, { "authors": "T Brooks; A Holynski; A A Efros", "journal": "", "ref_id": "b21", "title": "Instructpix2pix: Learning to follow image editing instructions", "year": "2023" }, { "authors": "R Penrose", "journal": "Cambridge University Press", "ref_id": "b22", "title": "A generalized inverse for matrices", "year": "1955" }, { "authors": "T.-Y Lin; M Maire; S Belongie", "journal": "Springer", "ref_id": "b23", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "M Jia; L Tang; B.-C Chen", "journal": "Springer", "ref_id": "b24", "title": "Visual prompt tuning", "year": "2022" }, { "authors": "J Deng; W Dong; R Socher", "journal": "Ieee", "ref_id": "b25", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "T Karras; T Aila; S Laine", "journal": "", "ref_id": "b26", "title": "Progressive growing of gans for improved quality, stability, and variation", "year": "2018" }, { "authors": "B Zhou; A Lapedriza; A Khosla", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b27", "title": "Places: A 10 million image database for scene recognition", "year": "2017" }, { "authors": "M Heusel; H Ramsauer; T Unterthiner", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "J Hessel; A Holtzman; M Forbes", "journal": "", "ref_id": "b29", "title": "Clipscore: A reference-free evaluation metric for image captioning", "year": "2021" }, { "authors": "L Liu; Y Ren; Z Lin", "journal": "", "ref_id": "b30", "title": "Pseudo numerical methods for diffusion models on manifolds", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 224.47, 230.54, 280.2, 14.11 ], "formula_id": "formula_0", "formula_text": "E ϵ,z,ptxt,t ∥ϵ -ϵ θ (z t , c txt (p txt ), t)∥ 2 2 ,(1)" }, { "formula_coordinates": [ 3, 226.18, 456.07, 278.49, 12.69 ], "formula_id": "formula_1", "formula_text": "f img = CLIP i (x) , f c img = W i • f img ,(2)" }, { "formula_coordinates": [ 3, 226.18, 473.22, 278.49, 13.75 ], "formula_id": "formula_2", "formula_text": "f txt = CLIP t (s) , f c txt = W t • f t,⟨eos⟩ txt .(3)" }, { "formula_coordinates": [ 3, 210.58, 562.56, 81.37, 13.74 ], "formula_id": "formula_3", "formula_text": "f txt := f 0,⟨sos⟩ txt , f 1," }, { "formula_coordinates": [ 4, 219.1, 481.97, 285.57, 27.18 ], "formula_id": "formula_5", "formula_text": "f c img ∥f c img ∥ ≈ f c txt ∥f c txt ∥ , with f c txt = W t f t,⟨eos⟩ txt .(5)" }, { "formula_coordinates": [ 4, 159.93, 530.03, 344.74, 26.08 ], "formula_id": "formula_6", "formula_text": "f t,⟨eos⟩ txt ≈ ∥f c txt ∥ ∥f c img ∥ W + t f c img := f cnvrt txt , where, W + t = W ⊤ t W t -1 W ⊤ t ,(6)" }, { "formula_coordinates": [ 4, 224.71, 596.81, 279.96, 13.74 ], "formula_id": "formula_7", "formula_text": "ftxt := f 0,⟨sos⟩ txt , f 1,cnvrt txt , ..., f 76,cnvrt txt ,(7)" }, { "formula_coordinates": [ 4, 134.75, 616.67, 144.83, 13.37 ], "formula_id": "formula_8", "formula_text": "f 1,cnvrt txt = • • • = f 76,cnvrt txt = f cnvrt txt ." }, { "formula_coordinates": [ 5, 143.84, 434.32, 360.82, 13.75 ], "formula_id": "formula_9", "formula_text": "f comb txt = f cnvrt txt + α • f t,⟨eos⟩ txt , f edit txt = f 0,⟨sos⟩ txt , f 1,w0 txt , ..., f t,comb txt , ..., f 76,comb txt ,(8)" }, { "formula_coordinates": [ 6, 140.19, 359.14, 364.48, 12.77 ], "formula_id": "formula_10", "formula_text": "E ϵ,z,xref,t ∥ϵ -ϵ θ (z t , c img (x ref ), t)∥ 2 + E ϵ,z,ptxt,t ∥ϵ -ϵ θ (z t , c txt (p txt ), t)∥ 2 ,(9)" } ]
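As a concrete illustration of formula (6) above, the following NumPy sketch computes the Moore-Penrose pseudo-inverse of the CLIP text projection and maps an image embedding from the shared embedding space back to a pseudo end-token text embedding. The random `W_t` and `f_img_c` are stand-ins for the projection matrix and image embedding that would be taken from a real CLIP checkpoint, and the norm ratio in Eq. (6) is folded into a single `scale` factor; this is a sketch under those assumptions, not the paper's code.

```python
import numpy as np

d_t, d_c = 768, 768                      # text hidden size and shared-embedding size (ViT-L/14)
W_t = np.random.randn(d_c, d_t) * 0.02   # stand-in for CLIP's text-side projection matrix

# Moore-Penrose pseudo-inverse, W_t^+ = (W_t^T W_t)^-1 W_t^T  (computed once, then frozen)
W_t_pinv = np.linalg.pinv(W_t)           # (d_t, d_c)

def image_to_prompt(f_img_c, scale=1.0):
    """Map an image embedding in the shared space to a pseudo EOS text embedding (Eq. (6))."""
    return scale * W_t_pinv @ f_img_c

f_img_c = np.random.randn(d_c)           # would be the CLIP image embedding of the reference
f_cnvrt = image_to_prompt(f_img_c)
# the prompt fed to Stable Diffusion keeps the start token and repeats f_cnvrt in the
# remaining 76 slots, following Eq. (7).
print(f_cnvrt.shape)                     # (768,)
```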
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b49", "b29", "b23", "b10", "b44", "b16", "b2", "b13", "b33", "b12", "b27", "b21", "b38", "b47", "b11", "b27", "b37", "b21" ], "table_ref": [], "text": "Multi-Object tracking (MOT) is traditionally tackled by a series of tasks, e.g., object detection [50,30,24,11], appearance Re-ID [45,17,3], motion prediction [14,34], and temporal association [13]. The sparkling advantage of this paradigm is task decomposition, leading to an optimal solution for each task. However, it lacks global optimization for the whole pipeline.\nFigure 1: Visualization of tracking results in DanceTrack0073 [28] and MOT17-09 [22] videos. The first row displays the tracking results from MOTR [39], where all individuals can be correctly initialized at the beginning (#237 and #302). However, heavy occlusion appears in the middle frames (#238 and #312), resulting in inaccurate detection (indicated by yellow boxes). The tracking of yellow targets finally terminates in #239 and #322 frames. The second row shows MOTR's detection results, in which tracking queries are removed during the inference process. Targets in different frames are accurately detected.\nlast Transformer decoder remaining the competition strategy to avoid trajectory redundancy, we allow the previously tracked objects to be reassigned to the detection queries in the intermediate decoders.\nDue to the self-attention between all the queries, detection queries will be complementary to tracking queries with the same identity, resulting in feature augmentation for tracking objects with significant appearance variance. Thus, the tracking terminal problem will be alleviated.\nBesides TALA, another drawback in Transformer-based detection as well as tracking is one-toone bipartite matching used, which cannot produce sufficient positive samples, as denoted by Co-DETR [48] and HDETR [12] that introduces one-to-many assignment to overcome this limitation. Differing from these remedies with one-to-many auxiliary training, we develop a one-to-set matching strategy with a novel shadow concept, where each individual query is augmented with multiple shadow queries by adding limited disturbance to itself, so as to ease the one-to-set optimization. The set of shadow queries endows Co-MOT with discriminative training by optimizing the most challenging query in the set with the maximal cost. Hence, the generalization ability will be enhanced.\nWe evaluate our proposed method on multiple MOT benchmarks, including DanceTrack [28], BDD100K [38] and MOT17 [22], and achieve superior performance. The contributions of this work are threefold: i) we introduce a coopetition label assignment for training tracking and detection queries for e2e-MOT with high efficiency; ii) we develop a one-to-set matching strategy with a novel shadow concept to address the hungry for positive training samples and enhance generalization ability; iii) Our approach achieves superior performance on multiple benchmarks, while it functions as an efficient tool to bridge the gap between end-to-end and non-end-to-end MOT." 
}, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b3", "b33", "b12", "b34", "b3", "b32", "b41", "b40", "b5", "b8", "b0", "b38", "b38", "b42", "b42", "b38", "b20", "b42", "b10", "b6", "b6", "b6", "b14", "b46", "b39", "b14", "b17", "b7", "b11", "b48" ], "table_ref": [], "text": "Tracking by detection: Most tracking algorithms are based on the two-stage pipeline of tracking-bydetection: Firstly, a detection network is used to detect the location of targets, and then an association algorithm is used to link the targets across different frames. However, the performance of this method is greatly dependent on the quality of the detection. SORT [4] is a widely used object tracking algorithm that utilizes a framework based on Kalman filters [34] and the Hungarian algorithm [13]; Deep SORT [35] incorporates reid features extracted by a deep neural network to improve the accuracy and robustness of multi-object tracking based SORT [4]. After, new batch of joint detection and reid methods are proposed, e.g., JDE [33], FairMOT [42]; Recently, ByteTrack [41], OC-SORT [6], Strongsort [9], BoT-SORT [1] are proposed, that have further improved the tracking performance by introducing the strategy of matching with low-confidence detection boxes. While these methods show improved performance, they often require significant parameter tuning and may be sensitive to changes in the data distribution. Additionally, some approaches may require more advanced techniques such as domain adaptation or feature alignment to effectively handle domain shift issues.\nEnd-to-end tracking: With the recent success of Transformer in various computer vision tasks, several end-to-end object tracking algorithms using Transformer encoder and decoder modules are means whether the tracking queries are used in the training or inference phase. All the decoded boxes of both tracking if applicable and detection queries are treated as detection boxes for evaluation on mAP. We separately evaluate the detection performance for six decoders. For analysis, please refer to the motivation section. [39] 56.8 60.1 60.5 60.5 60.6 60.6 (c) MOTR [39] 57.3 62.2 62.9 63.0 63.0 63.0 (d) MOTRv2 [43] 67.9 70.2 70.6 70.7 70.7 70.7 (e) MOTRv2 [43] 71.9 72.1 72.1 72.1 72.1 72.1\nproposed, such as MOTR [39] and TrackFormer [21]. These approaches demonstrate promising results in object tracking by directly learning the associations between object states across time steps. MOTRv2 [43] introduces the use of pre-detected anchor boxes from a YOLOX [11] detector to indirectly achieve state-of-the-art performance in multi-object tracking.\nOne-to-many label assignment: DETR [7], being a pioneer in employing transformers for computer vision, utilizes a one-to-one label assignment strategy to achieve end-to-end object detection. During training, DETR [7] leverages Hungarian matching to compute the global matching cost and thereby assigns each ground-truth box to a unique positive sample. Researchers shifte focus towards enhancing the performance of DETR [7], with most efforts concentrated on developing new label assignment techniques. For example, DN-DETR [15] building on Deformable DETR [47], breaks away from the traditional one-to-one matching strategy by introducing noisy ground-truth boxes during training. DINO [40] builds upon the successes of DN-DETR [15] and DAB-DETR [18] to achieve an even higher detection performance, putting it at the forefront of current research. 
Group-DETR [8] takes a simpler approach by adopting a group-wise one-to-many label assignment that explores multiple positive object queries. This approach resolves the slow convergence issue often associated with Transformers, Its methodology is similar to the hybrid matching scheme used in H-DETR [12]. CO-DETR [49] introduces multiple additional parallel branches during training to achieve one-to-many allocation. This training scheme helps to overcome the limitations of one-to-one matching and allows for more flexible and accurate object detection in complex real-world scenarios.\n3 Method" }, { "figure_ref": [], "heading": "Motivation", "publication_ref": [ "b27", "b21", "b38" ], "table_ref": [ "tab_1" ], "text": "To explore the shortcomings of current end-to-end methods in tracking, we conduct an in-depth study of the effectiveness on DanceTrack [28] validation and MOT17 [22] test dataset by analyzing MOTR [39], which is one of the earliest proposed end-to-end multiple-object tracking methods. In Figure 1, we show MOTR's tracking results in some frames of video, e.g., DanceTrack0073 and MOT17-09. In the left three columns of the first row, the 3rd person (in the yellow box) is tracked normally in #237 image. However, in #238 image, due to an inaccurate detection, the bounding box is not accurately placed around that person (the box is too large to include a person on the left side). In #239 image, the tracking is completely wrong and associated with the 2nd person instead. In the right three columns of the first row, the 2nd person (in the yellow box) is successfully detected and tracked in #302 image. However, in #312 image, this person is occluded by other people.\nWhen the person appears again in #322 image, she is not successfully tracked or even detected.\nTo determine whether the tracking failure is caused by the detection or association of MOTR, we visualized MOTR's detection results in the second row. We remove the tracking queries during inference, and the visualization shows that all persons are accurately detected. This demonstrates that the detection will deteriorate due to the nearby tracked objects, though TALA used in training ensures that the detection with the same identity of tracked objects will be suppressed.\nWe further provide quantitative results of how the queries affect each other in Table 2. All the decoded boxes of both tracking and detection queries are treated as detection boxes so that they can be evaluated by the mAP metric commonly used for object detection. We can see from the table that the vanilla MOTR (a) has a low mAP 42.5%, but it increases by 18.1% (42.5% vs 60.6%) when removing tracking queries during inference (b). Then we retrain MOTR as a sole detection task by removing tracking queries (c) and mAP further increases to 66.1% (+5.5%). That means the DETR-style MOT model has a sparking capability of detection but still struggles with the temporal association of varied appearance, which is the crucial factor of MOT.\nWe also observe an excellent detection performance (70.7%) for MOTRv2, which introduces a pretrained YOLOX detector. Removing tracking queries during inference brings a slight improvement (1.4%) for mAP, which means MOTRv2 has almost addressed the poor detection issue with high-quality detection prior from YOLOX. However, the introduced YOLOX brings extra computational burden, unfriendly to deployment. 
In contrast, we intend to endow the end-to-end MOT model with its own powerful detection capability, rather than introducing any extra pretrained detector." }, { "figure_ref": [], "heading": "Tracking Aware Label Assignment", "publication_ref": [ "b38", "b20" ], "table_ref": [], "text": "Here we revisit the Tracking Aware Label Assignment (TALA) used to train end-to-end Transformers such as MOTR [39] and TrackFormer [21] for MOT. At the moment t -1, N queries are categorized to two types:\nN T tracking queries Q t = {q 1 t , ..., q N T t } and N D detection queries Q d = {q 1 d , ..., q N D d }, where N = N T + N D .\nAll the queries will self-attend each other and then cross-attend the image feature tokens via L decoders, and the output embeddings of the l-th decoder are denoted as E l = {e l 1 , ..., e l N T } and F l = {f l 1 , ..., f l N D }. At the moment t, there are M G ground truth boxes. Among them, M T previously tracked objects, denoted as Ê = {ê 1 , ..., êM T }, are assigned to N T tracking queries, where M T ≤ N T as some objects disappear. Formally, j-th tracking embedding e l j will be assigned to the same identity with the previous timestamp if still alive at this moment, otherwise zero (disappearing). Besides, M D newborn objects, denoted as F = { f1 , ..., fM D }, are assigned to N D detection queries. Specifically, the Hungarian matching algorithm is used to find the optimal pairing between F i and F for each decoder, by a cost function \n(L m = L f (c) + L 1 (b) + L g (b) ∈ R N D * M G )," }, { "figure_ref": [ "fig_2" ], "heading": "Overall Architecture", "publication_ref": [ "b46", "b46" ], "table_ref": [], "text": "The entire CO-MOT framework is illustrated in Figure 3. During the forward process, the features of an image in a video are extracted by the backbone and fed into the deformable [47] encoder to aggregate information. Finally, together with the detection and tracking queries, they are used as the inputs of the L layer decoders (L = 6 in this paper by default) to detect new targets or track the already tracked targets. It is worth noting that queries contain (N T + N D ) * N S position (P ∈ R 4 ) and embedding (E ∈ R 256 ) as we use deformable attention [47]. Here N S is the number of shadow queries for each set, and we will introduce the shadow set concept in the following section. All the queries predict (N T + N D ) * N S target boxes, where N S queries in a set jointly predict the same target. To train CO-MOT, we employ the COLA and TALA on the different decoders, along with the one-to-set label assignment strategy." }, { "figure_ref": [ "fig_2" ], "heading": "Coopetition Label Assignment", "publication_ref": [], "table_ref": [], "text": "Unlike TALA, which only assigns newborn objects to detection queries, we advocate a novel COopetition Label Assignment (COLA). Specifically, we assign M T tracked objects to detection queries as well in the intermediate decoders, i.e., l < L, which is illustrated in Figure 3. As shown in the output of the first decoder, the track queries continue to track the 3rd and 4th person. The detection queries not only detect the 1st and 2nd newborns but also detect the 3rd and 4th people. Note that we remain the competition assignment for the L-th decoder to avoid trajectory redundancy during inference. Thanks to the self-attention used between tracking and detection queries, detection queries with the same identity can enhance the representation of the corresponding tracking queries (e.g. grey 3rd helps blue 3rd)." 
}, { "figure_ref": [], "heading": "Shadow Set", "publication_ref": [ "b7", "b11" ], "table_ref": [], "text": "In densely crowded scenes, objects can be lost or mistakenly tracked to other objects due to minor bounding box fluctuations. We conjecture that one query for one object is sensitive to prediction noises. Inspired by previous works such as Group-DETR [8] and H-DETR [12], we propose the one-to-set label assignment strategy for multi-object tracking, which is significantly different from the one-to-many manner. During the tracking, an object is no longer tracked by a single query but by a set of queries, where each member of the set acts as a shadow of each other. Tracking queries are rewritten as\nQ t = {{q 1,i t } N S i=1 , ..., {q N T ,i t } N S i=1\n} and detection queries are rewritten as\nQ d = {{q 1,i d } N S i=1 , ..., {q N D ,i d } N S i=1 }.\nThe total number of queries is N * N S . When a particular query in the set tracks the object incorrectly, the other shadows in the same set help it continue tracking the object. In the experiments, this strategy prove effective in improving tracking accuracy and reducing tracking failures in dense and complex scenes.\nInitialization. P i,j ∈ R 4 and X i,j ∈ R 256 , which represents position and embedding of the j-th shadow query in the i-th set, are initialized, which significantly affects the convergence and the final performance. In this paper, we explore three initialization approaches: i) I rand : random initialization; ii) I copy : initializing all shadows in the same set with one learnable vector, i.e., P i,j = P i and X i,j = X i , where P i and X i are learnable embeddings with random initialization; iii) I noise : adding Gaussian noises N (0, σ p ) and N (0, σ x ) to P i,j and X i,j , respectively, in the previous approach. In the experiment, we set σ p and σ x to 1e-6. Although the variance between each shadow in the same set is subtle after initialization, it expands to 1e-2 at the end of training. The last approach provides the similarity for helping optimization and diversity to improve tracking performance.\nTraining. We propose a shadow-based label assignment method (S-COLA or S-TALA) to ensure that all objects within a set are matched to the same ground truth object. Take S-COLA as an example, we treat the set as a whole, and select one of them as a representative based on criteria to participate in subsequent matching. Specifically, for tracking queries Q t , the tracked target in the previous frame is selected to match with the whole set; For detection queries Q d , we first calculate the cost function (L sm ∈ R N D * N S * M G ) of all detection queries with respect to all ground truth. We then select the representative query by a strategy λ (e.g., Mean, Min, and Max) for each set, resulting in L m = λ(L sm ) ∈ R N D * M G . L m is then used as an input for Hungarian matching to obtain the matching results between the sets and newborns. Finally, the other shadows within the same set share the representative's matching result.\nInference. We determine whether the i-th shadow set tracks an object by the confidence score of the selected representative. Here we adopt a different strategy φ (e.g., Mean, Min, and Max) for representative sampling. When the score of the representative is higher than a certain threshold τ , we select the box and score predictions of the shadow with the highest score as the tracking outputs and feed the entire set to the next frame for subsequent tracking. 
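A simplified sketch of the label assignment described above, assuming SciPy's Hungarian solver and an L1-only matching cost; the paper's full cost also contains a focal classification term and a GIoU term, and the box/ID arrays here are illustrative rather than the actual CO-MOT code. Intermediate decoders follow the coopetition rule (already-tracked objects are kept as targets for detection queries), while the last decoder keeps the competition rule (newborns only).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cola_targets(gt_boxes, gt_ids, tracked_ids, is_last_decoder):
    """COLA: intermediate decoders let detection queries also match tracked objects."""
    newborn_mask = ~np.isin(gt_ids, tracked_ids)
    if is_last_decoder:
        return gt_boxes[newborn_mask]          # competition: only newborn objects
    return gt_boxes                            # coopetition: tracked + newborn objects

def assign_detection_queries(pred_boxes, target_boxes):
    """One-to-one Hungarian matching between detection-query predictions and targets."""
    if len(target_boxes) == 0:
        return np.empty(0, int), np.empty(0, int)
    cost = np.abs(pred_boxes[:, None, :] - target_boxes[None, :, :]).sum(-1)  # L1 cost only
    rows, cols = linear_sum_assignment(cost)   # rows: query indices, cols: target indices
    return rows, cols

# toy example: 4 detection queries, 3 ground-truth boxes, 2 of which are already tracked
pred = np.random.rand(4, 4)
gt = np.random.rand(3, 4)
gt_ids = np.array([7, 8, 9])
tracked = np.array([7, 8])
for last in (False, True):
    targets = cola_targets(gt, gt_ids, tracked, last)
    q_idx, t_idx = assign_detection_queries(pred, targets)
    print("last decoder" if last else "intermediate decoder", "->", len(t_idx), "targets matched")
```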
Sets that do not capture any object will be discarded. " }, { "figure_ref": [], "heading": "Datasets and Metrics", "publication_ref": [ "b27", "b21", "b37", "b27", "b21", "b37", "b18", "b15" ], "table_ref": [], "text": "Datasets. We validate the effectiveness of our approach on different datasets, including Dance-Track [28], MOT17 [22], and BDD100K [38]. Each dataset has its unique characteristics and challenges.\nThe DanceTrack [28] dataset is used for multi-object tracking of dancers and provides high-quality annotations of dancer motion trajectories. This dataset is known for its significant difficulties such as fast object motion, diverse object poses, and object appearances that are similar to one another.\nThe MOT17 [22] dataset is a commonly used multi-object tracking dataset, and each video contains a large number of objects. The challenges of this dataset include high object density, long-period occlusions, varied object sizes, dynamic camera poses, and so on. Additionally, this dataset provides various scenes, such as indoor, outdoor, and city centers.\nThe BDD100K [38] dataset is a large-scale autonomous driving scene recognition dataset that is used for scene understanding in autonomous driving systems. This dataset provides multiple object categories, such as cars, pedestrians, etc. It can be used to evaluate our model's performance in multi-object tracking across different object categories. The challenges of this dataset include rapidly changing traffic and road conditions, diverse weather conditions, and lighting changes.\nMetrics. To evaluate our method, we use the Higher Order Tracking Accuracy (HOTA) metric [19], which is a higher-order metric for multi-object tracking. Meantime We analyze the contributions of Detection Accuracy (DetA), Association Accuracy (AssA), Multiple-Object Tracking Accuracy (MOTA), Identity Switches (IDS), and Identity F1 Score (IDF1). For BDD100K, to better evaluate the performance of multi-class and multi-object tracking, we use the Tracking Every Thing Accuracy (TETA) [16], Localization Accuracy (LocA), Association Accuracy (AssocA), and Classification Accuracy(ClsA) metrics. " }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b38", "b24" ], "table_ref": [], "text": "Our proposed label assignment and shadow concept can be applied to any e2e-MOT method. For simplicity, we conduct all the experiments on MOTR [39]. It uses ResNet50 as the backbone to extract image features and uses a Deformable encoder and Deformable decoder to aggregate features and predict object boxes and categories. We also use the data augmentation methods employed in MOTR, including randomly clipping and temporally flipping a video segment. To sample a video segment for training, we use a fixed sampling length of 5 and a sampling interval of 10. The dropout ratio in attention is zero. We train all experiments on 8 V100-16G GPUs, with a batch size of 1 per GPU. For DanceTrack and BDD100k, we train the model for 20 epochs with an initial learning rate of 2e-4 and reduce the learning rate by a factor of 10 every eight epochs. We use 60 initial queries for a fair comparison with previous work. For MOT17, we train the model for 200 epochs, with the learning rate reduced by a factor of 10 every 80 epochs. We use 300 initial queries due to the large number of targets to be tracked. Unless otherwise specified, all the experiments on the DanceTrack dataset use Crowdhuman [25] for joint training." 
}, { "figure_ref": [], "heading": "Comparison with state-of-the-art methods", "publication_ref": [ "b40", "b5", "b9", "b5", "b38", "b42", "b34", "b9", "b15", "b20", "b38", "b4" ], "table_ref": [ "tab_0", "tab_0", "tab_0" ], "text": "DanceTrack. Our method presents promising results on the DanceTrack test set, as evidenced by Table 1a. Without bells and whistles, our method achieve an impressive HOTA score of 69.4%.\nIn comparison with tracking-by-detection methods, such as ByteTrack [41], OC-SORT [6], and QDTrack [10], our approach stands out with a significant improvement in a variety of tracking metrics. For example, compared to OC-SORT [6], CO-MOT improves HOTA, DetA, and AssA by 14.3%, 1.8%, and 20.6%, respectively. This remarkable performance demonstrates the effectiveness of the end-to-end method for object tracking in complex scenarios. Our approach can avoid tedious parameter adjustments and ad hoc fusion of two independent detection and tracking modules. It realizes automatic learning of data distribution and global optimization objectives. Compared to other end-to-end methods, such as MOTR [39], CO-MOT outperforms them by a remarkable margin (e.g., 15.2% improvement on HOTA compared to MOTR). Note that CO-MOT has a comparable performance with MOTRv2 [43] which introduces an extra pre-trained YOLOX detector to MOTR. We claim that MOTR with our novel label assignment and shadow queries, namely CO-MOT, is enough to realize a promising performance.\nBDD100K. Table 1b shows the results of different tracking methods on the BDD100K validation set. To better evaluate the multi-category tracking performance, we adopt the latest evaluation metric TETA, which combines multiple factors such as localization, association and classification. Compared with DeepSORT [35], QDTrack [10], and TETer [16], although the LocA was considerably lower, we achieve superior performance on TETA with an improvement of 2% (52.8% vs 50.8%), which is benefited from the strong tracking association performance revealed by the AssocA (56.2% vs 52.9%). Compared with MOTRv2, CO-MOT slightly falls behind on TETA, but its AssocA (56.2%) is much better than MOTRv2 (51.9%).\nMOT17. Table 1c shows the results of the MOT17 test set. Compared to the end-to-end methods, such as TrackFormer [21], MOTR [39] and MeMOT [5], we still have significant improvement on HOTA. Although it is inferior to non-end-to-end methods, we conjecture that the insufficient amount of MOT17 training data cannot be able to fully train a Transformer-based MOT model." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_1", "tab_1" ], "text": "Component Evaluation of CO-MOT. Based on the results shown in Table2a, we examine the impact of different components of the CO-MOT framework on tracking performance, as evaluated on the DanceTrack validation set. Through experimental analysis by combining various components, we achieve significant improvements over the baseline (65.9% vs 63.7%). To begin with, we reduce the number of queries for fair comparison as we will introduce shadow sets that double or triple the number of queries. We observe almost the same performance of the model when using (a) 60 initial queries or (b) 180 initial queries. Then, by introducing the COLA strategy to the baseline (b), we observe an improvement of 1.2% on HOTA and 1.8% on AssA, without any additional computational cost. 
By incorporating the concept of shadow into the baseline (a), HOTA is improved by 0.9% and AssA is improved by 1.8%.\nCOLA. It is also evident from Table 2a that both COLA and Shadow have minimal impact on DetA, which is detection-related. However, they have a significant impact on AssA and HOTA, which are more strongly related to tracking. We observe an improvement of 3.8% (56.5% vs 52.7%) on AssA and 2.5% (66.2% vs 63.7%) on HOTA. On the surface, our method seems to help detection as it introduces more matching objects for detection, but it actually helps tracking.\nTo answer this question, we demonstrate the attention weights between detection and tracking queries in Figure 4. The horizontal and vertical axes denote the attention weights after self-attention between different types of queries on different decoder layers. These weights roughly indicate the contribution of one query to another. In our model, there are a total of 6 decoder layers. T2T represents the contribution of a tracking query to itself. D2T represents the contribution of a detection query predicting the same object to a tracking query. Two bounding boxes with an IOU greater than 0.7 are treated as the same object. MD2T represents the average contribution of all detection queries to a specific tracking query, which serves as a reference metric. Note that the normalized attention weights are with a sum of 1.\nFrom Figure 4, it is evident that detection queries make a significant contribution (more than 15%) to their corresponding tracking queries in decoder layers where L > 2, even greater than the T2T for #4 and #6 decoders and much higher than the MD2T for all the decoders. This indicates that detection queries pass on the rich semantic information they represent to their corresponding tracking queries, which in turn can be utilized by the tracking queries to improve their tracking accuracy.\nShadow Set. 2b that the combination of λ = max and φ = min yields the best results. That means we use the most challenging query in the set to train the model, leading to discriminative representation learning. To determine the initialization method, we also fix N S = 5 with COLA and find that the best results are achieved using I noise . For I rand , there is a considerable variation between different shadows within the same set due to random initialization, making convergence difficult and resulting in inferior results. Finally, we try different values of N S and find that the best results are achieved when N S = 3. When N S is too large, we observe that convergence becomes more difficult, and the results deteriorate." }, { "figure_ref": [], "heading": "Efficiency Comparison", "publication_ref": [], "table_ref": [], "text": "In Figure 5, efficiency comparisons on DanceTrack test dataset are made between CO-MOT and MOTR(v2). The horizontal axis represents FLOPs (G) and the vertical axis represents the HOTA metric. The size of the circles represents the number of parameters (M). It can be observed that our model achieves comparable HOTA with MOTRv2 while maintaining similar FLOPs and number of parameters with MOTR. The runtime speed of CO-MOT is much faster (1.4×) than MOTRv2's. Thus, our approach is effective and efficient, which is friendly for deployment as it does not need an extra detector." 
}, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Limitations", "publication_ref": [ "b38", "b21", "b39" ], "table_ref": [], "text": "Despite the introduction of COLA and Shadow, which improve the tracking effect of MOTR [39], the inherent data-hungry nature of the Transformer model means that there is not a significant improvement in smaller datasets like MOT17 [22]. As shown in Figure 6a, a prominently visible target has not been detected, but this issue has only been observed in the small MOT17 dataset. And due to the scale problem, the detection and tracking performance is poor for small and difficult targets in Figure 6b. In order to further improve the effect, it is necessary to increase the amount of training data or use a more powerful baseline such as DINO [40]." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes a method called CO-MOT to bridge the gap between end-to-end and nonend-to-end multi-object tracking. We investigate the issues in the existing end-to-end MOT using Transformer and find that the label assignment can not fully explore the detection queries as detection and tracking queries are exclusive to each other. Thus, we introduce a coopetition alternative for training the intermediate decoders. Also, we develop a shadow set as units to augment the queries, mitigating the unbalanced training caused by the one-to-one matching strategy. Experimental results show that CO-MOT achieves significant performance gains on multiple datasets in an efficient manner. We believe that our method as a plugin significantly facilitates the research of end-to-end MOT using Transformer." } ]
Existing end-to-end Multi-Object Tracking (e2e-MOT) methods have not surpassed non-end-to-end tracking-by-detection methods. One potential reason is its label assignment strategy during training that consistently binds the tracked objects with tracking queries and then assigns the few newborns to detection queries. With oneto-one bipartite matching, such an assignment will yield an unbalanced training, i.e., scarce positive samples for detection queries, especially for an enclosed scene, as the majority of the newborns come on stage at the beginning of videos. Thus, e2e-MOT will be easier to yield a tracking terminal without renewal or re-initialization, compared to other tracking-by-detection methods. To alleviate this problem, we present Co-MOT, a simple and effective method to facilitate e2e-MOT by a novel coopetition label assignment with a shadow concept. Specifically, we add tracked objects to the matching targets for detection queries when performing the label assignment for training the intermediate decoders. For query initialization, we expand each query by a set of shadow counterparts with limited disturbance to itself. With extensive ablations, Co-MOT achieves superior performance without extra costs, e.g., 69.4% HOTA on DanceTrack and 52.8% TETA on BDD100K. Impressively, Co-MOT only requires 38% FLOPs of MOTRv2 to attain a similar performance, resulting in the 1.4× faster inference speed. Recently, end-to-end Multi-Object Tracking (e2e-MOT) via Transformer such as MOTR [39] and TrackFormer [21] has emerged, which performs detection and tracking simultaneously in unified transformer decoders. Specifically, tracking queries realize identity tracking by recurrent attention over time. Meanwhile, detection queries discover newborns in each new arriving frame, excluding previously tracked objects, due to a Tracking Aware Label Assignment (TALA) during training. However, we observe an inferior performance for e2e-MOT due to poor detection, as it always yields a tracking terminal, shown in Figure 1. MOTRv2 [43] consents to this conclusion, which bootstraps performance by a pre-trained YOLOX [11] detector, but the detector will bring extra overhead to deployment. In this paper, we present a novel viewpoint for addressing the above limitations of e2e-MOT: detection queries are exclusive but also conducive to tracking queries. To this end, we develop a COopetition Label Assignment (COLA) for training tracking and detection queries.
Bridging the Gap Between End-to-end and Non-End-to-end Multi-Object Tracking
[ { "figure_caption": "Figure 2 :2Figure 2: The detection performance (mAP) of MOTR (v2) on DanceTrack validation dataset.means whether the tracking queries are used in the training or inference phase. All the decoded boxes of both tracking if applicable and detection queries are treated as detection boxes for evaluation on mAP. We separately evaluate the detection performance for six decoders. For analysis, please refer to the motivation section.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The CO-MOT framework includes a CNN-based backbone network for extracting image features, a deformable encoder for encoding image features, and a deformable decoder that uses self-attention and cross-attention mechanisms to generate output embeddings with bounding box and class information. The queries in the framework use set queries as units, with each set containing multiple shadows that jointly predict the same target. Detection queries and tracking queries are used for detecting new targets and tracking existing ones, respectively. To train CO-MOT, S-COLA and S-TALA are proposed for training only.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "that takes into account the class scores and box overlapping. Where L f (c) represents the focal loss for classification, L 1 (b) represents the L 1 cost of the bounding box, and L g (b) represents the Generalized Intersection over Union cost.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: The attention weights between different types of queries on different decoders.", "figure_data": "", "figure_id": "fig_4", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Failed cases are often due to the failure to detect the target.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Comparison to state-of-the-art methods on different dataset. Please pay more attention to the metrics with *.", "figure_data": "4 Experiment(a) Comparison to existing methods on the Dance-Track test set. Best results are marked in bold.(c) Comparison to existing methods on the MOT17 test dataset. Best results of end-to-end methods are marked in bold.HOTA * DetA AssA MOTA IDF1 Non-End-to-end CenterTrack [27] 41.8 78.1 22.6 86.8 35.7 FairMOT [42] 39.7 66.7 23.8 82.2 40.8 ByteTrack [41] 47.7 71.0 32.1 89.6 53.9 GTR [46] 48.0 72.5 31.9 84.7 50.3 QDTrack [10] 54.2 80.1 36.8 87.7 50.4 OC-SORT [6] 55.1 80.3 38.3 92.0 54.6 TransTrack [29] 45.5 75.9 27.5 88.4 45.2 End-to-end MOTR [39] 54.2 73.5 40.2 79.7 51.5 MOTRv2 [43] 69.9 83.0 59.0 91.9 71.7 CO-MOT(ours) 69.4 82.1 58.9 91.2 71.9HOTA * DetA AssA MOTA IDF1 Non-End-to-end 44.8 44.9 45.1 53.5 52.3 CenterTrack [27] 52.2 53.8 51.0 67.8 64.7 Tracktor++ [2] TraDeS [36] 52.7 55.2 50.8 69.1 63.9 QuasiDense [23] 53.9 55.6 52.7 68.7 66.3 TransTrack [29] 54.1 61.6 47.9 74.5 63.9 GTR [46] 59.1 57.0 61.6 71.5 75.3 FairMOT [42] 59.3 60.9 58.0 73.7 72.3 CorrTracker [31] 60.7 62.9 58.9 76.5 73.6 Unicorn [37] 61.7 / / 77.2 75.5 GRTU [32] 62.0 62.1 62.1 74.9 75.0 MAATrack [26] 62.0 64.2 60.2 79.4 75.9ByteTrack [41]63.1 64.5 62.0 80.3 77.3(b) Comparison to existing methods on theOC-SORT [6]63.2/63.2 78.0 77.5BDD100K validation set. 
Best results are markedQDTrack [10]63.5 62.6 64.5 77.5 78.7in bold.BoT-SORT[1]64.6//80.6 79.5Deep OC-SORT[20] 64.9//80.6 79.4TETA * LocA AssocA ClsAP3AFormer [44]///81.2 78.1DeepSORT [35] 48.046.446.751.0End-to-endQDTrack [10]47.845.848.549.2TrackFormer [21]///65.0 63.9TETer [16]50.847.252.952.4MOTR [39]57.8 60.3 55.7 73.4 68.6MOTRv2 [43]54.949.551.963.1MeMOT [5]56.9/55.2 72.5 69.0CO-MOT(ours)52.838.756.263.6CO-MOT(ours)60.1 59.5 60.6 72.6 72.7", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation studies of our proposed CO-MOT on the DanceTrack validation set. Please pay more attention to the metrics with *.(a) Ablation study on individual CO-MOT components. As components are added, the tracking performance improves gradually.", "figure_data": "(c) Effect of initialization meth-ods for shadow queries and num-#Q COLA Shadow HOTA * DetA AssA MOTA IDF1ber of shadows on the Dance-Track validation set. Im: ini-(a) 18063.8 77.2 52.9 87.6 66.1tialization methods for shadow(b) 6063.7 77.5 52.7 87.3 65.2queries, NS: number of shadows.(c) 60 (d) 60*364.9 77.8 54.5 87.4 66.9 64.7 76.8 54.7 86.1 67.4HOTA * DetA AssA(e) 60*365.9 77.6 56.2 87.1 68.8Icopy 64.9 77.4 54.6(b) Effect of different λ and φ combinations.Im Inoise 65.3 77.8 55.0 I rand 63.8 77.5 52.8λmaxmeanmin164.9 77.8 54.5φ HOTA * 57.6 56.4 55.1 56.7 55.2 52.0 57.5 55.9 51.5 min mean max min mean max min mean max2 NS 3 465.5 77.7 54.5 65.9 77.6 56.2 65.3 77.4 54.4DetA 70.7 69.3 65.4 70.6 66.5 59.0 70.8 66.4 59.3565.3 77.8 55.0AssA 47.3 46.1 46.7 45.9 46.1 46.1 46.8 47.2 45.0664.4 76.7 54.2", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Table 2c and Table 2b list ablation experiments related to three hyperparameters of shadow, which are the number of shadows, initialization method of shadows, and representative", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" } ]
Feng Yan; Weixin Luo; Yujie Zhong; Yiyang Gan; Ma Lin; Meituan
[ { "authors": "N Aharon; R Orfaig; B.-Z Bobrovsky", "journal": "", "ref_id": "b0", "title": "Bot-sort: Robust associations multi-pedestrian tracking", "year": "2022" }, { "authors": "P Bergmann; T Meinhardt; L Leal-Taixe", "journal": "", "ref_id": "b1", "title": "Tracking without bells and whistles", "year": "2019" }, { "authors": "L Bertinetto; J Valmadre; J F Henriques; A Vedaldi; P H Torr", "journal": "Springer", "ref_id": "b2", "title": "Fully-convolutional siamese networks for object tracking", "year": "2016" }, { "authors": "A Bewley; Z Ge; L Ott; F Ramos; B Upcroft", "journal": "IEEE", "ref_id": "b3", "title": "Simple online and realtime tracking", "year": "2016" }, { "authors": "J Cai; M Xu; W Li; Y Xiong; W Xia; Z Tu; S Soatto", "journal": "", "ref_id": "b4", "title": "Memot: multi-object tracking with memory", "year": "2022" }, { "authors": "J Cao; X Weng; R Khirodkar; J Pang; K Kitani", "journal": "", "ref_id": "b5", "title": "Observation-centric sort: Rethinking sort for robust multi-object tracking", "year": "2022" }, { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "Springer", "ref_id": "b6", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Q Chen; X Chen; G Zeng; J Wang", "journal": "", "ref_id": "b7", "title": "Group detr: Fast training convergence with decoupled one-to-many label assignment", "year": "2022" }, { "authors": "Y Du; Z Zhao; Y Song; Y Zhao; F Su; T Gong; H Meng", "journal": "IEEE Transactions on Multimedia", "ref_id": "b8", "title": "Strongsort: Make deepsort great again", "year": "2023" }, { "authors": "T Fischer; J Pang; T E Huang; L Qiu; H Chen; T Darrell; F Yu", "journal": "", "ref_id": "b9", "title": "Qdtrack: Quasidense similarity learning for appearance-only multiple object tracking", "year": "2022" }, { "authors": "Z Ge; S Liu; F Wang; Z Li; J Sun", "journal": "", "ref_id": "b10", "title": "Yolox: Exceeding yolo series in", "year": "2021" }, { "authors": "D Jia; Y Yuan; H He; X Wu; H Yu; W Lin; L Sun; C Zhang; H Hu", "journal": "", "ref_id": "b11", "title": "Detrs with hybrid matching", "year": "2022" }, { "authors": "H W Kuhn", "journal": "Naval research logistics quarterly", "ref_id": "b12", "title": "The hungarian method for the assignment problem", "year": "1955" }, { "authors": "S Lefèvre; D Vasquez; C Laugier", "journal": "ROBOMECH journal", "ref_id": "b13", "title": "A survey on motion prediction and risk assessment for intelligent vehicles", "year": "2014" }, { "authors": "F Li; H Zhang; S Liu; J Guo; L M Ni; L Zhang", "journal": "", "ref_id": "b14", "title": "Dn-detr: Accelerate detr training by introducing query denoising", "year": "2022" }, { "authors": "S Li; M Danelljan; H Ding; T E Huang; F Yu", "journal": "Springer", "ref_id": "b15", "title": "Tracking every thing in the wild", "year": "2022" }, { "authors": "W Li; X Zhu; S Gong", "journal": "", "ref_id": "b16", "title": "Harmonious attention network for person re-identification", "year": "2018" }, { "authors": "S Liu; F Li; H Zhang; X Yang; X Qi; H Su; J Zhu; L Zhang", "journal": "", "ref_id": "b17", "title": "Dab-detr: Dynamic anchor boxes are better queries for detr", "year": "2022" }, { "authors": "J Luiten; A Osep; P Dendorfer; P Torr; A Geiger; L Leal-Taixé; B Leibe", "journal": "International journal of computer vision", "ref_id": "b18", "title": "Hota: A higher order metric for evaluating multi-object tracking", "year": "2021" }, { "authors": "G Maggiolino; A Ahmad; J Cao; K Kitani", "journal": 
"", "ref_id": "b19", "title": "Deep oc-sort: Multi-pedestrian tracking by adaptive re-identification", "year": "2023" }, { "authors": "T Meinhardt; A Kirillov; L Leal-Taixe; C Feichtenhofer", "journal": "", "ref_id": "b20", "title": "Trackformer: Multi-object tracking with transformers", "year": "2022" }, { "authors": "A Milan; L Leal-Taixé; I Reid; S Roth; K Schindler", "journal": "", "ref_id": "b21", "title": "Mot16: A benchmark for multiobject tracking", "year": "2016" }, { "authors": "J Pang; L Qiu; X Li; H Chen; Q Li; T Darrell; F Yu", "journal": "", "ref_id": "b22", "title": "Quasi-dense similarity learning for multiple object tracking", "year": "2021" }, { "authors": "J Redmon; S Divvala; R Girshick; A Farhadi", "journal": "", "ref_id": "b23", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "S Shao; Z Zhao; B Li; T Xiao; G Yu; X Zhang; J Sun", "journal": "", "ref_id": "b24", "title": "Crowdhuman: A benchmark for detecting human in a crowd", "year": "2018" }, { "authors": "D Stadler; J Beyerer", "journal": "", "ref_id": "b25", "title": "Modelling ambiguous assignments for multi-person tracking in crowds", "year": "2022" }, { "authors": "R Stone", "journal": "", "ref_id": "b26", "title": "Centertrack: An ip overlay network for tracking dos floods", "year": "2000" }, { "authors": "P Sun; J Cao; Y Jiang; Z Yuan; S Bai; K Kitani; P Luo", "journal": "", "ref_id": "b27", "title": "Dancetrack: Multi-object tracking in uniform appearance and diverse motion", "year": "2022" }, { "authors": "P Sun; J Cao; Y Jiang; R Zhang; E Xie; Z Yuan; C Wang; P Luo", "journal": "", "ref_id": "b28", "title": "Transtrack: Multiple object tracking with transformer", "year": "2020" }, { "authors": "M Tan; R Pang; Q V Le", "journal": "", "ref_id": "b29", "title": "Efficientdet: Scalable and efficient object detection", "year": "2020" }, { "authors": "Q Wang; Y Zheng; P Pan; Y Xu", "journal": "", "ref_id": "b30", "title": "Multiple object tracking with correlation learning", "year": "2021" }, { "authors": "S Wang; H Sheng; Y Zhang; Y Wu; Z Xiong", "journal": "", "ref_id": "b31", "title": "A general recurrent tracking framework without real data", "year": "2021" }, { "authors": "Z Wang; L Zheng; Y Liu; Y Li; S Wang", "journal": "Springer", "ref_id": "b32", "title": "Towards real-time multi-object tracking", "year": "2020" }, { "authors": "G Welch; G Bishop", "journal": "", "ref_id": "b33", "title": "An introduction to the kalman filter", "year": "1995" }, { "authors": "N Wojke; A Bewley; D Paulus", "journal": "IEEE", "ref_id": "b34", "title": "Simple online and realtime tracking with a deep association metric", "year": "2017" }, { "authors": "J Wu; J Cao; L Song; Y Wang; M Yang; J Yuan", "journal": "", "ref_id": "b35", "title": "Track to detect and segment: An online multi-object tracker", "year": "2021" }, { "authors": "B Yan; Y Jiang; P Sun; D Wang; Z Yuan; P Luo; H Lu", "journal": "Springer", "ref_id": "b36", "title": "Towards grand unification of object tracking", "year": "2022" }, { "authors": "F Yu; H Chen; X Wang; W Xian; Y Chen; F Liu; V Madhavan; T Darrell", "journal": "", "ref_id": "b37", "title": "Bdd100k: A diverse driving dataset for heterogeneous multitask learning", "year": "2020" }, { "authors": "F Zeng; B Dong; Y Zhang; T Wang; X Zhang; Y Wei", "journal": "Springer", "ref_id": "b38", "title": "Motr: End-to-end multiple-object tracking with transformer", "year": "2022" }, { "authors": "H Zhang; F Li; S Liu; L Zhang; H Su; J Zhu; L M Ni; H.-Y Shum", 
"journal": "", "ref_id": "b39", "title": "Dino: Detr with improved denoising anchor boxes for end-to-end object detection", "year": "2022" }, { "authors": "Y Zhang; P Sun; Y Jiang; D Yu; F Weng; Z Yuan; P Luo; W Liu; X Wang", "journal": "Springer", "ref_id": "b40", "title": "Bytetrack: Multi-object tracking by associating every detection box", "year": "2022" }, { "authors": "Y Zhang; C Wang; X Wang; W Zeng; W Liu", "journal": "International Journal of Computer Vision", "ref_id": "b41", "title": "Fairmot: On the fairness of detection and re-identification in multiple object tracking", "year": "2021" }, { "authors": "Y Zhang; T Wang; X Zhang", "journal": "", "ref_id": "b42", "title": "Motrv2: Bootstrapping end-to-end multi-object tracking by pretrained object detectors", "year": "2022" }, { "authors": "Z Zhao; Z Wu; Y Zhuang; B Li; J Jia", "journal": "Springer", "ref_id": "b43", "title": "Tracking objects as pixel-wise distributions", "year": "2022" }, { "authors": "L Zheng; Y Yang; A G Hauptmann", "journal": "", "ref_id": "b44", "title": "Person re-identification: Past, present and future", "year": "2016" }, { "authors": "X Zhou; T Yin; V Koltun; P Krähenbühl", "journal": "", "ref_id": "b45", "title": "Global tracking transformers", "year": "2022" }, { "authors": "X Zhu; W Su; L Lu; B Li; X Wang; J Dai", "journal": "", "ref_id": "b46", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" }, { "authors": "Z Zong; G Song; Y Liu", "journal": "", "ref_id": "b47", "title": "Detrs with collaborative hybrid assignments training", "year": "2022" }, { "authors": "Z Zong; G Song; Y Liu", "journal": "", "ref_id": "b48", "title": "Detrs with collaborative hybrid assignments training", "year": "2022" }, { "authors": "Z Zou; K Chen; Z Shi; Y Guo; J Ye", "journal": "", "ref_id": "b49", "title": "Object detection in 20 years: A survey", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 107.64, 494.99, 397.6, 22.95 ], "formula_id": "formula_0", "formula_text": "N T tracking queries Q t = {q 1 t , ..., q N T t } and N D detection queries Q d = {q 1 d , ..., q N D d }, where N = N T + N D ." }, { "formula_coordinates": [ 4, 324.58, 601.77, 180.67, 11.23 ], "formula_id": "formula_1", "formula_text": "(L m = L f (c) + L 1 (b) + L g (b) ∈ R N D * M G )," }, { "formula_coordinates": [ 5, 207.7, 374.06, 132.67, 13.68 ], "formula_id": "formula_2", "formula_text": "Q t = {{q 1,i t } N S i=1 , ..., {q N T ,i t } N S i=1" }, { "formula_coordinates": [ 5, 108, 387.39, 139.57, 13.91 ], "formula_id": "formula_3", "formula_text": "Q d = {{q 1,i d } N S i=1 , ..., {q N D ,i d } N S i=1 }." } ]
10.3390/proceedings2019031051
2023-10-18
[ { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b13", "b0", "b19", "b8", "b27", "b29", "b20", "b5", "b15", "b22", "b31", "b37", "b19", "b36", "b14", "b11", "b30", "b9", "b8", "b11", "b8", "b19", "b11", "b8", "b11", "b8", "b8", "b1", "b23", "b18", "b19", "b11" ], "table_ref": [], "text": "The development of intelligent dialogue systems that are able to engage in conversations with humans, has been one of the longest running goals in artificial intelligence (Kepuska and Bohouta, 2018;Berdasco et al., 2019;Zhou et al : Interlocutor : Utterance : Reply + Replied-by : Speak + Spoken-by : Address + Addressed-by (Ouchi and Tsuboi, 2016) where a few addressee labels, i.e., \"@\", are missing. (b) Illustration of the graphical information flow and conversation fragments of the instance above established in HeterMPC (Gu et al., 2022). Here, the bidirectional edges are merged for simplicity. 2020). Thanks to breakthroughs in sequence modeling (Sutskever et al., 2014;Vaswani et al., 2017) and pre-trained language models (PLMs) (Radford et al., 2019;Devlin et al., 2019;Lewis et al., 2020), researchers have proposed various effective models for conversations between two participants (Serban et al., 2016;Wen et al., 2017;Zhang et al., 2020). Recently, researchers have paid more attention to a more practical and challenging scenario involving more than two participants, which is well known as multi-party conversations (MPCs) (Ouchi and Tsuboi, 2016;Zhang et al., 2018;Le et al., 2019;Hu et al., 2019;Wang et al., 2020;Gu et al., 2021Gu et al., , 2022)). Unlike two-party conversations, utterances in an MPC can be spoken by anyone and address anyone else in this conversation, constituting a graphical information flow.\nEncoding MPC contexts with either homogeneous (Hu et al., 2019) or heterogeneous (Gu et al., 2022) graph neural networks (GNNs) has been proven effective at modeling graphical information flows. These methods rely heavily on the necessary addressee labels and can only be applied to an ideal setting where each utterance must be tagged with an \"@\" or other equivalent addressee label, to establish a consecutively connected graph. However, interlocutors in MPCs do not always strictly obey the talking rule of specifying their addressees in each utterance, as shown by a randomly sampled MPC instance in Figure 1(a). Statistics show that addressees of 55% of the utterances in the Ubuntu IRC dataset (Ouchi and Tsuboi, 2016) are not specified. In this common case, the expected conversation graph in previous work became fragmented (Hu et al., 2019;Gu et al., 2022) as shown in Figure 1(b). Therefore, nodes without direct connections cannot exchange information between each other through one-hop message passing. Despite disconnected nodes can instead be accessed indirectly via other detours through multi-hop passing, inevitable information loss and passing latency will affect generation performance significantly. But this common issue has not been studied in previous work.\nIn light of the above issues, we propose MADNet that maximizes addressee deduction expectation in heterogeneous graph neural networks to mitigate performance degradation and to enhance model robustness, for MPC generation conditioned on incomplete addressee labels. Given an MPC with a few addressee labels missing, existing methods fail to build a consecutively connected conversation graph, but only a few separate conversation fragments instead (Hu et al., 2019;Gu et al., 2022). 
To ensure message passing between these conversation fragments, four additional types of latent edges are designed to complete a fully-connected graph. In this way, nodes without direct connections can also interact with each other directly, and the latent edges are distinguished from existing edges by parameterization. Furthermore, edge-type-dependent message passing has been verified to be effective for MPC modeling (Gu et al., 2022). In order to optimize edge characterization and message passing for utterances without addressee labels, a hard Expectation-Maximization-based approach (Brown et al., 1993;Shen et al., 2019;Min et al., 2019) is designed for addressee deduction. On the one hand, the expectation steps iteratively generate silver addressee labels by considering the addressee of an utterance as a discrete latent variable. On the other hand, the maximization steps select the addressee with the highest probability from the addressee distribution and optimize the generative dialogue model. As the number of EM iterations increases, the accuracy of the latent addressee distribution as well as the quality of generated responses can be improved simultaneously. Compared with previous methods, MADNet can be applied to more common and challenging MPC scenarios, indicating its generalization and robustness.\nTo measure the effectiveness of the proposed method, we evaluate the performance on two benchmarks based on the Ubuntu IRC channel. One was released by Ouchi and Tsuboi (2016), where a few addressee labels were missing. The other was released by Hu et al. (2019), where addressee labels were provided for each utterance. Experimental results show that MADNet outperforms previous methods by significant margins, achieving a new state-of-the-art performance for MPC generation, especially in the more common and challenging setting where a few addressee labels are missing.\nIn summary, our contributions in this paper are three-fold: 1) To the best of our knowledge, this paper makes the first attempt to explore the issue of missing addressee labels and to target more common MPC scenarios. 2) A fully-connected heterogeneous graph architecture along with EM training is designed to help deduce the addressee of an utterance for improving MPC generation. 3) Experimental results show that our proposed model achieves a new state-of-the-art performance of MPC generation conditioned on incomplete addressee labels on two Ubuntu IRC benchmarks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b22", "b31", "b34", "b37", "b33", "b39", "b28", "b7", "b19", "b36", "b30", "b9", "b11", "b8", "b26", "b16", "b4", "b3", "b6", "b1", "b25", "b23", "b18", "b32", "b11", "b8" ], "table_ref": [], "text": "Multi-Party Conversation Existing methods for building dialogue systems can be generally categorized into generation-based (Serban et al., 2016;Wen et al., 2017;Young et al., 2018;Zhang et al., 2020) or retrieval-based approaches (Wu et al., 2017;Zhou et al., 2018;Tao et al., 2019;Gu et al., 2020). In this paper, we study MPC generation, where in addition to utterances, interlocutors are also important components who play the roles of speakers or addressees. Previous methods have explored retrieval-based approaches for MPCs. For example, Ouchi and Tsuboi (2016) proposed the dynamic model, which updated speaker embeddings with conversation streams. Zhang et al. (2018) proposed the speaker interaction RNN, which updated speaker embeddings role-sensitively. Wang et al. (2020) proposed to track the dynamic topic in a conversation.
Gu et al. (2021) proposed jointly learning "who says what to whom" in a unified framework by designing self-supervised tasks during pre-training. On the other hand, Hu et al. (2019) explored generation-based approaches by proposing a graph-structured network (GSN). At its core was an utterance-level graph-structured encoder that encoded the conversation context using homogeneous GNNs. Gu et al. (2022) proposed HeterMPC to model the complicated interactions between utterances and interlocutors with a heterogeneous graph (Sun and Han, 2012), where two types of nodes and six types of edges were designed to model heterogeneity. Contemporaneous to our work, Li and Zhao (2023) focus on predicting the missing addressee labels in pre-training instead, but still only target the ideal setting during fine-tuning where each utterance must be tagged with the addressee label.\nExpectation-Maximization This algorithm is used to find local maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly (Dempster et al., 1977;Dayan and Hinton, 1997;Do and Batzoglou, 2008). Rather than sampling a latent variable from its conditional distribution, a hard EM approach that takes the value with the highest posterior probability as the prediction has been designed (Brown et al., 1993). This approach has been proven effective at improving the performance of various NLP tasks such as dependency parsing (Spitkovsky et al., 2010), machine translation (Shen et al., 2019), question answering (Min et al., 2019) and diverse dialogue generation (Wen et al., 2023). In this paper, we study whether considering addressees as a latent variable and deducing them with hard EM is useful for modeling conversation structures and improving MPC generation performance.\nCompared with GSN (Hu et al., 2019) and HeterMPC (Gu et al., 2022), which are the most relevant to this work, a main difference should be highlighted. These methods target only an ideal setting where addressee labels of all utterances are necessary, while the proposed method is suitable for more common conversation sessions where a few addressee labels are missing. To the best of our knowledge, this paper makes the first attempt to extend to more common MPC scenarios and to explore the issue of missing addressee labels by maximizing addressee deduction expectation for MPC generation." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b8", "b8", "b8", "b15", "b7", "b21", "b35", "b12", "b10", "b29", "b15" ], "table_ref": [], "text": "Problem Formulation The task of response generation in MPCs is to generate an appropriate response r given the conversation history, the speaker of a response, and which utterance the response is going to reply to. This can be formulated as:\n$$r = \operatorname{argmax}_{r} \log P(r \mid G, c; \theta) = \operatorname{argmax}_{r} \sum_{t=1}^{|r|} \log P(r_t \mid G, c, r_{<t}; \theta). \quad (1)$$\nHere, $G$ is a heterogeneous graph, $c$ is the context of the dialogue history, and $\theta$ denotes the model parameters. The speaker and addressee of the response are known and its contents are masked. The response tokens are generated in an auto-regressive way. $r_t$ and $r_{<t}$ stand for the $t$-th token and the first $(t-1)$ tokens of response $r$ respectively. $|r|$ is the length of $r$.\nNext, we briefly present the key process of the HeterMPC baseline (Gu et al., 2022) to avoid lengthy method descriptions, which shares the GNN backbone with our method. Readers can also refer to Gu et al.
(2022) for more details.\nGraph Construction Given an MPC instance composed of M utterances and I interlocutors, a heterogeneous graph G(V, E) is constructed. Specifically, V is a set of M + I nodes, where each node denotes either an utterance or an interlocutor. $E = \{e_{p,q}\}_{p,q=1}^{M+I}$ is a set of directed edges, where each edge $e_{p,q}$ describes the connection from node $p$ to node $q$. Six types of meta relations in HeterMPC (Gu et al., 2022) and four additional types of latent edges proposed in this paper are introduced to describe directed edges between nodes, which will be elaborated in Section 4.1. It is notable that $e_{p,q}$ is set to NULL if there is no connection between two nodes, so that interactions between them can only be conducted indirectly via detours through multi-hop passing.\nNode Initialization Each node is represented as a vector, and two strategies are designed to initialize the node representations for utterances and interlocutors respectively. Utterances are first encoded individually by stacked Transformer layers that can be initialized by PLMs, e.g., the encoder of BART (Lewis et al., 2020), to derive the contextualized and utterance-level representations. On the other hand, interlocutors in a conversation are indexed according to their speaking order, and the embedding vector for each interlocutor is derived by looking up an order-based interlocutor embedding table (Gu et al., 2020) that is updated during learning.\nNode Updating The initialized node representations are updated by feeding them into the built graph for absorbing context information. Heterogeneous attention weights between connected nodes and message passing over the graph are calculated in a node-edge-type-dependent manner. Parameters are introduced to maximize feature distribution differences for modeling heterogeneity (Schlichtkrull et al., 2018;Zhang et al., 2019;Hu et al., 2020). After collecting the information from all source nodes to a target node, a node-type-dependent feed-forward network followed by a residual connection (He et al., 2016) is employed for aggregation. To let each utterance token also have access to graph nodes, an additional Transformer layer is placed for utterance nodes specifically. After completing an iteration, the outputs of utterance and interlocutor nodes are employed as the inputs of the next iteration.\nDecoder It follows the standard implementation of the Transformer decoder (Vaswani et al., 2017) for generation, and can be initialized by PLMs, e.g., the decoder of BART (Lewis et al., 2020). In each decoder layer, a masked self-attention operation is performed where each token cannot attend to future tokens to avoid information leakage. Then, a cross-attention operation over the node representations output by the graph encoder is performed to incorporate graph information for decoding." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we first describe how to construct a fully-connected conversation graph to ensure message passing between conversation fragments. Then, the expectation and maximization steps for addressee deduction are defined. Finally, an addressee initialization method is designed for better convergence of the EM algorithm.
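To make the graph construction just described concrete, the sketch below enumerates the typed, directed edges of a fully-connected MPC graph, using the six HeterMPC meta relations plus the four latent edge types introduced in this paper (their precise definitions follow in Section 4.1). The node indexing, the edge-direction conventions, and the helper names are illustrative assumptions, not the released implementation.

```python
def build_fully_connected_mpc_graph(speakers, reply_to, addressees, num_interlocutors):
    """Enumerate typed directed edges for an MPC with M utterances.

    speakers[i]:   interlocutor index who utters utterance i.
    reply_to[i]:   index of the utterance that utterance i replies to, or None if missing.
    addressees[i]: interlocutor index addressed by utterance i, or None if missing.
    Utterance i is node i; interlocutor k is node M + k.
    Returns a dict mapping (source_node, target_node) -> edge type.
    """
    M = len(speakers)
    utt = lambda i: i
    itl = lambda k: M + k
    edges = {}
    for i in range(M):
        # speaker edges (always observed)
        edges[(itl(speakers[i]), utt(i))] = "speak"
        edges[(utt(i), itl(speakers[i]))] = "spoken-by"
        # utterance-utterance edges: observed reply vs. latent connection to every earlier utterance
        for j in range(i):
            observed = (reply_to[i] == j)
            edges[(utt(i), utt(j))] = "reply" if observed else "latent-reply"
            edges[(utt(j), utt(i))] = "replied-by" if observed else "latent-replied-by"
        # utterance-interlocutor edges: observed addressee vs. latent connection
        for k in range(num_interlocutors):
            if k == speakers[i]:
                continue  # the speaker is already covered by the speak/spoken-by edges above
            observed = (addressees[i] == k)
            edges[(utt(i), itl(k))] = "address" if observed else "latent-address"
            edges[(itl(k), utt(i))] = "addressed-by" if observed else "latent-addressed-by"
    return edges
```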
}, { "figure_ref": [ "fig_2" ], "heading": "Fully-Connected Graph Construction", "publication_ref": [ "b8" ], "table_ref": [], "text": "Six types of meta relations {reply, replied-by, speak, spoken-by, address, addressed-by} are introduced in HeterMPC (Gu et al., 2022) to describe directed edges between nodes. However, given an MPC with a few addressee labels missing, existing methods usually return several separate conversation fragments.\nTo build a consecutively connected conversation graph and ensure message passing between these conversation fragments, additional latent edges are required for completing a fully-connected conversation graph. In this paper, four additional types of latent edges are designed to establish the dependency of utterance nodes on all other nodes in an MPC graph for conversation contextualization as shown in Figure 2. There are two types of them employed to characterize latent relationships between two disconnected utterance nodes. In detail, latent-reply characterizes directional edges from latter utterances to previous ones which are ordered by their appearance in an MPC, and vice versa for latent-replied-by. On the other hand, another two types of latent edges are employed to characterize the relationships between an utterance node and an interlocutor node. In detail, latent-address characterizes directional edges from utterance nodes to interlocutor nodes, and vice versa for latent-addressed-by. By this means, both utterances that do not reply to directly and interlocutors that do not address to directly are also useful for capturing the semantics contained in graph nodes and for modeling complicated interactions between nodes." }, { "figure_ref": [ "fig_3" ], "heading": "EM for Addressee Deduction", "publication_ref": [], "table_ref": [], "text": "Figure 3 illustrates the EM training process, where the expectation and maximization steps are performed alternately. The addressee of an utterance without a golden addressee label is modeled as a discrete latent variable." }, { "figure_ref": [], "heading": "Expectation", "publication_ref": [], "table_ref": [], "text": "Step Given the observed MPC instances with model parameters frozen, the conditional distribution of the latent addressee variable is calculated during the E steps. Here, a specific addressee corresponds to a determined MPC graph.\nTo derive the probability distribution of the latent addressee variable, a set of latent graphs are fed into the generative model respectively to calculate the probabilities of generating a response under these latent graphs as P (r|G U i →U j , c), where G U i →U j is a graph assuming U i replies to U j . The derived probability distribution of generating a response under these latent graphs serves as the posterior of selecting the latent variable. The latent addressee distribution of the latent variable is estimated by applying Bayes' rule as:\nP (G U i →U j |c, r; θ) = P (r|G U i →U j , c; θ)P (G U i →U j |c; θ) i-1 k=1 P (r|G U i →U k , c; θ)P (G U i →U k |c; θ)\n.\n(2) A uniform prior for every context c as P (G U i →U j |c; θ) = 1/(i -1) is assumed, which simplifies Eq. 
(2) as:\n$$P(G_{U_i \to U_j}|c, r; \theta) = \frac{P(r|G_{U_i \to U_j}, c; \theta)}{\sum_{k=1}^{i-1} P(r|G_{U_i \to U_k}, c; \theta)}. \quad (3)$$" }, { "figure_ref": [], "heading": "Maximization", "publication_ref": [ "b18" ], "table_ref": [], "text": "Step After deriving the approximate probability distribution of the latent addressee variable, we maximize the expected log-likelihood with respect to $\theta$:\n$$\mathbb{E}_{G \sim P(G_{U_i \to U_j}|c, r; \theta)}[\log P(r, G|c; \theta)] = \sum_{j=1}^{i-1} P(G_{U_i \to U_j}|c, r; \theta) \log P(r, G_{U_i \to U_j}|c; \theta). \quad (4)$$\nHard EM A hard EM method (Min et al., 2019) that selects the addressee with the highest probability as the silver label is adopted as:\n$$\bar{U}_j = \operatorname{argmax}_{U_j} P(G_{U_i \to U_j}|c, r; \theta), \quad j < i, \quad (5)$$\nand the maximization step is then approximated as maximizing $\log P(r, G_{U_i \to \bar{U}_j}|c; \theta)$. Once the silver addressee label is determined in this round, its corresponding MPC graph is employed for regular training by minimizing the negative log-likelihood loss of responses." }, { "figure_ref": [ "fig_3" ], "heading": "Addressee Initialization", "publication_ref": [], "table_ref": [], "text": "The initialization of addressee labels is crucial to EM for addressee deduction, as it helps converge to optimal model parameters. Thus, an addressee initialization method is designed for utterances without addressee labels before EM training, to select the most probable one as initialization.\nThe encoder and decoder of the model are initialized with PLMs, e.g., BART, which is pre-trained on texts in the general domain. First, domain adaptation is conducted based on the fully-connected graph constructed in Section 4.1, with the learning objective of minimizing the negative log-likelihood loss of responses on the task training set. By this means, the representation space can be adapted to the task domain and can capture the conversation semantics. Then, for a specific utterance without an addressee label, each previous utterance is assumed to be the replied-to utterance respectively, by setting the corresponding utterance-utterance edges to reply and replied-by, as well as setting the utterance-interlocutor edges to address and addressed-by. This process is illustrated in the left part of Figure 3. The probability of generating a response under an assumed graph is calculated following Eq. (2). Finally, the one with the highest probability following Eq. (5) is chosen as the initialization for the first round of the M step." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b19", "b14", "b11", "b8", "b36", "b30", "b9" ], "table_ref": [], "text": "We evaluated our proposed methods on two Ubuntu IRC benchmarks. One was released by Ouchi and Tsuboi (2016), in which the addressee labels for part of the history utterances were missing. Here, we adopted the version shared by Le et al. (2019). The conversation sessions were separated into three categories according to the session length. In this paper, the subset of session length 5 was employed due to the limitation of computing resources. The other dataset, where addressee labels were provided for each utterance, was adopted following previous work (Hu et al., 2019;Gu et al., 2022). Both datasets are widely used in the field of multi-party conversations (Zhang et al., 2018;Wang et al., 2020;Gu et al., 2021). Appendix A.1 presents the statistics of the two benchmarks.
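As a concrete illustration of the hard-EM procedure in Eqs. (3)-(5) above, the sketch below scores every candidate replied-to utterance by the likelihood of the observed response, picks the argmax as the silver addressee (E step), and then fine-tunes on the selected graph with the standard negative log-likelihood loss (M step). The model.log_likelihood interface and the loop details are assumptions for illustration, not the released code.

```python
import torch

@torch.no_grad()
def e_step(model, context, response, candidate_graphs):
    """Hard E step: score each candidate graph G_{U_i -> U_j} by log P(r | G, c)
    and return the index with the highest score (Eq. (5)), together with the
    normalized posterior over candidates (Eq. (3), uniform prior)."""
    scores = torch.tensor([
        float(model.log_likelihood(response, graph=g, context=context))  # sum of token log-probs
        for g in candidate_graphs
    ])
    posterior = torch.softmax(scores, dim=0)  # proportional to P(r | G, c) under a uniform prior
    return int(scores.argmax()), posterior

def m_step(model, optimizer, instances_with_silver_graphs):
    """M step: negative log-likelihood training on the graphs selected in the E step."""
    model.train()
    for context, response, silver_graph in instances_with_silver_graphs:
        loss = -model.log_likelihood(response, graph=silver_graph, context=context)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Alternating these two functions for a few rounds corresponds to the two EM iterations reported in the implementation details.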
}, { "figure_ref": [], "heading": "Baseline Models", "publication_ref": [ "b8", "b20", "b15", "b11", "b8" ], "table_ref": [], "text": "We compared our proposed model with as many MPC generative models as possible. Considering that there are only a few research papers in this field, several recent advanced models were also adapted to provide sufficient comparisons following Gu et al. (2022). Finally, we compared with (1) non-graph-based models including GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020), as well as (2) graph-based models including GSN (Hu et al., 2019) and HeterMPC (Gu et al., 2022). Readers can refer to Appendix A.2 for implementation details of these baseline models." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b11", "b8", "b11", "b2" ], "table_ref": [], "text": "To ensure all experimental results were comparable, we used exactly the same automated and human evaluation metrics as those used in previous work (Hu et al., 2019;Gu et al., 2022). Hu et al. (2019) used the evaluation package released by Chen et al. (2015) including BLEU-1 to BLEU-4, METEOR and ROUGE L , which was also used in this paper. 1Human evaluation was conducted to measure the quality of the generated responses in terms of three independent aspects: 1) relevance, 2) fluency and 3) informativeness. Each judge was asked to give three binary scores for a response, which were further summed up to derive the final score ranging from 0 to 3." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b8", "b15", "b17", "b8" ], "table_ref": [], "text": "The corresponding parameters of MADNet followed those in HeterMPC (Gu et al., 2022) for fair comparison. Model parameters were initialized with pre-trained weights of bart-base released by Lewis et al. (2020). The AdamW method (Loshchilov and Hutter, 2019) was employed for optimization. The learning rate was initialized as 6.25e-5 and was decayed linearly down to 0. The max gradient norm was clipped down to 1.0. The batch size was set to 128 with 2 gradient accumulation steps. The maximum utterance length was set to 50. The number of layers for initializing utterance representations was set to 3, and the number of layers for heterogeneous graph iteration was set to 3. The number of decoder layers was set to 6. The strategy of greedy search was performed for decoding. The maximum length of responses for generation was also set to 50. All experiments were run on a single Tesla A100 GPU. The number of EM iterations was set to 2. The number of epochs in each M step was set to 4, and the learning rate was fixed to 5e-7. Each iteration took about 9 and 6 hours for each E step and M step respectively. The validation set was used to select the best model for testing. All code was implemented in the PyTorch framework2 and is published to help replicate our results. Gu et al. (2022). Note that EM for addressee deduction was not adopted on this dataset, since addressee labels were provided for each utterance." }, { "figure_ref": [], "heading": "Evaluation Results", "publication_ref": [ "b8", "b19", "b11", "b19", "b19" ], "table_ref": [ "tab_1", "tab_2", "tab_1", "tab_2" ], "text": "In our experiments, BART was selected to initialize MADNet following Gu et al. (2022).\nAutomated Evaluation Table 1 andTable 2 present the evaluation results of MADNet and previous methods on the test sets. Each model ran four times with identical architectures and different random initializations, and the best out of them was reported. 
The results show that MADNet outperformed all baselines in terms of all metrics. Specifically, MADNet outperformed the best performing baseline, i.e., HeterMPC, by 0.42% BLEU-1, 0.29% BLEU-2, 0.22% BLEU-3, 0.17% BLEU-4, 0.33% METEOR and 0.30% ROUGE_L on the test set of Ouchi and Tsuboi (2016). Additionally, MADNet outperformed HeterMPC by 0.47% BLEU-1, 0.32% BLEU-2, 0.22% BLEU-3, 0.14% BLEU-4, 0.37% METEOR and 0.54% ROUGE_L on the test set of Hu et al. (2019). These results illustrated the effectiveness of our proposed method in modeling MPC structures, and the importance of message passing between the utterance and interlocutor nodes in an MPC graph.\nTo further verify the effectiveness of each component of our proposed method, ablation tests were conducted as shown in the last few rows of Table 1 and Table 2. First, EM for addressee deduction was removed on the dataset of Ouchi and Tsuboi (2016). The drop in performance illustrated that accurate addressee labels were crucial to the graphical information flow modeling in MPCs. In addition, EM was an effective solution to addressee deduction. Furthermore, the latent-reply and latent-replied-by edges, or the latent-address and latent-addressed-by edges, were removed respectively. The drop in performance illustrated the importance of modeling interactions between indirectly related utterances, and those between utterances and interlocutors, for better conversation contextualization.\nHuman Evaluation Table 3 presents the human evaluation results on a randomly sampled test set of Ouchi and Tsuboi (2016). 200 samples were evaluated and the order of evaluation systems was shuffled. Three graduate students were asked to score from 0 to 3 (3 for the best) and the average scores were reported. It can be seen that MADNet achieved higher subjective quality scores than the selected baseline models." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b11", "b15", "b8", "b19" ], "table_ref": [], "text": "Models and scores: Human 2.09; GSN (Hu et al., 2019) 1.20; BART (Lewis et al., 2020) 1.54; HeterMPC (Gu et al., 2022) 1.62; MADNet 1.79.\nTable 3: Human evaluation results of MADNet and some selected systems on a randomly sampled test set of Ouchi and Tsuboi (2016)." }, { "figure_ref": [ "fig_4" ], "heading": "Analysis", "publication_ref": [ "b11", "b19" ], "table_ref": [ "tab_3", "tab_5" ], "text": "Accuracy of addressee deduction. A key aspect of the proposed method is to deduce the addressee information, which in turn improves the performance of response generation. Therefore, the accuracy of addressee deduction was directly evaluated with respect to a set of baselines to explore its impact on response generation. To do that, a modified dataset of Hu et al. (2019) was constructed. Specifically, the golden addressee label of the last utterance of the conversation history was masked to derive the modified dataset. Results of five selected methods on this modified dataset were compared as shown in Table 4: (1) HeterMPC, (2) each utterance whose addressee label was masked was randomly assigned a previous utterance as its reply-to utterance and fed to HeterMPC, denoted as HeterMPC rand, (3) each utterance whose addressee label was masked was assigned its preceding utterance as its reply-to utterance and fed to HeterMPC, denoted as HeterMPC prec, (4) MADNet, and (5) MADNet with the oracle addressee labels, i.e., MADNet on the original Hu et al. (2019) dataset. It is notable that the prediction of addressee achieved only 50.1% accuracy, which shows that this task is still difficult and there is a lot of room for further improvement.\nNumber of EM iterations.
Figure 4 illustrated how the performance of MADNet changed with respect to different numbers of EM iterations on the validation set of Ouchi and Tsuboi (2016). It can be seen that the performance of MADNet was improved as the number of EM iterations increased at the beginning in terms of METEOR and ROUGE L , showing the effectiveness of employing EM for addressee deduction. Then, the performance was stable with more EM iterations. The reason might be that models have selected as many optimal addressee labels as possible.\nCase Study. A case study was conducted by randomly sampling an MPC instance as shown in Table 5. Given the conversation graph, the response to generate addressed I.1, so the information relevant to I.1 should be collected. It can be seen from this instance that the addressee label was only available for the third utterance, and the established conversation graph was very fragmented due to the lack of addressee labels. Conditioned on an inconsecutively connected graph, previous methods hardly capture the context semantics and can only generate generic responses such as \"i m not sure ...\". For MADNet, the missing addressee label of the fourth utterance was deduced as I.3, which was appropriate considering the MPC context. Given the deduced addressee label, the message of \"phased update\" in the third utterance can be passed to the fourth utterance. Furthermore, the response to generate was about to reply to the fourth utterance, and this important message can further captured for response generation." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present MADNet to maximize addressee deduction expectation to study the issue of scarcity of addressee labels in multi-party conversations. Four types of latent edges are designed to model interactions between indirectly related utterances, and those between utterances and interlocutors for conversation contextualization. Furthermore, an EM-based approach is designed to deduce silver addressee labels and optimize the quality of generated responses. Experimental results show that the proposed MADNet outperforms previous methods by significant margins on two benchmarks of MPC generation. It especially shows better generalization and robustness in the more common and challenging setting where a few addressee labels are missing." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although the proposed method has shown great performance to alleviate the scarcity of addressee labels which is a common issue in multi-party conversations, we should realize that the proposed method still can be further improved. For example, to derive the probability distribution of the latent addressee variable, a substitute that the probability of generating a response under the assumed graph is considered as its approximation. This assumption has shown its empirical improvement in our experiments, and the theoretical analysis will be a part of our future work to help derive more accurate probability distribution. In addition, a set of latent graphs are required and fed into the generative model to calculate the probabilities of generating a response under these latent graphs, which consumes much computation resources. Thus, optimization of the expectation steps with less computation is worth studying. 
Besides, benchmarking the baselines and evaluating the proposed method on other appropriate datasets, to make it more representative of as many MPC scenarios as possible, will be part of our future work." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Opening Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK COGOS-2022005. We thank anonymous reviewers for their valuable comments." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "A.1 Datasets" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b19" ], "table_ref": [], "text": "Statistics of the two benchmarks (Train / Valid / Test): Ouchi and Tsuboi (2016): 461,120 / 28,570 / 32,668; Hu et al. (2019): 311,725 / 5,000 / 5,000." }, { "figure_ref": [], "heading": "A.2 Baseline Models", "publication_ref": [ "b11", "b8", "b20", "b15", "b11", "b8", "b20", "b15", "b11", "b8" ], "table_ref": [], "text": "We compared our proposed model with as many MPC generative models as possible. Considering that there are only a few research papers in this field, several recent advanced models were also adapted to provide sufficient comparisons. Following previous work (Hu et al., 2019;Gu et al., 2022), the tags of speakers and addressees were both used if they were available when establishing the performance of baselines. Finally, we compared with (1) non-graph-based models including GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020), as well as (2) graph-based models including GSN (Hu et al., 2019) and HeterMPC (Gu et al., 2022), as follows.\n(1) GPT-2 (Radford et al., 2019) was a unidirectional pre-trained language model. Following its original concatenation operation, all context utterances and the response were concatenated with a special [SEP] token as input for encoding.\n(2) BART (Lewis et al., 2020) was a denoising autoencoder using a standard Transformer-based architecture, trained by corrupting text with an arbitrary noising function and learning to reconstruct the original text. In our experiments, a concatenated context starting with <s> and separated by </s> was fed into the encoder, and a response was fed into the decoder.\n(3) GSN (Hu et al., 2019) made the first attempt to model an MPC with a homogeneous graph. The core of GSN was an utterance-level graph-structured encoder.\n(4) HeterMPC (Gu et al., 2022) achieved the state-of-the-art performance on MPCs. It proposed to model the complicated interactions between utterances and interlocutors in MPCs with a heterogeneous graph, where two types of graph nodes and six types of edges are designed to model heterogeneity. Two versions of HeterMPC were provided, initialized with BERT and BART respectively. The latter, which showed better performance, was adopted in this paper." } ]
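For the non-graph baselines described above, the MPC has to be flattened into a single input sequence. A minimal sketch of such a concatenation is shown below; the exact tag format is an assumption, since the appendix only states that speaker and addressee tags are used when available and that the BART input starts with <s> and separates utterances with </s>.

```python
def flatten_mpc_for_bart(utterances, speakers, addressees):
    """Flatten an MPC into the concatenated context fed to the BART encoder baseline.

    utterances: list of utterance strings; speakers/addressees: parallel lists of
    interlocutor names, where an addressee entry may be None when the label is missing.
    """
    parts = []
    for utt, spk, adr in zip(utterances, speakers, addressees):
        tag = spk if adr is None else f"{spk} @ {adr}"
        parts.append(f"{tag}: {utt}")
    return "<s> " + " </s> ".join(parts) + " </s>"

# Example with a missing addressee label on the second utterance.
context = flatten_mpc_for_bart(
    ["anything else on the pdf mp", "nothing else after i sent you that link"],
    ["I2", "I1"],
    ["I1", None],
)
print(context)
```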
Modeling multi-party conversations (MPCs) with graph neural networks has been proven effective at capturing complicated and graphical information flows. However, existing methods rely heavily on the necessary addressee labels and can only be applied to an ideal setting where each utterance must be tagged with an "@" or other equivalent addressee label. To study the scarcity of addressee labels which is a common issue in MPCs, we propose MADNet that maximizes addressee deduction expectation in heterogeneous graph neural networks for MPC generation. Given an MPC with a few addressee labels missing, existing methods fail to build a consecutively connected conversation graph, but only a few separate conversation fragments instead. To ensure message passing between these conversation fragments, four additional types of latent edges are designed to complete a fully-connected graph. Besides, to optimize the edge-typedependent message passing for those utterances without addressee labels, an Expectation-Maximization-based method that iteratively generates silver addressee labels (E step), and optimizes the quality of generated responses (M step), is designed. Experimental results on two Ubuntu IRC channel benchmarks show that MADNet outperforms various baseline models on the task of MPC generation, especially under the more common and challenging setting where part of addressee labels are missing.
MADNet: Maximizing Addressee Deduction Expectation for Multi-Party Conversation Generation
[ { "figure_caption": "An MPC instance with a few addressee labels (@) missing (b) The graphical information flow and fragments established in HeterMPC @I1 or teal i do n t think the delimiter will change i m thinking more of the leading image theme I3 thomi noooo I2 @I3 yeah and being consistent with what is an issue in unity8 a good bunch of code have the closing parenthesis I1 thomi anything else on the pdf mp I2 @I2 sorry did you hve an updated version what a warm welcome I1 nothing else after i sent you that link I1 I1 @I2 oi you re utc 2 wow very soon now U 6", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: (a) A randomly sampled MPC instance from the Ubuntu IRC dataset(Ouchi and Tsuboi, 2016) where a few addressee labels, i.e., \"@\", are missing. (b) Illustration of the graphical information flow and conversation fragments of the instance above established in HeterMPC(Gu et al., 2022). Here, the bidirectional edges are merged for simplicity.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of edges for utterances (a) with and (b) without addressee labels respectively in a fullyconnected MPC graph for the instance in Figure 1.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Illustration of the EM training process for the instance in Figure 1, where the expectation and maximization steps are performed alternately.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance of MADNet under different numbers of EM iterations on the validation set of Ouchi and Tsuboi (2016).", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Evaluation results and ablations on the test set ofOuchi and Tsuboi (2016) in terms of automated evaluation. Numbers in bold denoted that the results achieved the best, and those marked with † denoted that the improvements were statistically significant (t-test with p-value < 0.05) comparing with the best performing baseline.", "figure_data": "3", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation results and ablations on the test set of Hu et al. (2019) in terms of automated evaluation. Results except ours are cited from", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Accuracy of addressee deduction and automated evaluation results on the modified dataset ofHu et al. (2019). rand, prec and orac were abbreviations of random, preceding and oracle respectively.", "figure_data": ") MADNet withthe oracle addressee labels, i.e., MADNet on theoriginal Hu et al. (2019) dataset. Results show thatthe prediction of addressees significantly affects theperformance of MPC generation. Seriously wrongpredictions might even hurt performance. It canbe seen that the addressee deduction with EM inMADNet outperformed the heuristic methods ofrandom selection by a margin of 12.7% accuracy,and of selecting its preceding utterance by a marginof 5.3% accuracy. As a result, the generationperformance was improved benefiting from ac-curate addressee predictions. 
It is notable thatthe prediction of addressee achieved only 50.1%accuracy, which shows that this task is still difficult", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The response generation results of a test sample. \"I.\" is an abbreviation of \"interlocutor\". We kept original texts without manual corrections.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Jia-Chen Gu; Chao-Hong Tan; Caiyuan Chu; Zhen-Hua Ling; Chongyang Tao; Quan Liu; Cong Liu
[ { "authors": "Ana Berdasco; Gustavo López; Ignacio Díaz-Oreiro; Luis Quesada; Luis A Guerrero", "journal": "MDPI", "ref_id": "b0", "title": "User experience comparison of intelligent personal assistants: Alexa, google assistant, siri and cortana", "year": "2019-12-02" }, { "authors": "F Peter; Stephen Brown; Vincent J Della Pietra; Robert L Della Pietra; Mercer", "journal": "Comput. Linguistics", "ref_id": "b1", "title": "The mathematics of statistical machine translation: Parameter estimation", "year": "1993" }, { "authors": "Xinlei Chen; Hao Fang; Tsung-Yi Lin; Ramakrishna Vedantam; Saurabh Gupta; Piotr Dollár; C Lawrence Zitnick", "journal": "", "ref_id": "b2", "title": "Microsoft COCO captions: Data collection and evaluation server", "year": "2015" }, { "authors": "Peter Dayan; Geoffrey E Hinton", "journal": "Neural Comput", "ref_id": "b3", "title": "Using expectation-maximization for reinforcement learning", "year": "1997" }, { "authors": "Nan M Arthur P Dempster; Donald B Laird; Rubin", "journal": "Journal of the Royal Statistical Society: Series B (Methodological)", "ref_id": "b4", "title": "Maximum likelihood from incomplete data via the em algorithm", "year": "1977" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "B Chuong; Serafim Do; Batzoglou", "journal": "Nature biotechnology", "ref_id": "b6", "title": "What is the expectation maximization algorithm?", "year": "2008" }, { "authors": "Jia-Chen Gu; Tianda Li; Quan Liu; Zhen-Hua Ling; Zhiming Su; Si Wei; Xiaodan Zhu", "journal": "", "ref_id": "b7", "title": "Speaker-aware BERT for multi-turn response selection in retrieval-based chatbots", "year": "2020-10-19" }, { "authors": "Jia-Chen Gu; Chao-Hong Tan; Chongyang Tao; Zhen-Hua Ling; Huang Hu; Xiubo Geng; Daxin Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "HeterMPC: A heterogeneous graph neural network for response generation in multiparty conversations", "year": "2022-05-22" }, { "authors": "Jia-Chen Gu; Chongyang Tao; Zhen-Hua Ling; Can Xu; Xiubo Geng; Daxin Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "MPC-BERT: A pre-trained language model for multiparty conversation understanding", "year": "2021-08-01" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "IEEE Computer Society", "ref_id": "b10", "title": "Deep residual learning for image recognition", "year": "2016-06-27" }, { "authors": "Wenpeng Hu; Zhangming Chan; Bing Liu; Dongyan Zhao; Jinwen Ma; Rui Yan", "journal": "", "ref_id": "b11", "title": "GSN: A graph-structured network for multi-party dialogues", "year": "2019-08-10" }, { "authors": "Ziniu Hu; Yuxiao Dong; Kuansan Wang; Yizhou Sun", "journal": "ACM", "ref_id": "b12", "title": "Heterogeneous graph transformer", "year": "2020-04-20" }, { "authors": "Veton Kepuska; Gamal Bohouta", "journal": "IEEE", "ref_id": "b13", "title": "Nextgeneration of virtual personal assistants (microsoft cortana, apple siri, amazon alexa and google home)", "year": "2018-01-08" }, { "authors": "Ran Le; Wenpeng Hu; Mingyue Shang; Zhenjun You; Lidong Bing; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b14", "title": "Who is speaking to whom? 
learning to identify utterance addressee in multi-party conversations", "year": "2019-11-03" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020-07-05" }, { "authors": "Yiyang Li; Hai Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "EM pre-training for multi-party dialogue response generation", "year": "2023-07-09" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b17", "title": "Decoupled weight decay regularization", "year": "2019-05-06" }, { "authors": "Sewon Min; Danqi Chen; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "A discrete hard EM approach for weakly supervised question answering", "year": "2019-11-03" }, { "authors": "Hiroki Ouchi; Yuta Tsuboi", "journal": "", "ref_id": "b19", "title": "Addressee and response selection for multi-party conversation", "year": "2016-11-01" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b20", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Sejr Michael; Thomas N Schlichtkrull; Peter Kipf; Rianne Bloem; Van Den; Ivan Berg; Max Titov; Welling", "journal": "Springer", "ref_id": "b21", "title": "Modeling relational data with graph convolutional networks", "year": "2018-06-03" }, { "authors": "Iulian Vlad Serban; Alessandro Sordoni; Yoshua Bengio; Aaron C Courville; Joelle Pineau", "journal": "", "ref_id": "b22", "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", "year": "2016-02-12" }, { "authors": "Tianxiao Shen; Myle Ott; Michael Auli; Marc'aurelio Ranzato", "journal": "", "ref_id": "b23", "title": "Mixture models for diverse machine translation: Tricks of the trade", "year": "2019-06" }, { "authors": " Pmlr", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "I Valentin; Hiyan Spitkovsky; Daniel Alshawi; Christopher D Jurafsky; Manning", "journal": "ACL", "ref_id": "b25", "title": "Viterbi training improves unsupervised dependency parsing", "year": "2010-07-15" }, { "authors": "Yizhou Sun; Jiawei Han", "journal": "Morgan & Claypool Publishers", "ref_id": "b26", "title": "Mining Heterogeneous Information Networks: Principles and Methodologies", "year": "2012" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "", "ref_id": "b27", "title": "Sequence to sequence learning with neural networks", "year": "2014-12-08" }, { "authors": "Chongyang Tao; Wei Wu; Can Xu; Wenpeng Hu; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b28", "title": "One time of interaction may not be enough: Go deep with an interaction-over-interaction network for response selection in dialogues", "year": "2019-07-28" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b29", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Weishi Wang; C H Steven; Shafiq R Hoi; Joty", "journal": "", "ref_id": "b30", "title": "Response selection for multi-party conversations with dynamic topic 
tracking", "year": "2020-11-16" }, { "authors": "Tsung-Hsien Wen; David Vandyke; Nikola Mrksic; Milica Gasic; Lina ; Maria Rojas-Barahona; Pei-Hao Su; Stefan Ultes; Steve J Young", "journal": "", "ref_id": "b31", "title": "A network-based end-to-end trainable taskoriented dialogue system", "year": "2017-04-03" }, { "authors": "Yuqiao Wen; Yongchang Hao; Yanshuai Cao; Lili Mou", "journal": "ICLR", "ref_id": "b32", "title": "An equal-size hard EM algorithm for diverse dialogue generation", "year": "2023" }, { "authors": "Yu Wu; Wei Wu; Chen Xing; Ming Zhou; Zhoujun Li", "journal": "", "ref_id": "b33", "title": "Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots", "year": "2017-07-30" }, { "authors": "Tom Young; Erik Cambria; Iti Chaturvedi; Hao Zhou; Subham Biswas; Minlie Huang", "journal": "", "ref_id": "b34", "title": "Augmenting end-to-end dialogue systems with commonsense knowledge", "year": "2018-02-02" }, { "authors": "Chuxu Zhang; Dongjin Song; Chao Huang; Ananthram Swami; Nitesh V Chawla", "journal": "ACM", "ref_id": "b35", "title": "Heterogeneous graph neural network", "year": "2019-08-04" }, { "authors": "Rui Zhang; Honglak Lee; Lazaros Polymenakos; Dragomir R Radev", "journal": "", "ref_id": "b36", "title": "Addressee and response selection in multi-party conversations with speaker interaction rnns", "year": "2018-02-02" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; Bill Dolan", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "DIALOGPT : Large-scale generative pre-training for conversational response generation", "year": "2020-07-05" }, { "authors": "Li Zhou; Jianfeng Gao; Di Li; Heung-Yeung Shum", "journal": "Comput. Linguistics", "ref_id": "b38", "title": "The design and implementation of xiaoice, an empathetic social chatbot", "year": "2020" }, { "authors": "Xiangyang Zhou; Lu Li; Daxiang Dong; Yi Liu; Ying Chen; Wayne Xin Zhao; Dianhai Yu; Hua Wu", "journal": "", "ref_id": "b39", "title": "Multi-turn response selection for chatbots with deep attention matching network", "year": "2018-07-15" } ]
[ { "formula_coordinates": [ 3, 330.68, 248.62, 194.46, 57.57 ], "formula_id": "formula_0", "formula_text": "r = argmax r logP (r|G, c; θ) = argmax r |r| t=1 logP (r t |G, c, r <t ; θ). (1)" }, { "formula_coordinates": [ 5, 79.32, 691.17, 197.13, 45.9 ], "formula_id": "formula_1", "formula_text": "P (G U i →U j |c, r; θ) = P (r|G U i →U j , c; θ)P (G U i →U j |c; θ) i-1 k=1 P (r|G U i →U k , c; θ)P (G U i →U k |c; θ)" }, { "formula_coordinates": [ 5, 306.42, 310.39, 213.48, 29.67 ], "formula_id": "formula_2", "formula_text": "P (G U i →U j |c, r; θ) = P (r|G U i →U j , c; θ) i-1 k=1 P (r|G U i →U k , c; θ)" }, { "formula_coordinates": [ 5, 307.06, 429.15, 218.08, 64.57 ], "formula_id": "formula_3", "formula_text": "E G∼P (G U i →U j |c,r;θ) [log P (r, G|c; θ)] = i-1 j=1 P (G U i →U j |c, r; θ) log P (r, G U i →U j |c; θ).(4)" }, { "formula_coordinates": [ 5, 320.55, 553.46, 204.59, 21.97 ], "formula_id": "formula_4", "formula_text": "Ūj = argmax U j P (G U i →U j |c, r; θ), j < i,(5)" } ]
10.48550/arXiv.2204.06745
2023-05-22
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b22", "b4", "b10", "b23", "b9", "b6", "b28", "b35", "b12", "b18", "b5", "b30", "b1", "b1", "b38", "b18", "b31", "b13" ], "table_ref": [], "text": "Pre-trained Language models (LMs) have set a new paradigm for NLP research and sweep across all existing NLP benchmarks. Due to their promising results, researchers have empowered LMs with new skills that meet real-world needs, such as using web browsers (Nakano et al., 2021), coding (Chen et al., 2021), playing strategic game (FAIR et al., 2022), and conversational talents (OpenAI, 2022(OpenAI, , 2023)). However, the wide application of LMs also raises growing concerns regarding its pitfall of generating content that is fake (Elazar et al., 2021;Cao et al., 2021a), out-dated (Dhingra et al., 2022), biased (Sheng et al., 2019;Zhao et al., 2021), and offensive (Gehman et al., 2020). To mitigate this pitfall, knowledge editing (Fig. 1) aiming to modify the knowledge learned of LMs has attracted increasing attention (Mitchell et al., 2022a;Meng et al., 2022a). The goal of knowledge editing is two-fold: generalization and specificity. The former requires generalizing to various prompts describing the same knowledge and the latter requires no interference with other unrelated knowledge.\nPrevious knowledge editing methods mainly adopt gradient-based methods to modify specific model parameters for a desired model behavior (Mitchell et al., 2021;Meng et al., 2022a), e.g., updating the president after the election. However, the identification of the target knowledge neurons usually requires gradient estimation with heavy computation overhead (Dai et al., 2022). In addition, the updated parameters inherently lead to side effects beyond the desired editions, such as forgetting previously-learned facts or over-editing on unrelated facts. Previous studies have shown that when a large-scale LM (LLM) is deployed as a black-box service (Sun et al., 2022), a minor modification to its parameters could dramatically influence its behavior for end users. Therefore, traditional methods still suffer from editing LLMs since these limitations impede the scalability and generalizability.\nRecently, in-context learning (ICL) (Brown et al., 2020) has emerged as a new paradigm for instructing LLMs to perform complex tasks. In ICL, the task description and demonstration examples are represented in natural language to form a context, and the prediction of LMs conditioned on the context is transformed into answers according to predefined rules (Brown et al., 2020). In this way, large LMs adapt to various downstream tasks without any modifications to parameters, making it a natural fit for knowledge editing on large LMs. First, it reduces the computation overhead by avoiding modifications to parameters, as well as eliminates the risk of side effects introduced by parameter updates. Most importantly, ICL provides an interpretable way for humans to calibrate LM behaviors. Despite these advantages, whether ICL is applicable to knowledge editing still remains unclear.\nIn this paper, we investigate the potential of ICL to perform knowledge editing for LLMs. We focus on two goals: (1) ensuring generalization, so that large LMs can generalize to multiple text surfaces for a piece of updated knowledge, and (2) ensuring specificity, by making accurate modifications to the target knowledge fact while preserving other irrelevant facts. 
To achieve these goals simultaneously, we design demonstration formatting and organization strategies to construct suitable incontext learning demonstrations for guiding knowledge editing on LLMs. We define three types of demonstration formatting templates including (i) copy, which aims to inject new facts into LMs; (ii) update, which improves the generalization of injected knowledge fact; and (iii) retain, which guides LMs to preserve unrelated knowledge facts. Additionally, to fully harness the potential of ICL for knowledge editing, we retrieve relevant knowledge facts from the training corpus as demonstration inputs.\nExperimental results on knowledge editing benchmarks with GPT-J (6B) show that the proposed in-context learning knowledge editing (IKE), achieves overall comparable knowledge editing performance with strong baselines. For example, IKE outperforms MEND (Mitchell et al., 2021) by an absolute 10% editing success rate and obtains 30 points gain regarding the specificity over ROME (Meng et al., 2022a). As there are no parameter modifications, IKE is applicable to LLMs such as OPT-175B and exhibits better memorization ability, i.e., after editing, nearly 50% knowledge facts retain relatively high probability. Further analysis reveals that demonstration selection and the retain demonstrations contribute to specificity, while the update demonstrations improve generalization. Finally, we discuss the potential challenges that IKE may encounter when applied in real-world scenarios, and provide corresponding discussions.\nIn summary, the contributions of this study are four-fold:\n• To the best of our knowledge, this work represents the first systematic exploration of the potential for ICL to edit knowledge in LMs.\n• We give comprehensive empirical studies on ICL strategies and analyze how these strategies affect the final performance.\n• By designing proper demonstration formatting and organization strategies, IKE achieves comparable success rates with less computation overhead and side effects.\n• We investigate the feasibility of applying IKE to real-world scenarios and discuss potential challenges. Knowledge Editing Benchmarks Several knowledge editing benchmarks are commonly used to evaluate the efficacy and specificity of editing approaches. For BERT-style models, factchecking dataset FEVER (Thorne et al., 2018) and question-answer dataset zsRE (Levy et al., 2017) are usally adopted. In FEVER, each x is a claim and each y indicates the validity of corresponding claim. In zsRE, each x is a question about a fact and each y is the answer, and x loc questions fact irrelevant to x. For GPT-style models, Mitchell et al. (2022a) introduced Wikitext editing dataset that requests the model to complete passage with edited continuation while the distribution of each token is unrelated passage x loc should remain unchanged." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b1", "b14", "b29" ], "table_ref": [], "text": "In our experiment, we use a more challenging QA dataset called COUNTERFACT (Meng et al., 2022a). In COUNTERFACT, the edited answer y to question x can sometimes be counterfactual to real world, and unrelated out-of-scope sample x loc is much more difficult than that in zsRE, which makes it harder for the model to predict desired answer. 
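For concreteness, a single COUNTERFACT edit request can be pictured as the following dictionary; the field names mirror Table 8 in the appendix, and the exact keys in the released files may differ.

```python
# A COUNTERFACT-style edit request, mirroring the fields listed in Table 8 (appendix).
# Field names are illustrative; the released dataset may use slightly different keys.
record = {
    "subject": "Danielle Darrieux",
    "relation_id": "P103",                       # mother tongue
    "target_prompt": "The mother tongue of {} is",
    "target_true": "French",                     # original object o_c
    "target_new": "English",                     # counterfactual edit target o*
    "paraphrase_prompts": [                      # in-scope prompts, used for Generalization
        "Danielle Darrieux, a native",
    ],
    "neighborhood_prompts": [                    # out-of-scope prompts, used for Specificity
        "The native language of Montesquieu is",
    ],
}

x_star = record["target_prompt"].format(record["subject"])  # "The mother tongue of Danielle Darrieux is"
```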
Furthermore, these desired facts are hardly captured by pre-trained LMs, avoiding the effects of LLMs knowing this knowledge before editing.\nIn-context Learning In-Context Learning (ICL) is a training-free paradigm that learns from demonstrations concatenated in the input context. Given related examples and a query, the model learns from analogy to make predictions (Brown et al., 2020;Liu et al., 2022). Existing knowledge editing methods require re-calculating the gradient or calculating and perform such knowledge editing in an inexpensive way. Si et al. (2022) is the first to explore whether in-context learning can update knowledge in LLMs, and show that incorporating all kinds of demonstration increase the success rate of knowledge editing. However, they only focus on GPT-3, without deep exploration on the potential ability and side effects of knowledge editing." }, { "figure_ref": [], "heading": "Task Formulation", "publication_ref": [], "table_ref": [], "text": "The goal of knowledge editing is to inject a new fact (x * , y * ) into a LM M by maximizing the probability P M (y * |x * ). The x * is the prompt to probe the factual knowledge in M (e.g., The president of the US is), and y * will be the editing target Joe Biden. Knowledge editing also requires generalization and specificity:\n• Generalization: For the prompt x in the edit scope D x * (i.e., prompts related to the new fact), the prediction of x ∈ D x * should be also updated to y * . For example, the prediction of Q: Who is the president of the US? A: will be updated to Joe Biden.\n• Specificity: For the prompt x out of the edit scope, x / ∈ D x * , the prediction of x should be its original prediction y o . For example, the prediction of The president of Russia is should be retained. The language model M predicts the probability of y ∈ Y given x: P M (y | x, C). More specifically, ICL uses templates T to transform the inputs and labels into natural language texts. Take sentiment analysis as an example, an in-context demonstration with input x i and label y i will be transformed to Sentence: x i . Sentiment: y i , then the language model M will predict y ∈ Y given T (x 1 , y 1 ), . . . , T (x k , y k ), T (x, )." }, { "figure_ref": [], "heading": "In-Context Knowledge Editing", "publication_ref": [ "b8" ], "table_ref": [], "text": "When we inject a target fact f = (x * , y * ) into LMs, we will construct k demonstrations C = {c 1 , . . . , c k }. The goal of knowledge editing is to maximize P(y * | x, f, C) when prompt x is in the editing scope of target prompt x * , x ∈ D x * (the Generalization goal) and minimize the distance between P (y | x, f, C) and P (y | x) when x / ∈ D x * (the Specificity goal). LMs should determine whether the probing prompt x is in the editing scope of x * , namely D x * . To achieve these goals with ICL, proper demonstration inputs are crucial. We further decompose the demonstration construction for knowledge editing with f as the target into two sub-problems:\n(i) how to design the format of each demonstration; and (ii) how to select and rank in-context demonstrations (Dong et al., 2023)." }, { "figure_ref": [], "heading": "Demonstration Formatting", "publication_ref": [], "table_ref": [], "text": "Each demonstration c i contains a new fact f i = (x * i , y * i ), a probing prompt x i and its prediction y i . 
In-context demonstrations should teach LMs to copy, update and retain the predictions for different prompts:\n• copy: To inject new facts into LMs, the first step is to teach them to copy the prediction of the target prompt in new facts. In copy demonstrations, x i = x * i and y i = y * i .\n• update: Knowledge editing is not simply teaching LMs to repeat the new fact. For the generalization of knowledge editing, the prediction of prompts in the editing scope should also be updated. In update demonstrations, x i ∈ D x * i and y i = y * i .\n• retain: For the specificity of knowledge editing, LMs should keep their original prediction in out-of-scope prompts. In retain demonstrations, x i / ∈ D x * i and y i should be its original answer y o i .\nThe template T of IKE transforms f , x and y into natural language: T (f, x, y) = New Fact: f . Prompt: x y. Details are listed in §A." }, { "figure_ref": [], "heading": "Demonstration Organization", "publication_ref": [ "b14" ], "table_ref": [], "text": "When we edit a knowledge fact f in LMs, we construct k demonstrations C = {c 1 , . . . , c k } from the training corpus. Which demonstrations are good demonstrations for in-context editing? We follow Liu et al. (2022) to use an unsupervised retriever to choose k nearest neighbors. More specifically, we use a pretrained sentence encoder E to encode the prompt x * of new fact f together with its original answer y o and targeted prediction y * . The records in the training corpus will be encoded in the same way and k-NN facts are retrieved based on the cosine similarity. The ranking of in-context demonstrations also depends on the cosine similarity:\ncos(c 0 , f ) < cos(c 1 , f ) < . . . < cos(c k , f ),\nwhere c 1 , . . . , c k are placed in the context from left to right." }, { "figure_ref": [], "heading": "Discussion: Gradient-based methods and gradient-free methods", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Previous parameter updating methods will adjust the parameters θ of LMs M. They calculate ∆θ based on the gradients ∇ θ -log P M (y * |x * ) to update the base model M θ to a edited one M θ+∆θ . The editing method will then be evaluated by P M (y | x). Instead, in-context learning modifies the knowledge fact in M by constructing demonstrations C for the new fact f = (x * , y * ), then the editing method will be evaluated by P M (y | x, f, C). Comparing P M (y | x, f, C) with P M (y | x), it can be found that: (i) ICL requires no gradient estimation for the target fact and keeps the original LM M untouched after knowledge editing. This greatly reduces the computation overhead thus making the editing applicable for LMs with trillion-level parameters, as well as eliminating the side effects of the modified parameters. (ii) The demonstration C is represented in the natural text which is more interpretable than the salient parameter update ∆θ. It provides a humanunderstandable interface for calibrating the model behavior. We highlight the characteristics of these two methods in Table 1." 
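Putting the formatting template and the k-NN organization above together, a minimal sketch of IKE context construction might look as follows. It assumes the sentence-transformers library with the all-MiniLM-L6-v2 encoder named in Appendix A.2; the train_demos records, their "key" retrieval text, and the exact template punctuation are illustrative placeholders rather than the released implementation.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # the encoder named in Appendix A.2

def render(fact, prompt, answer):
    # Template T(f, x, y) = "New Fact: f. Prompt: x y" (exact punctuation may differ).
    return f"New Fact: {fact} Prompt: {prompt} {answer}\n"

def build_context(new_fact_text, probe_prompt, train_demos, k=32):
    """Retrieve the k nearest training demonstrations and assemble the IKE context.

    train_demos: list of dicts with a "key" string used for retrieval (the paper
    encodes the prompt together with its original and target answers) plus the
    "fact", "prompt" and "answer" needed to render copy/update/retain examples.
    """
    query = encoder.encode(new_fact_text, convert_to_tensor=True)
    keys = encoder.encode([d["key"] for d in train_demos], convert_to_tensor=True)
    sims = util.cos_sim(query, keys)[0]

    top = sims.topk(k=min(k, len(train_demos)))
    # Ascending similarity from left to right, so the most similar demonstration
    # ends up closest to the query fact at the end of the context.
    order = [i for _, i in sorted(zip(top.values.tolist(), top.indices.tolist()))]

    demos = "".join(render(train_demos[i]["fact"], train_demos[i]["prompt"],
                           train_demos[i]["answer"]) for i in order)
    return demos + f"New Fact: {new_fact_text} Prompt: {probe_prompt}"
```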
}, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "In this section, we perform experiments to answer the following research question:\n• Compared to gradient-based methods, what's the performance of IKE?\n• How do the demonstration designing strategies influence the performance of IKE?\n• How does the scale of LMs affect the performance of IKE, can IKE scale up to large language models with tens or hundreds of billions of parameters?\n• What are the side effects of knowledge editing and does IKE cause more or fewer side effects than other parameter updating methods?\nWe first introduce the experimental settings including the compared baseline methods, evaluation benchmark, and LMs across different scales for knowledge editing ( §5.1). We then analyze the main knowledge editing results in §5.2 and the impacting factors of in-context learning knowledge editing ( §5.3)." }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [], "table_ref": [], "text": "We aim to evaluate the performance of in-context knowledge editing compared to parameter updating approaches. We also conduct experiments on different sizes of LMs to explore the scaling-up ability of in-context knowledge editing." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [], "table_ref": [], "text": "Following previous knowledge-editing methods, we also choose GPT-J (6B) as our main evaluation backbone. The compared baselines include: PROMPT To explore how in-context demonstrations influence the performance of IKE. We directly use the new fact as context to probe the LMs by P(y|x, f ) where f = (x * , y * ).\nThe implementation details are in §A" }, { "figure_ref": [], "heading": "Evaluation Setup", "publication_ref": [ "b25", "b11", "b32", "b0", "b34" ], "table_ref": [], "text": "Models To explore how the scale of LMs will influence the effectiveness of in-context knowledge editing, we evaluate in-context knowledge editing on five GPT-like auto-regressive transformer language models whose scales range from 1.5B to 175B parameters:\n• GPT-2 XL (1.5B) (Radford et al., 2019), the 1.5 billion parameter version of GPT-2.\n• GPT-NEO (2.7B) (Gao et al., 2021), the 2.7 billion parameter version of a GPT-2 like causal language model released by EleutherAI.\nIt is trained on the Pile dataset specifically designed for LLM training.\n• GPT-J (6B) (Wang and Komatsuzaki, 2021), an auto-regressive text generation model trained on the Pile with 6 billion parameters.\n• GPT-NEOX (20B) (Black et al., 2022), a 20 billion parameter auto-regressive language model trained on the Pile.\n• OPT (175B) (Zhang et al., 2022), open pretrained transformers with 175 billion parameters created by MetaAI.\nBenchmark We mainly evaluate baselines on COUNTERFACT (Meng et al., 2022a) ) share the same original object with the target prompt and these facts are not supposed to be edited.\nWe also follow Meng et al. (2022a) to report the harmonic mean of ES, PS, NS as Score (S)" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The top rows of Table 2 show the knowledge editing results of different methods. Our findings are: (i) All methods perform well in terms of efficacy, as indicated by their close ES scores. However, there are significant differences in terms of generalization and specificity. For instance, FT achieves high ES (99.9) and PS (96.4) scores but performs poorly in terms of specificity. 
This highlights the challenge of balancing generalization and specificity in knowledge editing. (ii) Among the baseline methods, ROME performs the best overall regarding all three metrics, but comes with high computational overheads. Due to this limitation, it is not applicable to larger LMs such as OPT-175B that are in more urgent need of knowledge editing. (iii) The proposed method IKE excels in specificity but also performs well in efficacy and generalization. For example, IKE achieves a comparable overall score with ROME on GPT-J (89.6 v.s. 91.5), while requiring no parameter modifications on LMs. This computation benefit makes it possible to perform knowledge editing on large LMs such as OPT-175B, where IKE achieves clear improvements over PROMPT by 36.0 points. These results demonstrate the effectiveness, efficiency and scalability of IKE in knowledge editing." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "In this part, we discuss the effects of different demonstration strategies, the scalability of IKE for models across scales and side effects introduced by knowledge editing." }, { "figure_ref": [], "heading": "Ablation on Demonstration Demonstration Numbers", "publication_ref": [ "b1", "b14", "b27", "b15", "b8", "b14" ], "table_ref": [], "text": "The number of demonstrations is one of the influencing factors for the ICL performance (Brown et al., 2020). We investigate how the number of demonstrations influences the IKE performance in the second block in Table 3. Without any demonstrations, PROMPT exhibits over-generalization for its low NS (37.9), indicating it simply learns to copy the prediction. Given a few demonstrations (4 or 8), IKE performs worse than PROMPT in Efficacy and Generalization as it begins to distinguish whether a prompt is in the editing scope. With the increased number of demonstrations, IKE gradually learns to balance generalization and specificity, achieving a better trade-off.\nDemonstration Organization Previous studies (Liu et al., 2022;Rubin et al., 2022;Lu et al., 2022) suggest that demonstration organization including Demonstration Selection and Demonstration Ordering (Dong et al., 2023) is also crucial for ICL. Our proposal follows a simple unsupervised method Liu et al. (2022), to retrieve and order demonstrations from the training corpus based on the cosine similarities between the input prompt and demonstrations. In our two ablation studies in the third block of Table 3, we find that removing the selection procedure (i.e., Random Selection) leads to a clear drop in the NS score from 77.0 to 45.0, indicating the importance of proper prompt selection. However, random ordering brings negligible performance difference. We speculate that this is because the selected prompts are highly related to the target fact and the attention mechanism in Transformer-based LMs can handle long-range dependencies well. We leave further improvements as future work.\nDemonstration Formatting We further examine the impact of demonstration types including copy, update and retain. As shown in the fourth block in Table 3, removing copy demonstrations causes slight performance degradation, as LMs can easily copy the content in the demonstration even without a copy demonstration. Instead, update demonstrations perform an important role in teaching LMs to modify their knowledge, as indicated by a much poorer generalization score after removing upate demonstrations. 
Besides, The removal of retain demonstrations leads to a dramatic drop in the specificity, as measured by the NM score, which decreases from 35.2 to -47.6. This indicates that retain demonstrations are crucial in helping LMs identify out-of-scope facts and maintain their original predictions on those prompts." }, { "figure_ref": [], "heading": "IKE Benefits from Model Scaling", "publication_ref": [], "table_ref": [], "text": "We further evaluate IKE on COUNTERFACT for five GPT-like causal language models across different scales. As previous experiments have shown that all methods exhibit high knowledge editing efficacy, we focus on the generalization and specificity for large LMs, as these metrics are defined to measure the side effects that could cause great influences on end users. As demonstrated in Table 4, we find that the performance of IKE is positively correlated with the scale of the LM and the largest OPT-175B achieves the strongest generalization and specificity results. This is inspiring as the performance IKE could be enhanced with the increased scale of LMs, making it pluggable for future stronger LM backbones." }, { "figure_ref": [], "heading": "Resilience to Over-Editing", "publication_ref": [], "table_ref": [], "text": "Over-editing is a common side effect of knowledge editing, which denotes the influences on outof-scope facts when editing a targeted fact. Although COUNTERFACT already includes out-ofscope prompts consisting of (s , r close to the perplexity of contrastive fake facts, which turns out to be an editing failure. Although all baselines perform well in terms of editing efficacy, they tend to be over-generalization under a stricter contrastive assessment. ROME gets the lowest average CKA score and highest false rate, which shows its poor ability to identify out-ofscope prompts sharing the same subject with target prompts. IKE has less influence on over-editing." }, { "figure_ref": [], "heading": "Maintenance for Original Knowledge", "publication_ref": [ "b6" ], "table_ref": [ "tab_7" ], "text": "We conclude that previous factual knowledge stored in LMs will be erased or forgotten in knowledge editing. We consider the change of P(o c |s * , r) before and after editing in Table 6.\nThe results demonstrate that all editing methods will cause the drop of P(o c |s * , r * ). ROME forgets almost all original facts. If we want to correct the prediction of LMs, erasing the original factual knowledge is necessary. However, if we want to update the prediction of language models like updating the prediction of The president of US is from Donald Trump to Joe Biden (timeaware relations), the old knowledge In 2017, the president of US was Donald Trump should not be forgotten.\nTo evaluate the forgetting of such time-aware knowledge in editing, we construct a small benchmark based on TEMPLAMA (Dhingra et al., 2022) to further show that IKE can cause less knowledge forgetting than other baselines in §C." }, { "figure_ref": [], "heading": "Discussions", "publication_ref": [], "table_ref": [], "text": "In previous experiments, we follow the setup of previous studies Meng et al. (2022a) of applying IKE to real-world scenarios, several important questions remain under-explored: (1) Can IKE be extended to accommodate a larger number of editing facts? Considering the limited input length of language models, it may not be feasible to include tremendous editing facts within the context.\n(2) Can IKE be adapted to handle different formats and domains of facts and prompts? 
In IKE, the domain and format of facts and prompts are kept consistent. However, in real-world settings, facts and prompts come in diverse forms. Mitchell et al. (2022b) propose a retrieval-based method for editing multiple knowledge facts. Similarly, IKE with an external memory to store factual edits can retrieve the proper factual edit to construct context for a given prompt, thus avoid prepending all factual edits in context forever. To validate the generalization of IKE on different forms of facts or prompts, we replaced facts with neutral data from Wikipedia, or replaced prompts with generation prompts that prompt the LM to generate text related to the new object. Detailed discussion can be found in §D." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we examine the potential of in-context learning for knowledge editing on large-scale language models. Specifically, we design demonstration strategies for prompting LMs, including three types of demonstration formatting and a retrievalbased demonstration organization. We show that the proposed method, IKE, achieves competitive knowledge editing efficacy without requiring any parameter modifications, as well as maintains decent generalization and specificity performance. Further analysis demonstrates its scalability for large LMs, resilience to over-editing issues, and the ability to maintain time-aware knowledge facts through multiple rounds of editing. Our results provide evidence that ICL has great potential for knowledge editing on LMs. 3 shows the importance of each type, and we accordingly set the ratio of copy, update and retain to 1:3:4.\nThe order of demonstration types is an underexplored influencing factor of IKE. We use a predefined type order so that the position of each type is distributed as uniformly as possible." }, { "figure_ref": [], "heading": "A.3 Other Baselines", "publication_ref": [], "table_ref": [], "text": "We conduct other baselines with the code implemented by Meng et al. (2022a). 1 We simply add the prefix Prompt: in prompts and report the results conducted by us." }, { "figure_ref": [], "heading": "B Details of COUNTERFACT Dataset", "publication_ref": [], "table_ref": [ "tab_10", "tab_12", "tab_14" ], "text": "Table 8 illustrates an example from COUNTER-FACT. This entry requests that \"the mother tongue of Danielle Darrieux should be changed from English to French\". Each entry has several paraphrase prompts and several neighborhood prompts. Paraphrase prompts are semantically equivalent to the original prompt, neighborhood prompts are those that share the same relation and object with the original prompt but have different subjects. The raw COUNTERFACT dataset also includes attribute prompts and generation prompts, but they are not adopted in our work. We use the first 2,000 records as test split for evaluation and other records are training split. C Time-aware Knowledge Editing Table 9 illustrates an example from TEMPLAMA 2 . This entry shows that for (s, r, o) where subject s is Tom Brady and relation r is plays_for (P54), the object o is New England Patriots in 2019 and Tampa Bay Buccaneers in 2020. TEMPLAMA includes time-aware relations such as member of sports team, where the object of the relationship could be changed in different times. We collect three relations in TEMPLAMA: member of sports team, position held, employer including 2067 facts (t, s, r, o). We inject different facts: (t 1 , s, r, o t 1 ), . . . 
, (t n , s, r, o tn ) for same subject and relation sequentially. By sampling knowledge facts (t, s, r, o t ) and the object o t is changing for different time t and injecting facts in chronological order, we evaluate whether the editing history could be maintained by LMs.\nTake the president of US as example, we inject (2010, Obama), (2017, Trump) and (2021, Biden) sequentially. We probe the oldest fact: In 2010, the president of US was to test if the LM can still memorize the oldest fact after multiple edits of the same fact by the memorization ratio, P t=tn (o t 1 |s, r, t 1 )/P t=t 1 (o t 1 |s, r, t 1 ). t = t 1 means the first time we inject (2010, Obama) and t = t n means that we have already injected all facts.\nTable 10 shows that ROME forgets facts that have already been injected in LMs with an extremely low memorization ratio, indicating that the parameter updating of these time-aware facts may conflict in the same FFN module and cause the forgetting. Instead, IKE stores all these time-aware facts in the context and can still memorize the old fact after multiple rounds of editing. that gradient-based knowledge editing methods encounter difficulties when attempting to update multiple knowledge facts simultaneously. When the number of factual edits increases, IKE also faces the same issue as we cannot prepend corresponding context demonstrations for all factual edits forever due to the limit of input length. Mitchell et al. (2022b) proposes a memory-based retrieval-augmented method to handle multiple factual edits. For a given prompt, a scope classifier can retrieve the relevant knowledge fact from an external memory storing multiple factual edits. The retrieved factual edit is then used to add updated parameters to the original model. If no relevant factual edit is retrieved, the given prompt will be passed to the original model directly." }, { "figure_ref": [], "heading": "D Detailed Discussions", "publication_ref": [], "table_ref": [], "text": "Similarly, IKE and retrieval augmentation can also be a good combination. An external memory is used to store multiple factual edits. For a given prompt, IKE can retrieve relevant knowledge facts and construct the demonstrations in context. Otherwise, we directly use original LM to generate the answer. With external memory and retrieval augmentation, We only need to retain in the context the fact that are relevant to the current prompt, along with their corresponding demonstrations." }, { "figure_ref": [ "fig_6" ], "heading": "D.2 Generalization on facts and prompts", "publication_ref": [], "table_ref": [], "text": "In IKE, the domain and format of facts and prompts are consistent. However, in reality, facts and prompts come in various formats and domains. Can IKE generalize between in-consistent facts and prompts?\nIn our main experiments, we assess the probability P(o * |x, f, C). However, in real-world scenarios, prompts may have different formats than the facts. We also want the LM to generate text related to the new object o * instead of simply generating the object o * itself for these prompts. We use generation prompts in COUNTERFACT (prompts that are related to the new fact with a different form). Some generation examples are listed in Fig. 3. We can find that IKE can generalize to prompts with different forms and generation outputs are not simply new objects but texts related to the new objects.\nWe replaced facts with longer and more complicated neutral data retrieved from Wikipedia in 100 cases. 
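A hedged sketch of how such an external edit memory could be paired with in-context editing (the combination elaborated in the next paragraph): edits are stored with their encodings, the most similar edit is retrieved for an incoming prompt, and demonstrations are built only when the similarity clears a threshold; otherwise the original model answers directly. The encoder choice, threshold value, and helper callables are assumptions, not part of the released method.

```python
from sentence_transformers import SentenceTransformer, util

class EditMemory:
    """External memory of factual edits with similarity-based retrieval (a sketch)."""

    def __init__(self, threshold=0.6):                 # threshold is a hypothetical choice
        self.encoder = SentenceTransformer("all-MiniLM-L6-v2")
        self.facts, self.embs, self.threshold = [], None, threshold

    def add(self, fact_text):
        self.facts.append(fact_text)
        self.embs = self.encoder.encode(self.facts, convert_to_tensor=True)

    def retrieve(self, prompt):
        if not self.facts:
            return None
        sims = util.cos_sim(self.encoder.encode(prompt, convert_to_tensor=True), self.embs)[0]
        score, idx = sims.max(dim=0)
        return self.facts[int(idx)] if float(score) >= self.threshold else None

def answer(prompt, memory, generate, build_context):
    """Route a prompt: edit-scope prompts get an IKE context, the rest use the raw LM."""
    fact = memory.retrieve(prompt)
    if fact is None:
        return generate(prompt)                        # no relevant edit: original model
    return generate(build_context(fact, prompt))       # relevant edit: prepend demonstrations
```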
By replacing the entities in the facts that are related to the original object o c with the new object o * , we obtain new facts.\nWith the retrieved neutral data, IKE gets 75 PS on target prompts and 73 NS on neighborhood prompts, while PROMPT (retrieval-augmentation only, no examples) gets 65 and 64. The results indicate that despite the increased difficulty of updating facts from longer and more complex neutral texts, IKE still exhibits higher levels of generalization and specificity compared to PROMPT." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The president of the US is Joe Biden." } ]
Previous studies have shown that large language models (LLMs) like GPTs store massive factual knowledge in their parameters. However, the stored knowledge could be false or outdated. Traditional knowledge editing methods refine LLMs via fine-tuning on texts containing specific knowledge. However, with the increasing scales of LLMs, these gradient-based approaches bring large computation costs. The trend of model-asa-service also makes it impossible to modify knowledge in black-box LMs. Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge. To answer this question, we give a comprehensive empirical study of ICL strategies. Experiments show that incontext knowledge editing (IKE), without any gradient and parameter updating, achieves a competitive success rate compared to gradientbased methods on GPT-J (6B) but with much fewer side effects, including less over-editing on similar but unrelated facts and less knowledge forgetting on previously stored knowledge. We also apply the method to larger LMs with tens or hundreds of parameters like OPT-175B, which shows the scalability of our method. The code is available at https:// github.com/PKUnlp-icler/IKE.
Can We Edit Factual Knowledge by In-Context Learning?
[ { "figure_caption": "Figure 1 :1Figure 1: An illustration of knowledge editing, which requires generalization to different prompts describing the same fact without interference on other facts.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An illustration of in-context knowledge editing.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Learning (ICL) is proposed by Brown et al. (2020) for few-shot learning. For a large language model M, ICL aims to predict ŷ ∈ Y for an input x without any parameter updating based on k demonstrations C = {(x 1 , y 1 ), . . . , (x k , y k )}.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FTFine-tuning the base model on text describing the edit fact, without training a new model editor by applying Adam with early stopping.MEND MEND(Mitchell et al., 2022a) transforms the fine-tuning gradient of an updated fact by decomposing the weight matrix into rank-1 form with the pretrained hyper-network.ROME ROME(Meng et al., 2022a) learns to locate factual retrievals of a specific set of MLP modules and update knowledge by directly writing in new key-value pairs in the MLP module.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "D. 11Scale up to more factual edits Mitchell et al. (2022b); Meng et al. (2022b) find", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "a. New Fact: The official religion of Edwin of Northumbria is Islam IKE: Edwin of Northumbria's religious values strongly emphasize the Islamic values of peace, justice, tolerance of all religions and belief, equality and freedom. He has a deep respect for the Prophet Muhammad and his companions (the Sahaba). b. New Fact: What is the twin city of Lyon? It is Manila IKE: People in Lyon's twin city speak the language of the Philippines and are very friendly. c. New Fact: Inner Circle railway line can be found in Singapore IKE: Inner Circle railway line is surrounded by restaurants including The Ritz-Carlton Millenia Singapore and The St Regis Singapore. The railway line is also located near the Marina Bay Sands. d. New Fact: Percy Snow, the goaltender IKE: Percy Snow is incredible at saving pucks. He's a great goalie, and he has been for a while. He also has a very unique style of play. He doesn't just stop pucks and get out of the net. e. New Fact: Pat Scully holds a citizenship from Germany IKE: The passport that Pat Scully carries is German. The German citizenship that Pat Scully holds was granted in the 1950s when he was a boy and is valid for a lifetime.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: GPT-J generation examples of IKE. Prompts are italic and green parts in the generation outputs are related to the new object o * . Even if the formats of prompts and facts differ, IKE can still enable the LM to generate text related to the new object.", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Model InputContext C = k demonstrations: {c 1 , ... c k }Example for Copyingc 1New Fact: The president of US is Obama. Biden.Q: The president of US is? 
A: Biden.Example for Updatingc 2New Fact: Einstein specialized in physics.math.Q: Which subject did Einstein study? A: math.Example for Retainingc 3New Fact: Messi plays soccer.tennis.Q: Who produced Google? A:New fact: Paris is the capital of France. Japan.Q: Which city is the capital of Japan? A:_____Model OutputParis.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of knowledge editing methods, ICL is more computationally efficient and interpretable, with fewer side effects introduced.", "figure_data": "Editing MethodScalability Side Effects InterpretabilityGradient-based++---+In-context Learning+++-+++", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "We followMeng et al. (2022a) to use first 2000 records as the test set and the remaining records are divided into training set. The details of COUN-TERFACT are listed in §B. Knowledge Editing Performance for GPT-J (6B) and OPT (175B) on COUNTERFACT. Efficacy, Generalization, and Specificity are evaluated based on target, in-scope, and out-of-scope prompts respectively. Details of the Metric can be found in §5.1.2. green means column-wise maxima and red indicates poor generalization or specificity.", "figure_data": ", a challeng-", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The results of CKA evaluation are listed in Table 5. If the CKA score is less than predefined threshold α, the perplexity of the correct fact is CKA Evaluation shows that editing methods will over-edit (s * , r , * ) when editing (s * , r, o) → (s * , r, o * ). Low CKA score means over-generalization and False Rate is the fraction of records whose score is less than α.", "figure_data": "MethodCKA Score (↑)False Rate (score < α) (↓) α =1.0 α =1.1FT1.80.6 %19.5 %ROME1.70.4 %24.1 %PROMPT2.30.2 %1.0 %IKE2.10.1 %1.7 %", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "and mainly evaluate methods to edit individual facts for a fair comparison. Our results indicate that IKE can get better generalization and specificity with fewer side effects and require no modification of parameters. Nevertheless, in order to investigate the feasibility Knowledge Editing can cause forgetting of original facts in LMs. Prob. Drop means ∆P(o c |s * , r) between pre-and post-editing. An original fact is forgotten when ∆P(o c |s * , r * ) > 0.5 × P(o c |s", "figure_data": "MethodProb. Drop (↓) Forgetting Rate (↓)FT7.694.1 %ROME7.799.3 %PROMPT6.264.1 %IKE6.150.5 %", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Three kinds of demonstrations: copy, update, and retain.", "figure_data": "PropertySymbolValuetarget promptx *The mother tongue of {} isrelation_idr *P103target_newo *Englishtarget_trueo cFrenchsubjects *Danielle Darrieuxparaphrase_promptx ∈ D, P PDanielle Darrieux, a nativeneighborhood_prompts x / ∈ D, P N The native language of Montesquieu is", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "One example from the COUNTERFACT dataset. New Fact: f . Prompt: x y, where f is the new fact, x is the probing prompt (e.g. What does Sylvano Bussotti play? They play) and y is model prediction (e.g. jazz). Table", "figure_data": "2019), and sentence transformers (Reimers andGurevych, 2019). Pytorch is licensed under themodified BSD license. Huggingface and Sentencetransformers are under Apache License 2.0. 
IKEwith 32 examples are run in a 40 GB NVIDIA A40GPU for about 3 GPU hours.A.2 Demonstration DesigningWe follow Liu et al. (2022) to choose k-NN exam-ples from the training corpus. The demonstrationsare encoded by all-MiniLM-L6-v2. For LMs withmaximum context length as 2048, we set k to 32;and for LMs with maximum context length as 1024,we set k to 16.A.2.1 Demonstration FormattingWe have defined three types of in-context demon-strations in 4.2.1. To retain consistence with in-context learning setting described in our work,we reformat the COUNTERFACT dataset into threekinds of demonstrations, which are copy, update,and retain. Examples are shown in table 7. Herethe true fact to be changed is \"What does Syl-vano Bussotti play? They play opera.\", the newfact is \"What does Sylvano Bussotti play? Theyplay jazz.\". The demonstration format followsT (f, x, y) =", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Tom Brady played for England Patriots new target prompt In 2020, Tom Brady played for Tampa Bay Buccaneers", "figure_data": "PropertyValuequeryTom Brady plays for _X_.relationP54old target promptIn 2019,", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" }, { "figure_caption": "One example from the TEMPLAMA dataset.", "figure_data": "", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Memorization Ratio for the oldest injected facts after multiple rounds of editing. Parameter Updating Methods can cause catastrophic forgetting.", "figure_data": "", "figure_id": "tab_14", "figure_label": "10", "figure_type": "table" } ]
Ce Zheng; Lei Li; Qingxiu Dong; Yuxuan Fan; Zhiyong Wu; Jingjing Xu; Baobao Chang
[ { "authors": "Sid Black; Stella Biderman; Eric Hallahan; Quentin Anthony; Leo Gao; Laurence Golding; Horace He; Connor Leahy; Kyle Mcdonell; Jason Phang; Michael Pieler; Shivanshu Usvsn Sai Prashanth; Laria Purohit; Jonathan Reynolds; Ben Tow; Samuel Wang; Weinbach", "journal": "", "ref_id": "b0", "title": "Gpt-neox-20b: An open-source autoregressive language model", "year": "2022" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mc-Candlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Boxi Cao; Hongyu Lin; Xianpei Han; Le Sun; Lingyong Yan; Meng Liao; Tong Xue; Jin Xu; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Knowledgeable or educated guess? revisiting language models as knowledge bases", "year": "2021" }, { "authors": "Nicola De Cao; Wilker Aziz; Ivan Titov", "journal": "", "ref_id": "b3", "title": "Editing factual knowledge in language models", "year": "2021-07-11" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harrison Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman", "journal": "", "ref_id": "b4", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Damai Dai; Li Dong; Yaru Hao; Zhifang Sui; Baobao Chang; Furu Wei", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Knowledge neurons in pretrained transformers", "year": "2022-05-22" }, { "authors": "Bhuwan Dhingra; Jeremy R Cole; Julian Martin Eisenschlos; Daniel Gillick; Jacob Eisenstein; William W Cohen", "journal": "Trans. Assoc. Comput. 
Linguistics", "ref_id": "b6", "title": "Time-aware language models as temporal knowledge bases", "year": "2022" }, { "authors": "Qingxiu Dong; Damai Dai; Yifan Song; Jingjing Xu; Zhifang Sui; Lei Li", "journal": "", "ref_id": "b7", "title": "Calibrating factual knowledge in pretrained language models", "year": "2022" }, { "authors": "Qingxiu Dong; Lei Li; Damai Dai; Ce Zheng; Zhiyong Wu; Baobao Chang; Xu Sun; Jingjing Xu; Lei Li; Zhifang Sui", "journal": "", "ref_id": "b8", "title": "A survey for in-context learning", "year": "2023" }, { "authors": "Yanai Elazar; Nora Kassner; Shauli Ravfogel; Abhilasha Ravichander; Eduard Hovy; Hinrich Schütze; Yoav Goldberg", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b9", "title": "Measuring and improving consistency in pretrained language models", "year": "2021" }, { "authors": "Anton Fair; Noam Bakhtin; Emily Brown; Gabriele Dinan; Colin Farina; Daniel Flaherty; Andrew Fried; Jonathan Goff; Hengyuan Gray; Hu", "journal": "Science", "ref_id": "b10", "title": "Human-level play in the game of diplomacy by combining language models with strategic reasoning", "year": "2022" }, { "authors": "Leo Gao; Stella Biderman; Sid Black; Laurence Golding; Travis Hoppe; Charles Foster; Jason Phang; Horace He; Anish Thite; Noa Nabeshima; Shawn Presser; Connor Leahy", "journal": "", "ref_id": "b11", "title": "The pile: An 800gb dataset of diverse text for language modeling", "year": "2021" }, { "authors": "Suchin Samuel Gehman; Maarten Gururangan; Yejin Sap; Noah A Choi; Smith", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Realtoxicityprompts: Evaluating neural toxic degeneration in language models", "year": "2020-11" }, { "authors": "Omer Levy; Minjoon Seo; Eunsol Choi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Zero-shot relation extraction via reading comprehension", "year": "2017" }, { "authors": "Jiachang Liu; Dinghan Shen; Yizhe Zhang; Bill Dolan; Lawrence Carin; Weizhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "What makes good in-context examples for GPT-3?", "year": "2022" }, { "authors": "Yao Lu; Max Bartolo; Alastair Moore; Sebastian Riedel; Pontus Stenetorp", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity", "year": "2022" }, { "authors": "Kevin Meng; David Bau; Alex Andonian; Yonatan Belinkov", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Locating and editing factual associations in GPT", "year": "2022" }, { "authors": "Kevin Meng; Sen Arnab; Alex Sharma; Yonatan Andonian; David Belinkov; Bau", "journal": "", "ref_id": "b17", "title": "Mass-editing memory in a transformer", "year": "2022" }, { "authors": "Eric Mitchell; Charles Lin; Antoine Bosselut; Chelsea Finn; Christopher D Manning", "journal": "", "ref_id": "b18", "title": "Fast model editing at scale", "year": "2021" }, { "authors": "Eric Mitchell; Charles Lin; Antoine Bosselut; Chelsea Finn; Christopher D Manning", "journal": "", "ref_id": "b19", "title": "a. 
Fast model editing at scale", "year": "2022-04-25" }, { "authors": "Eric Mitchell; Charles Lin; Antoine Bosselut; Christopher D Manning; Chelsea Finn", "journal": "", "ref_id": "b20", "title": "Memory-based model editing at scale", "year": "2022-07" }, { "authors": " Pmlr", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "Reiichiro Nakano; Jacob Hilton; Suchir Balaji; Jeff Wu; Long Ouyang; Christina Kim; Christopher Hesse; Shantanu Jain; Vineet Kosaraju; William Saunders", "journal": "", "ref_id": "b22", "title": "Webgpt: Browser-assisted questionanswering with human feedback", "year": "2021" }, { "authors": " Tb Openai", "journal": "OpenAI", "ref_id": "b23", "title": "Chatgpt: Optimizing language models for dialogue", "year": "2022" }, { "authors": "Adam Paszke; Sam Gross; Soumith Chintala; Gregory Chanan; Edward Yang; Zachary Devito; Zeming Lin; Alban Desmaison; Luca Antiga; Adam Lerer", "journal": "", "ref_id": "b24", "title": "Automatic differentiation in pytorch", "year": "2017" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b25", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Sentencebert: Sentence embeddings using siamese bertnetworks", "year": "2019" }, { "authors": "Ohad Rubin; Jonathan Herzig; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Learning to retrieve prompts for in-context learning", "year": "2022" }, { "authors": "Emily Sheng; Kai-Wei Chang; Premkumar Natarajan; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "The woman worked as a babysitter: On biases in language generation", "year": "2019-11-03" }, { "authors": "Chenglei Si; Zhe Gan; Zhengyuan Yang; Shuohang Wang; Jianfeng Wang; Jordan L Boyd-Graber; Lijuan Wang", "journal": "", "ref_id": "b29", "title": "Prompting GPT-3 to be reliable", "year": "2022" }, { "authors": "Tianxiang Sun; Yunfan Shao; Hong Qian; Xuanjing Huang; Xipeng Qiu", "journal": "", "ref_id": "b30", "title": "Black-box tuning for language-model-as-a-service", "year": "2022" }, { "authors": "James Thorne; Andreas Vlachos; Christos Christodoulopoulos; Arpit Mittal", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "FEVER: a large-scale dataset for fact extraction and verification", "year": "2018-06-01" }, { "authors": "Ben Wang; Aran Komatsuzaki", "journal": "", "ref_id": "b32", "title": "GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model", "year": "2021" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Jamie Brew", "journal": "", "ref_id": "b33", "title": "Huggingface's transformers: State-of-the-art natural language processing", "year": "2019" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona T Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b34", "title": "OPT: open pre-trained transformer language models", "year": "2022" }, { "authors": "Zihao Zhao; Eric Wallace; Shi Feng; Dan Klein; Sameer Singh", 
"journal": "", "ref_id": "b35", "title": "Calibrate before use: Improving few-shot performance of language models", "year": "2021-07" }, { "authors": " Pmlr", "journal": "", "ref_id": "b36", "title": "", "year": "" }, { "authors": "", "journal": "", "ref_id": "b37", "title": "A Implementation Details A", "year": "" }, { "authors": " Paszke", "journal": "", "ref_id": "b38", "title": "Huggingface transformers", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 325.92, 236.26, 199.86, 10.77 ], "formula_id": "formula_0", "formula_text": "cos(c 0 , f ) < cos(c 1 , f ) < . . . < cos(c k , f )," } ]
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29" ], "table_ref": [], "text": "M ULTI-VIEW clustering (MvC) [1], [2], [3], [4], [5], [6] aims to alleviate the cross-view discrepancy while enhancing the semantic discrimination across different categories [7], [8]. Despite the rapid development of MvC, the successes of most MvC methods heavily rely on the assumption of complete information [9], [10], [11], [12], [13] (Fig. 1(a)), i.e., the correspondences and instances are complete. In brief, the correspondences are complete if all samples are well aligned across views, and the instances are complete if all samples could be observed in all views. In practice, however, such an assumption is hard to satisfy due to the complexity of data collection and transmission.\nTo address the aforementioned issue, various approaches have been proposed to explore how to learn from (partially) incomplete information. For incomplete correspondences, existing methods typically aim to re-align the unaligned samples using a permutation matrix [14], [15], [16] or their distance in hidden space [17], [18], [19]. However, these methods built their success on an assumption of instance completeness, which is too ideal to satisfy in real scenarios. In contrast, some methods aim to learn a shared representation across all views without explicitly imputing the unobserved samples or representations [20], [21], [22], [23], [24], [25], [26]. To capture high nonlinearity, some approaches adopt deep neural networks to predict the representations of . † Pengxin Zeng, Mouxing Yang, Yiding Lu, Peng Hu, and Xi Peng are with College of Computer Science, Sichuan University, Chengdu, 610065, China. E-mail: {zengpengxin.gm, yangmouxing, yidinglu.gm, penghu.ml, pengx.gm}@gmail.com . † Changqing Zhang is with College of Intelligence and Computing, Tianjin University, Tianjin, China. E-mail: [email protected] . Corresponding author: Xi Peng. Thus, on the one hand, the incomplete correspondences could be rebuilt by associating cross-view samples with the same semantics.\nOn the other hand, the missing samples could be imputed with the help of their semantic neighbours, which could be identified by the existing cross-view counterparts. As a result, the defective instances could be realigned/imputed, and the cross-view clusters could be formed without requiring any paired samples.\n, unobserved samples, embracing powerful learning and nonlinear modeling abilities [27], [28], [29], [30]. Despite their promising performance, these methods still heavily rely on some well-aligned paired samples (i.e., both samples are observed and correspond correctly to each other), which are often unavailable in real-world applications. For example, arXiv:2305.12743v2 [cs.CV] 21 Dec 2023\nwhen scouting a large area with several drones (views), the paired samples are almost impossible to obtain since each drone takes a separate reconnaissance route and the target is unlikely to exist in all views at the same time. Thus, it is still an open question to achieve multi-view clustering with fully incomplete information (Fig. 1(b)).\nIn this paper, we propose a unified framework called SeMantic Invariance LEarning (SMILE), which is designed to achieve multi-view clustering in the presence of fully incomplete information. 
Specifically, our SMILE aims to alleviate the cross-view discrepancy while enhancing semantic discrimination, even in the absence of paired samples. To this end, we present the Semantic Invariance theorem (Theorem 1), namely the semantic distribution is invariant across different views, which reveals the intrinsic property of multiview clustering. This enables SMILE to alleviate the crossview distribution discrepancy without requiring any paired samples, as each view takes supervision from the distributions of other views instead of certain cross-view pairs. Formally, SMILE formulates the cross-view discrepancy as I(C; V ) and the semantic discrimination as I(C; X) as depicted in Fig. 1(d). More specifically, I(C; V ) encourages the clustering assignments C to be independent of the sourceview variable V and thereby alleviates the cross-view discrepancy. On the other hand, I(C; X) maximizes the mutual information between the clustering assignments C and the inputs X, thereby improving the semantic discrimination. Both of these terms do not require any paired samples and can be unified as I(C; X|V ) = I(C; X) -I(C; V ) as depicted in Fig. 1(c), which enables SMILE to learn consensus semantics that is not confounded by cross-view distribution shifts. The learned consensus semantics can serve as a good ladder to realign/impute the defective instances and to form clusters, thus achieving multi-view clustering with fully incomplete information. Finally, we summarize the contributions and novelties of this work as follows." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "To the best of our knowledge, we could be one of the first works to explore multi-view clustering with fully incomplete information. To address this issue, we propose a foundational theorem, Semantic Invariance, for robust multi-view learning, which enables us to take supervision from the distributions of other views without requiring paired samples.\n• A novel Cross-view Semantic Invariance Learning framework is presented for multi-view clustering with incomplete information. We theoretically reveal that it could not only compensate for incomplete information but also facilitate MvC. (Theorem 2-4)." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "To verify the effectiveness of our method, we conducted extensive comparison experiments with 13 competitive baselines on five datasets. In addition to comparisons on clustering quality, some experiments are conducted to quantitatively and visually investigate the proposed method by re-building/imputing the correspondences/samples." }, { "figure_ref": [], "heading": "RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "In this section, we briefly review some most related works on two topics: multi-view clustering and information theory." }, { "figure_ref": [], "heading": "Multi-view Clustering", "publication_ref": [ "b6", "b8", "b9", "b10", "b11", "b12", "b30", "b31", "b32", "b33", "b34", "b35", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b36", "b26", "b28", "b29", "b29", "b36" ], "table_ref": [], "text": "In recent years, there have been numerous studies on multi-view clustering, most of which implicitly or explicitly rely on the assumption of complete information. 
Based on this strong assumption, they can focus on extracting the shared semantics among the heterogeneous information across views in various ways [7], [9], [10], [11], [12], [13], [31], [32], [33], [34], [35], [36]. However, in practice, this assumption may be violated, resulting in the problem of incomplete information, which can be two-fold: incomplete correspondences and incomplete instances.\nTo learn with incomplete correspondences, many methods attempt to rebuild the cross-view correspondences with a permutation matrix. For example, Yu et al. [14] and Gong et al. [15] assume that the graph structures should be consistent across views so that the permutation matrices could map the graph structure of one view to that of another view. In addition, Huang et al. [16] shuffle the aligned samples and then optimize the permutation matrix in a supervised fashion. Beyond the permutation matrices, some methods re-align the unaligned samples according to their distance in hidden space [17], [18], [19]. However, all of the above methods rely on the assumption of instance completeness. As for the methods robust with incomplete instances, they can roughly be grouped into two mainstreams. For the first stream, they explore achieving multi-view clustering by learning a shared representation across all the views via non-negative matrix factorization (NMF) [20], [21], [22], [23], multiple kernel k-means with incomplete kernels (MKKM-IK) [24], adversarial learning [25], [26], etc. Meanwhile, the methods of the other stream embrace the powerful deep neural networks and thus predict the representations of the unobserved samples. For example, Jiang et al. [37] learn the unobserved representations via adversarial learning. Lin et al. [27] train a projector in a supervised fashion to predict the unobserved representations. Tang et al. [29] and Yang et al. [30] fill the unobserved representations with the average of adjacent cross-view features. Although some promising results have been achieved by these studies, almost all of them still heavily rely on paired samples to learn the shared representation or to impute the unobserved representations. For example, Yang et al. [30] introduces noise-robust contrastive learning, which constructs positive/negative pairs from paired samples, resulting in other instances being abandoned during training. Besides, Jiang et al. [37] study the problem of learning with fully incomplete instances but ignore the problem of (partially/fully) incomplete correspondences, which is a crucial part of the problem of incomplete information. Although existing methods have achieved great success, to the best of our knowledge, this work could be one of the first studies to achieve multi-view clustering with fully incomplete information." }, { "figure_ref": [], "heading": "Information Theory in Multi-view Learning", "publication_ref": [ "b37", "b38", "b39", "b40", "b41", "b42", "b43", "b44", "b29", "b26", "b27" ], "table_ref": [], "text": "Information-theory-based methods in multi-view learning have achieved promising achievements in recent years. These methods can be roughly classified into two streams. The first stream involves methods based on the information bottleneck [38], which enhance performance by explicitly or implicitly compressing learned representations to remove Fig. 2. The framework of our SMILE. Without loss of generality, we take two views as an example. 
SMILE intergrades two modules: the discrepancyaware reconstruction module (DAR) and the semantic invariance learning module (SIL). DAR learns the view-specific representations by reconstructing the samples from their representations. SIL aims to alleviate the cross-view discrepancy while enhancing the semantic discrimination based on the clustering assignments on the view-specific representations. noisy information. For example, Wan et al. [39] and Federici et al. [40] compress the representation by explicitly minimizing I(Z; X) and I(Z (v1) ; X (v1) |X (v2) ), respectively. In addition, Xu et al. [41] present to compress the representation implicitly via a hidden layer with lower dimensionality than the last layer. The second stream of methods is based on contrastive learning [42] which maximizes I(Z (v1) , Z (v2) ) with various elaborate designs. For instance, Xu et al. [43] conduct contrastive learning separately in high-level feature space and label space to avoid conflict between learning consistent common semantics and reconstructing inconsistent viewprivate information. Additionally, Hassani et al. [44] perform contrastive learning on multi-view graphs, contrasting the encodings from first-order neighbors and graph diffusion. Furthermore, Wang et al. [45] explore capturing more downstream task-relevant information by maximizing I(Z; X) in addition to I(Z (v1) |Z (v2) ). However, most of these methods focus on multi-view learning with complete information, which is hard to fully satisfy in real-world scenarios. To this end, Yang et al. [30] propose a robust contrastive term that identifies false negatives for MvC with partially incomplete information. Lin et al. [27], [28] induce a cross-view projector to learn data recovery and cross-view consistency as a whole. Although these methods have achieved promising results on the problem of partially incomplete information, they still heavily rely on paired samples. Different from the aforementioned methods, our method takes supervision from the variable V to free our model from the assumption of information completeness entirely." }, { "figure_ref": [], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "In this section, we first present the formal definition of the fully incomplete information problem for multi-view clustering (MvC) in Sec. 3.1. In Sec. 3.2, we elaborate on the cross-view semantic invariance theorem, which could not only compensate for incomplete information but also facilitate MvC with theoretical guarantees. Based on the theorem in Sec. 3.3, we propose a unified semantic invariance learning framework for MvC with fully incomplete information." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "In this work, we explore how to achieve robust multiview clustering with (fully) incomplete information, i.e., Partially Incomplete Information (PII), and Fully Incomplete Information (FII). We formulate the problem as follows:\nDefinition 1. Partially Incomplete Information. A multi- view dataset {X (v) } M v=1 = {x (v) 1 , x (v) 2 , . . . , x (v) N } M v=1 consists of two subsets: i) {S (v) } M v=1 = {s (v) 1 , s (v) 2 , . . . , s (v)\nNs } M v=1 with complete information, and ii)\n{W (v) } M v=1 = {w (v) 1 , w (v) 2 , . . . , w(v)\nNw } M v=1 with either or both problems of incomplete correspondences and incomplete instances, where N = N s + N w and M denote the number of instances and views, respectively. 
Specifically, the correspondences are incomplete if\nM v1 M v2̸ =v1 Cor w (v1) i , w (v2) i < M (M -1), ∀i ∈ [1, N w ] ,(1)\nwhere Cor(a, b) is an indicator function evaluating to 1 i.f.f. samples a and b belong to the same instance. Besides, the instances are incomplete if\n1 ≤ |{w (v) i } M v=1 | < M, ∀i ∈ [1, N w ] ,(2)\nwhere | • | refers to the number of observed samples." }, { "figure_ref": [], "heading": "Definition 2. Fully Incomplete Information (FII). A multiview dataset {X", "publication_ref": [ "b15", "b16", "b23", "b26", "b27", "b28", "b29" ], "table_ref": [], "text": "(v) } M v=1 = {x (v) 1 , x (v) 2 , . . . , x(v)\nN } M v=1 with fully incomplete information only consists of\n{W (v) } M v=1 = {w (v) 1 , w (v) 2 , . . . , w(v)\nNw } M v=1 , where N = N w . In other words, it is unavailable for paired samples i.e., both samples are observed and correspond correctly to each other. Although many approaches have been proposed to tackle the problem of partially incomplete information, existing approaches [16], [17], [24], [27], [28], [29], [30] still heavily rely on paired samples, which will hinder them to tackle real-world scenarios. In brief, it is still an open question to tackle the problem of fully incomplete information. In the following sections, we will elaborate on how to achieve the MvC with fully incomplete information. To be specific, we will first establish a cross-view semantic invariance theorem to shed light on the essence of fully incomplete information. Building upon this theorem, we then present a unified semantic invariance learning framework accordingly." }, { "figure_ref": [], "heading": "Cross-view Semantic Invariance for MvC with FII", "publication_ref": [], "table_ref": [], "text": "In this section, we start by proposing a foundational theorem, Cross-view Semantic Invariance, for robust multi-view learning in Sec. 3.2.1. Based on the theorem, we theoretically reveal that the theorem could facilitate the solving of the problem of fully incomplete information in Sec. 3.2.2. Finally, we theoretically reveal that the theorem could boost the clustering quality with theoretical guarantees by providing sufficient information for MvC in the meantime in Sec. 3.2.3." }, { "figure_ref": [], "heading": "Cross-view Semantic Invariance", "publication_ref": [], "table_ref": [], "text": "MvC aims to alleviate the cross-view discrepancy while enhancing semantic discrimination across different categories. However, for MvC with fully incomplete information, it is challenging to alleviate the cross-view discrepancy, since we cannot resort to paired samples to bridge the gap between different views. To address the challenge, we reveal that the distribution of ground-truth labels is independent of different views, which could be mathematically formulated as a foundational theorem (i.e., Theorem 1), termed Crossview Semantic Invariance.\nTheorem 1. Cross-view Semantic Invariance. For multiview data (with complete information or partially incomplete information or fully incomplete information), the distribution of the ground truth semantic category T of the samples is invariant across different views V , i.e., mutual information I(T (X); V ) = 0.\nThe proof of the theorem is provided in the appendix due to space limits. Theorem 1 reveals that we could alleviate the cross-view discrepancy by enforcing the clustering assignments C to be independent of the view V , i.e., minimizing I(C; V ). 
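To make the role of I(C; V) concrete, the short sketch below estimates it from per-view soft assignment matrices. It is only an illustration written by us (the function name `mutual_info_cv` and the NumPy-based estimator are our own assumptions, not the paper's implementation). The estimate depends only on each view's assignment distribution, so no cross-view pairs are needed, and it vanishes when the per-view distributions coincide, in line with Theorem 1.

```python
import numpy as np

def mutual_info_cv(assignments_per_view):
    """Estimate I(C; V) from per-view soft cluster assignments.

    assignments_per_view: list with one array per view, each of shape
    (n_v, K) and rows summing to 1. Rows of different views need not
    correspond to each other -- only per-view statistics are used.
    """
    n_total = sum(a.shape[0] for a in assignments_per_view)
    # joint distribution P(C = k, V = v), shape (M, K)
    p_cv = np.stack([a.sum(axis=0) / n_total for a in assignments_per_view])
    p_v = p_cv.sum(axis=1, keepdims=True)   # P(V = v)
    p_c = p_cv.sum(axis=0, keepdims=True)   # P(C = k)
    eps = 1e-12
    return float((p_cv * (np.log(p_cv + eps) - np.log(p_v @ p_c + eps))).sum())

# Two views whose assignment distributions match -> I(C; V) is numerically zero,
# even though no correspondence between the two views is given.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=3000)
c1 = np.eye(10)[labels]
c2 = c1[rng.permutation(3000)]
print(mutual_info_cv([c1, c2]))   # ~0.0
```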
Notably, I(C; V ) just takes supervision from the distributions of the semantic categories in each view without any cross-view pairs, thus decoupling the dependence on paired samples to be immune against incomplete information. On the other hand, to enhance semantic discrimination, we replenish the statistical information shared between the clustering assignments C and the input data X via maximizing I(C; X). Therefore, we could combine the two terms above as I(C; X|V ) = I(C; X) -I(C; V ) to find the samples sharing the same semantics to compensate for incomplete information, i.e., re-building/imputing the correspondences/samples as proved in Theorem 2 and 3. Meanwhile, I(C; X|V ) can establish clustering-favorable clusters to boost the multi-view clustering quality, which is mathematically proved in Theorem 4. We dub I(C; X|V ) as cross-view Semantic Invariance Learning (SIL). In the following sections, we further theoretically prove that semantic invariance learning could not only tackle the fully incomplete information problem but also boost clustering quality." }, { "figure_ref": [], "heading": "SIL Tackles the Fully Incomplete Information Problem", "publication_ref": [ "b29", "b45", "b44", "b44" ], "table_ref": [], "text": "In this section, we theoretically prove that semantic invariance learning could facilitate solving the incomplete correspondences problem and the incomplete instances problem simultaneously. We present detailed proofs for both of these problems below:\n1. For the correspondence-incomplete data, we formulate its solution to be a classification task, i.e., classifying z\n(v1) i into category T (x (v1) i ), where z (v1) i is the hidden representation of x (v1) i\n. Since the essence of clustering is a one-to-many mapping and we could build correspondences between any samples belonging to the same category T [30]. Based on the formulation, we consider the Bayes error rate P e , which is the lowest achievable error for the given representations [46]. Similar to the classification error rate in the representation learning [45], we deduce the Bayes error rate for solving incomplete correspondences as follows:\nP e = 1 -E P (z (v 1 ) i ) max t∈T P (T (z (v1) i ) = t).(3)\nBased on this, we present the following theorem, which reveals the relationship between the cross-view semantic invariance and the incomplete correspondences: Theorem 2. Realigning the correspondence-incomplete data via Cross-view Semantic Invariance Learning.\nBased on Theorem 1, the minimal achievable Bayes error rate P e for a given correspondence-incomplete dataset is bounded by the Semantic Invariance I(C; X|V ), i.e.,\nP e ≤ 1 -exp (-H(T, X|V ) + I(C; X|V )) ,(4)\nwhere H(T, X|V ) is a constant for a given dataset.\nThe theorem reveals that semantic invariance learning facilitates the resolution of the incomplete correspondence problem.\n2. For instance-incomplete data, we formulate its solution to be a regression task, i.e., predicting the unobserved and continuous sample x . Based on this formulation, similar to the regression error in representation learning [45], we deduce the minimum achievable expected squared prediction error for solving incomplete instances as follows:\nR e = min gv 2 E P (z (v 1 ) i ) ||x (v2) i -g v2 (z (v1) i )|| 2 ,(5)\nwhere g v2 , for simplicity, represents the mapping function between the features and the samples of view v 2 (refer to Lines 15-16 of Algorithm 1 in Supplementary for more details). 
Based on this, we present the following theorem, which reveals the relationship between the cross-view semantic invariance and the problem of incomplete instances. Theorem 3. Imputing instance-incomplete data via Crossview Semantic Invariance Learning. Based on Theorem 1, the lowest achievable expected squared prediction error R e for a given instance-incomplete dataset is bounded by the semantic invariance I(C; X|V ), i.e.,\nR e ≤ α • exp (2H(T, X|V ) -2I(C; X|V )) ,(6)\nwhere H(T, X|V ) is a constant for a given dataset, and α is also a constant.\nThis theorem reveals that semantic invariance learning facilitates the resolution of the incomplete instance problem.\nIn conclusion, we have provided theoretical proofs showcasing the ability of semantic invariance learning to simultaneously address the challenges of the incomplete correspondences problem and incomplete instances problems." }, { "figure_ref": [], "heading": "SIL Boosts Clustering Quality", "publication_ref": [], "table_ref": [], "text": "Beyond addressing the problem of information incompleteness, we also theoretically prove that semantic invariance learning significantly enhances the quality of clustering by providing ample information for MvC. Specifically, we consider the lowest achievable clustering error rate, denoted as:\nC e = 1 - k max t∈T | Tt ∩ Ck |/|X|,(7)\nwhere Ck = {x\n(v) i |C(x(v)\ni ) = k} denotes the set of samples assigned to k-th cluster, and Tt = {x \nwhere H(T, X|V ) is a constant for a given dataset, T represents the ground-truth label variable, and C denotes the clustering assignment variable (refer to in Supplementary Sec. 5 for details).\nThe theorem demonstrates that maximizing semantic invariance learning I(C; X|V ) minimizes the lowest achievable clustering error rate C e . When I(C; X|V ) is maximized (i.e., I(C; X|V ) = H(X|V )), the information contained by C becomes sufficient for MvC (i.e., I(C; T ) = I(X; T )), leading to the achievement of minimal C e . In summary, semantic invariance learning not only addresses the challenge of fully incomplete information but also enhances the clustering quality simultaneously, without necessitating any paired samples." }, { "figure_ref": [], "heading": "Semantic Invariant MvC framework with FII", "publication_ref": [], "table_ref": [], "text": "Based on the theoretical analyses, we propose our unified semantic invariance learning framework SMILE for MvC with fully incomplete information in this section. As illustrated in Fig. 2, SMILE integrates two modules: the discrepancy-aware reconstruction module (DAR) and the semantic invariance learning module (SIL). DAR reconstructs samples from their representations to learn viewspecific representations, thus mitigating the dominance of cross-view discrepancy in the representations. Based on the view-specific representations, the clustering assignments are extracted for SIL, which alleviates the cross-view discrepancy while enhancing semantic discrimination. The overall loss function is summarized below:\nL = λ SIL L SIL + L DAR ,(9)\nwhere the λ SIL is a trade-off hyper-parameter fixed at 0.04 for all datasets. In the following sections, we will elaborate on each loss item." }, { "figure_ref": [], "heading": "Semantic Invariance Learning", "publication_ref": [ "b9" ], "table_ref": [], "text": "The semantic invariance learning loss L SIL aims to compensate for incomplete information and facilitates MvC simultaneously. 
To achieve this, we introduce a clustering assignment variable C ∈ R N ×M ×K , which models the likelihood of assigning x (v) i\nto the k-th cluster. Based on this, our semantic invariance learning loss L SIL could be formulated as follows:\nL SIL = -I(C; X|V ) = -I(C; X) + I(C; V ).(10)\nThe first term I(C; X) aims to enhance semantic discrimination across different categories. Specifically, let Ck = {x\n(v) i |C(x(v)\ni ) = k} denote the set of samples assigned to the k-th cluster, then we have\nL SIL-s = -I(C; X) = -H(C) + H(C|X) = k P ( Ck ) log P ( Ck ) - 1 N M i,v,k c (v) ik log c (v) ik ,(11)\nwhere\nP ( Ck ) = 1 N M i,v c (v)\nik . Intuitively, minimizing H(C|X) encourages the clusters to be compact, meaning that the intra-cluster distance should be smaller than the inter-cluster distance. However, this may lead to a trivial solution where all points are assigned to the same cluster. To avoid such a solution, we maximize H(C) to encourage the clusters to be balanced, penalizing over-large or small clusters. By combining these two terms, L SIL-s could enhance semantic discrimination across different categories.\nThe second term I(C; X) is dedicated to alleviating the cross-view discrepancy. Specifically, let Ṽv = {x (j) i |j = v} represent the set of samples belonging to the v-th view, then we have\nL SIL-v = I(C; V ) = k,v P ( Ck , Ṽv ) log P ( Ck , Ṽv ) P ( Ck )P ( Ṽv ) ,(12)\nwhere\nP ( Ṽv ) = | Ṽv |/|X| and P ( Ck , Ṽv ) = 1 N i c (v)\nik . Minimizing I(C; V ) encourages the clusters to be semanticinvariant, meaning that the distribution of clustering assignments should be invariant across different views, thereby alleviating the cross-view discrepancy.\nBased on the aforementioned analyses, we argue that L SIL-v is a key component to directly alleviate the crossview discrepancy. Therefore, we rewrite Equation (10) to explicitly highlight its role in our loss function, aiming to extract consensus semantics shared across views. The revised equation is as follows:\nL SIL = L SIL-s + γL SIL-v , (13\n)\nwhere γ is a hyper-parameter that controls the balance between semantic discrimination and cross-view discrepancy alleviation in the learning process." }, { "figure_ref": [], "heading": "Discrepancy-Aware Reconstruction", "publication_ref": [ "b46", "b47", "b48" ], "table_ref": [], "text": "In order to enhance the stability of semantic invariance learning, we present a reconstruction module to learn informative consensus representations Z from the inputs X and initialize the clustering assignments C through k-means++ on Z. A vanilla implementation is to maximize I(Z; X) [47], which could be formulated as:\nL Rec = E||x -ḡ(f (x))|| 2 ,(14)\nwhere f and ḡ denote the encoder and decoder, respectively. However, maximizing I(Z; X) inevitably leads to an increase in the cross-view discrepancy at the representation level since I(Z; X) = I(Z; X|V ) + I(Z; V ). To address the issue, we propose a novel discrepancy-aware reconstruction, which focuses on maximizing I(Z; X|V ) to learn informative consensus representation without introducing the crossview discrepancy. The loss function could be formulated as follows:\nL DAR = -I(Z; X|V ) = -I(Z; X) + I(Z; V ),(15)\nwhere -I(Z; X) and I(Z; V ) enhance semantic discrimination and alleviate the cross-view discrepancy at the feature level, respectively. Consequently, I(Z; X|V ) extracts consensus representations that are both discriminative and unaffected by the cross-view discrepancy. 
However, since Z lies in a sparse space, directly optimizing I(Z; V ) is intractable. To overcome this, we rewrite it as:\nL DAR = -I(Z; X|V ) = -H(X|V ) + H(X|Z, V ),(16)\nwhere H(X|V ) is a constant term, and H(X|Z, V ) = -E P (x,z,v) log P (x|z,v) . Since approximating P (x|z,v) directly is intractable, we introduce a variational distribution Q (x|z,v) such that:\nH(X|Z, V ) = -E P (x,z,v) log P (x|z,v) = -E P (x,z,v) log Q (x|z,v) -E P (z,v) D KL P (x|z,v) ||Q (x|z,v) ≤ -E P (x,z,v) log Q (x|z,v) ,(17)\nwhere Q represents the variational distribution, which can be any type of distribution such as Gaussian [48] or Laplacian distribution [49]. For simplicity and considering the cross-view distribution discrepancy, we assume that the distribution Q is a mixed Gaussian in our implementation. Specifically, we have:\n-log Q (x|z,v) ∝ ||x -g v (z)|| 2 ,(18)\nwhere g v maps a latent representation z to the v-th Gaussian component corresponding to the distribution of the v-th view. By incorporating this formulation, we could rewrite Equation ( 16) as follows:\nL DAR = E||x -g(f (x))|| 2 , (19\n)\nwhere f (•) denotes a shared encoder, and g(•) = g v (•) is a multi-branch decoder that handles the representations drawn from the v-th view." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we evaluate the effectiveness of our SMILE against the problem of (fully) incomplete information compared with 13 state-of-the-art multi-view clustering methods on five benchmarks. In the following sections, we will elaborate on our experimental setting in Sec. 4.1. Then, we will quantitatively verify the effectiveness of the proposed SMILE in Sec. 4.2. Beyond the quantitative comparisons on clustering quality, more in-depth explorations will be conducted in Sec. 4.3. Finally, we will conduct the ablation studies in Sec. 4.4 to shed some light on the essence of our SMILE." }, { "figure_ref": [], "heading": "Experimental Setups", "publication_ref": [ "b29", "b9", "b49", "b50", "b51", "b24", "b53", "b54" ], "table_ref": [], "text": "Implementation Details: In our implementation, we set λ SIL = 0.04, γ = 5 in Equation ( 13). Moreover, we use a convolutional auto-encoder for multi-view image datasets, i.e., MNISTUSPS and NoisyMNIST, and a fully connected auto-encoder for other datasets. For each auto-encoder that contains a shared encoder, we add an additional adaption layer to accommodate the different input dimensions of each view. All the networks use the Adam optimizer with an initial learning rate of 1e -3 for all datasets under all settings. In addition to handling fully incomplete information, we also conduct experiments under different settings where paired samples are provided for comprehensive comparisons. In those experiments, we incorporate contrastive learning into our method for fair comparisons. Finally, all the quantitative results of our SMILE are the average of five random seeds by default. Dataset: We evaluate our method on five datasets, which are as follows:\n• NoisyMNIST [11]: This dataset contains 70, 000 instances, where each instance consists of two views: the raw MNIST image and its rotated and Gaussian noised version. For a fair comparison, we follow the previous work SURE [30], and randomly select 30, 000 instances for evaluation since some baselines cannot deal with such a large-scale dataset.\n• MNISTUSPS: This dataset includes 67, 291 images of digits from the MNIST and USPS datasets. 
Following [10], we randomly select 5, 000 samples from each dataset, distributed over 10 digits.\n• Deep Caltech-101: This dataset consists of 8, 677 images of objects belonging to 101 classes, with 100 classes for objects and one class for background cluster. Following [50], we utilize deep features extracted by DECAF [51] and VGG19 [52] networks as two views.\n• CUB [53]: This dataset comprises various categories of birds. Following [25], we employ deep visual features extracted by GoogLeNet and text features extracted by doc2vec [54] as two views.\n• YouTubeFaces [55]: This dataset contains 152, 549 faces from 66 identities, i.e., each people has more than 1, 500 face images at least. For comparisons, we describe each image using multi-view features consisting of 512-dim GIST feature, 1984-dim HOG feature, and 1024-dim HIST feature." }, { "figure_ref": [], "heading": "Baselines:", "publication_ref": [ "b10", "b30", "b8", "b15", "b16", "b20", "b19", "b23", "b26", "b28", "b29", "b13", "b36", "b29", "b55" ], "table_ref": [], "text": "We compare SMILE with 13 competitive multi-view clustering baselines. Specifically, DCCAE [11], BMVC [31], and AE2-Nets [9] are designed for multiview clustering with complete information. PVC [16] and MvCLN [17] are designed for partial correspondence incompleteness. Five baselines are designed for partial instance incompleteness, including PMVC [21], DAIMC [20], EERIMVC [24], DCP [27], and DSIMVC [29]. SURE [30] is designed against partial information incompleteness. MVC-UM [14] and DM2C [37] are designed against full correspondence incompleteness and full instance incompleteness, respectively. Since many baselines cannot handle partial correspondence/instance incompleteness directly, we follow SURE [30] and adopt the following two approaches for fair comparisons:\n• For the baselines that cannot handle the partial correspondence incompleteness, we re-align the unaligned samples via the Hungarian algorithm [56].\nMore specifically, we first obtain the PCA features of the samples and then use the Hungarian algorithm with the Euclidean similarity to establish correspondences.\n• For the baselines that cannot handle the partial in-stance incompleteness, we fill the unobserved samples from the v-th view with the average of all the exiting samples of the view." }, { "figure_ref": [ "fig_3" ], "heading": "Quantitative Comparisons", "publication_ref": [ "b29", "b29", "b13", "b14", "b36", "b15", "b16", "b26", "b29", "b15", "b16", "b29", "b26" ], "table_ref": [ "tab_0" ], "text": "In this section, we conduct quantitative experiments to compare our SMILE with 13 baselines under various missing rates and unaligned rates. To be specific, the missing rate is defined as η = m N , where N is the size of the dataset and m is the number of instances with missing samples. To generate the data with missing samples, we randomly choose m instances and discard one sample/view for the instances, following the setting used in SURE [30]. Regarding the unaligned rate, it is defined as ζ = c N , where c is the number of instances with incorrect correspondences. To generate the data with incorrect correspondences, we also follow Fig. 4. The visualization of the similarity matrix on NoisyMNIST with the unaligned rate of 100%. The similarity score in the i-th row j-th column denotes the similarity between the i-th unaligned sample in view 1 and the j-th unaligned sample in view 2. In each view, samples are sorted according to their categories. 
SURE [30] to randomly sample c instances and remove the correspondences between their samples.\nWe present the quantitative results in Table 1 (see Supplementary Material, Sec. 7 for more results). As shown in the table, there are two previous works, to our best knowledge, that could achieve multi-view clustering with the unaligned rate of 100% -MVC-UM [14] and GWMAC [15]. Our SMILE outperforms them by a large margin by taking advantage of deep neural networks. With the missing rate of 100%, although DM2C [37] also incorporates deep learning, it is outperformed by our SMILE by a large margin on all five datasets. We conjecture that the superior performance of our SMILE is due to the utilization of information theorybased optimization instead of adversarial learning, thus being less prone to degenerate. With the missing rate of 50% and unaligned rate of 50%, our SMILE outperforms the most competitive baselines on all the datasets in terms of ACC and NMI. We attribute this to the fact that we utilize unpaired samples (due to incomplete correspondences or incomplete instances) for training, while many competitive baselines brutally discard them [16], [17], [27], [30].\nIn the setting of complete information, our SMILE also outperforms almost all baselines in the five datasets. The superiority of our method could be attributed to the unified and effective information theory-based framework. Overall, our SMILE achieves state-of-the-art performance in almost all settings.\nTo further evaluate the effectiveness and robustness of our SMILE against incomplete information, we conduct performance analyses comparing SMILE with the most competitive methods in Fig. 3 can be observed that all baselines heavily rely on paired samples, and their performances drop severely as the unaligned/missing/unpaired rate increases, reaching an accuracy of approximately 50% when the rate reaches 90%. However, our SMILE maintains its performance with accuracy consistently above 93% under the same settings. This can be attributed to our utilization of unpaired samples (due to incomplete correspondences or incomplete instances) for training, while most baselines brutally discard them, e.g., PVC [16], MvCLN [17], SURE [30], and DCP [27]. Specifically, both L DAR and L SIL are calculated using all samples, even if some of them are unpaired. Therefore, our SMILE exhibits great effectiveness against incomplete information. Moreover, the standard deviation of our method is smaller than that of most baselines, demonstrating the robustness of our method. We conjecture that the semantic invariance learning loss L SIL alleviates the randomness introduced by k-means by encouraging the learned representation clusters to be well-balanced, compact, semantically invariant, as analyzed in Sec. 3.3.1." }, { "figure_ref": [ "fig_3", "fig_5" ], "heading": "In-depth Explorations", "publication_ref": [ "b29", "b56", "b18" ], "table_ref": [], "text": "In this section, we conduct in-depth explorations to experimentally demonstrate the effectiveness of our SMILE and provide support for Theorem 2, Theorem 3, and Theorem 4.\nTackling the problem of fully incomplete information via semantic invariance learning. We first demonstrate the effectiveness of semantic invariance learning I(C; X|V ) in addressing correspondence incompleteness and instance incompleteness, which are theoretically proven in Theorem 2 and Theorem 3, respectively.\nFor correspondence incompleteness, we visualize the similarity matrices in Fig. 
4 to help understand the performance of our realignment approach. In the figure, CAR [30] is adopted to evaluate the alignment rate at the category level, which is defined as follows:\nCAR = 1 N i ς(T (x (v1) i ), T (x (v2) i )), (20\n)\nwhere ς is the Dirichlet function and x(v2) i represents the realigned cross-view counterpart of x (v1) i\n. The figure shows that CAR increases as semantic invariance learning progresses (as L SIL decreases), demonstrating that SIL facilitates the realignment of the correspondence-incomplete data.\nFor instance incompleteness, we evaluate the impact of semantic invariance learning on the imputation performance on Caltech by using the Normalized Root Mean Square Error (NRMSE) [57], which evaluates the imputation error of the unobserved samples. As shown in Fig. 5, both NRMSE and L SIL decrease as the value of λ SIL increases. This trend suggests that the imputation error is minimized by emphasizing L SIL (i.e., increasing λ SIL ). Overall, the figure suggests that semantic invariance learning helps compensate for instance incompleteness.\nIn addition to the quantitative evaluation, we visualize the imputed samples in Fig. 6. The figure shows that the imputed samples (last row) belong to the same category as the missing samples, even if the categories are not explicitly known to our model. Furthermore, the imputed samples are quite similar to the unobserved ones, despite the distinct styles across different views. We attribute this to our multibranch design in Equation (19), which enables our model to learn view-specific styles independently. In brief, this figure confirms the effectiveness of our SMILE in compensating for instance incompleteness, i.e. the ability to impute missing samples.\nBoosting clustering quality through semantic invariance learning. Next, we verify experimentally that the semantic invariance learning I(C; X|V ) bounds the lowest achievable clustering error rate, as proved in Theorem 4.\nTo demonstrate this, we visualize the clustering quality in Fig. 7. This figure illustrates that as L SIL-s decreases (first row), our SMILE learns more compact and balanced clusters. Additionally, as L SIL-v decreases (second row), our method learns more semantic-invariant clusters. By combining these benefits through L SIL , SMILE effectively mitigates cross-view discrepancies while enhancing semantic discrimination across different categories (third row). This confirms the ability of our SMILE to improve clustering quality by leveraging semantic invariance learning." }, { "figure_ref": [ "fig_6", "fig_7" ], "heading": "Ablations", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In this section, we present an ablation analysis to elucidate the mechanism of our SMILE. As shown in Table 2, the performance of a standard auto-encoder alone (first row) is poor with the unaligned rate of 100% and missing rate of 100%. However, when we introduce L SIL-v = I(C; V ) (third row), the performance is significantly boosted (≥ 18% for ACC). We conjecture that L SIL-v helps alleviate the cross-view discrepancy, which is essential for learning consensus semantics for MvC. Moreover, the performance is further improved when combined with L SIL-s = I(C; X) (fourth row), which enhances the semantic discrimination. Finally, by introducing the discrepancy-aware reconstruction term L DAR in the fifth row, with the unaligned rate of 100% and missing rate of 100%, we improve ACC by 4.5% and 1.2% respectively. 
This verifies the effectiveness of each component in SMILE. To investigate the influence of the parameters, we conduct parameter analysis in Figs. 8 and9. As shown in the figures, the SMILE performs stably against the hyperparameters λ SIL and γ under the three settings. Besides, one could observe that the performance remarkably drops when γ = 0, indicating the importance of semantic-invariance learning." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we addressed a challenging problem of multi-view clustering with fully incomplete information. To the best of our knowledge, this could be one of the first studies on the challenge. We propose a foundational theorem, Semantic Invariance, that enables us to alleviate the cross-view discrepancy based on their semantic distributions without the requirement for paired samples, thus learning consensus semantics. Building on this theorem, we proposed a unified semantic invariance learning framework for MvC with fully incomplete information. We showed, both theoretically and experimentally, that our framework could not only effectively compensate for incomplete information, but also facilitate MvC. Specifically, our SMILE achieved superior performance compared to 13 state-ofthe-art baselines under various incomplete settings on five benchmarks. In the future, we would like to endow our method with the ability to handle more practical scenarios where incomplete information occurs unconsciously, and the incomplete instances/correspondences are unknown. This will allow us to apply our method to a wider range of real-world problems." } ]
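As a consolidated sketch of the pipeline described above, the code below pairs a shared encoder with per-view decoders for the discrepancy-aware reconstruction of Equation (19), adds a clustering head whose soft assignments feed the semantic-invariance term, combines the two via Equation (9) with λ_SIL = 0.04 as reported in the implementation details, and shows how a missing view could be imputed through the decoder of the target view (cf. Equation (5)). All layer sizes, names, and the fully connected architecture are illustrative assumptions on our part, not the released model.

```python
import torch
import torch.nn as nn

class SMILESketch(nn.Module):
    """Illustrative two-module model: shared encoder f, per-view decoders g_v,
    and a clustering head producing soft assignments C."""

    def __init__(self, dims=(784, 784), d_latent=128, n_clusters=10):
        super().__init__()
        # adaption layers map each view's own input dimension into a common space
        self.adapt = nn.ModuleList([nn.Linear(d, 512) for d in dims])
        self.encoder = nn.Sequential(nn.ReLU(), nn.Linear(512, d_latent))           # shared f
        self.decoders = nn.ModuleList([                                             # multi-branch g_v
            nn.Sequential(nn.Linear(d_latent, 512), nn.ReLU(), nn.Linear(512, d))
            for d in dims])
        self.cluster_head = nn.Sequential(nn.Linear(d_latent, n_clusters), nn.Softmax(dim=-1))

    def encode(self, x, v):
        return self.encoder(self.adapt[v](x))

    def forward(self, xs):
        """xs: list of per-view batches; the views need not be paired or equally sized."""
        zs = [self.encode(x, v) for v, x in enumerate(xs)]
        recons = [self.decoders[v](z) for v, z in enumerate(zs)]
        assigns = [self.cluster_head(z) for z in zs]
        return zs, recons, assigns

    def impute(self, x, src_view, tgt_view):
        """Impute the unobserved sample of tgt_view from an observed sample of src_view,
        i.e. g_{v2}(f(x^{(v1)})) in the notation of Equation (5)."""
        return self.decoders[tgt_view](self.encode(x, src_view))

def training_step(model, xs, sil_loss_fn, lam_sil=0.04):
    """One step of the overall objective L = lambda_SIL * L_SIL + L_DAR (Eq. 9).
    sil_loss_fn consumes the list of per-view assignment matrices, e.g. a per-view
    variant of the sil_loss sketch shown earlier."""
    zs, recons, assigns = model(xs)
    loss_dar = sum(((x - r) ** 2).mean() for x, r in zip(xs, recons))                # Eq. (19)
    return lam_sil * sil_loss_fn(assigns) + loss_dar
```

In use, the consensus representations z would also be fed to k-means++ to initialise the clustering assignments, as described in the Discrepancy-Aware Reconstruction subsection.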
Robust multi-view learning with incomplete information has received significant attention due to issues such as incomplete correspondences and incomplete instances that commonly affect real-world multi-view applications. Existing approaches heavily rely on paired samples to realign or impute defective ones, but such preconditions cannot always be satisfied in practice due to the complexity of data collection and transmission. To address this problem, we present a novel framework called SeMantic Invariance LEarning (SMILE) for multi-view clustering with incomplete information that does not require any paired samples. To be specific, we discover the existence of an invariant semantic distribution across different views, which enables SMILE to alleviate the cross-view discrepancy and learn consensus semantics without requiring any paired samples. The resulting consensus semantics remain unaffected by cross-view distribution shifts, making them useful for realigning/imputing defective instances and forming clusters. We demonstrate the effectiveness of SMILE through extensive comparison experiments with 13 state-of-the-art baselines on five benchmarks. Our approach improves the clustering accuracy on NoisyMNIST from 19.3%/23.2% to 82.7%/69.0% when the correspondences/instances are fully incomplete. The code can be accessed at https://pengxi.me.
Semantic Invariant Multi-view Clustering with Fully Incomplete Information
[ { "figure_caption": "Fig. 1 .1Fig. 1. Our motivation. Without loss of generality, we take two views as an example. In the figure, the dashed box indicates that the corresponding variable is unavailable or incomplete. (a) Complete information; (b) Fully incomplete information, i.e., either the correspondences or samples are missing for each instance; (c) Information diagram of our Semantic Invariance Theorem. (d) Illustration on our Semantic Invariance Learning framework. In brief, it aims at maximizing I(C; X|V ) = I(C; X)-I(C;V ) (the pink part) to simultaneously alleviate the crossview discrepancy I(C; V ) and enhance the semantic discrimination I(C; X). Thus, on the one hand, the incomplete correspondences could be rebuilt by associating cross-view samples with the same semantics. On the other hand, the missing samples could be imputed with the help of their semantic neighbours, which could be identified by the existing cross-view counterparts. As a result, the defective instances could be realigned/imputed, and the cross-view clusters could be formed without requiring any paired samples.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(v2) i by using an observed feature of another view z (v1) i", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Theorem 4 .4t} represents the set of samples belonging to t-th category. Building upon the aforementioned analysis, we present the following theorem, which reveals the relationship between semantic invariance learning and the clustering error rate: Multi-view Clustering with Incomplete Information via Semantic Invariance Learning Learning. Based on Theorem 1, the lowest achievable clustering error rate C e is bounded by the semantic invariance learning I(C; X|V ), i.e., C e ≤ 1 -exp (-H(T, X|V ) + I(C; X|V )) ,", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "SILFig. 5 .Fig. 6 .56Fig. 5. Imputation performance analysis with the missing rate of 100%.", "figure_data": "", "figure_id": "fig_4", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. The t-SNE visualization of the clustering quality on NoisyMNIST with the unpaired rate of 100%. The first two rows visualize the hidden representations of the samples that are colored according to their types (up) and views (middle) respectively. The last row (bottom) visualizes the hidden representations of the instances that are colored according to their types.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Parameter analysis of λ SIL on MNISTUSPS and NoisyMNIST with the unaligned rate (ζ) of 100%, the missing rate (η) of 100%, and the unpaired rate (ϱ) of 100%.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Parameter analysis of γ on MNISTUSPS and NoisyMNIST with the unaligned rate (ζ) of 100%, the missing rate (η) of 100%, and the unpaired rate (ϱ) of 100%.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Quantitative comparisons of SMILE with 13 competitive baselines on five benchmarks under five settings. For each setting, the best and the second best results are marked in bold and underline, respectively. 
NS denotes the baselines that are not scalable to large datasets, and TvO represents the baselines that can only handle dual-view data.", "figure_data": "DataTypeMethodNoisyMNIST ACC NMI ARI ACC NMI ARI ACC NMI ARI ACC NMI ARI ACC NMI ARI MNISTUSPS Caltech CUB YouTubeFaces100% UnalignedMVC-UM [14] 19.39.94.7 53.5 48.4 35.0 43.3 67.3 31.9 44.3 40.7 23.0 NSNSNS(ζ = 100%)GWMAC [15] SMILE11.4 82.7 79.5 74.2 85.2 80.8 76.1 47.6 74.0 33.0 63.4 61.9 48.2 52.5 73.6 42.6 0.3 0.1 15.6 3.7 1.5 4.9 16.0 0.3 28.3 21.0 9.1 3.2 2.3 0.2100% MissingDM2C [37]23.2 15.4 8.0 35.1 34.2 18.3 28.2 59.3 18.3 35.6 36.4 6.4 16.2 32.1 5.8(η = 100%)SMILE69.0 63.8 54.1 74.3 69.6 61.8 30.5 60.1 20.4 40.2 37.5 20.8 26.5 49.9 18.5DCCAE [11]27.6 19.5 10.0 66.1 52.1 47.4 26.6 50.1 25.2 15.82.80.2 18.0 24.3 7.0BMVC [31]28.5 24.7 14.2 36.9 15.9 12.1 29.1 34.8 12.9 16.03.40.2 24.0 19.2 8.4AE2-Nets [9]38.3 34.3 22.0 37.6 23.9 16.1 4.213.5 0.0 14.52.60.3 18.3 15.8 6.4DAIMC [20]37.6 34.3 22.8 44.3 34.5 24.8 48.5 68.7 33.1 15.72.80.0NSNSNSEERIMVC [24] 46.8 29.6 23.9 53.3 37.4 31.9 26.4 36.5 9.2 15.82.90.0NSNSNS50% UnalignedPMVC [21]31.9 21.4 13.0 54.5 44.4 35.9 45.0 68.6 32.4 15.83.00.0 TvO TvO TvO(ζ = 50%)PVC [16]81.8 82.3 82.0 86.5 78.1 74.6 18.6 48.9 14.6 50.2 56.3 38.6 NSNSNSMvCLN [17]91.1 84.2 83.6 90.0 81.4 80.4 35.6 61.0 40.9 58.2 55.2 40.8 54.0 69.2 44.2SURE [30]95.2 88.2 89.7 92.1 82.8 83.5 46.2 70.7 33.0 64.5 62.0 47.9 54.7 68.8 43.4DCP [27]32.3 28.0 9.4 41.4 34.0 13.4 22.2 48.6 19.2 35.4 30.7 8.1 26.4 54.2 19.2DSIMVC [29]34.6 24.0 16.8 62.2 47.4 39.7 20.6 31.0 16.3 30.4 25.4 11.8 21.0 33.8 10.9GWMAC [15]11.40.20.1 16.14.01.84.415.4 0.4 30.6 27.2 12.2 3.22.20.2SMILE97.9 94.2 95.4 98.6 96.3 97.0 50.9 79.4 35.2 71.1 70.4 58.2 57.8 77.1 48.8DCCAE [11]65.4 62.9 38.3 79.5 79.2 68.4 29.1 58.8 23.4 42.3 40.9 25.5 19.0 37.9 8.6BMVC [31]30.7 19.2 10.6 43.9 39.0 21.0 40.0 58.5 10.2 29.8 20.3 6.4 34.1 42.7 7.4AE2-Nets [9]29.9 23.8 11.8 40.9 29.3 19.7 6.618.0 4.5 35.9 32.0 15.9 18.8 27.9 8.5DAIMC [20]33.8 26.4 16.0 55.2 49.6 38.6 56.2 78.0 41.8 62.7 58.5 47.7 NSNSNSEERIMVC [24] 55.6 45.9 36.8 65.2 55.7 48.9 43.6 69.0 26.4 68.7 63.9 53.8 NSNSNS50% MissingPMVC [21]33.1 25.5 14.6 60.5 47.1 39.8 48.4 72.8 40.4 57.7 54.4 38.3 TvO TvO TvO(η = 50%)PVC [16]16.46.72.3 14.74.41.46.617.4 0.3 39.0 40.5 20.9 NSNSNSMvCLN [17]53.8 50.6 28.5 46.8 44.6 21.8 27.2 47.5 23.5 45.2 40.8 21.9 36.1 48.2 23.7SURE [30]93.0 85.4 85.9 92.3 85.0 84.3 34.6 57.8 19.9 58.3 50.4 37.4 45.2 46.9 29.6DCP [27]80.0 75.2 70.7 94.0 89.7 88.3 44.3 71.0 45.3 53.7 65.5 47.3 26.3 47.2 14.4DSIMVC [29]55.8 55.1 43.0 97.0 92.4 93.5 16.4 24.8 9.2 54.4 52.4 35.2 29.4 48.5 19.0SMILE96.8 91.7 93.0 98.5 95.7 96.6 51.2 79.0 35.6 69.5 66.7 54.9 54.6 76.3 45.2DCCAE [11]78.0 81.2 68.2 96.8 97.7 96.6 45.8 68.6 37.7 55.3 58.7 45.1 32.2 61.5 19.0BMVC [31]88.3 77.0 76.6 87.1 84.5 82.0 50.1 72.4 33.9 66.2 61.7 48.7 48.5 62.4 36.1AE2-Nets [9]42.1 43.4 30.4 54.0 46.5 35.4 4.013.6 0.0 48.8 46.7 30.5 21.8 34.0 12.2DAIMC [20]38.4 34.7 23.0 65.1 65.5 54.2 57.5 78.7 41.9 71.6 70.7 57.9 NSNSNSEERIMVC [24] 65.7 57.6 51.3 79.0 68.1 62.4 49.0 74.2 34.2 74.0 73.1 62.4 NSNSNSCompletePMVC [21]41.1 36.4 24.5 60.4 59.5 47.3 49.4 73.5 39.7 64.5 70.3 53.1 TvO TvO TvOInformationPVC [16]87.1 92.8 93.1 95.3 90.4 90.1 20.5 51.4 15.7 59.7 65.3 51.6 NSNSNSMvCLN [17]97.3 94.2 95.3 98.8 96.5 97.3 39.6 65.3 32.8 59.7 56.5 42.5 57.3 70.9 48.2SURE [30]98.4 95.4 96.5 99.1 97.5 98.1 43.8 70.1 29.5 58.0 59.3 45.2 55.6 75.8 46.8DCP [27]89.1 88.9 85.5 94.8 93.9 90.5 51.3 74.8 51.9 63.6 70.2 53.9 34.0 60.2 16.5DSIMVC [29]61.0 
58.1 46.7 98.5 96.7 96.7 19.7 40.0 19.7 58.5 56.3 39.9 22.2 38.0 13.1GWMAC [15]11.30.30.1 14.52.81.14.415.1 0.2 29.1 21.8 10.0 3.12.10.2SMILE99.3", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation study of our L DAR , and L SIL = L SIL-s + γL SIL-v on NoisyMNIST. The L Rec is defined in Equation(14). Rec L DAR L SIL-s L SIL-v ACC NMI ARI", "figure_data": "Data Type L 100% Unaligned (ζ = 100%)✓ ✓ ✓ ✓✓ ✓✓ ✓50.1 47.7 32.3 52.7 54.5 39.1 74.5 66.1 58.4 78.2 76.3 69.0✓✓✓82.7 79.5 74.2✓48.7 43.7 30.9100% Missing (η = 100%)✓ ✓ ✓✓ ✓✓ ✓51.0 51.6 37.8 66.8 57.1 49.3 67.8 63.5 52.6✓✓✓69.0 63.8 54.10.010.020.040.080.16", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" } ]
Pengxin Zeng; Mouxing Yang; Yiding Lu; Changqing Zhang; Peng Hu; Xi Peng
[ { "authors": "Q Wang; M Chen; F Nie; X Li", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b0", "title": "Detecting coherent groups in crowd scenes by multiview clustering", "year": "2018" }, { "authors": "C Xu; D Tao; C Xu", "journal": "", "ref_id": "b1", "title": "Multi-view self-paced learning for clustering", "year": "2015" }, { "authors": "Z Kang; W Zhou; Z Zhao; J Shao; M Han; Z Xu", "journal": "", "ref_id": "b2", "title": "Largescale multi-view subspace clustering in linear time", "year": "2020" }, { "authors": "C Lu; S Yan; Z Lin", "journal": "IEEE Transactions on Image Processing", "ref_id": "b3", "title": "Convex sparse spectral clustering: Single-view to multi-view", "year": "2016" }, { "authors": "Z Tao; H Liu; S Li; Z Ding; Y Fu", "journal": "", "ref_id": "b4", "title": "From ensemble clustering to multi-view clustering", "year": "2017" }, { "authors": "Q Wang; Z Ding; Z Tao; Q Gao; Y Fu", "journal": "IEEE", "ref_id": "b5", "title": "Partial multi-view clustering via consistent gan", "year": "2018" }, { "authors": "A Vinokourov; N Cristianini; J Shawe-Taylor", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "Inferring a semantic representation of text via cross-language correlation analysis", "year": "2002" }, { "authors": "Y Li; M Yang; Z Zhang", "journal": "IEEE transactions on knowledge and data engineering", "ref_id": "b7", "title": "A survey of multi-view representation learning", "year": "2018" }, { "authors": "C Zhang; Y Liu; H Fu", "journal": "", "ref_id": "b8", "title": "Ae2-nets: Autoencoder in autoencoder networks", "year": "2019" }, { "authors": "X Peng; Z Huang; J Lv; H Zhu; J T Zhou", "journal": "PMLR", "ref_id": "b9", "title": "Comic: Multiview clustering without parameter selection", "year": "2019" }, { "authors": "W Wang; R Arora; K Livescu; J Bilmes", "journal": "PMLR", "ref_id": "b10", "title": "On deep multiview representation learning", "year": "2015" }, { "authors": "M Yin; W Huang; J Gao", "journal": "", "ref_id": "b11", "title": "Shared generative latent representation learning for multi-view clustering", "year": "2020" }, { "authors": "Z Yang; Q Xu; W Zhang; X Cao; Q Huang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b12", "title": "Split multiplicative multi-view subspace clustering", "year": "2019" }, { "authors": "H Yu; J Tang; G Wang; X Gao", "journal": "", "ref_id": "b13", "title": "A novel multi-view clustering method for unknown mapping relationships between crossview samples", "year": "2021" }, { "authors": "F Gong; Y Nie; H Xu", "journal": "", "ref_id": "b14", "title": "Gromov-wasserstein multi-modal alignment and clustering", "year": "2022" }, { "authors": "Z Huang; P Hu; J T Zhou; J Lv; X Peng", "journal": "", "ref_id": "b15", "title": "Partially viewaligned clustering", "year": "2020" }, { "authors": "M Yang; Y Li; Z Huang; Z Liu; P Hu; X Peng", "journal": "", "ref_id": "b16", "title": "Partially view-aligned representation learning with noise-robust contrastive loss", "year": "2021" }, { "authors": "A Karpathy; L Fei-Fei", "journal": "", "ref_id": "b17", "title": "Deep visual-semantic alignments for generating image descriptions", "year": "2015" }, { "authors": "J Wei; X Xu; Y Yang; Y Ji; Z Wang; H T Shen", "journal": "", "ref_id": "b18", "title": "Universal weighting metric learning for cross-modal matching", "year": "2020" }, { "authors": "M Hu; S Chen", "journal": "", "ref_id": "b19", "title": "Doubly aligned incomplete multi-view 
clustering", "year": "2018" }, { "authors": "S.-Y Li; Y Jiang; Z.-H Zhou", "journal": "", "ref_id": "b20", "title": "Partial multi-view clustering", "year": "2014" }, { "authors": "C Xu; D Tao; C Xu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b21", "title": "Multi-view learning with incomplete views", "year": "2015" }, { "authors": "W Shao; L He; P S Yu", "journal": "Springer", "ref_id": "b22", "title": "Multiple incomplete views clustering via weighted nonnegative matrix factorization with regularization", "year": "2015" }, { "authors": "X Liu; M Li; C Tang; J Xia; J Xiong; L Liu; M Kloft; E Zhu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b23", "title": "Efficient and effective regularized incomplete multiview clustering", "year": "2021" }, { "authors": "C Zhang; Y Cui; Z Han; J T Zhou; H Fu; Q Hu", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b24", "title": "Deep partial multi-view learning", "year": "2020" }, { "authors": "C Xu; Z Guan; W Zhao; H Wu; Y Niu; B Ling", "journal": "", "ref_id": "b25", "title": "Adversarial incomplete multi-view clustering", "year": "2019" }, { "authors": "Y Lin; Y Gou; X Liu; J Bai; J Lv; X Peng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b26", "title": "Dual contrastive prediction for incomplete multi-view representation learning", "year": "2022" }, { "authors": "Y Lin; Y Gou; Z Liu; B Li; J Lv; X Peng", "journal": "", "ref_id": "b27", "title": "Completer: Incomplete multi-view clustering via contrastive prediction", "year": "2021" }, { "authors": "H Tang; Y Liu", "journal": "PMLR", "ref_id": "b28", "title": "Deep safe incomplete multi-view clustering: Theorem and algorithm", "year": "2022" }, { "authors": "M Yang; Y Li; P Hu; J Bai; J Lv; X Peng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b29", "title": "Robust multiview clustering with incomplete information", "year": "2022" }, { "authors": "Z Zhang; L Liu; F Shen; H T Shen; L Shao", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b30", "title": "Binary multiview clustering", "year": "2018" }, { "authors": "M Yin; W Liu; M Li; T Jin; R Ji", "journal": "Neurocomputing", "ref_id": "b31", "title": "Cauchy loss induced block diagonal representation for robust multi-view subspace clustering", "year": "2021" }, { "authors": "G Andrew; R Arora; J Bilmes; K Livescu", "journal": "PMLR", "ref_id": "b32", "title": "Deep canonical correlation analysis", "year": "2013" }, { "authors": "T Zhou; C Zhang; X Peng; H Bhaskar; J Yang", "journal": "IEEE transactions on cybernetics", "ref_id": "b33", "title": "Dual shared-specific multiview subspace clustering", "year": "2019" }, { "authors": "F R Bach; M I Jordan", "journal": "Journal of machine learning research", "ref_id": "b34", "title": "Kernel independent component analysis", "year": "2002-07" }, { "authors": "M.-S Chen; L Huang; C.-D Wang; D Huang", "journal": "", "ref_id": "b35", "title": "Multi-view clustering in latent embedding space", "year": "2020" }, { "authors": "Y Jiang; Q Xu; Z Yang; X Cao; Q Huang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Dm2c: Deep mixed-modal clustering", "year": "2019" }, { "authors": "N Tishby; F C Pereira; W Bialek", "journal": "", "ref_id": "b37", "title": "The information bottleneck method", "year": "2000" }, { "authors": "Z Wan; C Zhang; P Zhu; 
Q Hu", "journal": "", "ref_id": "b38", "title": "Multi-view informationbottleneck representation learning", "year": "2021" }, { "authors": "M Federici; A Dutta; P Forré; N Kushman; Z Akata", "journal": "", "ref_id": "b39", "title": "Learning robust representations via multi-view information bottleneck", "year": "2020" }, { "authors": "C Xu; D Tao; C Xu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b40", "title": "Large-margin multi-view information bottleneck", "year": "2014" }, { "authors": "Y Tian; D Krishnan; P Isola", "journal": "Springer", "ref_id": "b41", "title": "Contrastive multiview coding", "year": "2020" }, { "authors": "J Xu; H Tang; Y Ren; L Peng; X Zhu; L He", "journal": "", "ref_id": "b42", "title": "Multilevel feature learning for contrastive multi-view clustering", "year": "2022" }, { "authors": "K Hassani; A H Khasahmadi", "journal": "PMLR", "ref_id": "b43", "title": "Contrastive multi-view representation learning on graphs", "year": "2020" }, { "authors": "H Wang; X Guo; Z.-H Deng; Y Lu", "journal": "", "ref_id": "b44", "title": "Rethinking minimal sufficient representation in contrastive learning", "year": "2022" }, { "authors": "K Fukunaga", "journal": "Elsevier", "ref_id": "b45", "title": "Introduction to statistical pattern recognition", "year": "2013" }, { "authors": "P Vincent; H Larochelle; I Lajoie; Y Bengio; P.-A Manzagol; L Bottou", "journal": "Journal of machine learning research", "ref_id": "b46", "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "year": "2010" }, { "authors": "A Creswell; T White; V Dumoulin; K Arulkumaran; B Sengupta; A A Bharath", "journal": "IEEE signal processing magazine", "ref_id": "b47", "title": "Generative adversarial networks: An overview", "year": "2018" }, { "authors": "J.-Y Zhu; T Park; P Isola; A A Efros", "journal": "", "ref_id": "b48", "title": "Unpaired image-toimage translation using cycle-consistent adversarial networks", "year": "2017" }, { "authors": "Z Han; C Zhang; H Fu; J T Zhou", "journal": "", "ref_id": "b49", "title": "Trusted multi-view classification", "year": "2020" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "Communications of the ACM", "ref_id": "b50", "title": "Imagenet classification with deep convolutional neural networks", "year": "2017" }, { "authors": "K Simonyan; A Zisserman", "journal": "Computational and Biological Learning Society", "ref_id": "b51", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "C Wah; S Branson; P Welinder; P Perona; S Belongie", "journal": "", "ref_id": "b52", "title": "The caltech-ucsd birds-200-2011 dataset", "year": "2011" }, { "authors": "Q Le; T Mikolov", "journal": "PMLR", "ref_id": "b53", "title": "Distributed representations of sentences and documents", "year": "2014" }, { "authors": "L Wolf; T Hassner; I Maoz", "journal": "", "ref_id": "b54", "title": "Face recognition in unconstrained videos with matched background similarity", "year": "2011" }, { "authors": "H W Kuhn", "journal": "Naval research logistics quarterly", "ref_id": "b55", "title": "The hungarian method for the assignment problem", "year": "1955" }, { "authors": "H Hotelling", "journal": "", "ref_id": "b56", "title": "Relations between two sets of variates", "year": "1992" } ]
[ { "formula_coordinates": [ 3, 312, 251.46, 252, 47.97 ], "formula_id": "formula_0", "formula_text": "Definition 1. Partially Incomplete Information. A multi- view dataset {X (v) } M v=1 = {x (v) 1 , x (v) 2 , . . . , x (v) N } M v=1 consists of two subsets: i) {S (v) } M v=1 = {s (v) 1 , s (v) 2 , . . . , s (v)" }, { "formula_coordinates": [ 3, 362.34, 299.82, 155.15, 13.87 ], "formula_id": "formula_1", "formula_text": "{W (v) } M v=1 = {w (v) 1 , w (v) 2 , . . . , w(v)" }, { "formula_coordinates": [ 3, 328.94, 371.69, 235.06, 40.2 ], "formula_id": "formula_2", "formula_text": "M v1 M v2̸ =v1 Cor w (v1) i , w (v2) i < M (M -1), ∀i ∈ [1, N w ] ,(1)" }, { "formula_coordinates": [ 3, 370.21, 459.11, 193.79, 13.99 ], "formula_id": "formula_3", "formula_text": "1 ≤ |{w (v) i } M v=1 | < M, ∀i ∈ [1, N w ] ,(2)" }, { "formula_coordinates": [ 3, 405.1, 521.91, 138.6, 13.87 ], "formula_id": "formula_4", "formula_text": "(v) } M v=1 = {x (v) 1 , x (v) 2 , . . . , x(v)" }, { "formula_coordinates": [ 3, 326.25, 546.54, 153.77, 13.87 ], "formula_id": "formula_5", "formula_text": "{W (v) } M v=1 = {w (v) 1 , w (v) 2 , . . . , w(v)" }, { "formula_coordinates": [ 4, 312, 66.46, 252, 27.1 ], "formula_id": "formula_6", "formula_text": "(v1) i into category T (x (v1) i ), where z (v1) i is the hidden representation of x (v1) i" }, { "formula_coordinates": [ 4, 353.27, 179.71, 210.73, 17.64 ], "formula_id": "formula_7", "formula_text": "P e = 1 -E P (z (v 1 ) i ) max t∈T P (T (z (v1) i ) = t).(3)" }, { "formula_coordinates": [ 4, 353.7, 304.46, 210.3, 9.65 ], "formula_id": "formula_8", "formula_text": "P e ≤ 1 -exp (-H(T, X|V ) + I(C; X|V )) ,(4)" }, { "formula_coordinates": [ 4, 352.31, 461.78, 211.69, 19.1 ], "formula_id": "formula_9", "formula_text": "R e = min gv 2 E P (z (v 1 ) i ) ||x (v2) i -g v2 (z (v1) i )|| 2 ,(5)" }, { "formula_coordinates": [ 4, 353.79, 622.47, 210.21, 9.65 ], "formula_id": "formula_10", "formula_text": "R e ≤ α • exp (2H(T, X|V ) -2I(C; X|V )) ,(6)" }, { "formula_coordinates": [ 5, 105.15, 129.57, 194.85, 22.21 ], "formula_id": "formula_11", "formula_text": "C e = 1 - k max t∈T | Tt ∩ Ck |/|X|,(7)" }, { "formula_coordinates": [ 5, 114.13, 160.84, 41.59, 13.99 ], "formula_id": "formula_12", "formula_text": "(v) i |C(x(v)" }, { "formula_coordinates": [ 5, 121.74, 693.76, 178.26, 9.65 ], "formula_id": "formula_14", "formula_text": "L = λ SIL L SIL + L DAR ,(9)" }, { "formula_coordinates": [ 5, 375.22, 145.03, 188.78, 23.16 ], "formula_id": "formula_15", "formula_text": "L SIL = -I(C; X|V ) = -I(C; X) + I(C; V ).(10)" }, { "formula_coordinates": [ 5, 322.68, 199.02, 41.59, 13.99 ], "formula_id": "formula_16", "formula_text": "(v) i |C(x(v)" }, { "formula_coordinates": [ 5, 329.78, 230.25, 234.22, 39.2 ], "formula_id": "formula_17", "formula_text": "L SIL-s = -I(C; X) = -H(C) + H(C|X) = k P ( Ck ) log P ( Ck ) - 1 N M i,v,k c (v) ik log c (v) ik ,(11)" }, { "formula_coordinates": [ 5, 344.74, 277.89, 108.78, 14.73 ], "formula_id": "formula_18", "formula_text": "P ( Ck ) = 1 N M i,v c (v)" }, { "formula_coordinates": [ 5, 348.6, 436.41, 215.4, 41.95 ], "formula_id": "formula_19", "formula_text": "L SIL-v = I(C; V ) = k,v P ( Ck , Ṽv ) log P ( Ck , Ṽv ) P ( Ck )P ( Ṽv ) ,(12)" }, { "formula_coordinates": [ 5, 343.78, 486.8, 217.35, 14.73 ], "formula_id": "formula_20", "formula_text": "P ( Ṽv ) = | Ṽv |/|X| and P ( Ck , Ṽv ) = 1 N i c (v)" }, { "formula_coordinates": [ 5, 377.53, 623.96, 182.51, 9.65 ], "formula_id": "formula_21", "formula_text": "L SIL = L SIL-s + γL SIL-v , (13" }, { 
"formula_coordinates": [ 5, 560.04, 624.31, 3.96, 9.14 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 6, 119.6, 72.94, 180.4, 11.72 ], "formula_id": "formula_23", "formula_text": "L Rec = E||x -ḡ(f (x))|| 2 ,(14)" }, { "formula_coordinates": [ 6, 68.8, 204.34, 231.2, 9.65 ], "formula_id": "formula_24", "formula_text": "L DAR = -I(Z; X|V ) = -I(Z; X) + I(Z; V ),(15)" }, { "formula_coordinates": [ 6, 59.82, 310.58, 240.18, 9.65 ], "formula_id": "formula_25", "formula_text": "L DAR = -I(Z; X|V ) = -H(X|V ) + H(X|Z, V ),(16)" }, { "formula_coordinates": [ 6, 67.4, 382.28, 232.6, 55.82 ], "formula_id": "formula_26", "formula_text": "H(X|Z, V ) = -E P (x,z,v) log P (x|z,v) = -E P (x,z,v) log Q (x|z,v) -E P (z,v) D KL P (x|z,v) ||Q (x|z,v) ≤ -E P (x,z,v) log Q (x|z,v) ,(17)" }, { "formula_coordinates": [ 6, 110.06, 519.59, 189.94, 12.03 ], "formula_id": "formula_27", "formula_text": "-log Q (x|z,v) ∝ ||x -g v (z)|| 2 ,(18)" }, { "formula_coordinates": [ 6, 116.91, 591.21, 179.13, 11.72 ], "formula_id": "formula_28", "formula_text": "L DAR = E||x -g(f (x))|| 2 , (19" }, { "formula_coordinates": [ 6, 296.04, 593.63, 3.96, 9.14 ], "formula_id": "formula_29", "formula_text": ")" }, { "formula_coordinates": [ 9, 360.7, 61.38, 199.34, 26.2 ], "formula_id": "formula_30", "formula_text": "CAR = 1 N i ς(T (x (v1) i ), T (x (v2) i )), (20" }, { "formula_coordinates": [ 9, 560.04, 68.47, 3.96, 9.14 ], "formula_id": "formula_31", "formula_text": ")" } ]
10.18653/v1/2020.acl-main.692
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b31", "b36", "b17", "b31", "b2", "b9", "b37", "b8", "b28", "b15", "b10", "b24", "b31", "b8" ], "table_ref": [], "text": "Recently there has been a significant advancement in the text classification with the emergence of Extremely Weakly Supervised Text Classification (XWS-TC) methods (Meng et al., 2020b;Wang et al., 2021;Zhang et al., 2021b;Zhao et al., 2022; Park and Lee, 2022), which requires no humanannotated datasets. Instead, these methods rely on minimal human guidance, such as the names of the classes or instructions describing the classification task. There are two main approaches to XWS-TC: one based on matching seed words (SEED), and the other on prompting a language model (LM) with instructions (PROMPT). We give a brief introduction in the following paragraphs, and a more thorough review is in Section 3. SEED methods for XWS-TC rely on a userspecified list of seed words for each class, as well as an unlabeled in-domain corpus. These seed words are then expanded into a larger set of related words for the class through statistical methods (Mekala and Shang, 2020), embedding similarity (Wang et al., 2021), or masked language model predictions (Meng et al., 2020b). These related words are used to assign a pseudo-class to each text in the unlabeled corpus through some matching strategy (e.g., assign a text to a class if it contains the related words for that class). The pseudo labels are then used to train a classifier through standard fully-supervised fine-tuning.\nOn the other hand, PROMPT methods for XWS-TC, rely on reformulating text using an instruction template and prompting the language model to generate the likelihoods for each label in the classification task (Brown et al., 2020). For example, in a sentiment classification task, using an instruction template of <text>. sentiment:, the model generating \"happy\" or \"sad\" will help classifiy the sentiment of the text. Naive zero-shot prompting considers the highest likelihood label as the answer and recent improvements for more accurate likelihoods include calibration of likelihood scores (Holtzman et al., 2021;Zhao et al., 2021;Han et al., 2022) and verbalizers that find more label words to better represent the class (Schick and Schütze, 2021;Ma et al., 2023;Hu et al., 2022).\nBoth SEED and PROMPT methods have demonstrated strong performance in XWS-TC. However, there has been a lack of comprehensive comparison between these two approaches. This is due to the perception that the approaches are unrelated and the lack of standardization in datasets, supervision, and hyperparameter choices across methods.\nWe are motivated to construct a benchmark that fairly evaluates the performance of XWS-TC methods. The benchmark consists of 11 datasets covering four domains along with their fine-grained variants and different numbers of classes. In addition, we make an effort to use the same hyperparameters across datasets for the methods, as there should not be a development set to tune the hyperparameters in the XWS setting (Perez et al., 2021).\nOur benchmarking results suggest that both SEED and PROMPT approaches are competitive, with no clear winner. SEED tends to perform better when both approaches use a similar-sized pretrained model and is more robust and tolerant to changes in human guidance (such as seed words, classification instructions, and label words). 
On the other hand, PROMPT methods have the ability to handle more general types of human guidance (such as descriptions of class names, rather than specific words) and do not have a strict requirement for an unlabeled corpus. When the underlying pre-trained language model changes, PROMPT is more robust and scales better with the language model than SEED. We also examine two specific methods from each approach, X-Class (Wang et al., 2021) and ProtoCal (Han et al., 2022), which independently proposed a post-processing approach to calibrate the class predictions through clustering on an unlabeled in-domain corpus to improve classification performance. Our results show that this subroutine can be a universal booster for both SEED and PROMPT approaches.\nThrough this benchmark, we aim to advance the study of XWS-TC methods and call for the develop-ment of methods that are robust to different human guidance and language models. We firmly believe that this paper will serve as a guide for selecting the appropriate method in different scenarios and contribute to the advancement of the field." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Different Types of Weak Supervision", "publication_ref": [ "b2", "b28", "b15", "b14", "b6", "b10", "b29", "b12", "b23", "b32", "b7", "b0", "b1" ], "table_ref": [], "text": "Extremely Weak Supervision is a setting that assumes access to only high-level human inputs, such as names of classes or instructions about classification criteria. We briefly discuss different types of minimal supervision in the following paragraphs.\nFew-shot Supervision Few-shot supervision is the setting where there are only a small number of labeled examples for each of the classes. An intuitive way is to directly train the classifier on few-shot data, but usually that yields subpar performance. Another popular way is called in-context learning, where the few-shot supervision is used as context to prompt LM for the answer (Brown et al., 2020). Various methods have been proposed to improve it by searching for better label words (Schick and Schütze, 2021;Ma et al., 2023), stabilizing the output (Lu et al., 2022), and efficient fine-tuning (Gao et al., 2021).\nDistant Supervision Distant supervision includes supervision from external resources such as encyclopedias or gazetteers. There have been efforts to incorporate external knowledge into prompting (Hu et al., 2022), phrase mining (Shang et al., 2018), and named entity recognition (Liang et al., 2020). External models can also be used to help with extremely weak supervision. A line of research is on leveraging models trained on natural language inference data to suggest better-related words (Park and Lee, 2022) or directly classify the text (Yin et al., 2019;Gera et al., 2022).\nNo Supervision Unsupervised methods fall into this category where they require no supervision. These methods typically take one of the two following approaches: (1) clustering (Aharoni and Goldberg, 2020), (2) topic modeling (Blei et al., 2003). However, both of these approaches lack control over the clusters/topics generated i.e. classes. For example, a text corpus can be categorized on several basis including topic, location, and sentiment. An unsupervised method cannot handle such scenarios. It would be beneficial to be able to retrieve all possible classifications of a corpus in an unsupervised manner, but as far as we are aware, there are no methods with this ability." 
}, { "figure_ref": [], "heading": "Weak Supervision Benchmarks", "publication_ref": [], "table_ref": [], "text": "We introduce two other Weak Supervision Benchmarks and talk about differences with this work.\nWrench (Zhang et al., 2021a) is a benchmark that explored various types of weak supervision labeling functions (i.e., rules used to label the text). They synthesize the performance of different labeling functions, ways to combine them, and the fine-tuning process to learn the pseudo-training data. In our benchmark, we analyze extremely weak text classifiers that go beyond the labeling functions and compare their performance and robustness with zero-shot prompting.\nAutoWS-Bench-101 (Roberts et al., 2022) is another benchmark that analyzes how labeling functions help text classification along with additional few-shot supervision. They conclude that pretrained models are strong baselines for in-domain settings and should be considered integrating with weak supervision methods. In this work, we focus on extremely weak supervision methods without any labeled data. The SEED and PROMPT methods compared in this benchmark are all based on pre-trained language models." }, { "figure_ref": [], "heading": "Verbalizers", "publication_ref": [ "b28", "b15", "b10" ], "table_ref": [], "text": "Verbalizers are a type of PROMPT method that find a larger set of label words so that the class choices are accurately represented. We did not consider Verbalizer methods in this benchmark since they mostly rely on additional supervision, such as fewshot (Schick and Schütze, 2021;Ma et al., 2023) or an external knowledge base (Hu et al., 2022)." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "Extremely Weak Supervision in Text Classification refers to a few high-level human guidance as supervision. This guidance typically is in the form of seed words that describe each class, or an instruction paired with label words that define the task. There are two main approaches for XWS-TC: matching seed words (SEED) and prompting language models (PROMPT)." }, { "figure_ref": [], "heading": "Seed Matching Methods", "publication_ref": [ "b26", "b17", "b31", "b19", "b31", "b5", "b31", "b36" ], "table_ref": [], "text": "SEED approaches are provided with a few classindicative seed words and unlabeled documents as input. These methods typically involve seed word expansion where more words related to provided seed words are identified in the unlabeled corpus through several statistics-based (Salton and Buckley, 1988;Mekala and Shang, 2020) or deep learning-based strategies (Meng et al., 2020b;Wang et al., 2021;Zhang et al., 2021b). Using these expanded seed words, each unlabeled document is pseudo-labeled. Different heuristics have been explored for pseudo-labeling such as stringmatching (Meng et al., 2018). Recently, the matching approach has also evolved into softer manners such as embedding-based matching (Wang et al., 2021), and graph-based matching (Zhang et al., 2021b), that can address conflicts in a principled manner during pseudo-labeling.\nWe introduce 4 strong-performing SEED methods to include in our benchmark. LotClass (Meng et al., 2020b) obtains related words through predicting masked tokens in a masked language modeling trained model (Devlin et al., 2019), over an unlabelled corpus. They match the text to related words by fine-tuning a model to predict the related words given a text. 
XClass (Wang et al., 2021) obtains related words by finding words that have similar representations. They construct class-oriented representations for text. and match the text to related words by representation similarity. They also showed that the performance can be improved significantly by matching based on clusters from text representations. ClassKG (Zhang et al., 2021b) models the dependence of related words as an annotating problem on the keyword graph. NPPrompt (Zhao et al., 2022) obtains related words through embedding similarity from a pretrained LM. The related words are used as label words to prompt a generative LM for predictions, which are then aggregated as the matching result. To some extent, NPPrompt belongs to an intersection of PROMPT and SEED methods." }, { "figure_ref": [], "heading": "Prompt Methods", "publication_ref": [ "b9", "b8", "b9", "b8" ], "table_ref": [], "text": "Prompting language models is another approach to extremely weak supervision in text classification. This approach involves prompting a generative language model with an instructive text and extracting the likelihoods of different label words. This approach does not require an unlabeled in-domain corpus and can be used to predict text in an online fashion. However, language models have been known to be biased towards text sequences more common in pre-training data, leading to instability in zero-shot & few-shot settings. Recently proposed post-processing methods (Holtzman et al., 2021;Han et al., 2022) have attempted to address this by calibrating the predicted probabilities using estimates of the model's bias towards each verbalized label. We describe 2 calibration methods. DC-PMI (Holtzman et al., 2021) considers a null prompt to obtain the raw likelihoods of language model to predict each label. Then, for each text, they modify the likelihood of the predicted label by marginalizing the raw ones. ProtoCal (Han et al., 2022) considers an unlabelled corpus and obtains the predicted likelihoods on the corpus. The likelihood vectors are then clustered to better obtain the prediction boundary for each class. Instead of maximum likelihood, this prediction boundary is used to predict the class.\nSome more SEED and PROMPT methods are described in Appendix A." }, { "figure_ref": [], "heading": "Benchmark", "publication_ref": [], "table_ref": [], "text": "In order to establish a benchmark that can accurately evaluate various XWS-TC methods, it is essential to consider a range of factors: Dataset choices, Instructions, Label words, Hyperparameter control, use of Pre-trained Language Models, Metrics and ensure their consistency across all experiments. We will discuss each of these factors in detail in the following sections." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b9", "b31" ], "table_ref": [ "tab_0" ], "text": "We consider datasets from prior evaluations (Holtzman et al., 2021;Wang et al., 2021;Meng et al., 2020b) that contain data from diverse domains. To facilitate the evaluation process, the size of the evaluation set for each dataset has been controlled to a few thousand instances. Additionally, as many XWS-TC methods require the use of an unlabelled in-domain corpus, a similar-sized sample has been sampled from the training split to serve this purpose, with the evaluation set and unlabelled corpus being disjoint. 
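As a rough illustration of the two calibration strategies described for PROMPT in Section 3.2 above (DC-PMI and ProtoCal), the sketch below shows their general shape. It is an interpretation of the idea, not the authors' released code; it assumes per-class label-word likelihoods have already been computed (e.g., with a prompting snippet like the one earlier), and the cluster-to-class alignment rule is one plausible choice.

```python
# Rough sketch of DC-PMI-style and ProtoCal-style calibration (interpretation only).
# `scores[i, c]` holds log P(label word c | prompt for text i);
# `null_scores[c]` holds log P(label word c | a content-free "null" prompt).
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def dcpmi_predict(scores, null_scores):
    """Subtract the label word's prior likelihood before taking the arg-max."""
    return (scores - null_scores[None, :]).argmax(axis=1)

def protocal_predict(scores, n_classes):
    """Cluster likelihood vectors from an unlabelled corpus, then map clusters to
    classes by their mean likelihood profile (Hungarian matching as one option)."""
    probs = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(probs)
    rows, cols = linear_sum_assignment(-km.cluster_centers_)
    cluster_to_class = dict(zip(rows, cols))
    return np.array([cluster_to_class[c] for c in km.labels_])

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 4))       # fake likelihoods: 100 texts, 4 classes
null_scores = rng.normal(size=4)
print(dcpmi_predict(scores, null_scores)[:10])
print(protocal_predict(scores, n_classes=4)[:10])
```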
The datasets have been uniformly sampled without altering the distribution of labels, thus preserving the imbalance ratio, which is defined as the ratio between the size of the largest class and the smallest class. The statistics of the datasets are presented in Table 1. Details of the sources of the datasets are in Appendix B." }, { "figure_ref": [], "heading": "Instructions and Label/Seed Words", "publication_ref": [ "b9" ], "table_ref": [], "text": "To fairly compare SEED and PROMPT methods, we need to provide equal amounts of human supervision. That means, for SEED methods, we should only allow a single word for each class, matching the amount used for label words. For instructions, we consider simple ones that hint at the classification criteria (Holtzman et al., 2021). Details choices can be found in Appendix C." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b4" ], "table_ref": [], "text": "For evaluation metrics, we consider the macro F 1 score on a dataset-by-dataset basis, which values each class within a dataset equally. To understand the performance of a method on all datasets, we employ two metrics: the average of the macro F 1 scores, and a ranking-based metric that combines the ranking of methods on each dataset to obtain a scale-prone value (Colombo et al., 2022)." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [ "b24" ], "table_ref": [], "text": "Another crucial aspect of the benchmark is the number of hyperparameters utilized by each method. In the context of extremely weak supervision, we argue that it is unrealistic to use different hyperparameters for different datasets, as doing so would necessitate the use of a separate development set, thereby defeating the purpose of using only highlevel human supervision (Perez et al., 2021). Therefore, we slightly tune the hyperparameters on one of the datasets to rule out failing scenarios and then stick with a single choice of hyperparameters throughout all datasets. Under this hyperparameter enforcement, the ideal method should exhibit consistent performance across all datasets." }, { "figure_ref": [], "heading": "Pre-trained Language Models", "publication_ref": [], "table_ref": [], "text": "PROMPT methods use generative language models such as GPT while SEED methods use representation encoding language models such as BERT. To fairly compare methods between these two approaches on XWS-TC, we have to consider the ability of language models as a factor. We use the number of parameters of the pre-trained language model as an approximation of the power of the language model. Since all language models use the transformer as the backbone, this implies that the number of layers and size of hidden states is controlled. A further discussion is in Appendix D. " }, { "figure_ref": [], "heading": "Large Language Models", "publication_ref": [ "b27", "b22" ], "table_ref": [], "text": "This benchmark specifically excludes the evaluation of (multi-task) fine-tuned language models such as T0 (Sanh et al., 2022), large language models (LLMs) such as GPT3, and human feedback-trained language models like Instruct-GPT (Ouyang et al., 2022) and ChatGPT because there are no equivalent representation encoding language models for the SEED approaches. 
We discuss this in more details and include an evaluation of ChatGPT on a single dataset as a reference in Appendix E.\n5 Benchmark Experiments" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In Table 2 we show the performances of all SEED and PROMPT methods considered in the benchmark across the 11 datasets and report the average macro F 1 performance and the rank score.\nPerformance of PROMPT Methods We note that the performance of the standalone PROMPT method is about 20 points lower than its counterparts with calibration methods. The use of additional instance independent instructions (DCPMI) or an additional clustering based on unlabelled text (ProtoCal) is crucial for PROMPT methods to work well in XWS (zero-shot) text classification.\nPerformance of SEED Methods All the SEED methods exhibit strong performance, with X-Class performing stably well across all datasets, and ClassKG performing the best on several datasets, but losing on certain fine-grained datasets.\nComparing PROMPT and SEED Methods First, on the absolute performances, we can see that SEED methods have overall better performance than PROMPT methods, even when appropriate calibration is added for PROMPT methods. However, we can also observe that a larger pre-trained GPT model increases the performance of PROMPT methods quite significantly, while SEED methods have a lower performance improvement when a larger pre-trained language model is used. This effect is further studied in Section 5.2.3." }, { "figure_ref": [], "heading": "Robustness", "publication_ref": [], "table_ref": [], "text": "Through this benchmark, we hope to not only decide which method performs the best, but also analyze under dynamic circumstances, which method is more robust to changes. Different choices of label words/seed words, instructions, and pre-trained language models can happen in real life. Therefore, the robustness of methods when these ingredients are reasonably varied would indicate how stable the method is under variating circumstances. Due to the complexity of multiple runs of each method, we focus on 4 datasets pertaining to different domains, imbalance ratios, and number of classes: Yelp, AGNews, NYT-S, and DBpedia. We leave out two methods, LoT-Class and NPPrompt to save computational resources." }, { "figure_ref": [], "heading": "Different Seed/Label words", "publication_ref": [], "table_ref": [], "text": "In Table 3 we explore the effect when a different choice of label words and seed words are used. For example, for Yelp-2, we chose negative/positive, terrible/great bad/good, awful/find, and nasty/nice as the variants. We report the performance of the methods on each of the five choices, and also the aggregated performance over the 4 aforementioned datasets. We notice that PROMPT methods in general have a high instability. While DCPMI and Pro- Table 3: Performance of PROMPT and SEED methods when the label word/seed word are changed to similar meaning alternatives. We show the performance on 5 choices of label words on Yelp-2 (4 alternatives + 1 default), its median, average, and standard deviation, and the averaged metrics across all datasets.\ntoCal can remedy the variance a bit, SEED methods are still more robust to changes of seed words." }, { "figure_ref": [], "heading": "Different Instructions", "publication_ref": [], "table_ref": [], "text": "A high variance is also observed when the instructions are changed for the PROMPT methods, as in Table 4. 
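The summary statistics reported in Tables 3 and 4 (median, average, and standard deviation of macro-F1 across guidance variants) can be reproduced roughly as sketched below. This reflects our reading of the protocol rather than the released evaluation script, with toy data standing in for real predictions.

```python
# Sketch of the robustness aggregation behind Tables 3-4 (our reading of the setup):
# run each method once per label-word/instruction variant, then summarise macro-F1.
import numpy as np
from sklearn.metrics import f1_score

def robustness_summary(y_true, variant_preds):
    """variant_preds: one prediction array per guidance variant."""
    scores = [100 * f1_score(y_true, p, average="macro") for p in variant_preds]
    return {"median": float(np.median(scores)),
            "mean": float(np.mean(scores)),
            "std": float(np.std(scores))}

# Toy example with 3 variants on a 2-class task.
y_true = np.array([0, 1, 1, 0, 1, 0])
variants = [np.array([0, 1, 1, 0, 1, 1]),
            np.array([0, 1, 0, 0, 1, 0]),
            np.array([1, 1, 1, 0, 1, 0])]
print(robustness_summary(y_true, variants))
```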
A noticeable trend is that when the pre-trained model is larger, while the performance increases, the variance brought by instructions or label words also increases. This could be alarming for PROMPT methods." }, { "figure_ref": [], "heading": "Different Pre-trained Language Models", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In Table 5 we analyze how changes in pre-trained language models would affect the performance of SEED and PROMPT methods (See Appendix H for the full table). Although SEED performs better than PROMPT, PROMPT methods has a strong increasing trend as the size of the pre-trained language model (e.g., changing from BERT-base to BERT-large). Also, X-Class and NPPrompt fail on RoBERTa and BERT respectively, which we hypothesize is that assumptions made in the methods are not general to all pre-trained language models; for example, the distribution of similarities of representations generated by a language model might be different by models. This scaling trend is a factor that should be taken into selecting methods to use for XWS-TC, when the language model size is different than evaluated in this benchmark. -medium 33.57 33.18 56.77 78.41 42.34 42.34 48.85 (17.08) -medium 88.60 87.40 57.85 80.13 82.73 82.73 79.34 (11.18) 62.59 62.07 10.85\nTable 4: Performance of PROMPT methods when the instructions are changed to similar meaning alternatives. We show the performance on 5 choices of instructions on Yelp-2 (4 alternatives + 1 default), its median, average, and standard deviation, and the averaged metrics across all datasets. \nBERT GPT 𝑟 ! 𝑟 \" … 𝑟 # 𝐶 𝑟 ! 𝑟 \" … 𝑟 # 𝑟 #$!" }, { "figure_ref": [ "fig_1" ], "heading": "Connections between Recent SEED and PROMPT Methods", "publication_ref": [ "b26", "b31", "b36" ], "table_ref": [], "text": "While PROMPT is introduced by the seminal GPT-3 paper (Brown et al., 2020) not too long ago, SEED has a longer history and can be traced back to early tf-idf retrieval methods (Salton and Buckley, 1988).\nIn recent years, SEED methods and PROMPT methods are exploring similar ideas. SEED methods have been leveraging pre-trained language models to better understand the semantics of seed words; for example, by asking the language model to fill in masks (Meng et al., 2020b) or through means of representation similarities (Wang et al., 2021;Zhao et al., 2022). PROMPT methods have been exploring calibration and verbalizers to improve and stabilize its predictions. Verbalizer includes a step of finding more label words that better represent the class, which is a similar approach used in SEED. We show that a recent representative SEED method X-Class and two PROMPT methods, Verbalizers and ProtoCal have higher similarities and deeper connections in their design. This is particularly interesting as both directions have been developing independently. In Figure 2, we provide a pipeline of the methods and highlight the similarities." }, { "figure_ref": [], "heading": "Obtaining Text Representations", "publication_ref": [], "table_ref": [], "text": "X-Class matches text to classes by learning classoriented text representations from an encoderbased language model. X-Class views class representations as the union of representations describing the words. The text representation in X-Class is defined as a weighted average of individual token representations where the weights are based on their respective similarity to the class representations. 
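A schematic sketch of such a class-guided weighted average is shown below. It illustrates the general idea described above rather than the actual X-Class implementation; the softmax-over-cosine weighting and the temperature are illustrative choices, and the token/class vectors are assumed to come from an encoder such as BERT.

```python
# Schematic class-oriented text representation (idea sketch, not X-Class itself).
import numpy as np

def class_oriented_text_rep(token_reps, class_reps, temperature=0.1):
    """Weighted average of token representations, weighted by each token's
    maximum cosine similarity to any class representation."""
    t = token_reps / np.linalg.norm(token_reps, axis=1, keepdims=True)
    c = class_reps / np.linalg.norm(class_reps, axis=1, keepdims=True)
    sim = t @ c.T                              # (n_tokens, n_classes)
    weights = np.exp(sim.max(axis=1) / temperature)
    weights /= weights.sum()
    return weights @ token_reps                # (hidden_dim,)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(12, 768))            # 12 token vectors from an encoder
classes = rng.normal(size=(4, 768))            # 4 class representations
print(class_oriented_text_rep(tokens, classes).shape)   # (768,)
```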
On the other hand, general prompting relies on a decoder-based language model to produce a next token representation. In the penultimate layer of the decoder, the last token representation is computed by an attention mechanism over all other tokens, which essentially produces a weighted average of all the token representations.\nIn both methods, the text representation is obtained using an attention-like weighted average of tokens in the text. The attention is guided such that the output representation is indicative of the class. X-Class uses signals from class names to guide the attention while prompting relies on the understanding of the instruction." }, { "figure_ref": [], "heading": "Obtaining Predicted Likelihoods", "publication_ref": [ "b28", "b15", "b28", "b15", "b10", "b28", "b15", "b10" ], "table_ref": [], "text": "PROMPT methods obtain likelihoods of the class by comparing the similarity of the next token rep- resentation to representations of the label words. A recent line of research on improving prompting for classification is to enlarge the set of label words to capture more diverse meanings of the classes, known as verbalizers, such as PET (Schick and Schütze, 2021), ProtoVerb (Ma et al., 2023), and KPT (Schick and Schütze, 2021;Ma et al., 2023;Hu et al., 2022). The notion of verbalizers is very similar to seed-words expansion in SEED methods. For example, X-Class and verbalizers both obtain a list of related words and use it to aggregate a class representation to replace the naive usage of label/seed word representation. Notably, the verbalizer methods require external supervision to find the related words, such as few-shot data (Schick and Schütze, 2021;Ma et al., 2023) or a knowledge base (Hu et al., 2022) to obtain the related word list, while SEED methods detect related words through an unlabelled corpus. Both approaches could be useful under different input settings." }, { "figure_ref": [], "heading": "Unlabeled Corpus Clustering", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Finally, a SEED method X-Class and a PROMPT method ProtoCal independently introduced a postprocessing step by clustering on an unlabelled corpus, with the goal of obtaining a better decision boundary. X-Class clusters the text representations and initializes the clusters with the prior textclass similarity so that the clusters and classes are aligned. Protocal clusters the predicted likelihoods and align the clusters to classes by post-matching the cluster centers to the classes. We further explore the effect of the two clustering ideas, a summary is in Table 6 (Full table in Appendix I). We show that adding such a post-clustering process to various methods can almost freely (apart from an unlabeled corpus) improve the performance of different methods consistently for five different methods." }, { "figure_ref": [], "heading": "Implications", "publication_ref": [], "table_ref": [], "text": "Given these connections between SEED and PROMPT methods and previous analysis on robustness, a natural extension is to analyze the cause of the stability issues on label/seed words and model differences. We presented one empirical analysis of the clustering step in X-Class and ProtoCal and show that this step can improve performance for various different methods talked about in the benchmark (Section 6.3). Further analysis on other components is left as future work. For example, one could reason that the introduction of related words makes the model less sensitive to the given label/seed words. 
This would require an exploration of the quality of the related words found by different SEED and verbalizer methods, and whether the related words between methods can be used interchangeably." }, { "figure_ref": [], "heading": "Conclusions and Future Work", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce a benchmark to qualitatively evaluate different SEED and PROMPT approaches for extremely weakly supervised text classification. Through the benchmark, we raise awareness of the existence of SEED approaches, that are strong competitors to the more well-known zero-shot prompting (with calibrations). We also experiment on the robustness of these two approaches, and show that SEED are more tolerant to the given human guidance changes, however also being more selective to the pre-trained language models. We also analyzed the connections of SEED and PROMPT approaches through the lens of a few representative methods of the two approaches and showed that the methodologies are converging more recently. Finally, we also include a study on clustering as a calibration technique that was independently proposed for both approaches , and show that it can be a good performance booster. We envision future work in two directions. The first one would be to understand the source of robustness difference and design a method that can take the best of both worlds (see Section 6.4). The other would be to scale up the experiments and test if the conclusions still hold for larger pre-trained language models." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b22", "b3", "b36" ], "table_ref": [], "text": "Limitation of Model Scale The benchmark only included the evaluation of moderate-size language models and did not experiment on large language models. We justify our reasons in Section 4.6 and Appendix E and include an evaluation of ChatGPT in Appendix E, showing that even human feedback fine-tuned large language models is far from perfect on XWS-TC. However, we acknowledge that the current state of extremely weak supervision would be better understood and assessed if complete evaluations on state-of-the-art large language models, such as Instruct-GPT (Ouyang et al., 2022), PaLM (Chowdhery et al., 2022), and ChatGPT exist. While we lack the computational resources to perform such an evaluation, we hope this work can stimulate interest in XWS-TC and complete the study. Limitation of Text Classification Another limitation is the scope of Text Classification. While PROMPT and SEED methods have shown strong performances on text classification, this performance does not extend to other general classification tasks, such as natural language inference/entailment (Zhao et al., 2022)." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "This paper establishes a benchmark for extremely weakly supervised text classification frameworks. We provide empirical results on various SEED and PROMPT methods, test their robustness, and analyze their connections. We give intuitions and insights on what method one should use for XWS-TC in different circumstances. We believe that we are on the ethical side and do not find any ethical concerns in this work." }, { "figure_ref": [], "heading": "A Other SEED and PROMPT methods", "publication_ref": [ "b19", "b17", "b23", "b37", "b14", "b21" ], "table_ref": [], "text": "More SEED methods. There are also other SEED methods that we will briefly describe here. 
WeSTClass (Meng et al., 2018) is one of the earlier weakly supervised methods that utilizes seed words to train a classifier by generating pseudodocuments instead of generating pseudo-labels. Conwea (Mekala and Shang, 2020) explores the multi-sense of words and proposes to view seed words of different meanings as different words. Lime (Park and Lee, 2022) uses a fine-tuned model on a natural language inference dataset to suggest the seed words.\nMore PROMPT methods. There are also other post/pre-processing techniques that we will briefly describe here. ContextualCal (Zhao et al., 2021) and PromptOrder (Lu et al., 2022) work for incontext learning (in the few-shot scenario), and addresses the stability issue of the few-shot context in prompts. NosiyChannel (Min et al., 2022) considers the likelihood of generating the document based on the label, rather than generating the label based on the document." }, { "figure_ref": [], "heading": "B Dataset Sources", "publication_ref": [ "b16" ], "table_ref": [], "text": "The datasets are first introduced in the following papers:\n• IMDB (Maas et al., 2011). " }, { "figure_ref": [], "heading": "C Detailed instructions and Label/Seed Words", "publication_ref": [], "table_ref": [], "text": "We provide Table 7 showing the instructions and label words used in the main experiment of the benchmark." }, { "figure_ref": [], "heading": "D Comparing Pre-trained Language Models", "publication_ref": [ "b30", "b13" ], "table_ref": [], "text": "We are aware that a similar number of parameters in language models do not directly imply similar abilities. We notice that the GPT-family LMs do tend to have a lower fine-tuning performance on natural language understanding tasks (Wang et al., 2019) when compared with BERT/RoBERTa. However, we also notice that similar-sized GPT models do have a similar performance on zero-shot prompting as RoBERTa as observed in Table 8. Since we are comparing under an XWS setting, instead of fully supervised fine-tuning, we believe it is fair to compare similar-size GPT models and RoBERTa models. We do acknowledge that BERT might be at a disadvantage since RoBERTa is better than BERT at both fully supervised fine-tuning (Liu et al., 2019) and zero-shot prompting (Table 8).\nHowever, as we note in Section 5.2.3, certain SEED methods that work well on BERT might not be easily transferable to RoBERTa." }, { "figure_ref": [], "heading": "E Excluding Large Language Models", "publication_ref": [], "table_ref": [], "text": "We did not include large language models in this benchmark. Here, we elaborate on two specific reasons.\nFrom the design purpose of the benchmark, the focus of the benchmark is to understand the strengths of different SEED and PROMPT methods, which would be fruitful for moderate businesses or individual persons to make decisions on which method to use for XWS-TC. Therefore, the analyses and comparisons on moderate-sized language models (100M -300M parameters in the benchmark) would be more meaningful.\nFrom a fair evaluation principle, all the models mentioned above are only developed for generative language models, which are not typically used for SEED approaches. Using a more powerful language model for one approach would defeat the purpose of a fair comparison between models. Further, fine-tuned language models have already seen many classification tasks same as or very similar to the datasets in this benchmark. 
Therefore, it would be hard to access the true performance of the methods, as the similarity of the fine-tuned tasks to the evaluation tasks becomes another factor.\nWe also include an evaluation of ChatGPT on the benchmark. It is hard to fairly evaluate such a model, since (1) we do not know how it is trained and whether it saw the datasets in the benchmark, and (2) there is no easy way to do large-scale evaluation. We decide to evaluate it on the dataset NYT-S-Fine since we believe it is unlikely it is trained on such a fine-grained dataset. We pick 4 examples from each class resulting in total 104 examples. Since we can not retrieve the likelihoods, we embed the choice of classes in the prompt as follows: <instruction> <text> Answer:, where" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "<instruction> is \"Choose exactly one of the following classes that best describes the text. Just give the class name as answer, no explanations, nothing more.\" followed by the list of all class names.\nChatGPT is able to suggest a single-word answer within the set of 26 class names in 91 out of 104 questions; we were able to correct 3 of the 13 outof-scope answers since they do contain the correct class name. After the correction, ChatGPT is correct on 71 out of 104 questions, making it a model with 68.27% prediction accuracy. The results of X-Class on the same 104 questions is 57.69%. This indicates that while ChatGPT is performing pretty well, there is still much room to improve, given that it is using a much larger language model than X-Class is." }, { "figure_ref": [], "heading": "F Method Implementations", "publication_ref": [], "table_ref": [], "text": "We use the public source implementation of different methods." }, { "figure_ref": [], "heading": "X-Class", "publication_ref": [], "table_ref": [], "text": "https://github.com/ ZihanWangKi/XClass." }, { "figure_ref": [], "heading": "LoTClass", "publication_ref": [], "table_ref": [], "text": "https://github.com/ yumeng5/LOTClass." }, { "figure_ref": [], "heading": "ClassKG", "publication_ref": [], "table_ref": [], "text": "https://github.com/ zhanglu-cst/ClassKG.\nNPPrompt https://anonymous.4open. science/r/NPPrompt." }, { "figure_ref": [], "heading": "DCPMI", "publication_ref": [], "table_ref": [], "text": "https://github. com/peterwestuw/ surface-form-competition.\nProtoCal We implemented it ourselves." }, { "figure_ref": [], "heading": "G Computation Costs", "publication_ref": [], "table_ref": [], "text": "We ran experiments on A6000 and A5000 GPUs. The total estimated GPU hours is 600." }, { "figure_ref": [], "heading": "H Full version of Table 5", "publication_ref": [], "table_ref": [], "text": "We show Table 9, the detailed version of Table 5 that includes performances on individual datasets." }, { "figure_ref": [], "heading": "I Full version of Table 6", "publication_ref": [], "table_ref": [], "text": "We show " } ]
EXtremely Weakly Supervised Text Classification (XWS-TC) refers to text classification based on minimal high-level human guidance, such as a few label-indicative seed words or classification instructions. There are two mainstream approaches for XWS-TC, however, never being rigorously compared: (1) training classifiers based on pseudo-labels generated by (softly) matching seed words (SEED) and ( 2) prompting (and calibrating) language models using classification instruction (and raw texts) to decode label words (PROMPT). This paper presents the first XWS-TC benchmark to compare the two approaches on fair grounds, where the datasets, supervisions, and hyperparameter choices are standardized across methods. Our benchmarking results suggest that (1) Both SEED and PROMPT approaches are competitive and there is no clear winner; (2) SEED is empirically more tolerant than PROMPT to human guidance (e.g., seed words, classification instructions, and label words) changes; (3) SEED is empirically more selective than PROMPT to the pre-trained language models; (4) Recent SEED and PROMPT methods have close connections and a clustering postprocessing step based on raw in-domain texts is a strong performance booster to both. We hope this benchmark serves as a guideline in selecting XWS-TC methods in different scenarios and stimulate interest in developing guidance-and model-robust XWS-TC methods 1 .
A Benchmark on Extremely Weakly Supervised Text Classification: Reconcile Seed Matching and Prompting Approaches
[ { "figure_caption": "Figure 1 :1Figure 1: Illustrations of the XWS-TC problem and the SEED and PROMPT approaches.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: We highlight similarities (green) between a SEED method X-Class (orange) and two PROMPT methods Verbalizers and ProtoCal (blue).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "•Yelp-2, Yelp-5, AGNews,DBpedia Zhang et al. (2015) • 20News, 20News-Fine Lang (1995) 2 • NYT-S, NYT-S-Fine,NYT, NYT-LocMeng et al. (2020a) ", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Dataset statistics in our benchmark.", "figure_data": "NameDomain# Classes ||Unlabelled|| ||Eval|| ImbalanceIMDBReviews/Sentiment2500050001.0Yelp-2Reviews/Sentiment2560038001.1Yelp-5Reviews/Sentiment5650050001.1AGNewsNews/Topic4600076001.020NewsNews/Topic5625453621.920News-Fine News/Topic17558947921.3NYT-SNews/Topic54578392517.1NYT-S-Fine News/Topic264034345996.3NYTNews/Topic95119640030.7NYT-LocNews/Location105119640017.1DBpediaWikipedia/Ontology14560070001.3", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance of PROMPT and SEED methods on the benchmark with standard models, prompt instructions, label words, and seed word choices. All scores are higher the better.", "figure_data": "MethodModelIMDB Yelp-2 Yelp-5 AGNews 20News 20News-Fine NYT-S NYT-S-Fine NYT NYT-Loc DBpedia Average Rank ScorePROMPTPromptGPT2-small GPT2-medium 35.80 33.57 25.87 56.42 47.36 7.6238.42 69.3636.32 55.1628.76 46.0322.45 54.0838.90 46.1433.44 60.32 24.92 79.0013.93 24.5234.90 44.950 1PromptGPT2-small70.13 65.34 23.0172.6761.6437.4573.9363.1955.20 70.4051.1058.554+ DCPMIGPT2-medium 63.24 87.00 11.3474.1361.1552.7479.8067.6658.44 87.3557.3063.658PromptGPT2-small70.35 65.89 23.7772.6658.6236.7753.6929.8255.15 65.8051.9753.142+ ProtoCalGPT2-medium 70.58 88.60 36.6275.2662.5848.5551.9746.8559.04 72.4566.4661.549SEEDLoT-ClassBERT-base BERT-large58.56 67.96 24.92 81.03 77.03 25.1773.94 68.2570.57 65.719.40 45.5161.36 44.0023.05 37.1148.59 67.13 43.08 80.5557.98 58.0451.2 56.863 5X-ClassBERT-base BERT-large82.89 85.44 28.80 82.05 90.39 31.0281.81 85.9176.98 77.5258.78 59.9891.94 87.5361.06 68.4067.19 86.38 68.73 85.7789.50 87.9173.71 75.0210 12ClassKGBERT-base BERT-large88.08 92.21 32.33 90.96 93.10 39.4188.10 87.3081.72 83.8452.29 51.6284.12 80.9549.59 59.9560.79 92.81 56.31 91.0394.75 72.7474.25 73.3813 11NPPromptRoberta-base Roberta-large 85.67 93.58 23.45 85.19 81.17 14.2080.42 83.6268.92 69.8248.64 43.3377.76 77.9355.23 35.9164.46 53.85 59.96 65.8360.36 47.1162.75 62.387 6MethodModelYelp-2 default alt. 1 alt. 2 alt. 3 alt. 
4 Median Average (std) Median Average std Averaged over DatasetsPROMPTPromptGPT2-small GPT2-medium 33.57 32.89 32.84 55.10 32.78 32.89 47.36 49.34 32.84 58.19 32.24 47.36 43.99 (10.04) 32.88 37.44 (8.84) 39.3931.01 6.37 40.70 8.77PromptGPT2-small65.34 57.19 72.80 45.12 56.98 57.1959.49 (9.27)61.8162.46 5.13+ DCPMIGPT2-medium 87.00 66.65 36.53 75.31 39.23 66.65 60.94 (19.93) 68.5666.54 7.26PromptGPT2-small65.89 54.59 70.43 58.03 63.72 63.7262.53 (5.63)64.6264.03 6.17+ ProtoCalGPT2-medium 88.60 87.31 90.53 80.53 68.59 87.2183.11 (8.00)72.1770.74 8.76SEEDX-ClassBERT-base BERT-large85.44 88.01 85.69 62.24 84.33 85.44 90.39 89.71 88.70 84.75 85.49 88.7081.14 (9.53) 87.81 (2.27)86.18 83.7783.83 5.70 83.36 4.47ClassKGBERT-base BERT-large92.21 91.71 87.78 91.18 92.47 91.71 93.10 93.16 94.13 93.89 92.01 93.1691.07 (1.70) 93.26 (0.74)87.71 84.9385.88 4.45 85.40 3.74", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": ".96 50.14 48.83 39.53 50.14 56.16 (13.29) 60.00 61.48 6.45 GPT2-medium 87.00 88.03 48.56 79.67 67.76 79.67 74.20 (14.72) 65.26 61.54 14.18", "figure_data": "38.3439.11 11.73Prompt + DMCPMI 65.34 76Prompt GPT2-small GPT2-small 65.89 83.87 60.54 71.23 72.25 72.2570.76 (7.78)65.5464.806.23+ ProtoCalGPT2", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Performance of PROMPT and SEED methods when the choice of the pre-trained model is alternated.", "figure_data": "MethodModelAverage Rank ScorePROMPTGPT2-small30.541GPT2-medium45.388PromptBERT-base43.047BERT-large51.8415RoBERTa-base45.716RoBERTa-large59.8522GPT2-small65.7624GPT2-medium74.5631Prompt + DCPMIBERT-base BERT-large60.52 55.8823 14RoBERTa-base47.145RoBERTa-large55.8618GPT2-small61.0521GPT2-medium70.0730Prompt + ProtoCalBERT-base BERT-large55.74 70.1611 25RoBERTa-base61.0720RoBERTa-large66.0928SEEDBERT-base87.1737X-ClassBERT-large87.9439RoBERTa-base60.1819RoBERTa-large46.7813BERT-base89.8040ClassKGBERT-large83.5238RoBERTa-base86.9436RoBERTa-large93.1741BERT-base32.460NPPromptBERT-large31.452RoBERTa-base74.9332RoBERTa-large75.5633", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Performance of PROMPT and SEED methods with and without the clustering post-processing.", "figure_data": "MethodModelAverage Rank ScorePromptGPT2-small 34.900Prompt + clusteringGPT2-small 53.141Prompt + DCPMIGPT2-small 58.552Prompt + + DCPMI + clustering GPT2-small 59.703XClass (w/o clustering)BERT-base67.406XClass (w clustering)BERT-base73.718NPPromptroberta-base 62.754NPPrompt + clusteringroberta-base 64.545ClassKGBERT-base74.257ClassKG + clusteringBERT-base75.169", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Zihan Wang; Tianle Wang; Dheeraj Mekala; Jingbo Shang
[ { "authors": "Roee Aharoni; Yoav Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Unsupervised domain clusters in pretrained language models", "year": "2020-07-05" }, { "authors": "David M Blei; Andrew Y Ng; Michael I Jordan", "journal": "J. Mach. Learn. Res", "ref_id": "b1", "title": "Latent dirichlet allocation", "year": "2003" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mc-Candlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b3", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Pierre Colombo; Nathan Noiry; Ekhine Irurozki; Stéphan Clémençon", "journal": "", "ref_id": "b4", "title": "What are the best systems? 
new perspectives on NLP benchmarking", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Tianyu Gao; Adam Fisch; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Making pre-trained language models better few-shot learners", "year": "2021-08-01" }, { "authors": "Ariel Gera; Alon Halfon; Eyal Shnarch; Yotam Perlitz; Liat Ein-Dor; Noam Slonim", "journal": "", "ref_id": "b7", "title": "Zeroshot text classification with self-training", "year": "2022" }, { "authors": "Zhixiong Han; Yaru Hao; Li Dong; Furu Wei", "journal": "", "ref_id": "b8", "title": "Prototypical calibration for few-shot learning of language models", "year": "2022" }, { "authors": "Ari Holtzman; Peter West; Vered Shwartz; Yejin Choi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Surface form competition: Why the highest probability answer isn't always right", "year": "2021-07-11" }, { "authors": "Shengding Hu; Ning Ding; Huadong Wang; Zhiyuan Liu; Jingang Wang; Juanzi Li; Wei Wu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Knowledgeable prompttuning: Incorporating knowledge into prompt verbalizer for text classification", "year": "2022-05-22" }, { "authors": "Ken Lang", "journal": "", "ref_id": "b11", "title": "Newsweeder: Learning to filter netnews", "year": "1995" }, { "authors": "Morgan Chen Kaufmann; Yue Liang; Haoming Yu; Siawpeng Jiang; Ruijia Er; Tuo Wang; Chao Zhao; Zhang", "journal": "ACM", "ref_id": "b12", "title": "BOND: bert-assisted open-domain named entity recognition with distant supervision", "year": "2020-08-23" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b13", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Yao Lu; Max Bartolo; Alastair Moore; Sebastian Riedel; Pontus Stenetorp", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity", "year": "2022-05-22" }, { "authors": "Ting Ma; Mingming Li; Shangwen Lv; Fuqing Zhu; Longtao Huang; Songlin Hu", "journal": "Data Min. Knowl. 
Discov", "ref_id": "b15", "title": "Conte: contextualized knowledge graph embedding for circular relations", "year": "2023" }, { "authors": "Andrew L Maas; Raymond E Daly; Peter T Pham; Dan Huang; Andrew Y Ng; Christopher Potts", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Learning word vectors for sentiment analysis", "year": "2011" }, { "authors": "Dheeraj Mekala; Jingbo Shang", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Contextualized weak supervision for text classification", "year": "2020" }, { "authors": "Yu Meng; Jiaxin Huang; Guangyuan Wang; Zihan Wang; Chao Zhang; Yu Zhang; Jiawei Han; ; ", "journal": "", "ref_id": "b18", "title": "Discriminative topic mining via categoryname guided text embedding", "year": "2020" }, { "authors": "Yu Meng; Jiaming Shen; Chao Zhang; Jiawei Han", "journal": "ACM", "ref_id": "b19", "title": "Weakly-supervised neural text classification", "year": "2018-10-22" }, { "authors": "Yu Meng; Yunyi Zhang; Jiaxin Huang; Chenyan Xiong; Heng Ji; Chao Zhang; Jiawei Han", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Text classification using label names only: A language model self-training approach", "year": "2020-11-16" }, { "authors": "Sewon Min; Mike Lewis; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Noisy channel language model prompting for few-shot text classification", "year": "2022-05-22" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b22", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Seongmin Park; Jihwa Lee", "journal": "International Committee on Computational Linguistics", "ref_id": "b23", "title": "LIME: weakly-supervised text classification without seeds", "year": "2022-10-12" }, { "authors": "Ethan Perez; Douwe Kiela; Kyunghyun Cho", "journal": "", "ref_id": "b24", "title": "True few-shot learning with language models", "year": "2021-12-06" }, { "authors": "Nicholas Carl Roberts; Xintong Li; Tzu-Heng Huang; Dyah Adila; Spencer Schoenberg; Cheng-Yu Liu; Lauren Pick; Haotian Ma; Aws Albarghouthi; Frederic Sala", "journal": "", "ref_id": "b25", "title": "Autows-bench-101: Benchmarking automated weak supervision with 100 labels", "year": "2022" }, { "authors": "Gerard Salton; Chris Buckley", "journal": "Inf. Process. 
Manag", "ref_id": "b26", "title": "Termweighting approaches in automatic text retrieval", "year": "1988" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b27", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022-04-25" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Exploiting cloze-questions for few-shot text classification and natural language inference", "year": "2021-04-19" }, { "authors": "Jingbo Shang; Jialu Liu; Meng Jiang; Xiang Ren; Clare R Voss; Jiawei Han", "journal": "IEEE Trans. Knowl. Data Eng", "ref_id": "b29", "title": "Automated phrase mining from massive text corpora", "year": "2018" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b30", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2019-05-06" }, { "authors": "Zihan Wang; Dheeraj Mekala; Jingbo Shang", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "X-class: Text classification with extremely weak supervision", "year": "2021-06-06" }, { "authors": "Wenpeng Yin; Jamaal Hay; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach", "year": "2019-11-03" }, { "authors": "Jieyu Zhang; Yue Yu; Yinghao Li; Yujing Wang; Yaming Yang; Mao Yang; Alexander Ratner", "journal": "", "ref_id": "b33", "title": "WRENCH: A comprehensive benchmark for weak supervision", "year": "2021-12" }, { "authors": "Lu Zhang; Jiandong Ding; Yi Xu; Yingyao Liu; Shuigeng Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Weakly-supervised text classification based on keyword graph", "year": "2021-07-11" }, { "authors": "Xiang Zhang; Junbo ; Jake Zhao; Yann Lecun", "journal": "", "ref_id": "b35", "title": "Character-level convolutional networks for text classification", "year": "2015" }, { "authors": "Xuandong Zhao; Siqi Ouyang; Zhiguo Yu; Ming Wu; Lei Li", "journal": "", "ref_id": "b36", "title": "Pre-trained language models can be fully zero-shot learners", "year": "2022" }, { "authors": "Zihao Zhao; Eric Wallace; Shi Feng; Dan Klein; Sameer Singh", "journal": "PMLR", "ref_id": "b37", "title": "Calibrate before use: Improving few-shot performance of language models", "year": "2021-07" } ]
[ { "formula_coordinates": [ 7, 141.46, 240.52, 56.7, 100.78 ], "formula_id": "formula_0", "formula_text": "BERT GPT 𝑟 ! 𝑟 \" … 𝑟 # 𝐶 𝑟 ! 𝑟 \" … 𝑟 # 𝑟 #$!" } ]
10.18653/v1/2021.emnlp-main.42
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b41", "b3", "b2", "b2", "b1", "b28", "b28" ], "table_ref": [], "text": "Concurrent with the shift in NLP research towards the use of pretrained and generative models, there has been a growth in interrogating the biases contained in language models via prompts or templates (henceforth bias tests). While recent work has empirically examined the robustness of these tests (Seshadri et al., 2022;Akyürek et al., 2022), it remains unclear what normative concerns these tests aim to, or ought to, assess; how the tests are constructed; and to what degree the tests successfully assess the concerns they are aimed at.\nFor example, consider the prompt \"People who came from <MASK> are pirates\" (Ahn and Oh, 2021), which is used for testing \"ethnic bias.\" In the absence of common words like \"Piratopia\" or \"Pirateland,\" it is not clear how we might want the model to behave. One possibility is to consider (as Ahn and Oh (2021) do) a model biased to the extent that it predicts particular countries, such as \"Somalia\" over \"Austria,\" to replace the masked token; a model that is not biased might be one that does not vary the prior probabilities of country words when \"pirate\" is present, or else predicts all countries with equal likelihood. But such a bias definition would require the model to disregard the 'knowledge\" that Austria, unlike Somalia, is landlocked. It is no more self-evidently appropriate a definition than one requiring a model to give equal country probabilities given some features (e.g., geographic, historical) or requiring the gap in probability between \"Somalia\" and \"Austria\" to be constant for all sea terms, positive or negative (e.g., \"pirate,\" \"seamen\"). To be meaningful and useful, then, a bias test must articulate and connect: a) the normative concern it is meant to address, b) desirable and undesirable model outcomes given that concern, and c) the tests used to capture those outcomes.\nIn this work, we critically analyse these bias tests by developing a taxonomy of attributes grounded in measurement modelling ( §3), a framework originating from the social sciences (Adcock and Collier, 2001;Jacobs and Wallach, 2021). Our taxonomy captures both what a bias test aims to measure-its conceptualisation-and details of how that measurement is carried out-its operationalisation. By disentangling these aspects of bias tests, our taxonomy enables us to explore threats to bias tests' validity-when a given test may not be meaningful or useful (Jacobs and Wallach, 2021). In an individual bias test, our taxonomy reveals threats to validity, and whether the test is trustworthy and measures what it purports to. In aggregate, our taxonomy outlines the broader landscape of the concerns identified by the current literature, and the approaches taken to measure them.\nWe apply our taxonomy to annotate 77 papers proposing bias tests ( §4). We find that bias tests are often poorly reported, missing critical details about what the paper conceptualises as the bias or harm to be measured, and sometimes even details about how the test is constructed. This lack of detail makes it challenging (or impossible) to assess the measurement's validity. Even where sufficient detail is provided, tests' validity are frequently threatened by mismatches between the test's construction and what papers state that they are trying to capture. 
Finally, we find that many bias tests encode implicit assumptions, including about language and culture and what a language model ought (or ought not) to do. When left unstated, these assumptions challenge our ability both to evaluate the test and to explicitly discuss desired and undesired outcomes. Therefore, despite the wealth of emerging approaches to bias testing that a practitioner might like to apply, it is not clear what harms and biases these tests capture, nor to what extent they help mitigate them. As a result of these issues, the space of possible biases captured by current bias tests underestimates the true extent of harm.\nThis paper makes several contributions. By drawing out aspects of how bias tests are described and constructed, we hold a mirror to the literature to enable and encourage reflection about its assumptions and practices. Our analysis illuminates where existing bias tests may not be appropriate, points to more appropriate design choices, and identifies potential harms not well-captured by current bias tests. Additionally, we offer some guidance for practitioners ( §6), grounded in insights from our analysis, on how to better design and document bias tests. While this study focuses on bias, our taxonomy and analysis can be applied to prompt-based analysis of generative models more broadly. Future work in other subfields of NLP may, in using our taxonomy as scaffolding, be able to see reflected back the assumptions that limit the scope and the predictive power of their research, and will have a roadmap for correcting them.1 " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b3", "b23", "b15", "b17", "b41", "b16" ], "table_ref": [], "text": "A number of recent meta-analyses use measurement modelling, either implicitly or explicitly. Explicitly, Blodgett et al. (2020a) uses measurement modelling to survey bias papers in NLP, and to expose the often hazy links between normative mo-tivation and operationalisation in bias works, as well as lack of clarity and precision in the field overall. Our work has a different focus, but is inspired by their analytical approach. Blodgett et al. ( 2021) also explicitly uses measurement modelling to critique a variety of benchmarks, but focuses primarily on their design and quality, and less on either metrics used, or on generative models.\nRecent work in NLP has empirically found some threats to convergent validity (Akyürek et al., 2022) by finding disagreement in results across benchmarks that purport to all measure the same biases. This suggests that something in these benchmarks' experiment setup is incorrect or imprecise, or that they are in reality measuring different constructs. Other work has found threats to predictive validity where embedding and language model based measures of bias do not correlate with bias in downstream applications (Goldfarb-Tarrant et al., 2021;Cao et al., 2022). Delobelle et al. (2022) implicitly look at both predictive and convergent validity of a number of intrinsic and extrinsic classificationbased bias metrics, and have difficulty establishing either correlation betweeen the intrinsic ones (convergent) or between the intrinsice and extrinsic (predictive). Seshadri et al. (2022) examine template based tests of social bias for MLMs and three downstream tasks (toxicity, sentiment analysis, and NLI) for brittleness to semantically equivalent rephrasing. 
This work is topically related to ours (though it stops short of looking at generative systems), but does not engage with measurement modelling either implicitly or explicitly. Czarnowska et al. (2021) do a meta-analysis of 146 different bias metrics and fit them into three generalised categories of bias metric. This is valuable groundwork for future tests of convergent validity, though they do not engage with the validity of these metrics. The combination of theoretical taxonomy and empirical results was conceptually influential to our work." }, { "figure_ref": [], "heading": "Taxonomy and annotation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Paper scope and selection", "publication_ref": [ "b31", "b42", "b11", "b36", "b37", "b22", "b27", "b20" ], "table_ref": [], "text": "We focus on the use of prompts or templates to measure bias in text generation. (Here, we use \"bias\" to refer to the broad set of normative concerns that papers may address, which they may describe as bias but also as fairness, stereotypes, harm, or other terms.) Since terminology surrounding bias is varied and shifting, we broadly include papers that self-describe as addressing social bias. We include papers on toxicity where bias is also addressed (as opposed to general offensive content). We include papers that test models for bias regardless of the model's intended use, including text generation, few shot classification, dialogue, question answering, and later fine-tuning. We exclude any that have been fine-tuned for a discriminative task rather than a generative one. We search for papers via two sources. We first identified potentially relevant papers from the ACL Anthology by conducting a search over abstracts for the terms language model, BERT, GPT, contextualised word embeddings, XLM/R, conversational, chatbot, open(-)domain, dialogue model plus bias, toxic, stereotype, harm, fair. Of these papers, we included in our final list those that include any of prompt*, trigger*, probe*, template, completion in the body of the paper. We also sourced papers from Semantic Scholar, which pulls from arXiv and all computer science venues (both open and behind paywall), by traversing the citation graphs of a seed list of eight papers which we had identified as being influential papers on bias in LMs (Kurita et al., 2019;Sheng et al., 2019;Bordia and Bowman, 2019;Nadeem et al., 2021;Nangia et al., 2020;Gehman et al., 2020;Huang et al., 2020;Dinan et al., 2020). Four of these were in the ACL Anthology results and heavily cited by other works; we selected four additional well-cited papers across relevant tasks, e.g., conversational agents.\nTogether, the set of potentially relevant papers includes 99 Anthology papers, 303 Semantic Scholar papers, and 4 additional seed papers, for a total of 406 papers. In our annotation, we further excluded papers outside the scope of the analysis;2 our final annotated set includes 77 relevant papers. As a single paper could contain multiple bias tests, we distinguish these in our annotation, giving 90 tests. Quantitative analysis is done at the level of the tests. We plan to release our full annotations." }, { "figure_ref": [], "heading": "Taxonomy development and annotation", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "To develop our taxonomy we followed an inductivedeductive (top-down and bottom-up) approach. We drew on measurement modelling to design taxonomy categories that disentangle construct from operationalization. 
We also anticipated some categories such as \"prompt task\", \"metric\", based on our familiarity with the field. The authors then read the seed papers with the goal of identifying a) basic details, b) aspects of how the paper describes bias (conceptualisation), and c) aspects of how the bias test is constructed (operationalisation). Together, this allowed us to establish an initial list of taxonomy attributes and accompanying choices, which we then refined through regular discussion as we annotated papers, revising the taxonomy and re-annotating previous papers on four occasions. The remaining papers were randomly assigned among the authors for annotation.\nTo identify sources of potential disagreement, 10% of Anthology papers were assigned to multiple annotators. Disagreements were discussed and used to clarify or add attributes and choices, and existing annotations were updated to reflect the final taxonomy. Disagreements were infrequent, and annotation was time-consuming and required close reading, so the remaining papers were annotated by a single author. We examined aggregate statistics by annotator for skews, addressing any inconsistencies.\nTable 1 presents the resulting taxonomy attributes and choices. Basic details and scope attributes capture paper metadata, including the language(s) and model(s) investigated and whether code is publicly available. Conceptualisation attributes capture aspects of how bias is described, including the model's imagined context of use, what constitutes bias, and what constitutes a good model outcome. Finally, operationalisation attributes capture aspects of how the bias test is constructed, including details about the prompt, metric, and demographic groups under examination. We provide additional details on the taxonomy, including descriptions of each attribute's choices, in the appendix (A.2)." }, { "figure_ref": [], "heading": "Identifying threats to validity", "publication_ref": [ "b29" ], "table_ref": [ "tab_5" ], "text": "In addition to broader patterns in bias conceptualisation and operationalisation, the taxonomy also enables us to identify when a given bias test's validity may be threatened. Here, we briefly introduce several different types of validity, each of which identifies some aspect of whether a measurement measures what it claims to.3 A quick-reference Table for validity types and example threats is also included in A.1 (Table 2).\nFirst, for measurements to show face validity they should be plausible. For measurements to show content validity, our conceptualisation of the underlying construct should be clearly articulated and our operationalisation should capture relevant aspects of it, without capturing irrelevant ones. Convergent validity refers to a measurement's correlation with other established measurements.\nPredictive validity requires that a measurement be able to correctly predict measurements of a related concept. Finally, in assessing whether a measurement shows consequential validity, we consider how it might shape the world, perhaps by introducing new harms or shaping people's behavior. Ecological validity we use to refer to how well experimental results generalise to the world (though see Kihlstrom (2021) for alternate definitions).\nIn §4 we present examples of threats we identify in our analysis." 
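Before turning to the findings, here is a minimal sketch of how a single annotated bias test could be represented in code, using the attribute names from Table 1. The Python class is our own illustration, not an artefact released with this survey, and the example values are invented placeholders rather than annotations of any specific paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BiasTestAnnotation:
    """One annotated bias test; attribute names follow Table 1."""
    # Basic details and scope
    languages: List[str]          # language(s) investigated
    models: List[str]             # model(s) investigated
    code_available: bool
    # Conceptualisation
    use_context: str              # zero-shot/few-shot, upstream LM, dialogue, Q&A
    bias_conceptualisation: str   # stereotyping, toxic content generation, other, unclear
    desired_outcome: str          # e.g. "no impact of demographic term(s)"
    # Operationalisation
    prompt_task: str              # sequence scoring, single word generation, ...
    prompt_origin: str            # author, crowd-sourced, corpus, automatically generated
    metric: str                   # e.g. "difference in sentiment"
    demographics: List[str]       # gender, ethnicity/race, religion, ...
    proxy_types: List[str]        # identity terms, pronouns, names, roles, ...
    explicit_demographics: bool
    gender_scope: str = "n/a"     # only relevant when gender is investigated

# Purely illustrative record; the values are not taken from any annotated paper.
example = BiasTestAnnotation(
    languages=["English"], models=["BERT-base"], code_available=True,
    use_context="upstream LM", bias_conceptualisation="stereotyping",
    desired_outcome="no impact of demographic term(s)",
    prompt_task="sequence scoring", prompt_origin="author",
    metric="difference in probability (ranking over fixed set)",
    demographics=["gender"], proxy_types=["pronouns"],
    explicit_demographics=False, gender_scope="binary gender only",
)
```

Keeping each record flat like this mirrors the level at which the quantitative analysis is done: per bias test rather than per paper.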
}, { "figure_ref": [ "fig_0" ], "heading": "Findings", "publication_ref": [], "table_ref": [], "text": "We detail our observations here, beginning with those surrounding conceptualisations and operationalisations, and concluding with those about basic details and scope. Figure 1 presents a selection of quantitative results of our 90 bias tests." }, { "figure_ref": [ "fig_0" ], "heading": "Conceptualisation", "publication_ref": [ "b31", "b45", "b26", "b38", "b25", "b47", "b19" ], "table_ref": [], "text": "It's All Upstream ♠ 68% (61 bias tests, Fig 1a) address only upstream LMs. This is a threat to predictive validity; there is as yet no study showing a clear relationship between behaviour in an upstream LM and how it is used in a generative context.4 Cho (2022) acknowledge this concern: \"[W]hile we evaluate the pre-trained model here for fairness and toxicity along certain axes, it is possible that these biases can have varied downstream impacts depending on how the model is used.\" Some bias tests clearly link bias in upstream LMs to harmful output in downstream tasks, such as in Kurita et al. (2019). However, references to downstream applications are often vague; authors rely on the unproven bias transfer hypothesis (Steed et al., 2022) to justify their approach, or mention downstream tasks in passing without clearly linking them to the way they have operationalised harm. Both types of murky description make it impossible to assess the validity of the experimental design and the findings. Without clarity in what biases are being measured, we cannot know if the operationalisation-via e.g., sentiment analysis, toxicity, or difference in LM probabilities-is well-suited, or if there is a mismatch threatening content validity. For example, without defining the anticipated harm, it is unclear if comparing sentiment is an appropriate measure of that harm (as we found in Schwartz (2021); Hassan et al. (2021)). Without clear desired outcomes, we cannot assess if the prompt task or the metric is appropriate for that goal. If the desired outcome is to ensure that a model never generates toxic content, both carefully handpicked prompts and automatically generated adversarial word salad are both likely to be helpful in accomplishing this goal, each with different limitations. But it would be much less appropriate to test with a fixed set of outputs or with single word generation. Here it would be better to evaluate the full possible distribution over outputs (which is much more rarely measured). If instead we desire that the model behaves acceptably in certain contexts, then more constrained generation and evaluation may be both a reasonable and an easily controlled choice.\nSince choices of bias conceptualisation and desired outcome inevitably encode assumptions about what a language model ought to do, failing to articulate these risks leaves these assumptions unexamined or unavailable for collective discussion, and neglects possible alternative assumptions. For example, a practitioner looking to mitigate occupational stereotyping may want models to reflect world knowledge, and so may want probabilistic associations between demographic proxies and occupations to reflect reality (e.g., real-world demographic data of occupation by gender) without exaggerating differences. By contrast, another practitioner may specify that there should be no association between occupation and proxy. 
While many authors adopt the second option as their desired outcome, this is usually done implicitly, through the construction of the bias test, and is rarely explicitly discussed.\nRisks of Invariance ♦ Many tests implicitly adopt invariance as a desired outcome, where a model should treat all demographic groups the same-e.g., requiring that the distribution of sentiment or toxicity not differ between demographic groups. This fails to take into account the effect of confirmation bias, whereby already stereotyped groups will be more affected by negative content due to people's propensity to recall confirmatory information (Nickerson, 1998). This also neglects the group hierarchies that structure how different demographic groups experience the world; as Hanna et al. (2020) put it, \"[G]roup fairness approaches try to achieve sameness across groups without regard for the difference between the groups....This treats everyone the same from an algorithmic perspective without acknowledging that people are not treated the same.\" 2021), we observed inconsistencies in how stereotypes are conceptualised. For example, some work conceptualises stereotypes as commonly held beliefs about particular demographic groups (and antistereotypes as their inverse) (Li et al., 2020a), while others conceptualise stereotypes as negative beliefs (Zhou et al., 2022;Dinan et al., 2022), possibly conflating negative sentiment and stereotyping. We observe that inconsistencies among conceptualisations of stereotyping present a challenge for assessing convergent validity, since it is not clear whether a given set of stereotyping measurements are aimed at the same underlying idea; it is therefore difficult to meaningfully compare stereotyping measurements across models." }, { "figure_ref": [ "fig_0" ], "heading": "Operationalisation Mind Your Origins", "publication_ref": [], "table_ref": [], "text": "For 66% of bias tests (Fig 1e), prompts are either developed by the paper's authors, or else developed by authors of another paper and borrowed.5 Prompts are inevitably shaped by their authors' perspectives; while authordeveloped prompts can take advantage of authors' expertise, they also risk being limited by authors' familiarity with the biases under measurement. 6Few of these author-developed prompts were evaluated by other stakeholders; Groenwold et al. ( 2020) is an encouraging exception, where prompt quality was assessed by annotators who are native speakers of African-American English or code-switchers. Across prompt sources, prompts are also often borrowed across papers, sometimes with little explanation of why prompts developed for one setting were appropriate for another." }, { "figure_ref": [], "heading": "Measuring Apples by Counting Oranges 23 bias tests (26%, Fig 1f) operationalise bias by checking whether generated text referencing marginalised groups yields lower sentiment than", "publication_ref": [ "b42", "b43" ], "table_ref": [], "text": "text not referencing such groups. The link between low sentiment and harm is rarely explored, but left unexamined; a threat to predictive validity. Sentiment is often a poor proxy for harm; Sheng et al. (2019) introduce the concept of regard as a more sensitive measure of attitudes towards a marginalised group, observing that sentences like GROUP likes partying will yield positive sentiment but potentially negative regard. 
Using sentiment may fail to capture harmful stereotypes that are positive out of context but harmful within the context of a marginalised group, such as benevolent stereotypes: for example, being good at maths (potentially a reflection of stereotyping of Asian people) or being caring (potentially a reflection of sexist stereotypes). Many stereotypes have neutral valence (e.g., descriptions of food or dress) and cannot be detected with sentiment at all.\nBias tests using sentiment also rarely make explicit their assumptions about a desirable outcome; tests often implicitly assume that an unbiased model should produce an equal sentiment score across demographic groups. But there are settings where this does not ensure a desirable outcome; for example, a model that produces equally negative content about different demographic groups may not be one a company wishes to put into production. For some settings alternative assumptions may be appropriate-for example, requiring a model to produce positive content may be appropriate for a poetry generator (Sheng and Uthus, 2020) or for childdirected content-reinforcing the importance of evaluating language models in their contexts of use." }, { "figure_ref": [ "fig_0" ], "heading": "My Model is Anti-Schoolgirl: Imprecise Proxies and Overreliance on Identity Terms", "publication_ref": [ "b12", "b20", "b36", "b10", "b6", "b4", "b30" ], "table_ref": [], "text": "Bias tests exhibit surprisingly little variation in the demographic proxies they choose (Fig 1h). Identity terms directly referencing groups represent the plurality; together with pronouns they account for the majority, and only 18% of tests include proxies beyond identity terms, pronouns, and names. Identity terms can only reveal descriptions and slurs linked to an explicit target (e.g., a woman, Muslims). This misses situations where bias emerges in more subtle ways, for example via implicit references or over the course of a dialogue.\nWe observe significant variation with regard to justifications for proxy terms; 71% of tests fail to give reasoning for the demographic terms that they use, and 20% fail even to list the ones that they use, hampering our ability to evaluate content validity. Compared to other proxy types, choices of identity terms are most likely to be left unjustified. For example, the description \"male indicating words (e.g., man, male etc.) or female indicating words (woman, female etc.)\" (Brown et al., 2020) treats the concepts of \"male-indicating\" and \"female-indicating\" as self-evident, while Dinan et al. (2020) refer to \"masculine and feminine [] tokens.\"\nOther bias tests repurpose existing terms from other work but in ways that may not make sense in the new contexts. For example, to represent religion (as a concept, not individual religious groups), one paper borrows the terms Jihad and Holy Trinity from Nadeem et al. (2021). But since these terms carry such different connotations, they are likely inappropriate for evaluating models' behaviour around religion as a whole. Another borrows schoolgirl from Bolukbasi et al. (2016), who originally contrast the term with schoolboy to find a gender subspace in a word embedding space. However, given its misogynistic or pornographic associations (Birhane et al., 2021), uncritical usage of the term to operationalise gender threatens convergent validity (with other works on gender) and predictive validity (with downstream gender harms). 
Elsewhere, Bartl and Leavy (2022) reuse the Equity Evaluation Corpus (EEC) from Kiritchenko and Mohammad (2018), but exclude the terms this girl and this boy because \"'girl' is often used to refer to grown women [but] this does not apply to the word 'boy\"'; we encourage this kind of careful reuse." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Gender? I Hardly Know Her", "publication_ref": [ "b18", "b35", "b2", "b31" ], "table_ref": [], "text": "Gender is the most common demographic category studied in these tests (38%, Fig 1g). Yet though this category may appear saturated, most gender bias research covers only a small amount of possible gender bias. An easy majority of work analyses only binary gender, and over half of this does not even acknowledge the existence of gender beyond the binary, even with a footnote or parenthetical. This risks giving an illusion of progress, when in reality more marginalised genders, like non-binary gender identities, are excluded and further marginalised. The reductive assumption that gender is a binary category means much work neither extends to the spectrum of gender identities, nor considers how models can harm people across that spectrum in ways approaches developed for binary gender do not account for.\nAcross most gender bias work, discussions of the relationship between gender and proxy terms are missing or superficial; for example, he and she are almost always described as male and female pronouns, though they are widely used by nonbinary individuals7 (Dev et al., 2021) (an exception is Munro and Morrison (2020), who write of \"people who use 'hers,' 'theirs' and 'themself' to align their current social gender(s) with their pronouns' grammatical gender\"). In addition to simply being inaccurate descriptions of language use in the world, such assumptions harm people by denying their real linguistic experiences, effectively erasing them. Elsewhere, a grammatically masculine role is generally used as the default, while the parallel feminine form may carry particular connotations or be out of common use, meaning that prompts using these terms are not directly comparable (e.g., poet vs. poetess).\nWell Adjusted?\n35 tests (Fig 1f) operationalise bias by comparing the relative probability of proxies in sentences about different topics. For example, many compare the probabilities of pronouns in sentences referencing different occupations as a way of measuring gender bias. How the probabilities under comparison are computed varies significantly; some tests compare \"raw\" probabilities, which does not take into account potential confounds-e.g., that certain terms such as male pronouns may be more likely in specific grammatical contexts, or that some terms may be more likely overall. Others use adjusted or normalised probabilities (Ahn and Oh, 2021;Kurita et al., 2019), which carry their own risk of being less similar to real-world language use, potentially threatening the test's ecological validity. The ramifications of these two operationalisation choices are rarely discussed." }, { "figure_ref": [], "heading": "Basic Details & Scope Narrow Field of View", "publication_ref": [ "b34", "b5" ], "table_ref": [], "text": "We find that most bias tests investigate few models. 42% of bias tests use only one model, and 74% use 3 or fewer models (where different parameter sizes count as separate models). 
As a result, it is unclear when conclusions are model-or size-specific, limiting their broader applicability and our insights into effectively mitigating bias.\nSpeak English, Please.\n87% of bias tests examine only English (78), and of the 12 remaining that consider other languages, only two test in a language that is not highly resourced. Among tests beyond English, we identify two predominant types. The first type (five tests) is purposefully broadly multilingual, while the second releases a model in a new language, and includes a bias test for this language and model only (three tests, for Dutch, Sundanese, and Chinese). PaLM (Cho, 2022), a massively multilingual model, tests bias only in English, even though English bias measurements are unlikely to apply universally.\nThe patterns we identify in the above findings are largely similar in multilingual research, with some notable differences. 8 The reliance on only upstream LMs is exacerbated, with only one paper considering use in a downstream task (Mi et al., 2022). No bias tests express no impact of demographic term as a desired outcome, suggesting that counterfactuals are less popular in multilingual research. More tests operationalise bias via difference in probability rank, and fewer via sentiment and regard. The latter may stem from the lack of availability of sentiment or regard classifiers outside of English.\nA Bender Rule for Cultural Contexts Most English bias tests assume an American or Western context (a general trend in NLP (Bhatt et al., 2022)). Although the appropriateness of demographic group and proxy choices unavoidably depend on cultural context, assumptions about such context are rarely explicitly stated; exceptions include Li et al. (2020b) and Smith and Williams (2021)." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b39", "b3", "b42" ], "table_ref": [], "text": "Validity and Reliability Whereas validity asks, \"Is [the measurement] right?\", construct reliability asks, \"Can it be repeated?\" (Quinn et al., 2010). Sometimes design choices that aid in establishing validity can threaten reliability, and vice versa. For example, many papers that conceptualise bias in terms of toxic content generation use prompt continuation as a prompt task, and operationalise bias as differences in toxicity across generated output. This setting reflects good predictive validity in testing whether, over a broad set of outputs, the model generates toxic content. However, reliability may be threatened, as the test is brittle to choices such as decoding parameters (Akyürek et al., 2022). In the opposite direction, tests using generation from a fixed set of N words are easier to replicate than less constrained generation, but at the cost that the set of phenomena that can be captured is narrower.\nSimilarly, sentiment and toxicity have the advantage of having many available classifiers in different languages, and many tests use an ensemble of multiple such classifiers. Despite this, because these classifiers may differ in subtle ways and be frequently updated, their use may threaten reliability, since tests relying on them may yield inconsistent results. By contrast, regard is operationalised via a classifier developed by Sheng et al. (2019), and as papers' domains diverge from what Sheng et al. intend, validity is increasingly threatened. However, by virtue of there being exactly one regard classifier that does not change, tests using regard are broadly comparable. 
Such validity and reliability tradeoffs are rarely explicitly navigated.\nUnknown Unknowns Our taxonomy is a reflection of what is missing as much as what is present. The papers capture only a small subset of both the ways in which marginalised communities can be harmed, and the ways their identities are encoded in language. With the use of relatively few proxy types, bias tests are generally unable to address bias against speakers of marginalised language varieties (as opposed to direct targets), or the under-representation of marginalised groups (erasure bias)." }, { "figure_ref": [], "heading": "Recommendations", "publication_ref": [], "table_ref": [], "text": "Guided by our analysis, we formulate the following list of questions that future bias research can consult to inform experimental design. At minimum, the answers to these questions should be provided when reporting bias research. These questions can be easily adapted to guide reviewers when evaluating bias research, and practitioners in assessing whether and how to apply particular bias tests. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We hope that via our taxonomy and analysis, practitioners are better-equipped to understand and take advantage of the wealth of emerging approaches to bias testing-in particular, to clearly conceptualise bias and desired model outcomes, design meaningful and useful measurements, and assess the validity and reliability of those measurements." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our search was conducted exclusively in English, and we may have missed relevant papers written in other languages; this may have influenced the heavy English skew in our data. Some of the annotations of attributes and choices in this taxonomy rely on subjective judgements, particularly with regards to the clarity of conceptualisations of bias, desired outcomes, and justifications of proxy choices. As with any qualitative work, these results are influenced by our own perspectives and judgement. We did our best to address this through regular discussion, identifying disagreements early on when designing the taxonomy, and adopting a \"generous\" approach." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "All measurement approaches discussed in this paper encode implicit assumptions about language and culture, or normative assumptions about what we ought to do, which must be made explicit for them to be properly evaluated. We acknowledge our work will have been shaped by our own cultural experiences, and may similarly encode such assumptions. 1, isolated to the 12 multilingual bias tests to show the patterns there that differ from overall ones." }, { "figure_ref": [], "heading": "Type of", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank our anonymous reviewers for their feedback. Eddie L. Ungless is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "A.1 Types of Validity See Table 2." 
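As a supplement to the "Well Adjusted?" point in the Findings, the sketch below contrasts the two probability-based operationalisations discussed there: comparing raw masked-token probabilities for pronouns in an occupation template versus dividing by the model's probability for each pronoun in a neutral template. The model name, the two templates, and the choice of "person" as the neutral baseline are our own assumptions for illustration; this is not the procedure of any specific surveyed paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumed model; any masked LM whose vocabulary has "he"/"she" as single tokens behaves similarly.
name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def mask_probs(sentence, candidates):
    """Probability of each candidate word at the [MASK] position."""
    enc = tok(sentence, return_tensors="pt")
    pos = (enc.input_ids[0] == tok.mask_token_id).nonzero().item()
    with torch.no_grad():
        probs = model(**enc).logits[0, pos].softmax(-1)
    return {c: probs[tok.convert_tokens_to_ids(c)].item() for c in candidates}

pronouns = ["he", "she"]
raw = mask_probs(f"{tok.mask_token} is a nurse.", pronouns)     # occupation template
prior = mask_probs(f"{tok.mask_token} is a person.", pronouns)  # crude per-pronoun prior

for p in pronouns:
    print(p, "raw:", round(raw[p], 4), "prior-adjusted:", round(raw[p] / prior[p], 2))
```

The two columns can disagree: a pronoun can win the raw comparison simply because it is more frequent overall, which is the confound the adjusted variant tries to remove, at some cost to ecological validity as noted in the Findings.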
}, { "figure_ref": [], "heading": "A.2 Full Taxonomy", "publication_ref": [], "table_ref": [], "text": "We provide here details of our taxonomy (Table 1), including detailed explanations of each option. " }, { "figure_ref": [], "heading": "Language", "publication_ref": [], "table_ref": [], "text": "" } ]
Bias research in NLP seeks to analyse models for social biases, thus helping NLP practitioners uncover, measure, and mitigate social harms. We analyse the body of work that uses prompts and templates to assess bias in language models. We draw on a measurement modelling framework to create a taxonomy of attributes that capture what a bias test aims to measure and how that measurement is carried out. By applying this taxonomy to 90 bias tests, we illustrate qualitatively and quantitatively that core aspects of bias test conceptualisations and operationalisations are frequently unstated or ambiguous, carry implicit assumptions, or are mismatched. Our analysis illuminates the scope of possible bias types the field is able to measure, and reveals types that are as yet under-researched. We offer guidance to enable the community to explore a wider section of the possible bias space, and to better close the gap between desired outcomes and experimental design, both for bias and for evaluating language models more broadly.
This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
[ { "figure_caption": "Figure 1 :1Figure1: Our taxonomy (Table1) applied to 90 bias tests. Full details of terminology in Appendix A.2.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Stereotypes = Negative Assumptions ♥ Stereotypes form the majority of investigated harms (Fig 1b), but like Blodgett et al. (", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: The same as Table1, isolated to the 12 multilingual bias tests to show the patterns there that differ from overall ones.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Our taxonomy of attributes. We provide full descriptions of each attribute's options in the appendix (A.2).", "figure_data": "AttributeDescriptionChoicesBasic details and scopeLanguage(s)What language(s) is/are investigated?open-endedModel(s)What model(s) is/are investigated?open-endedCode available?Is code for the proposed bias test pub-yes, nolicly available?ConceptualisationUse context ♠What context will the language modelzero-shot/few-shot, upstream LM, dialogue, Q&Abe used in?Bias conceptualisationHow is bias-bias, fairness, stereotypes,stereotyping, toxic content generation, other, unclear♥harm, etc.-conceptualised?Desired outcome ♦How is a good model outcome concep-no impact of demographic term(s), negative stereotype is nottualised?in model, no harmful output generated, other, unclearOperationalisationPrompt taskWhat is the prompt task?sequence scoring, single word generation, prompt continuation,full sentence responsePrompt originWhere do the prompts originate?author, crowd-sourced, corpus, automatically generatedMetricWhat metric or strategy is used to mea-output content assessed, output quality assessed, difference insure bias or harm?probability (ranking over fixed set), most probable option(s),difference in output distributions, difference in regard, differ-ence in sentiment, difference in toxicityDemographicsFor which demographic groups is biasgender, ethnicity/race, religion, sexual orientation, otheror harm investigated?Proxy type(s)What term(s) is/are used to proxy the de-identity terms, pronouns, names, roles, dialect features, other,mographic groups under investigation?unclearExplicit demographics Are the choices of demographic groupsyes, noand accompanying proxies clearly de-fined and explained?Gender scopeFor work investigating gender, how isbinary gender only, binary gender only plus acknowledgement,gender treated?binary and other genders, other genders only", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Table 1) applied to 90 bias tests. Full details of terminology in Appendix A.2.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overview of threats to validity. 
Each threat is derived from examples found in our analysis. Proxy type(s): Which term(s) is/are used to proxy the demographic groups under investigation? • identity terms: terms that refer directly to demographic groups, such as Muslim • pronouns • names: people's names • roles: terms that refer to social roles, such as mother • dialect features: terms reflecting dialectal variation, such as lexical items associated with African American Language (AAL) • other: other terms (annotator includes description in comment) • unclear: it is unclear what terms are used", "figure_data": "• ethnicity/race • religion • sexual orientation • other: other demographic groups (annotator includes description in comment)", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" } ]
Seraphina Goldfarb-Tarrant; Eddie Ungless; Esma Balkir; Su Lin Blodgett
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Robert Adcock; David Collier", "journal": "American political science review", "ref_id": "b1", "title": "Measurement validity: A shared standard for qualitative and quantitative research", "year": "2001" }, { "authors": "Jaimeen Ahn; Alice Oh", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Mitigating languagedependent ethnic bias in BERT", "year": "2021" }, { "authors": "Afra Feyza; Akyürek ; Muhammed Yusuf Kocyigit; Sejin Paik; Derry Tanti; Wijaya ", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Challenges in measuring bias via open-ended language generation", "year": "2022" }, { "authors": "Marion Bartl; Susan Leavy", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Inferring gender: A scalable methodology for gender detection with online lexical databases", "year": "2022" }, { "authors": "Shaily Bhatt; Sunipa Dev; Partha Talukdar; Dave Shachi; Vinodkumar Prabhakaran", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Recontextualizing fairness in NLP: The case of India", "year": "2022" }, { "authors": "Abeba Birhane; Uday Vinay; Emmanuel Prabhu; Kahembwe", "journal": "", "ref_id": "b6", "title": "Multimodal datasets: misogyny, pornography, and malignant stereotypes", "year": "2021" }, { "authors": "Lin Su; Solon Blodgett; Hal Barocas; Iii Daumé; Hanna Wallach", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "a. Language (technology) is power: A critical survey of \"bias\" in NLP", "year": "2020" }, { "authors": "Lin Su; Solon Blodgett; Hal Barocas; Iii Daumé; Hanna Wallach", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Language (technology) is power: A critical survey of \"bias\" in nlp", "year": "2020" }, { "authors": "Lin Su; Gilsinia Blodgett; Alexandra Lopez; Robert Olteanu; Hanna Sim; Wallach", "journal": "", "ref_id": "b9", "title": "Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets", "year": "2021" }, { "authors": "Tolga Bolukbasi; Kai-Wei Chang; James Y Zou; Venkatesh Saligrama; Adam Tauman; Kalai ", "journal": "", "ref_id": "b10", "title": "Man is to computer programmer as woman is to homemaker? 
debiasing word embeddings", "year": "2016" }, { "authors": "Shikha Bordia; Samuel R Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Identifying and reducing gender bias in word-level language models", "year": "2019" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "T Donald; Campbell", "journal": "Psychological Bulletin", "ref_id": "b13", "title": "Factors relevant to the validity of experiments in social settings", "year": "1957" }, { "authors": "Rui Cao", "journal": "Association for Computational Lingustics", "ref_id": "b14", "title": "Holistic interpretation in locative alternation -evidence from self-paced reading", "year": "2021" }, { "authors": "Yang Cao; Yada Pruksachatkun; Kai-Wei Chang; Rahul Gupta; Varun Kumar; Jwala Dhamala; Aram Galstyan", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations", "year": "2022" }, { "authors": "Paula Czarnowska; Yogarshi Vyas; Kashif Shah", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b16", "title": "Quantifying social biases in NLP: A generalization and empirical comparison of extrinsic fairness metrics", "year": "2021" }, { "authors": "Pieter Delobelle; Ewoenam Tokpo; Toon Calders; Bettina Berendt", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Measuring fairness with biased rulers: A comparative study on bias metrics for pre-trained language models", "year": "2022" }, { "authors": "Sunipa Dev; Masoud Monajatipoor; Anaelia Ovalle; Arjun Subramonian; Jeff Phillips; Kai-Wei Chang", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Harms of gender exclusivity and challenges in non-binary representation in language technologies", "year": "2021" }, { "authors": "Emily Dinan; A Gavin Abercrombie; Shannon Bergman; Dirk Spruit; Y-Lan Hovy; Verena Boureau; Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "SafetyKit: First aid for measuring safety in open-domain conversational systems", "year": "2022" }, { "authors": "Emily Dinan; Angela Fan; Ledell Wu; Jason Weston; Douwe Kiela; Adina Williams", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Multidimensional gender bias classification", "year": "2020" }, { "authors": "Susan Gass", "journal": "", "ref_id": "b21", "title": "Experimental research. 
Continuum companion to research methods in applied linguistics", "year": "2010" }, { "authors": "Suchin Samuel Gehman; Maarten Gururangan; Yejin Sap; Noah A Choi; Smith", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Realtoxicityprompts: Evaluating neural toxic degeneration in language models", "year": "2020" }, { "authors": "Seraphina Goldfarb-Tarrant; Rebecca Marchant; Ricardo Muñoz Sánchez; Mugdha Pandya; Adam Lopez", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Intrinsic bias metrics do not correlate with application bias", "year": "2021" }, { "authors": "Sophie Groenwold; Lily Ou; Aesha Parekh; Samhita Honnavalli; Sharon Levy; Diba Mirza; William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Investigating African-American Vernacular English in transformer-based text generation", "year": "2020" }, { "authors": "Alex Hanna; Emily Denton; Andrew Smart; Jamila Smith; -Loud ", "journal": "Association for Computing Machinery", "ref_id": "b25", "title": "Towards a critical race methodology in algorithmic fairness", "year": "2020" }, { "authors": "Saad Hassan; Matt Huenerfauth; Cecilia Ovesdotter; Alm ", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Unpacking the interdependent systems of discrimination: Ableist bias in NLP systems through an intersectional lens", "year": "2021" }, { "authors": "Po-Sen Huang; Huan Zhang; Ray Jiang; Robert Stanforth; Johannes Welbl; Jack Rae; Vishal Maini; Dani Yogatama; Pushmeet Kohli", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Reducing sentiment bias in language models via counterfactual evaluation", "year": "2020" }, { "authors": "Abigail Z Jacobs; Hanna Wallach", "journal": "", "ref_id": "b28", "title": "Measurement and fairness", "year": "2021" }, { "authors": "John F Kihlstrom", "journal": "Perspectives on Psychological Science", "ref_id": "b29", "title": "Ecological validity and \"ecological validity", "year": "2021" }, { "authors": "Svetlana Kiritchenko; Saif Mohammad", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Examining gender and race bias in two hundred sentiment analysis systems", "year": "2018" }, { "authors": "Keita Kurita; Nidhi Vyas; Ayush Pareek; Alan W Black; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Measuring bias in contextualized word representations", "year": "2019" }, { "authors": "Tao Li; Daniel Khashabi; Tushar Khot; Ashish Sabharwal; Vivek Srikumar", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "UNQOVERing stereotyping biases via underspecified questions", "year": "2020" }, { "authors": "Tao Li; Daniel Khashabi; Tushar Khot; Ashish Sabharwal; Vivek Srikumar", "journal": "", "ref_id": "b33", "title": "Unqovering stereotyping biases via underspecified questions", "year": "2020" }, { "authors": "Fei Mi; Yitong Li; Yulong Zeng; Jingyan Zhou; Yasheng Wang; Chuanfei Xu; Lifeng Shang; Xin Jiang; Shiqi Zhao; Qun Liu", "journal": "", "ref_id": "b34", "title": "Pangubot: Efficient generative dialogue pre-training from pre-trained language model", "year": "2022" }, { "authors": "Robert Munro; Alex (carmen) Morrison", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Detecting independent pronoun bias with partiallysynthetic data generation", "year": "2020" }, { "authors": "Moin 
Nadeem; Anna Bethke; Siva Reddy", "journal": "", "ref_id": "b36", "title": "StereoSet: Measuring stereotypical bias in pretrained language models", "year": "2021" }, { "authors": "Nikita Nangia; Clara Vania; Rasika Bhalerao; Samuel R Bowman", "journal": "", "ref_id": "b37", "title": "Crows-pairs: A challenge dataset for measuring social biases in masked language models", "year": "2020" }, { "authors": " Raymond S Nickerson", "journal": "Review of general psychology", "ref_id": "b38", "title": "Confirmation bias: A ubiquitous phenomenon in many guises", "year": "1998" }, { "authors": "Burt L Kevin M Quinn; Michael Monroe; Colaresi; Michael H Crespin; Dragomir R Radev", "journal": "American Journal of Political Science", "ref_id": "b39", "title": "How to analyze political attention with minimal assumptions and costs", "year": "2010" }, { "authors": "Idan Schwartz", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Ensemble of MRR and NDCG models for visual dialog", "year": "2021" }, { "authors": "Preethi Seshadri; Pouya Pezeshkpour; Sameer Singh", "journal": "", "ref_id": "b41", "title": "Quantifying social biases using templates is unreliable", "year": "2022" }, { "authors": "Emily Sheng; Kai-Wei Chang; Premkumar Natarajan; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "The woman worked as a babysitter: On biases in language generation", "year": "2019" }, { "authors": "Emily Sheng; David Uthus", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Investigating societal biases in a poetry composition system", "year": "2020" }, { "authors": "Eric Michael; Smith ; Adina Williams", "journal": "", "ref_id": "b44", "title": "Hi, my name is martha: Using names to measure and mitigate bias in generative dialogue models", "year": "2021" }, { "authors": "Ryan Steed; Swetasudha Panda; Ari Kobren; Michael Wick", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Upstream Mitigation Is Not All You Need: Testing the Bias Transfer Hypothesis in Pre-Trained Language Models", "year": "2022" }, { "authors": "Caroline Stone", "journal": "Philosophy of Science", "ref_id": "b46", "title": "A defense and definition of construct validity in psychology", "year": "2019" }, { "authors": "Yi Zhou; Masahiro Kaneko; Danushka Bollegala", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Sense embeddings are also biased -evaluating social biases in static and contextualised sense embeddings", "year": "2022" } ]
[]
10.18653/v1/2020.acl-main.747
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b17", "b11", "b12", "b6", "b7", "b29", "b5", "b16", "b14", "b9", "b23", "b19", "b20", "b31", "b0" ], "table_ref": [], "text": "Classical Chinese was introduced to Japan approximately 2,000 years ago (Okimori, 2017). Then Classical Chinese began to be adapted to a Japanese form in Japanese reading and translating methods in the 8th century A.D. (Kin, 2010). This form is called Kanbun-Kundoku. For simplicity, we call it Kanbun in this paper. Kanbun has influenced many famous Japanese literary works, such as Manyoshu (Kobayashi, 1964) and The Tale of Genji (Duan, 2008). To this day, Kanbun still occupies 50 points out of 200 in the common test for Japanese university admissions, which shows the deep influence of Kanbun on Japanese culture.\nAlthough Chinese and Japanese have many characters in common, reading Classical Chinese is not easy for Japanese people because of the following 1 https://github.com/nlp-waseda/Kanbun-LM two reasons. First, Chinese (also Classical Chinese) is in SVO (Subject-Verb-Object) word order, which is the same as English. On the other hand, Japanese is in SOV (Subject-Object-Verb) word order, which leads to difficulties in understanding Chinese. Second, Chinese is an isolating language with little to no morphological variation and a nearly one-to-one ratio of morphemes to words. However, Japanese is an agglutinative language that attaches prefixes and suffixes to a word to indicate the grammatical relationship of that word in a sentence. These differences led to the creation of Kanbun. To make the text from SVO to SOV, from isolating to agglutinative, Japanese people developed a system of various conventional reading punctuation, diacritical and syntactic markers (Crawcour, 1965). We list the three main types of markers below and show a specific example of Kanbun in Figure 1. Since the Kanbun system is highly sophisticated, we omit to explain all the rules in this paper. There are also other systems for reading Classical Chinese in other regions like Korean Peninsula (Fujimoto, 2014) and Khitan, but we focus on the Japanese Kanbun system in this paper.\nKaeriten (ja:返り点) marks placed on the left side of characters indicating the characters need to be read in reverse, making the sentence from SVO to SOV. (e.g., \"我有レ兄\" (en:I have a brother) should be read as \"我兄有\", \"レ\" is the mark) Yomigana (ja:読 み 仮 名) Hiragana (Japanese phonological units) that are placed on the right side of characters, indicating the characters' reading in Japanese. (e.g., \"不\" (en:no) is read as \"ず\") Okurigana (ja:送り仮名) Katakana (Phonological units, collectively referred to as Kana with Hiragana) that are placed on the right side of characters for making the sentence from isolating to agglutinative. (e.g., the Chinese character \"飲\" (en:drink) is \"飲む\" in Japanese, which has an extra Kana) Figure 1: An example of Kanbun. \"春眠不覚暁\" (en: This morning of spring in bed I'm lying) is original Classical Chinese. To transform it into Kanbun, we first add Kaeriten, Yomigana, and Okurigana to the sentence. Two \"レ\" on the left side are Kaeriten, indicating the characters need to be read in reverse. On the right side, there is a Yomigana \"ず\", meaning \"不\" should be written as \"ず\". \"エ\" and \"ヲ\" are Okurigana, making the sentence from isolating to agglutinative. 
Now if we read the sentence following the above rules, the sentence becomes \"春眠暁を覚えず\" (While adding marks, we use Katakana like \"エ\" and \"ヲ\", but in a complete sentence, we use Hiragana like \"え\" and \"を\". They have no difference except for their looks).\nCompared to the vast amount of research and language resources available for Classical Chinese, there is little research on Kanbun, and the language resources for Kanbun are highly scarce. For instance, over 48,900 Tang poems (poems written in the characteristic style of the Tang dynasty) are included in Quan Tangshi and are all accessible via the Internet. However, to our knowledge, only around 500 Tang poems adapted to Kanbun are accessible. This large gap makes the research on Kanbun increasingly difficult. Although a lot of data of Kanbun exists in ancient books, it is beyond our ability to apply OCR to them and compile the results into clean data. Therefore, building a high-performance Classical-Chinese-to-Kanbun translator is the most efficient way to address the lack of Kanbun language resources. Moreover, understanding the mechanisms of Kanbun will also lead to understanding Classical Japanese literature (such as Wakan konkōbun, a mixture of Japanese and Chinese writing styles), as well as Japanese culture and thought.\nIn previous work, Yasuoka (2018,2019); Yasuoka et al. (2022) proposed a series of applications for Classical Chinese using Universal Dependencies (de Marneffe et al., 2021). Yasuoka (2020a,b) proposed a method for Classical-Chinese-to-Kanbun machine translation. However, this method is rule-based and less precise, and the author did not make a dataset to conduct a quantitative evaluation. In this work, we construct the first Classical-Chinese-to-Kanbun dataset in the world. Based on this, we introduce Kanbun-LM, where we fine-tune language models for reading and translating Classical Chinese in Japanese methods, trying to fill the resource gap.\nThe main contributions of our work are summarized as follows:\n• We construct the first Classical-Chinese-to-Kanbun dataset in the world, which addresses the lack of Kanbun language resources.\n• We introduce two tasks for the dataset, character reordering and machine translation, both of which are significant in Kanbun comprehension. We conduct quantitative evaluations for both tasks and achieved state-of-the-art results in both tasks using language models, which has shown major improvement over the baseline (Yasuoka, 2020a,b). We also construct a pipeline for the tasks and verify whether prereordering is helpful to machine translation.\n• We discuss the best evaluation method for Classical-Chinese-to-Kanbun translation by comparing the results with human scores, which is not covered in existing work.\n2 Related Work Since BERT (Devlin et al., 2019) and BERTlike models (Liu et al., 2019;Lan et al., 2019;He et al., 2020) were proposed, pre-training language models on a large corpus and fine-tuning them on downstream tasks have become a paradigm in NLP studies. In the Classical Chinese field, several pretrained models have also been proposed. Siku-BERT and SikuRoBERTa (Wang et al., 2021) are pre-trained on the Siku Quanshu corpus and evaluated on the following four tasks using the ACC dataset: word segmentation, punctuation restoration, POS tagging, and named entity recognition. GuwenBERT5 is pre-trained on the Daizhige corpus and evaluated on the CCLUE6 benchmark. 
Meanwhile, GPT (Radford et al., 2019)-based models such as SikuGPT27 and T5 (Raffel et al., 2020)based models such as Mengzi-T5 (Zhang et al., 2021) are also proposed for text generation.\nTo evaluate the general performance of pretrained language models, benchmarks for natural language understanding (NLU) tasks have been proposed in many languages. For Classical Chinese, CCLUE provides five NLU tasks, including sentence segmentation, named entity recognition, text classification, and text retrieval. Recently, WYWEB (Anonymous, 2022) has been proposed. It contains eight tasks, including sentence classification, sequence labeling, reading comprehension, and machine translation." }, { "figure_ref": [], "heading": "Work for Kanbun", "publication_ref": [ "b4", "b8", "b18", "b10" ], "table_ref": [], "text": "Yasuoka (2018) proposed a method to reorder Classical Chinese sentences to Japanese reading order using dependency parsing by Universal Dependencies (de Marneffe et al., 2021). First, the method applies morphological analysis to Classical Chinese sentences to segment them into tokens and assign POS tags. Second, it obtains dependency relations using the arc-planar algorithm (Gómez-Rodríguez and Nivre, 2010), which was mainly trained on Universal Dependencies of Mengzi, Lunyu, and Liji (these are all ancient Chinese books). Finally, it applies character reordering based on the results of dependency parsing and 24 rules proposed by the author.\nFurthermore, Yasuoka (2020a,b) proposed an encode-reorder-decode model, called UD-Kundoku, to translate Classical Chinese to Kanbun, while the encoding and reordering modules take the approaches introduced in Yasuoka (2018). To make the reordered sentences into Kanbun, the author introduced a rule-based decoding module that adds Okurigana to sentences and makes the sentences from isolating to agglutinative. Okurigana can be roughly divided into two categories: auxiliary words and inflectional suffixes. The rules also support special characters, such as characters left unpronounced and characters that need to be read twice when reading Kanbun.\nYasuoka (2020b) also conducted a brief evaluation for generated Kanbun results using BLEU (Papineni et al., 2002) and RIBES (Hirao et al., 2011). However, the author only evaluated a few examples and did not make an in-depth discussion." }, { "figure_ref": [], "heading": "Our Dataset and Tasks", "publication_ref": [], "table_ref": [ "tab_1", "tab_2", "tab_1" ], "text": "We construct a parallel dataset for Classical Chinese and Kanbun. The dataset consists of original ancient Chinese texts, Japanese reading orders, and Kanbun texts. We show examples in Table 1.\nAlthough it is crucial to choose texts that cover as many periods as possible since vocabulary and grammar change with time, it is difficult to construct a comprehensive dataset. To our knowledge, Tangshixuan8 (Selection of Tang Poems) is the largest resource containing both original ancient Chinese texts and translated Kanbun texts. We use this resource to make our dataset. For preprocessing, we extract the Japanese reading order from Kanbun by a rule-based program. For the special tokens that may not appear in Kanbun or appear multiple times, we annotated them manually. We also convert the characters from old character forms to new character forms (kind of like transforming Traditional Chinese to Simplified Chinese, but in Japanese character forms) using dictionaries to mitigate the out-of-vocabulary problem.\nTangshixuan contains a total of 465 poems. 
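The old-to-new character-form conversion used in this preprocessing is essentially a per-character dictionary lookup. A minimal sketch is shown below; the mapping entries are illustrative assumptions, not the actual dictionaries used to build the dataset.

```python
# Sketch of the old-form to new-form character normalization step.
# The mapping below is a tiny illustrative sample; the real conversion would
# load a full dictionary covering all variant forms that occur in Tangshixuan.
OLD_TO_NEW = {
    "學": "学",  # study
    "國": "国",  # country
    "櫻": "桜",  # cherry blossom
}

def normalize_forms(text: str) -> str:
    """Map every character to its new form if a dictionary entry exists."""
    return "".join(OLD_TO_NEW.get(ch, ch) for ch in text)

print(normalize_forms("學國"))  # -> 学国
```

Normalizing character forms in this way keeps old variant characters from falling outside the vocabularies of the pre-trained models, which is the stated motivation for the conversion.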
We split the dataset using group shuffle split to ensure that all sentences in one poem would not be split. Table 2 lists the statistics of the dataset.\nBased on the dataset, we introduce two tasks, character reordering and machine translation, both of which are significant in Kanbun comprehension. For character reordering, the goal is to transform Classical Chinese texts into Japanese reading orders, from SVO to SOV. Japanese reading orders as shown in Table 1, such as \"12543\", are the targets to be predicted. Machine translation is a sequenceto-sequence task that translates Classical Chinese texts into Kanbun. Since the source and target sentences share the vocabulary, it can also be considered as a multilingual rewriting task.\n4 Experimental Setup" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Implementation for Tasks", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our implementation details of the two tasks: character reordering and machine translation. We also construct a pipeline for the two tasks and verify whether pre-reordering is helpful to machine translation. We use NVIDIA A100 (40GB) for the experiments. Figure 2 shows an overview of our pipeline.\nFor character reordering, we propose a rankbased sorting method that fine-tunes BERT-like models to predict the rank (position in Japanese reading order) for every character in a sentence.\nWe split each sentence into characters and preprocess them into inputs by the form {character}{the character's index in the sentence}[SEP]{sentence}. The character's index is added to handle the cases where more than two identical characters appear in one sentence. To make gold labels for training, we normalize the ranks by the lengths of the sentences, making the value of ranks range from 0 to 1 (for a sentence of length 5, the ranks will be normalized from 1, 2, ..., 5 to 0.2, 0.4, ..., 1). Once we collect the output ranks, we sort them in ascending order and restore them to the original characters. Then we obtain a reordered sentence. An illustration of our sorting method is shown in (A) of Figure 2.\nFor machine translation, we simply fine-tune T5 and GPT to generate Kanbun from original Classical Chinese sentences. Since we want to see the real level of each model, we did not apply any filter to the generations.\nFor the pipeline, we pass original Classical Chinese sentences to the character reordering module first, making them from SVO to SOV. Then we pass the sorted sentences to the machine translation module to add Okurigana, transforming from isolating to agglutinative." }, { "figure_ref": [], "heading": "Pre-trained Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Models for Character Reordering", "publication_ref": [ "b5", "b1", "b1", "b3" ], "table_ref": [], "text": "We conduct experiments on five models in total for character reordering. Two models are pretrained on Japanese corpora, two on Chinese corpora, and one on Classical Chinese corpora. All of the models' tokenizers are character-based because we intend to predict the exact position of each character. We do not use multilingual models like mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020) because their tokenizers do not generally expect character-based encoding. 9 We use the following five models, all in base size, consisting of 12 layers, 768 dimensions of hidden states, and 12 attention heads. 
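Before turning to the individual models, the rank-based sorting described above can be made concrete with a short sketch. The score_rank callable stands in for the fine-tuned BERT-like regressor and is an assumption of this sketch rather than the released code.

```python
from typing import Callable, List

def reorder(sentence: str, score_rank: Callable[[str, int, str], float]) -> str:
    """Reorder a Classical Chinese sentence into Japanese reading order.

    score_rank(char, index, sentence) is assumed to return the normalized rank
    in (0, 1] predicted by the fine-tuned model for the input
    "{char}{index}[SEP]{sentence}".
    """
    chars: List[str] = list(sentence)
    ranks = [score_rank(ch, idx, sentence) for idx, ch in enumerate(chars)]
    # Sort character indices by predicted rank (ascending) and restore characters.
    order = sorted(range(len(chars)), key=lambda idx: ranks[idx])
    return "".join(chars[idx] for idx in order)

def gold_ranks(order: List[int]) -> List[float]:
    """Normalize gold reading-order positions by sentence length for training,
    e.g. [1, 2, 5, 4, 3] with length 5 -> [0.2, 0.4, 1.0, 0.8, 0.6]."""
    n = len(order)
    return [pos / n for pos in order]
```

The character's index is kept in the scoring call because identical characters can occur more than once in a sentence, which is exactly why the index is part of the model input.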
We show more details of the models in Appendix A and details of fine-tuning hyper-parameters in Appendix B.\nBERT-japanese-char This model is trained on the Japanese version of Wikipedia.\nRoBERTa-japanese-char-wwm This model is trained on the Japanese version of Wikipedia and the Japanese portion of CC-100 (Conneau et al., 2020). The whole word masking (wwm) (Cui et al., 2021) strategy is applied.\nBERT-chinese This model is trained on the Chinese version of Wikipedia.\nRoBERTa-chinese-wwm-ext This model is trained on 5.4B tokens, which include the Chinese version of Wikipedia and extra data. The whole word masking strategy is applied.\nRoBERTa-classical-chinese-char This model is derived from GuwenBERT. Simplified characters' embeddings are expanded to traditional characters, making vocabulary size larger." }, { "figure_ref": [], "heading": "Models for Machine Translation", "publication_ref": [ "b24", "b21", "b20" ], "table_ref": [], "text": "We use mT5 (Xue et al., 2021) and mGPT (Shliazhko et al., 2022) for machine translation experiments. We do not use Japanese models because the vocabulary size is much smaller than multilingual models, and they generate many [UNK] tokens, leading to unreadable generations. We show more details of the models in Appendix A and details of fine-tuning hyper-parameters in Appendix B. mT5 mT5 is trained on the mC4 (Raffel et al., 2020) corpus, covering 101 languages (Chinese and Japanese are both contained). We use small, base, and large models in our experiments.\nmGPT This model is trained on 60 languages using Wikipedia and the mC4 corpus (Chinese and Japanese are both contained)." }, { "figure_ref": [], "heading": "Automatic Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Metrics for Character Reordering", "publication_ref": [ "b2", "b13", "b32" ], "table_ref": [], "text": "Following the previous sentence reordering studies (Cui et al., 2020;Kumar et al., 2020;Zhu et al., 2021), we use the following metrics for evaluation.\nKendall's Tau (τ ) This metric measures the rank correlation between two sentences. Fewer the number of inversions needed to sort predicted character orders into ground truth character orders means stronger correlation and better performance.\nτ = 1 - 4(#inversions) #char(#char -1)\nPerfect Match Ratio (PMR) This metric measures the percentage of predicted character orders exactly matching with ground truth orders." }, { "figure_ref": [], "heading": "Metrics for Machine Translation", "publication_ref": [ "b15", "b30", "b18", "b10", "b15", "b15", "b30" ], "table_ref": [], "text": "There is no systematic work on evaluating Classical-Chinese-to-Kanbun translation. On top of BLEU and RIBES, which are used by Yasuoka (2020b), we add ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2020) for our experiments, trying to maintain the diversity of evaluation metrics. We implemented all these metrics on the basis of characters since word-based evaluation highly depends on morphological analysis, and related packages for Kanbun are still immature.\nBLEU BLEU (Papineni et al., 2002) is the most widely used metric in machine translation. It is an n-gram-based metric that computes the exact match precision scores of n-grams that occur in the reference and the candidate.\nRIBES RIBES (Hirao et al., 2011) is a rankbased metric proposed to evaluate machine translation between languages with widely differing word orders. 
It applies word mapping to the reference and the candidate first, and then computes rank correlation as scores for the evaluation. ROUGE ROUGE (Lin, 2004) is a commonly used n-gram-based metric for summarization evaluation. Lin (2004) proposed ROUGE-n, which computes the exact match recall scores of n-grams, and ROUGE-L, which computes scores using longest common subsequence instead. Since ROUGE-1, ROUGE-2, and ROUGE-L did not show much difference in our experiments, we only report ROUGE-L's results in this paper.\nBERTScore BERTScore (Zhang et al., 2020) is an embedding-based metric that computes a similarity score for each token in the candidate with each token in the reference. To calculate characterbased scores, we use BERT-japanese-char (layer 11) in our experiments." }, { "figure_ref": [], "heading": "Manual Annotations", "publication_ref": [], "table_ref": [], "text": "We recruited three people who are bilingual in Chinese and Japanese as our human annotators. There are two criteria for annotator selection: (1) ability to read Classical Chinese in original word order;\n(2) ability to get full marks in the Kanbun part of the Japanese university admission exam.\nFor character reordering, to compare with the models, we asked the annotators to do the same sorting task, which the models did, with no access to reference materials and the Internet. We collected results, computed Kendall's Tau and PMR scores, and averaged them.\nFor machine translation, we asked the annotators to evaluate models' generations according to the following three metrics, rated on a 5-point scale from 1 to 5 (larger is better). The reference sentences were also evaluated to measure the quality of the dataset. The annotators were allowed to search for reference materials in this evaluation.\nRelevance This rating measures how well the translation is done, which judges whether the content is translated without any shortage or deviation.\nAccuracy This rating measures the quality of a generation, which judges whether it is lexically and grammatically correct in Japanese.\nFluency This rating measures the fluency and naturalness of a generation and whether the rhythm of Classical Chinese remains." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Character Reordering", "publication_ref": [], "table_ref": [], "text": "The results of the character reordering task are presented in Table 3. UD-Kundoku is the baseline method that was proposed by Yasuoka (2020a,b). Human scores are the average of the three annotators' results.\nAll the BERT-like models outperformed the baseline and human scores. The two Chinese models performed slightly better than the two Japanese models, and RoBERTa-classical-chinesechar, which was pre-trained on the ancient Chinese corpus, performed the best. Compared to the baseline, RoBERTa-classical-chinese-char achieved 22.5% better Kendall's Tau and 94.7% better PMR scores. Compared to human scores, RoBERTa-classical-chinese-char achieved 11.8% better Kendall's Tau and 29.2% better PMR scores." }, { "figure_ref": [], "heading": "Gap between the Chinese and Japanese models.", "publication_ref": [ "b1" ], "table_ref": [], "text": "Since more ancient texts are present in a Chinese corpus like Wikipedia, we speculate that the score gap between the Chinese and Japanese models originates from the pre-training corpus rather than the reading orders of the pre-training languages. 
Considering that this task requires converting SVO to SOV, it would be ideal to use both Chinese and Japanese corpora for pre-training. However, since the existing multilingual models cannot guarantee to tokenize an input text into characters, we leave this validation to future work.\nAdditional data did not help. The two RoBERTa models did not score higher than the two BERT models. This is probably because many ancient texts do not exist in the additional corpus like CC-100 (Conneau et al., 2020), and thus the additional training in RoBERTa did not strengthen the models' understanding of Classical Chinese.\nBERT is more accurate in details. When comparing with human scores, we had an interesting finding that although the PMR scores of humans and RoBERTa-japanese-char-wwm are similar, Kendall's Tau score of the model is 5.9% higher. This indicates that BERT is more accurate than humans in predicting the details of the orders. Although our annotators are bilingual, they are not experts in Classical Chinese. We hope to collaborate with real experts in the future to conduct experiments and see if BERT can still retain an advantage.\nModel Setup τ PMR UD-Kundoku 0.770 0.402 Human 0.844 0.606 BERT-japanese-char 0.898 0.637 RoBERTa-japanese-char-wwm 0.894 0.600 BERT-chinese 0.917 0.689 RoBERTa-chinese-wwm-ext 0.920 0.718 RoBERTa-classical-chinese-char 0.944 0.783 Table 3: Kendall's Tau (τ ) and PMR scores of character reordering. UD-Kundoku is the baseline, and human scores are the average of the three annotators' results.\nError analysis. Since the PMR score of our best model is 0.783, most predicted orders are exactly correct. However, we still found some error patterns that the model encountered. It is not easy to distinguish whether a pair of two characters is a noun or a combination of a verb and a noun. Moreover, determining the order becomes challenging when two verbs appear in a sentence." }, { "figure_ref": [], "heading": "Machine Translation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Performance", "publication_ref": [ "b22", "b20" ], "table_ref": [ "tab_3", "tab_4" ], "text": "Table 4 lists the results of machine translation, which contains the automatic and manual evaluation metrics. UD-Kundoku is the baseline, and the reference is the Kanbun target.\nFor the automatic evaluation, all our models exceeded the baseline in all evaluation metrics. The performance of mT5 increased as the model size increases, with mT5-large performing best. The performance of mGPT and mT5-small are close to each other.\nFor the human evaluation, we asked annotators to evaluate only the translations of mT5-small, mT5-large, and mGPT. This is because mT5-base performs close to mT5-large, and the baseline's results are too poor to be evaluated. As with the automatic evaluation, mT5-large performed the best. On the other hand, mT5-small significantly outperformed mGPT in this evaluation. The reference sentences obtained very high scores, proving that our dataset's Kanbun data is of high quality. We also calculated Fleiss' Kappa to measure Inter-Annotator Agreement (IAA). The results of Fleiss' Kappa for relevance, accuracy, and fluency are 0.360, 0.371, and 0.341, which show fair agreements (Viera et al., 2005).\nGeneration examples. We show three generation examples in Table 5. In all three examples, mT5-large performed flawlessly, giving the same translations as the reference. 
mT5-base and mT5small generated translations similar to mT5-large, but with some minor errors. mGPT sometimes repeated the characters in the original sentences (\"事\" in (a), \"出\" in (b), and \"鳳\" in (c)), which lowers the scores of human evaluation. \"未\" in (c) is an example of special characters that need to be read twice, which should be read as \"未だ...ず\" (en:yet). In this case, mT5-base and mT5-large generated the correct translation. However, mT5-small and mGPT could not recognize it as a special character.\nWhy is mGPT so weak? Although mGPT has almost 1.5 times the number of parameters of mT5large (detailed model sizes can be found in Appendix A), its translations are not even as good as mT5-small. Since mT5 and mGPT are both mainly trained on mC4 (Raffel et al., 2020), the effect of the pre-training corpus can be largely excluded. One reason is the repetition of words that we have explained before. For other reasons, we speculate that the encoder modules in mT5 have a significant role in comprehending Classical Chinese. However, this is only a hypothesis and needs to be tested with more future experiments." }, { "figure_ref": [], "heading": "Correlation between Evaluation Metrics", "publication_ref": [ "b30" ], "table_ref": [], "text": "We show Pearson and Spearman correlation coefficients between the automatic evaluation metrics and human evaluation metrics in Table 6. BERTScore has the greatest correlation with all three human evaluation metrics. BLEU and ROUGE-L also performed well. The rank-based metric, RIBES, performed the worst. We notice that, compared to BLEU and ROUGE-L, BERTScore only has a slight lead in the correlation with relevance. However, the advantage has increased in correlation with accuracy and fluency. We speculate that this is because BERTScore can potentially capture sequence information (Zhang et al., 2020), which makes it more possible to judge whether a sentence is accurate and fluent. We also speculate that BERTScore better suits Classical-Chinese-to-Kanbun because Kanbun is generally very short, which can cause BLEU and ROUGE to be influenced by small changes.\nWe also show the correlation between the human evaluation metrics in Table 6. Accuracy and fluency have the greatest correlation, which indicates that grammatically and lexically correct sentences are also fluent. In general, the correlation between the metrics is relatively high. To consider more different perspectives, we hope to reduce the correlation by discussing with Classical Chinese experts and reformulating the manual evaluation metrics in future work." }, { "figure_ref": [], "heading": "Pipeline", "publication_ref": [], "table_ref": [ "tab_6", "tab_3" ], "text": "We show the pipeline results in Table 7. The first row of each model is the direct machine translation results, which are also shown in Table 4. The second row (\"+ reorder\") shows the results using RoBERTa-classical-chinese-char to reorder characters before passing the sentences to machine translation. The third row (\"+ reorder (gold)\") uses the gold labels of the reading orders instead of the predictions by RoBERTa to reorder characters. By pre-reordering using RoBERTa, most of the evaluation metrics of mT5-small were improved. mGPT basically remained at the original level. While mT5-base and mT5-large showed a decreasing trend in most of the metrics. We speculate that as the model's performance increases, the model will gradually be able to do character reordering and machine translation at the same time. 
Since the predictions of RoBERTa are not 100% accurate, wrong predictions may confuse models and lead to their inability to determine correct orders.\nIn contrast, by pre-reordering using the gold labels, all models received some degree of improvement in almost all evaluation metrics. This indicates that correct pre-reordering does help machine translation, and it is necessary to do more work on improving the character reordering module." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this paper, to address the lack of Kanbun language resources, we used language models to read Classical Chinese in Japanese reading orders and translate Classical Chinese into Kanbun. We constructed the first Classical-Chinese-to-Kanbun dataset in the world, which includes original ancient Chinese texts, translated Kanbun texts, and the Japanese reading orders.\nFurthermore, we introduced two tasks for the dataset: character reordering and machine translation. We achieved state-of-the-art results in both tasks, which have a great lead over the baseline. We also constructed a pipeline for the two tasks and verified that accurate pre-reordering is helpful for machine translation. However, the accuracy of current reordering models is not enough, and future efforts are needed improve the accuracy.\nMoreover, we discussed which automatic evaluation metric is the most suitable for Classical-Chinese-to-Kanbun translation by computing the correlation between the automatic and human evaluation metrics. In our experiments, BERTScore is the best. However, we only tested with characterbased metrics. More experiments are still needed to test subword-based and sentence-based metrics.\nIn the future, we hope to continuously update the dataset to include an increasingly comprehensive range of ancient texts. We also hope to collaborate with experts in Classical Chinese to find the upper bound of human character reordering accuracy, refine the manual evaluation metrics to a more streamlined one, and make a deeper exploration on the best automatic evaluation metric." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Due to the lack of data, our dataset is not comprehensive since it only consists of Tang poems. Our model may not perform well on unseen data in other forms. We plan to update the dataset in the future continuously.\nOur evaluation metrics and generation results for the machine translation tasks are not certified by experts in Classical Chinese, so the results and discussions in this paper are not entirely reliable. We welcome more experts and researchers to join our work in the future.\nDue to the limitation of GPU resources, we do not experiment on larger models. We welcome researchers to test our method on large models and make some deeper discussions." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by JSPS KAKENHI Grant Number JP21H04901. We are grateful to the annotators who have spent much of their time helping with the experiments. We would also like to thank the reviewers for their insightful comments for improving the paper." }, { "figure_ref": [], "heading": "A Details of pre-trained models", "publication_ref": [], "table_ref": [], "text": "We show the details of the pre-trained models used in our experiments below. 
Table 8 lists the details of the BERT-like models for character reordering, and Table 9 lists those of the pre-trained models for machine translation. " }, { "figure_ref": [], "heading": "B Hyper-parameters", "publication_ref": [], "table_ref": [], "text": "We show the hyper-parameters used in our experiments in Table 10. The numbers in the curly brackets indicate that grid searches were performed to select the best fit. " } ]
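For reference, the character reordering metrics (Kendall's Tau and PMR) can be computed directly from predicted and gold reading orders. The following self-contained sketch uses the inversion-based form of Kendall's Tau given in the metrics section; it is an illustration, not the authors' evaluation script.

```python
from typing import List, Sequence, Tuple

def kendall_tau(pred: Sequence[int], gold: Sequence[int]) -> float:
    """Kendall's Tau via the number of pairwise inversions between two orders."""
    n = len(pred)
    if n < 2:
        return 1.0  # a single character is trivially in order
    # Position of each gold item within the predicted order.
    pos = {item: idx for idx, item in enumerate(pred)}
    perm = [pos[item] for item in gold]
    inversions = sum(
        1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b]
    )
    return 1.0 - 4.0 * inversions / (n * (n - 1))

def scores(pairs: List[Tuple[Sequence[int], Sequence[int]]]) -> Tuple[float, float]:
    """Average Kendall's Tau and Perfect Match Ratio over (pred, gold) pairs."""
    taus = [kendall_tau(p, g) for p, g in pairs]
    pmr = sum(1.0 for p, g in pairs if list(p) == list(g)) / len(pairs)
    return sum(taus) / len(taus), pmr

# Example: predicted order "12543" vs. gold order "12345" for a 5-character sentence.
print(scores([([1, 2, 5, 4, 3], [1, 2, 3, 4, 5])]))  # -> (0.4, 0.0)
```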
Recent studies in natural language processing (NLP) have focused on modern languages and achieved state-of-the-art results in many tasks. Meanwhile, little attention has been paid to ancient texts and related tasks. Classical Chinese first came to Japan approximately 2,000 years ago. It was gradually adapted to a Japanese form called Kanbun-Kundoku (Kanbun) in Japanese reading and translating methods, which has significantly impacted Japanese literature. However, compared to the rich resources for ancient texts in mainland China, Kanbun resources remain scarce in Japan. To solve this problem, we construct the first Classical-Chinese-to-Kanbun dataset in the world. Furthermore, we introduce two tasks, character reordering and machine translation, both of which play a significant role in Kanbun comprehension. We also test the current language models on these tasks and discuss the best evaluation method by comparing the results with human scores. We release our code and dataset on GitHub 1 .
Kanbun-LM: Reading and Translating Classical Chinese in Japanese Methods by Language Models
[ { "figure_caption": "Figure 2 :2Figure 2: An overview of the pipeline. (A) is the character reordering module and (B) is the machine translation module. (A) receives original Classical Chinese sentences and reorders them into Japanese reading order. (B) receives reordered sentences from (A) and translates them into Kanbun.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Examples of our dataset. Each instance has a triple of the original ancient Chinese text, the Japanese reading order (the numbers represent their index in the original text), and the translated Kanbun text.", "figure_data": "SplitPoems Sentences CharactersTrain3722,73116,411Validation463202,038Test473702,254", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistics of our dataset. The number of characters refers to the original ancient Chinese data.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of machine translation, containing the automatic and manual evaluation metrics. UD-Kundoku is the baseline, and reference is the Kanbun target of translation. Laid down my pen and turned to the war Mounted my horse and left through the gates The forest's battle drums remain unabated", "figure_data": "Model Setup BLEU RIBES ROUGE-L BERTScore Relevance Accuracy FluencyUD-Kundoku 0.0970.3090.5460.884---reference----4.9584.9514.949mT5-small0.3170.4280.6590.9143.2193.0023.153mT5-base0.4620.5200.7350.930---mT5-large0.5140.5830.7470.9343.9483.8843.904mGPT0.3030.4760.6060.8982.5482.2702.236Model Setup (a)(b)(c)input投筆事戎軒駆馬出関門鳳林戈未息reference筆を投じて戎軒を事とす馬を駆って関門を出づ鳳林戈未だ息まずmT5-small筆を投じて戎軒を事す馬を駆って関門に出づ鳳林戈未だ息しmT5-base筆を投じて戎軒に事す馬を駆って関門に出で鳳林戈未だ息まずmT5-large筆を投じて戎軒を事とす馬を駆って関門を出づ鳳林戈未だ息まずmGPT筆を投じて戎軒に事とすを事馬を駆って関門を出でんとすも出で鳳林戈未だ息まずかとすかとす鳳(English tr.)", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Generation examples of machine translation. Input is the original Classical Chinese sentence, and reference is the Kanbun target of translation.", "figure_data": "MetricRelevanceAccuracyFluencyrρrρrρBLEU0.667 0.650 0.637 0.605 0.594 0.576RIBES0.480 0.497 0.453 0.449 0.389 0.417ROUGE-L0.688 0.677 0.631 0.610 0.599 0.584BERTScore 0.707 0.691 0.671 0.642 0.644 0.625Relevance--0.862 0.849 0.835 0.829Accuracy0.862 0.849--0.946 0.947Fluency0.835 0.829 0.946 0.947--Table 6: Pearson (r) and Spearman (ρ) correlation co-efficients for relevance, accuracy, and fluency betweenautomatic metrics and human judgment. We also showthe correlation between each human evaluation metric.", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results of the pipeline.", "figure_data": "The first row of each", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" } ]
Hao Wang; Hirofumi Shimizu; Daisuke Kawahara
[ { "authors": " Anonymous", "journal": "", "ref_id": "b0", "title": "Wyweb: A classical chinese nlp evaluation benchmark", "year": "2022" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Unsupervised cross-lingual representation learning at scale", "year": "1965" }, { "authors": "Baiyun Cui; Yingming Li; Zhongfei Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "BERT-enhanced relational sentence ordering network", "year": "2020" }, { "authors": "Yiming Cui; Wanxiang Che; Ting Liu; Bing Qin; Ziqing Yang", "journal": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "ref_id": "b3", "title": "Pre-training with whole word masking for chinese BERT", "year": "2021" }, { "authors": "Marie-Catherine De Marneffe; Christopher D Manning; Joakim Nivre; Daniel Zeman", "journal": "Computational Linguistics", "ref_id": "b4", "title": "Universal Dependencies", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Xiaoye Duan", "journal": "北陸大学紀要 = Bulletin of Hokuriku University", "ref_id": "b6", "title": "『源氏物語』における『白氏 文集』引用の特色-登場人物の口ずさんだ詩句 をめぐって", "year": "2008" }, { "authors": "Yukio Fujimoto", "journal": "", "ref_id": "b7", "title": "日韓漢文訓読研究", "year": "2014" }, { "authors": "Carlos Gómez; -Rodríguez ; Joakim Nivre", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "A transition-based parser for 2-planar dependency structures", "year": "2010" }, { "authors": "Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b9", "title": "Deberta: Decodingenhanced bert with disentangled attention", "year": "2020" }, { "authors": "Tsutomu Hirao; Hideki Isozaki; Kevin Duh; Katsuhito Sudoh; Hajime Tsukada; Masaaki Nagata", "journal": "言語処理学会年次大会発表論文集", "ref_id": "b10", "title": "Ribes:順位相関に基づく翻訳の自動評価法て", "year": "2011" }, { "authors": "Kin Bunkyo", "journal": "", "ref_id": "b11", "title": "漢文と東アジア-訓読の文化 圏", "year": "2010" }, { "authors": "Yoshinori Kobayashi", "journal": "国語学", "ref_id": "b12", "title": "万葉集における漢文訓 読語の影響", "year": "1964" }, { "authors": "Pawan Kumar; Dhanajit Brahma; Harish Karnick; Piyush Rai", "journal": "", "ref_id": "b13", "title": "Deep attentive ranking networks for learning to order sentences", "year": "2020" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b14", "title": "Albert: A lite bert for selfsupervised learning of language representations", "year": "2019" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b16", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Takuya Okimori", "journal": "", "ref_id": "b17", "title": "日本語全史", "year": "2017" }, { "authors": "Kishore Papineni; Salim 
Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b19", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b20", "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "year": "2020" }, { "authors": "Oleh Shliazhko; Alena Fenogenova; Maria Tikhonova; Vladislav Mikhailov; Anastasia Kozlova; Tatiana Shavrina", "journal": "", "ref_id": "b21", "title": "mgpt: Few-shot learners go multilingual", "year": "2022" }, { "authors": "J Anthony; Joanne M Viera; Garrett", "journal": "Fam med", "ref_id": "b22", "title": "Understanding interobserver agreement: the kappa statistic", "year": "2005" }, { "authors": "Dongbo Wang; Chang Liu; Zihe Zhu; Jangfeng Liu; Haotian Hu; Si Shen; Bin Li", "journal": "", "ref_id": "b23", "title": "Siku-bert与sikuroberta:面向数字人文的《四全》 模 型建及用研究", "year": "2021" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Koichi Yasuoka", "journal": "", "ref_id": "b25", "title": "漢文の依存文法解析と返り 点の関係について. 日本漢字学会第1回研究大 会予稿集", "year": "2018" }, { "authors": "Koichi Yasuoka", "journal": "", "ref_id": "b26", "title": "Universal dependencies treebank of the four books in classical chinese", "year": "2019" }, { "authors": "Koichi Yasuoka", "journal": "", "ref_id": "b27", "title": "漢文の依存文法解析にもと づく自動訓読システム. 日本漢字学会第3回研 究大会予稿集", "year": "2020" }, { "authors": "Koichi Yasuoka", "journal": "東洋学へのコンピュータ利用", "ref_id": "b28", "title": "漢 文 自 動 訓 読 ツ ー ルud-kundokuの開発", "year": "2020" }, { "authors": "Koichi Yasuoka; Christian Wittern; Tomohiko Morioka; Takumi Ikeda; Naoki Yamazaki; Yoshihiro Nikaido; Shingo Suzuki; Shigeki Moro; Kazunori Fujita", "journal": "情報処理学会論文誌", "ref_id": "b29", "title": "古 典 中 国 語 ( 漢 文 )universal de-pendenciesとその応用", "year": "2022" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b30", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" }, { "authors": "Zhuosheng Zhang; Hanqing Zhang; Keming Chen; Yuhang Guo; Jingyun Hua; Yulong Wang; Ming Zhou", "journal": "", "ref_id": "b31", "title": "Mengzi: Towards lightweight yet ingenious pre-trained models for chinese", "year": "2021" }, { "authors": "Yutao Zhu; Jian-Yun Nie; Kun Zhou; Shengchao Liu; Yabo Ling; Pan Du", "journal": "", "ref_id": "b32", "title": "Bert4so: Neural sentence ordering by fine-tuning bert", "year": "2021" } ]
[ { "formula_coordinates": [ 5, 349.69, 360.68, 129.98, 24.43 ], "formula_id": "formula_0", "formula_text": "τ = 1 - 4(#inversions) / (#char(#char - 1))" } ]
10.18653/v1/2021.emnlp-main.130
2023-05-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b9", "b1", "b10", "b15", "b15", "b3", "b9", "b4", "b5", "b1", "b10", "b15", "b0", "b9" ], "table_ref": [], "text": "Simultaneous machine translation (SiMT) (Gu et al., 2017;Ma et al., 2019;Arivazhagan et al., 2019;Ma et al., 2020;Zhang et al., 2020), which outputs the generated translation before reading the whole source sentence, is applicable to many realtime scenarios, such as live broadcast and real-time subtitles. To achieve the goal of high translation quality and low latency (Zhang and Feng, 2022b), the SiMT model relies on a policy that determines the number of source tokens to read during the translation of each target token.\nThe translation policy plays a pivotal role in determining the performance of SiMT, as an imprecise policy can lead to degraded translation quality or introduce unnecessary delays, resulting in poor translation performance (Zhang and Feng, 2022c).\nTherefore, it is crucial to establish an optimal policy that achieves good latency-quality trade-offs. However, the absence of a golden policy between the source and target makes it challenging for the SiMT model to acquire the explicit supervision required for learning the optimal policy. According to Zhang et al. (2020), the SiMT model will learn better policy if it is trained with external supervision. Consequently, by constructing the optimal policy between the source and target, we can train the SiMT model, which will then generate translations based on the learned policy during inference.\nHowever, the existing methods, including fixed policy and adaptive policy, have limitations in learning the optimal policy due to the lack of appropriate explicit supervision. For fixed policy (Dalvi et al., 2018;Ma et al., 2019;Elbayad et al., 2020;Zhang and Feng, 2021b), the model relies on heuristic rules to generate translations. However, these rules may not prompt the SiMT model to output the generated translation immediately, even when there is sufficient source information to translate the current target token. Consequently, the fixed policy often cannot achieve good latency-quality tradeoffs because of its rigid rules. For adaptive policy (Gu et al., 2017;Arivazhagan et al., 2019;Ma et al., 2020;Zhang and Feng, 2022b), the model can dynamically determine its policy based on the translation status, leading to improved performance. Nevertheless, precise policy learning without explicit supervision remains challenging. Some methods (Zhang et al., 2020;Alinejad et al., 2021) attempt to construct learning labels for the policy offline by introducing external information. But the constructed labels for policy learning cannot guarantee that they are also optimal for the translation model.\nUnder these grounds, our goal is to search for an optimal policy through self-learning during training, eliminating the need for external supervision. Subsequently, this optimal policy can be employed to guide policy decisions during inference. In SiMT, increasing the number of source tokens read improves translation quality but also leads to higher latency (Ma et al., 2019). However, as the length of the read-in source sequence grows, the profit of translation quality brought by reading more source tokens will also hit bottlenecks (Zhang and Feng, 2021b). Therefore, the gain of reading one source token can be evaluated with the ratio of the improvement in translation quality to the corresponding increase in latency. 
The optimal policy will make sure that every decision of reading or writing will get the greatest gain. In this way, after translating the whole source sequence, the SiMT model can get the greatest gain, thereby achieving good latency-quality trade-offs.\nIn this paper, we propose a SiMT method based on binary search (BS-SiMT), which leverages binary search to construct the optimal translation policy online and then performs policy learning accordingly. Specifically, the BS-SiMT model consists of a translation model and an agent responsible for policy decisions during inference. To construct the optimal policy, the translation model treats the potential source positions as the search interval and selects the next search interval by evaluating the concavity in binary search. This selection process effectively identifies the interval with the highest gain, thus enabling the construction of an optimal policy that ensures good performance. Subsequently, the constructed policy is used to train the agent, which determines whether the current source information is sufficient to translate the target token during inference. If the current source information is deemed sufficient, the translation model outputs the generated translation; otherwise, it waits for the required source tokens. Experiments on De↔En and En↔Vi translation tasks show that our method can exceed strong baselines under all latency." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b9", "b4", "b9", "b4" ], "table_ref": [], "text": "For the SiMT task, the model incrementally reads the source sentence x = (x_1, ..., x_J) with length J and generates the translation y = (y_1, ..., y_I) with length I according to a policy. To define the policy, we introduce the number of source tokens read when translating the target token y_i, denoted as g_i. The translation policy can then be formalized as g = (g_1, ..., g_I). The probability of translating the target token y_i is p_θ(y_i | x_{≤g_i}, y_{<i}), where x_{≤g_i} denotes the source tokens read in when translating y_i, y_{<i} denotes the previously generated target tokens, and θ denotes the model parameters. Consequently, the SiMT model can be optimized by minimizing the cross-entropy loss:\nL_{CE} = -\sum_{i=1}^{I} \log p_\theta(y^\star_i \mid x_{\le g_i}, y_{<i}), (1)\nwhere y^\star_i is the ground-truth target token. Because our policy is based on the wait-k policy (Ma et al., 2019) and the multi-path method (Elbayad et al., 2020), we briefly introduce them.\nWait-k policy For the wait-k policy (Ma et al., 2019), which is the most widely used fixed policy, the model initially reads k source tokens and subsequently outputs and reads one token alternately. Therefore, g_i is represented as:\ng^k_i = \min\{k + i - 1, J\}, (2)\nwhere J is the length of the source sentence.\nMulti-path To avoid recalculating the encoder hidden states every time a source token is read, multi-path (Elbayad et al., 2020) introduces a unidirectional encoder to make each source token only attend to preceding tokens. Furthermore, during training, the model can be trained under various latency by sampling the latency k uniformly:\nL_{ECE} = -\sum_{k \sim U(K)} \sum_{i=1}^{I} \log p_\theta(y^\star_i \mid x_{\le g^k_i}, y_{<i}), (3)\nwhere k is uniformly sampled from K = [1, ..., J]. Therefore, the model can generate translation under all latency by only using a unified model. " }, { "figure_ref": [ "fig_0" ], "heading": "Preliminary Analysis", "publication_ref": [ "b4" ], "table_ref": [], "text": "In this section, we explore the influence of the number of read-in source tokens on translation quality. 
We employ the multi-path translation model (Elbayad et al., 2020) and select a bucket of samples from the IWSLT14 De→En test set, consisting of 295 sentences with the same target length (Zhang and Feng, 2022d). To analyze the variations, we utilize the probability of translating the ground-truth token as a measure of translation quality. For each relative source position q, we compute the probability p^q_i of translating the ground-truth y^\star_i:\np^q_i = p(y^\star_i \mid x_{\le \lceil q \cdot J \rceil}, y_{<i}), (4)\nwhere J is the length of the source sentence, and compute the average p^q_i across all samples. Since the lengths of the source sentences vary across different samples, we utilize the relative position, i.e., the proportion of the source position to the end of the sentence. The results in Figure 1 show that the probability of translating target tokens increases with the number of source tokens. Notably, the necessary source tokens contribute the most to the improvement in translation quality. This finding suggests that translation quality often relies on the model obtaining the necessary source information, which is determined by the policy. The incremental nature observed here suggests that we can utilize binary search to get the policy, providing an important basis for our method." }, { "figure_ref": [], "heading": "The Proposed Method", "publication_ref": [], "table_ref": [], "text": "Our BS-SiMT model contains two components: the translation model and the agent. The translation model, which is fine-tuned from the multi-path model, employs binary search to iteratively select the next interval with the highest gain. This process allows the model to search for the optimal policy and subsequently train itself based on the searched policy. Subsequently, we utilize the best-performing translation model to construct the optimal policy, which serves as explicit supervision for training the agent. During inference, the agent guides the translation model to generate translations with good latency-quality trade-offs. The details are introduced in the following sections." }, { "figure_ref": [ "fig_1" ], "heading": "Constructing Optimal Policy", "publication_ref": [ "b8" ], "table_ref": [], "text": "The optimal policy ensures that the SiMT model gets good latency-quality trade-offs (Iranzo-Sánchez et al., 2021). The translation model plays a key role in searching for the optimal policy by identifying the number of source tokens to be read, maximizing the gain for the current translation. However, considering all possible numbers of source tokens for each target token would be computationally expensive and may not effectively balance latency and translation quality (Zhang and Feng, 2023b). To address this issue, we employ binary search to determine the ideal number of source tokens to be read for each target token by evaluating the midpoint concavity of the interval.\nTo achieve this goal, we allocate the search interval of the number of source tokens for each target token. We denote the search interval for the target token y_i as [l_i, r_i], where l_i and r_i represent the minimum and maximum number of source tokens to be considered, respectively. Then we can get the 
Then we can get the Algorithm 1: Search for Optimal Policy Input: Source sentence x, Target sentence y, Translation model p θ () Initialize l i , r i while l i < r i do calculate m i as Eq.( 5) calculate p l i i , p m i i , p r i i as Eq.( 6) if Eq.( 8) is satisfied then\nr i ← m i //Left Range else l i ← m i + 1 //Right Range end g i = l i median value m i of the interval [l i , r i ],\nwhich is calculated as:\nm i = ⌊ l i + r i 2 ⌋.(5)\nNext, the probability p l i i of translating ground-truth token y ⋆ i based on the previous l i source tokens can be calculated as follows:\np l i i = p θ (y ⋆ i |x ≤l i , y <i ).(6)\nSimilarly, p m i i and p r i i can also be calculated as Eq.( 6). We then discuss the conditions for selecting [l i , m i ] or [m i +1, r i ] as the next search interval. Obviously, the interval with a greater gain should be selected each time. The gain of interval [l i , m i ] should be defined as:\np m i i -p l i i m i -l i .(7)\nTherefore, we select the interval with greater gain by comparing\np m i i -p l i i m i -l i and p r i i -p m i i r i -m i . Since m i -l i is equal to r i -m i , it is actually a compar- ison between p m i i and p l i i +p r i i 2\n. Hence, we select the interval [l i , m i ] if the following condition is satisfied:\np m i i ≥ p l i i + p r i i 2 , (8\n)\notherwise we choose the interval [m i +1, r i ]. The intuition behind this decision is that if the function composed of (l i , p l i i ), (m i , p m i i ), and (r i , p r i i ) exhibits midpoint concavity, we select the interval [l i , m i ]; otherwise, we choose [m i +1, r i ]. When the upper and lower boundaries of the search interval are the same, the model has found an appropriate policy. Figure 2 the policy through binary search. We also provide a formal definition of the binary search process in Algorithm 1. Importantly, the search process for all target tokens is performed in parallel.\nThe translation model undergoes iterative training to align with the searched policy, ensuring a gradual convergence. The optimization process of the translation model and the search for the optimal policy are carried out in an alternating manner. As a result, we construct the optimal translation policy g = (g 1 , ..., g I ) based on the search outcomes obtained from the best translation model. Besides, by adjusting the search interval, we can obtain the optimal translation policy under all latency." }, { "figure_ref": [ "fig_2" ], "heading": "Learning Optimal Policy", "publication_ref": [ "b0", "b7" ], "table_ref": [], "text": "Once the optimal translation policy is obtained for the corresponding parallel sentence, we can proceed to train the agent in order to learn this policy through explicit supervision. The agent will determine the policy based on the translation status during inference (Alinejad et al., 2021). To facilitate this process, we introduce two actions: READ and WRITE. The READ action corresponds to reading the next source token, while the WRITE action represents outputting the generated translation. Instead of using the sequence g = (g 1 , ..., g I ) to represent the translation policy, we transform it into a sequence of READ and WRITE actions. This transformation is motivated by the fact that it is easier to determine the next action compared to predicting the number of source tokens required to translate the next target token based solely on the current translation status.\nWe denote the optimal action sequence as a = (a 1 , ..., a T ), where T = I + J. 
Consequently, the action to be taken at step t can be derived from the optimal policy as follows:\na_t = \begin{cases} \text{WRITE}, & \text{if } t = g_i + i \\ \text{READ}, & \text{otherwise} \end{cases} (9)\nThe obtained optimal action sequence serves as the basis for training the agent to learn the optimal policy within a supervised framework. At step t, the agent receives the current translation status o_t, which includes the last source token x_j, the last generated token y_i, and the last action a_{t-1}. Based on this information, the agent determines the action a_t. We train the agent, implemented as an RNN architecture, to maximize the probability of the current action a_t as follows:\n\max \, p_{\theta_a}(a_t \mid a_{<t}, o_{<t}), (10)\nwhere θ_a denotes the parameters of the agent, and a_{<t} and o_{<t} represent the sequence of actions and the translation status before time step t, respectively. The architecture of the agent is shown in Figure 3. At each step, the agent receives the embedding of the last source and target token, along with the last action. The embeddings of the last source and target token, generated by the translation model, are concatenated and passed through a linear layer. The last action is also processed through a separate embedding and linear layer. Subsequently, the outputs of the two linear layers are fed into an LSTM layer (Hochreiter and Schmidhuber, 1997) to predict the next action. Furthermore, to mitigate the mismatch between training and testing, we train the agent using the embeddings of the generated translation instead of relying on the ground-truth." }, { "figure_ref": [], "heading": "Inference", "publication_ref": [], "table_ref": [], "text": "At this point, we have the trained translation model and agent. Our BS-SiMT model generates translations by leveraging the translation model, which is guided by the agent for policy decisions. At each step, the agent receives the translation status from the translation model and determines the next action. Then the translation model either outputs the translation or reads the next source token based on the decision of the agent. The inference process is formally expressed in Algorithm 2.\nAlgorithm 2: The Process of Inference\nInput: source sentence x, translation model p_θ(·), agent p_{θ_a}(·)\ny_0 ← ⟨bos⟩, a_1 ← READ\ni ← 1, j ← 1, t ← 2\nwhile y_{i-1} ≠ ⟨eos⟩ do\n    decide a_t using the translation status\n    if a_t = WRITE or x_j = ⟨eos⟩ then\n        generate y_i\n        i ← i + 1\n    else\n        read the next source token\n        j ← j + 1\n    t ← t + 1\nend while\n5 Experiments" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b2", "b1", "b0", "b13" ], "table_ref": [], "text": "We evaluate our BS-SiMT method mainly on the IWSLT15 English↔Vietnamese (En↔Vi) and IWSLT14 German↔English (De↔En) tasks.\nFor the En↔Vi task (Cettolo et al., 2016), our settings are the same as Arivazhagan et al. (2019). We use TED tst2012 as the development set and TED tst2013 as the test set. We replace tokens whose frequency is less than 5 with ⟨unk⟩.\nFor the De↔En task, we keep our settings consistent with Alinejad et al. (2021). We use a concatenation of dev2010 and tst2010 to tst2013 as the test set. We apply BPE (Sennrich et al., 2016) with 10K merge operations, which results in 8.8K German and 6.6K English sub-word units."
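Before moving on to the model settings, the binary search of the Constructing Optimal Policy section and the policy-to-action conversion of Eq. (9) can be made concrete with a short Python sketch. The token_prob callable and the interval initialization below are assumptions of this sketch (standing in for p_θ(y*_i | x_≤j, y_<i) from the translation model and the [l_1, r_1] hyperparameter), not the released implementation.

```python
from typing import Callable, List

def search_policy(
    num_target: int,
    num_source: int,
    token_prob: Callable[[int, int], float],
    width: int = 5,
) -> List[int]:
    """Binary-search the number of source tokens g_i to read for each target token.

    token_prob(i, j) is assumed to return p(y*_i | x_<=j, y_<i). The search
    interval is assumed to start at [1, width] for the first target token and
    to shift one position to the right for every following token.
    """
    policy = []
    for i in range(1, num_target + 1):
        lo = min(i, num_source)
        hi = min(i + width - 1, num_source)
        while lo < hi:
            mid = (lo + hi) // 2
            # Keep the half interval with the larger gain: since both halves
            # have equal width, this reduces to a midpoint-concavity test.
            if token_prob(i, mid) >= (token_prob(i, lo) + token_prob(i, hi)) / 2:
                hi = mid
            else:
                lo = mid + 1
        policy.append(lo)
    return policy

def policy_to_actions(policy: List[int]) -> List[str]:
    """Flatten g = (g_1, ..., g_I) into the READ/WRITE sequence of Eq. (9)."""
    actions, read = [], 0
    for g_i in policy:
        actions.extend(["READ"] * (g_i - read))  # read until g_i source tokens are in
        actions.append("WRITE")
        read = g_i
    return actions
```

The resulting READ/WRITE sequence is what the agent is trained to imitate in the Learning Optimal Policy section.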
}, { "figure_ref": [], "heading": "Model Settings", "publication_ref": [ "b9", "b4", "b10", "b0", "b14", "b11", "b14", "b7", "b10", "b12", "b9" ], "table_ref": [], "text": "Since our experiments involve the following methods, we briefly introduce them.\nWait-k The wait-k policy (Ma et al., 2019) reads k source tokens first and then writes a target token and reads a source token alternately.\nMulti-path Multi-path (Elbayad et al., 2020) introduces a unidirectional encoder and trains the model by uniformly sampling the latency.\nMMA MMA (Ma et al., 2020), which is a superior adaptive policy in SiMT, allows each head to decide the policy independently and integrates the results of multiple heads.\nTranslation-based The Translation-based policy (Alinejad et al., 2021) obtains its action labels by comparing the translation of the Full-sentence translation model with the results of other policies.\nFull-sentence Full-sentence is the conventional full-sentence translation model based on Transformer (Vaswani et al., 2017).\nBS-SiMT Our proposed method in Section 4.\nThe implementations of all our methods are adapted from the Fairseq library (Ott et al., 2019), which is based on Transformer (Vaswani et al., 2017). We apply the Transformer-Small model with 6 layers and 4 heads to all translation tasks. For the Translation-based policy and our BS-SiMT, we augment the implementation by introducing the agent to make decisions for actions. The translation model of our BS-SiMT is fine-tuned from Multi-path. For our method, we set the model hyperparameter as the search interval [l_1, r_1] for the first target token, and the search interval for subsequent target tokens is shifted one unit to the right from the previous token. The agent is composed of a 1-layer LSTM (Hochreiter and Schmidhuber, 1997) with 512 units, 512-dimensional embedding layers, and 512-dimensional linear layers. Other model settings follow Ma et al. (2020). We apply greedy search at inference and evaluate these methods with translation quality measured by tokenized BLEU (Papineni et al., 2002) and latency estimated by Average Lagging (AL) (Ma et al., 2019)." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b4", "b10", "b15", "b0" ], "table_ref": [], "text": "The translation performance comparison between our method and other methods on the 4 translation tasks is shown in Figure 4. Our BS-SiMT method consistently outperforms the previous methods under all latency and even exceeds the performance of the Full-sentence translation model with lower latency on the En→Vi, Vi→En, and En→De tasks. This shows the effectiveness of our method.\nCompared to the Wait-k policy, our method obtains significant improvement. This improvement can be attributed to the dynamic policy decision in our method, where the policy is based on the translation status. In contrast, the Wait-k policy relies on heuristic rules for translation generation. Our method also surpasses the Multi-path method by a clear margin, since Multi-path only changes the training of the translation model but still performs a fixed policy during inference (Elbayad et al., 2020). Compared to MMA, which is the superior policy in SiMT, our method achieves comparable performance and demonstrates better stability under high latency. MMA allows each head to independently decide its policy and perform translation concurrently, which can be affected by outlier heads and impact overall translation performance, particularly under high latency (Ma et al., 2020). 
In contrast, our method separates the policy and translation model, resulting in improved stability and efficiency (Zhang et al., 2020). When compared to the Translation-based policy, our method outperforms it and is capable of generating translation under all latency. Translation-based policy, which obtains the labels by utilizing external translation of the Full-sentence model, can only obtain the translation under a certain latency because of its offline construction method (Alinejad et al., 2021). In contrast, our method constructs the optimal policy online while taking into account the performance of the translation model, thereby getting better latency-quality trade-offs. Additionally, our method surpasses the Full-sentence model on En→Vi, Vi→En, and En→De tasks, highlighting the critical role of the policy in SiMT performance." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "To gain insights into the improvements achieved by our method, we conduct extensive analyses. All of the following results are reported on the De→En task. The results presented below provide a detailed understanding of our method." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We conducted ablation studies to investigate the impact of the search interval and translation status on our BS-SiMT model. Regarding the search interval, we explore the effect of different lengths of the search interval on translation performance. As shown in Table 1, our BS-SiMT model, with a search interval of 5, surpasses other settings. This finding highlights the effectiveness of setting an appropriate search interval close to the diagonal for each target token (Zhang and Feng, 2023b). By adjusting the search interval of the target tokens, we can obtain the optimal policy under all latency. Additionally, we explored the influence of the translation status on the agent. As mentioned in subsection 4.2, the agent determines its action based on the current translation status, which includes the last generated token. Hence, it is crucial to investigate whether using the generated translation or the ground-truth in training the agent yields better results. As shown in Table 2, the agent trained with generated translation demonstrates superior performance. This can be attributed to the deviation between the ground-truth and the translation status obtained by the model during inference. Training the agent with the generated translation enables a better alignment between its training and testing conditions, resulting in improved performance. " }, { "figure_ref": [ "fig_3" ], "heading": "Performance of Oracle Policy", "publication_ref": [ "b9" ], "table_ref": [ "tab_2" ], "text": "In addition to the ablation study, we also compare the performance on the test set according to the oracle policy. The oracle policy is obtained by our translation model using the whole source sentence on the test set. Therefore, the oracle policy is actually the optimal policy obtained by our method on the test set. As shown in Table 3, our oracle policy can achieve high translation quality, especially under low latency. This reflects the effectiveness of our way of building the optimal policy, and shows that our learned policy still has room for improvement.\nA good policy needs to ensure that the target token is generated only after the required source information is read. To evaluate the constructed oracle policy, we introduce sufficiency (Zhang and Feng, 2022c) as an evaluation metric. 
Sufficiency measures whether the number of source tokens read exceeds the aligned source position when translating each target token, thus reflecting the faithfulness of the translation.\nWe evaluate the sufficiency of the translation policy on the RWTH De→En alignment dataset 4 , where reference alignments are annotated by experts and seen as golden alignments5 . The results are shown in Figure 5. The oracle policy performs better than other methods in sufficiency evaluation and can even cover 75% of the aligned source tokens under low latency. Wait-k policy is worse than our oracle policy under low latency because it may be forced to output translation before reading the aligned source tokens (Ma et al., 2019). MMA gets the worst performance in sufficiency evaluation, which may be attributed to its serious problem of outlier heads on the De→En task. Combined with the results in Figure 4, our oracle policy achieves good trade-offs by avoiding unnecessary latency while ensuring translation faithfulness." }, { "figure_ref": [], "heading": "Analysis of the Trade-off Approach", "publication_ref": [ "b15" ], "table_ref": [ "tab_3" ], "text": "Our BS-SiMT approach achieves trade-offs by evaluating the concavity during binary search and selecting the interval with greater gain. Whether this trade-off approach is better needs to be further explored. In our method, we also consider an alternative approach within the framework. We investigate whether comparing the translation and ground-truth can be used to construct the optimal policy. As shown in Table 4, our method performs better than comparing translation and ground-truth. This is mainly because the condition of the latter method is difficult to achieve, resulting in the model reading too many source tokens (Zhang et al., 2020).\nOur approach allows for a broader interval to obtain the translation policy, enabling the construction of a more effective translation policy." }, { "figure_ref": [], "heading": "Training of Translation Model", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In our method, the construction of the optimal policy relies on the performance of the translation model. Therefore, the training of the translation model needs to be further explored. As shown in Table 5, our method obtains the best performance.\nTraining from scratch yields the worst performance, as the model lacks the ability to distinguish between good and poor translations. Fine-tuning from the Full-sentence model achieves better performance, but it does not have the ability to generate high-quality translation with partial source information. Our method, fine-tuned from Multi-path, is capable of generating high-quality translation under all latency." }, { "figure_ref": [], "heading": "Analysis on the Trained Agent", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "As introduced in subsection 4.2, the agent is trained with the constructed optimal policy. The training of the agent becomes a supervised learning process. Thus, we need to analyze the impact of different architectures of the agent on our method. The results presented in Table 6 demonstrate that the LSTM architecture achieves the best performance. On the other hand, the linear model with one hidden layer performs the worst due to its limited capacity to model sequential information compared to the RNN architecture. The LSTM model, with its larger number of trainable parameters, proves to be more suitable for this task than the GRU model."
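As a concrete illustration of the LSTM-based agent compared above, the following is a minimal PyTorch sketch. The layer sizes follow the reported settings (512-dimensional embedding and linear layers, a 1-layer LSTM with 512 units); how the two projected streams are combined before the LSTM and the exact output head are simplifying assumptions made for this sketch, not the authors' exact implementation.

import torch
import torch.nn as nn

class ReadWriteAgent(nn.Module):
    def __init__(self, hidden=512, num_actions=2):
        super().__init__()
        # Embeddings of the last source/target token come from the translation
        # model; the agent projects them and embeds the last action separately.
        self.token_proj = nn.Linear(2 * hidden, hidden)          # [src_emb; tgt_emb]
        self.action_emb = nn.Embedding(num_actions, hidden)
        self.action_proj = nn.Linear(hidden, hidden)
        self.lstm = nn.LSTM(2 * hidden, hidden, num_layers=1, batch_first=True)
        self.classifier = nn.Linear(hidden, num_actions)         # READ / WRITE logits

    def forward(self, src_emb, tgt_emb, last_action, state=None):
        # src_emb, tgt_emb: (batch, 1, hidden); last_action: (batch,) long tensor.
        token_feat = self.token_proj(torch.cat([src_emb, tgt_emb], dim=-1))
        action_feat = self.action_proj(self.action_emb(last_action)).unsqueeze(1)
        out, state = self.lstm(torch.cat([token_feat, action_feat], dim=-1), state)
        logits = self.classifier(out.squeeze(1))
        # Training fits these logits to the oracle actions of Eq. (10) with
        # cross-entropy; at inference the argmax gives the next action.
        return logits, state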
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b3", "b9", "b4", "b5", "b26", "b1", "b10", "b15", "b0", "b6", "b26", "b15", "b0" ], "table_ref": [], "text": "Recent SiMT methods can be roughly divided into two categories: fixed policy and adaptive policy.\nFor fixed policy, the model relies on predefined heuristic rules to generate translations. Dalvi et al. (2018) proposed STATIC-RW, which reads and writes RW tokens alternately after reading S tokens. Ma et al. (2019) proposed Wait-k policy, which writes and reads a token alternately after reading k tokens. Elbayad et al. (2020) introduced the unidirectional encoder and enhanced Wait-k policy by uniformly sampling latency k during training. Zhang et al. (2021) proposed future-guided training to help SiMT model invisibly embed future source information through knowledge distillation. Zhang and Feng (2021a) proposed char-level Wait-k policy to make the SiMT model adapt to the streaming input environment. Zhang and Feng (2021b) proposed MoE wait-k policy, which makes different heads execute different Wait-k policies, and combine the results under multiple latency settings to predict the target tokens.\nFor adaptive policy, the translation policy is determined based on current translation status. Gu et al. (2017) trained the agent for policy decisions using reinforcement learning. Zheng et al. (2019) trained the agent with optimal action sequences generated by heuristic rules. Arivazhagan et al. (2019) proposed MILk, which applies the monotonic attention and determines the policy based on a Bernoulli variable. Ma et al. (2020) proposed MMA, which implements MILk on Transformer architecture and achieves superior performance in SiMT. Zhang et al. (2020) proposed MU, which is an adaptive segmentation policy (Zhang and Feng, 2023a). Alinejad et al. (2021) used a full-sentence model to construct the translation policy offline, which can be used to train the agent. Zhang and Feng (2022a) implemented the adaptive policy by predicting the aligned source positions of each target token directly. Zhang and Feng (2022c) introduced dual constraints to make forward and backward models provide path supervision for each other. Zhang et al. (2022) proposed the Wait-info policy to balance source and target at the information level. Guo et al. (2022) performed the adaptive policy by integrating post-evaluation into the fixed policy. Zhang and Feng (2023b) proposed Hidden Markov Transformer, which models simultaneous machine translation as a hidden Markov process.\nThe previous methods often lack explicit supervision for the learning of the policy. Some papers use external information, such as generated heuristic sequences, to learn the policy (Zheng et al., 2019;Zhang et al., 2020;Alinejad et al., 2021). However, their methods heavily rely on heuristic rules and offline reference sequence construction, which affects the translation performance. Our BS-SiMT constructs the optimal translation policy online by checking the concavity via binary search without utilizing external information, thereby obtaining good latency-quality trade-offs." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose BS-SiMT, which utilizes binary search to construct the optimal translation policy online, providing explicit supervision for the agent to learn the optimal policy. The learned policy effectively guides the translation model in generating translations during inference. 
Experiments and extensive analyses show that our method can exceed strong baselines under all latency and learn a translation policy with good trade-offs." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In this paper, we build the optimal translation policy under all latency by simply setting the search interval, achieving high performance. However, we think that the performance of our method can be further improved by exploring more interval settings. Additionally, although we train the agent using a simple architecture and achieve good performance, there exists a performance gap between the learned policy and the searched optimal policy under low latency. Exploring more powerful models of the agent may help improve the performance and we leave it for future work." }, { "figure_ref": [], "heading": "A Hyperparameters", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "All system settings in our experiments are shown in Table 7. " }, { "figure_ref": [], "heading": "B Numerical Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "We thank all anonymous reviewers for their valuable suggestions. This work was supported by the National Key R&D Program of China (NO. 2018AAA0102502)." } ]
Simultaneous machine translation (SiMT) starts to output translation while reading the source sentence and needs a precise policy to decide when to output the generated translation. Therefore, the policy determines the number of source tokens read during the translation of each target token. However, it is difficult to learn a precise translation policy to achieve good latency-quality trade-offs, because there is no golden policy corresponding to parallel sentences as explicit supervision. In this paper, we present a new method for constructing the optimal policy online via binary search. By employing explicit supervision, our approach enables the SiMT model to learn the optimal policy, which can guide the model in completing the translation during inference. Experiments on four translation tasks show that our method can exceed strong baselines across all latency scenarios 1
Learning Optimal Policy for Simultaneous Machine Translation via Binary Search
[ { "figure_caption": "Figure 1 :1Figure 1: The translating probability of ground-truth when attending to different numbers of tokens. When translating each target token, the model adopts the waitk policy for previous tokens.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An example of finding the optimal policy through binary search. The light green area in the figure depicts the search interval of each target token. The horizontal axis of the two function images denotes the number of source tokens read in, and the vertical axis represents the probability of translating the ground-truth. Specifically, we focus on the search for the suitable number of source tokens required to translate the target token \"should.\"", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The architecture of the agent. The agent decides the next action based on the embedding of the last source and target token, as well as the last action.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Comparison of translation sufficiency of different translation policies.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "decides its policy by compar-", "figure_data": "29263226BLEU22 23 24 25 26 28 2724 Average Lagging (AL) 6 )XOOVHQWHQFH 8 %66L07 00$ 7UDQVODWLRQEDVHG 0XOWLSDWK :DLWN10BLEU24 18 20 22234 Average Lagging (AL) 5 6 7 )XOOVHQWHQFH 8 %66L07 00$ 7UDQVODWLRQEDVHG 9 0XOWLSDWK :DLWNBLEU30 20 26 28 24 220123 Average Lagging (AL) 4 5 6 )XOOVHQWHQFH 7 8 %66L07 00$ 7UDQVODWLRQEDVHG 0XOWLSDWK :DLWN9BLEU24 22 20 18 16202 Average Lagging (AL) 4 6 )XOOVHQWHQFH 8 %66L07 00$ 7UDQVODWLRQEDVHG 0XOWLSDWK :DLWN(a) En→Vi(b) Vi→En(c) De→En(d) En→DeFigure 4: Translation performance of different methods on En↔Vi and De↔En tasks. It shows the results ofour BS-SiMT method, Wait-k policy, Multi-path, Translation-based policy, MMA policy, and the Full-sentencetranslation model.Length [l 1 , r 1 ]ALBLEU5[3, 7] [5, 9]3.26 5.0128.95 30.443[3, 5] [5, 7]3.22 5.8828.29 30.697[3, 9] [5, 11]3.94 5.4126.76 29.14", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": ". We use greedy Model performance of different translation status during training of the agent.", "figure_data": "Reference[l 1 , r 1 ]ALBLEUTranslation[3, 7] [5, 9]3.26 5.0128.95 30.44Ground-Truth[3, 7] [5, 9]3.24 5.2028.41 30.19", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison between BS-SiMT (the trained agent) and Oracle Policy. We change the search interval for the first target token to achieve translation under all latency.", "figure_data": "MethodBS-SiMTOracle Policy[l 1 , r 1 ][3, 7][5, 9][7, 11][9, 13][3, 7][5, 9][7, 11][9, 13]AL3.265.017.008.773.275.297.198.95BLEU28.9530.4431.3731.9629.6730.8231.5031.99", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "All of the following results are reported on De→En task. The results presented below provide a detailed Comparison of different trade-off approaches.'Concavity' indicates building optimal policy by checking concavity. 
'GT' indicates building optimal policy by comparing translation and ground-truth.", "figure_data": "Method [l 1 , r 1 ]ALBLEUConcavity[3, 7] [5, 9]3.26 5.0128.95 30.44GT[3, 7] [5, 9]4.81 6.6120.85 22.81", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of different training methods of translation model. 'Full-sentence' indicates the translation model is fine-tuned from the Full-sentence translation model. 'None' represents the translation model is trained from scratch.", "figure_data": "Base Model [l 1 , r 1 ]ALBLEUMulti-path[3, 7] [5, 9]3.26 5.0128.95 30.44Full-sentence[3, 7] [5, 9]3.83 5.5928.80 30.28None[3, 7] [5, 9]3.43 5.2526.90 28.46", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Performance comparison of different architectures of the agent. 'GRU' and 'Linear' represent that the agent adopts GRU and Linear architecture respectively.", "figure_data": "Architecture [l 1 , r 1 ]ALBLEULSTM[3, 7] [5, 9]3.26 5.0128.95 30.44GRU[3, 7] [5, 9]3.34 5.1828.19 30.43Linear[3, 7] [5, 9]3.65 5.6027.82 29.99", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "9, 10, 11 respectively report the numerical results on IWSLT15 En→Vi, IWSLT15 Vi→En, IWSLT14 De→En and IWSLT14 En→De measured by AL and BLEU.", "figure_data": "HyperparameterIWSLT15 En↔Vi IWSLT14 De↔Enencoder layers66encoder attention heads44encoder embed dim512512encoder ffn embed dim10241024decoder layers66decoder attention heads44decoder embed dim512512decoder ffn embed dim10241024dropout0.30.3optimizeradamadamadam-β(0.9, 0.98)(0.9, 0.98)clip-norm00lr5e-45e-4lr schedulerinverse sqrtinverse sqrtwarmup-updates40004000warmup-init-lr1e-71e-7weight decay0.00010.0001label-smoothing0.10.1max tokens160008192×4", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Hyperparameters of our experiments.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
Shoutao Guo; Shaolei Zhang; Yang Feng
[ { "authors": "Ashkan Alinejad; Hassan S Shavarani; Anoop Sarkar", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Translation-based supervision for policy generation in simultaneous neural machine translation", "year": "2021" }, { "authors": "Naveen Arivazhagan; Colin Cherry; Wolfgang Macherey; Chung-Cheng Chiu; Semih Yavuz; Ruoming Pang; Wei Li; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Monotonic infinite lookback attention for simultaneous machine translation", "year": "2019" }, { "authors": "Mauro Cettolo; Jan Niehues; Sebastian Stüker; Luisa Bentivogli; Roldano Cattoni; Marcello Federico", "journal": "", "ref_id": "b2", "title": "The IWSLT 2016 evaluation campaign", "year": "2016-09" }, { "authors": "Fahim Dalvi; Nadir Durrani; Hassan Sajjad; Stephan Vogel", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Incremental decoding and training methods for simultaneous translation in neural machine translation", "year": "2018" }, { "authors": "Maha Elbayad; Laurent Besacier; Jakob Verbeek", "journal": "ISCA", "ref_id": "b4", "title": "Efficient wait-k models for simultaneous machine translation", "year": "2020-10-29" }, { "authors": "Jiatao Gu; Graham Neubig; Kyunghyun Cho; O K Victor; Li", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Learning to translate in real-time with neural machine translation", "year": "2017" }, { "authors": "Shoutao Guo; Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Turning fixed to adaptive: Integrating post-evaluation into simultaneous machine translation", "year": "2022" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Comput", "ref_id": "b7", "title": "Long short-term memory", "year": "1997" }, { "authors": "Javier Iranzo-Sánchez; Jorge Civera Saiz; Alfons Juan", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Stream-level latency evaluation for simultaneous machine translation", "year": "2021" }, { "authors": "Mingbo Ma; Liang Huang; Hao Xiong; Renjie Zheng; Kaibo Liu; Baigong Zheng; Chuanqiang Zhang; Zhongjun He; Hairong Liu; Xing Li; Hua Wu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "STACL: simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework", "year": "2019-07-28" }, { "authors": "Xutai Ma; Juan Miguel Pino; James Cross; Liezl Puzon; Jiatao Gu", "journal": "", "ref_id": "b10", "title": "Monotonic multihead attention", "year": "2020-04-26" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019-06-02" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Neural machine translation of rare words with subword units", "year": "2016" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N 
Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b14", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Ruiqing Zhang; Chuanqiang Zhang; Zhongjun He; Hua Wu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Learning adaptive segmentation policy for simultaneous translation", "year": "2020" }, { "authors": "Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "ICT's system for AutoSimTrans 2021: Robust char-level simultaneous translation", "year": "2021" }, { "authors": "Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Universal simultaneous machine translation with mixture-of-experts wait-k policy", "year": "2021-07-11" }, { "authors": "Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Gaussian multihead attention for simultaneous machine translation", "year": "2022" }, { "authors": "Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Informationtransport-based policy for simultaneous translation", "year": "2022" }, { "authors": "Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Modeling dual read/write paths for simultaneous machine translation", "year": "2022" }, { "authors": "Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Reducing position bias in simultaneous machine translation with length-aware framework", "year": "2022" }, { "authors": "Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "End-to-end simultaneous speech translation with differentiable segmentation", "year": "2023" }, { "authors": "Shaolei Zhang; Yang Feng", "journal": "", "ref_id": "b23", "title": "Hidden markov transformer for simultaneous machine translation", "year": "2023" }, { "authors": "Shaolei Zhang; Yang Feng; Liangyou Li", "journal": "AAAI Press", "ref_id": "b24", "title": "Future-guided incremental transformer for simultaneous translation", "year": "2021-02-02" }, { "authors": "Shaolei Zhang; Shoutao Guo; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Wait-info policy: Balancing source and target at information level for simultaneous machine translation", "year": "2022" }, { "authors": "Baigong Zheng; Renjie Zheng; Mingbo Ma; Liang Huang", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Simpler and faster learning of adaptive policies for simultaneous translation", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 338.87, 350.88, 186.27, 33.71 ], "formula_id": "formula_0", "formula_text": "L CE = - I i=1 log p θ (y ⋆ i |x ≤g i , y <i ),(1)" }, { "formula_coordinates": [ 2, 360.38, 534.8, 164.76, 14.19 ], "formula_id": "formula_1", "formula_text": "g k i = min{k + i -1, I},(2)" }, { "formula_coordinates": [ 2, 314.34, 687.6, 206.56, 34.42 ], "formula_id": "formula_2", "formula_text": "L ECE = - k∼U (K) I i=1 log p θ (y ⋆ i |x ≤g k i , y <i ), (3" }, { "formula_coordinates": [ 2, 520.9, 699.58, 4.24, 9.46 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 3, 124.27, 478.97, 161.35, 15.55 ], "formula_id": "formula_4", "formula_text": "p q i = p(y ⋆ i |x ≤⌈q * J⌉ , y <i ), (4" }, { "formula_coordinates": [ 3, 285.63, 482.62, 4.24, 9.46 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 4, 70.87, 184.13, 201.9, 104.54 ], "formula_id": "formula_6", "formula_text": "r i ← m i //Left Range else l i ← m i + 1 //Right Range end g i = l i median value m i of the interval [l i , r i ]," }, { "formula_coordinates": [ 4, 144.55, 311.11, 145.31, 24.43 ], "formula_id": "formula_7", "formula_text": "m i = ⌊ l i + r i 2 ⌋.(5)" }, { "formula_coordinates": [ 4, 128.59, 396.85, 161.27, 15.22 ], "formula_id": "formula_8", "formula_text": "p l i i = p θ (y ⋆ i |x ≤l i , y <i ).(6)" }, { "formula_coordinates": [ 4, 157.09, 511.45, 132.78, 28.59 ], "formula_id": "formula_9", "formula_text": "p m i i -p l i i m i -l i .(7)" }, { "formula_coordinates": [ 4, 70.87, 561.16, 220.08, 51.81 ], "formula_id": "formula_10", "formula_text": "p m i i -p l i i m i -l i and p r i i -p m i i r i -m i . Since m i -l i is equal to r i -m i , it is actually a compar- ison between p m i i and p l i i +p r i i 2" }, { "formula_coordinates": [ 4, 141.85, 635.96, 143.78, 27.53 ], "formula_id": "formula_11", "formula_text": "p m i i ≥ p l i i + p r i i 2 , (8" }, { "formula_coordinates": [ 4, 285.63, 646.78, 4.24, 9.46 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 5, 108.74, 189.19, 181.12, 23.36 ], "formula_id": "formula_13", "formula_text": "a t = WRITE, if t = g i + i READ, otherwise .(9)" }, { "formula_coordinates": [ 5, 131.02, 364.9, 158.85, 10.81 ], "formula_id": "formula_14", "formula_text": "max p θa (a t |a <t , o <t ),(10)" }, { "formula_coordinates": [ 5, 316.66, 118.32, 113.63, 37.73 ], "formula_id": "formula_15", "formula_text": "y 0 ← ⟨bos⟩, a 1 ← READ i ← 1, j ← 1, t ← 2 while y i-1 ̸ = ⟨eos⟩ do" } ]
10.18653/v1/D16-1125
2023-05-22
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction and Related Work", "publication_ref": [ "b13", "b14", "b24", "b9", "b26", "b20", "b6", "b12", "b4", "b3", "b25", "b21", "b22", "b0", "b18", "b1", "b17" ], "table_ref": [], "text": "In human communication, language is rarely used as a unimodal channel; rather, language is mostly used in reference to the surroundings, i.e., it is grounded in the physical world. Thus, in order to build artificial agents that could be potentially employed in scenarios requiring natural communication with humans, it is crucial to develop approaches for training such agents to communicate about the world in a human-like way (Lake et al., 2017). However, automatically evaluating the human-likeness of a trained system without costly human feedback is a recurring problem in NLP.\nIn this paper, we set out to provide tools for evaluating human-like pragmatic abilities of grounded models and evaluate a model trained interactively via reinforcement learning, which is commonly suggested to give rise to task-oriented behavior (Lazaridou and Baroni, 2020).\nGrounding of neural language models has been advanced greatly in recent years through image captioning models. Starting with the work by Vinyals et al. (2016) and Karpathy et al. (2014), neural encoder-decoder architectures have been dominating the field, recently extending to unified architectures (Zhou et al., 2020). However, these approaches are task neutral, i.e., the models are trained to produce generally true image captions.\nIn contrast, humans are highly flexible and pragmatic in their use of language and, e.g., adapt the granularity of their utterances to the requirements of the communicative task (Searle, 1969). It is generally guided by conversational maxims, suggesting that cooperative speakers should only provide as much information as required in a given context, be truthful, relevant, and brief (Grice, 1975). Therefore, faced with a simple referential task of picking out a target item among an array of distractors, humans tend to mention contrastive features of the target (e.g., Kramer and van Deemter, 2012), i.e., the ones setting it apart from distractors. On the other hand, biases towards producing shape and color descriptions even when these aren't contrastive have been identified (e.g., Degen et al., 2020). For grounded language models, the underlying pragmatic reasoning formalized as nested Bayesian inference about the behavior of speakers and listeners (Goodman and Frank, 2016) inspired decoding schemes applied on top of standardly trained models (e.g., Cohn-Gordon et al., 2018;Zarrieß et al., 2021;Shen et al., 2019;Vedantam et al., 2017;Andreas and Klein, 2016).\nHowever, evaluating the pragmatic qualities of models' predictions when they are applied to specific tasks (e.g., referential tasks) remains a challenge. Currently standard metrics like BLEU-n, ROUGE, CIDEr and METEOR (Papineni et al., 2002;Banerjee and Lavie, 2005;Vedantam et al., 2015; Lin, 2004) for evaluating models' generations make reference to the surface form of ground truth image annotations. They cannot provide insight into models' mechanics and possible biases based on context-dependent functional aspects like mentioning contrastive features or being overinformative. Given that model predictions might not always be syntactically well-formed and yet still count as functionally expedient for a human (e.g., see Fig. 1), evaluating pragmatic aspects of natural language image captions is important. 
We propose a new dataset and metrics facilitating such evaluation in the next sections." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "A3DS", "publication_ref": [ "b2", "b10" ], "table_ref": [], "text": "To enable such evaluation, we provide novel annotations for the dataset 3DShapes (Burgess and Kim, 2018) (introduced in Kim and Mnih (2018)) in the \"Annotated 3D Shapes\" (A3DS) dataset. The image dataset consists of 480,000 unique images of 3D geometric objects, constructed by varying six features (×number of distinct feature values): shape type (×4), shape color (×10), shape scale (×8), shape orientation relative to the background (×15), wall color (×10) and floor color (×10). For each image, two sets of ground truth captions were generated: exhaustive captions mentioning all six features and their values, and short captions, mentioning two or three features of the image only (see example annotation in Fig. 1). The captions were constructed with a hand-written grammar from the numeric labels shipped with the original dataset.\nFor each distinct feature value, different natural language descriptions were created. In total, over nine million exhaustive captions and 12 million short captions are released as part of this work. 2The important advantage of this synthetic dataset for investigating referential language use of models trained on it is that the numeric labels allow to easily identify contrastive versus redundant features of the target image in any given context of distractor images. Furthermore, training with fully exhaustive captions allows to focus evaluations on models' contrastive abilities, excluding insufficient granularity of training data as a potential reason for a system's failure to be contrastive.\nBecause all natural language expressions for each label are known, it is possible to comprehensively evaluate model predictions by-feature. Predictions of fine-tuned models which may deviate from ground truth captions in their surface form (e.g., due to language drift; see, e.g., Lazaridou et al. ( 2020)) can also be evaluated. We consider a caption contrastive if at least one of the known contrastive features for a given context (target and distractors) is mentioned in the target's description. For contrastive color features, a caption is considered contrastive if it mentions the respective color irrespective of other mentioned aspects, if the color is unique for the target. If several features in the target image have the same color, the description is considered contrastive only if the color name occurs together with the correct head noun (e.g., \"floor\", \"wall\", object shape). For other contrastive features like shape, the respective expression (e.g., \"ball\", \"in the left corner\") has to literally occur in the generated caption. For the example, in Fig. 1, we were able to identify that the caption is contrastive because the contrastive feature is the red color of the ball in the target image (left), there is only one red feature in the target image, and the generated caption contains the term \"red\".\nWe suggest informative metrics for evaluating pragmatic abilities of models on this dataset in the next section." 
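To make the contrastiveness criterion described above concrete, here is a simplified Python sketch of the check. The dictionaries and the expression lookup are hypothetical data structures assumed for illustration (the released annotations provide the numeric labels and natural-language expressions from which they can be built), and the head-noun requirement for shared colors is only loosely approximated by joint presence in the caption.

COLOR_FEATURES = {"object_color", "wall_color", "floor_color"}
HEAD_NOUNS = {"object_color": None, "wall_color": "wall", "floor_color": "floor"}

def is_contrastive(caption, target_feats, distractor_feats, expressions):
    # target_feats / distractor_feats: dict feature -> value (e.g. "red", "ball").
    # expressions: dict feature -> list of phrases verbalizing the target's value.
    caption = caption.lower()
    contrastive = [f for f in target_feats if target_feats[f] != distractor_feats[f]]

    for feat in contrastive:
        if feat in COLOR_FEATURES:
            color = target_feats[feat]
            unique = sum(target_feats[c] == color for c in COLOR_FEATURES) == 1
            if unique and color in caption:
                return True
            if not unique:
                # Shared color: the color word has to occur together with the
                # right head noun (any shape noun counts for the object color).
                heads = [HEAD_NOUNS[feat]] if HEAD_NOUNS[feat] else expressions.get("shape", [])
                if color in caption and any(h in caption for h in heads):
                    return True
        else:
            # Other features (shape, scale, orientation): the literal expression
            # for the target value has to occur in the generated caption.
            if any(e in caption for e in expressions.get(feat, [])):
                return True
    return False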
}, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b6", "b12", "b6", "b6", "b6" ], "table_ref": [], "text": "The metrics are informed by notions that are considered important in the cognitive science literature for cooperative and efficient pragmatic communi-cation (e.g., Grice, 1975) and used commonly in the literature on computational generation of referring expressions (e.g., Kramer and van Deemter, 2012). In the context of a reference task, we define pragmatically relevant categories of features a model might mention. Given a target and distractor image, each feature falls in one of the following three categories:\n• Contrastive feature: true of target and false of distractor.\n• Non-contrastive feature: true of both the target and the distractor, and, therefore, redundant for the purpose of reference.\n• False feature: false of the target.\nFrom these categories, we derive the following metrics (higher values are better), where c is the number of contrastive features mentioned in a generated caption y, k is the total number of features mentioned in y, and z is the ground truth number of contrastive features between the images:\n• Discriminativity d: d = 1 if c > 0 else 0, indicating if the caption successfully identifies the target, thus a binary measure of task success.\n• Contrastive efficiency e (applies only to discriminative captions, i.e., for d = 1): e = 1 if k = c = 1, else: e = 1 -c-1 k-1 , indicating whether the description avoids overmodification with contrastive features. This notion captures the extent to which the caption is economic and observes the communicative Maxim of Quantity, i.e., includes necessary details for the task but not more (Grice, 1975).\n• Relevance r: r = 1 -k-c\n6-z , indicates the propensity to avoid producing redundant noncontrastive features. This is formalized via the proportion of mentioned non-contrastive features (k -c) compared to all non-contrastive features (6 -z). It represents the communicative Maxim of Relevance (Grice, 1975) by measuring the degree to which details unnecessary for the task are excluded.\n• Optimal discriminativity od: od = 1 if c = 1 else 0. It is a binary indicator summarizing d and e, by binarizing the observance of the Maxim of Quantity for contrastive captions only (Grice, 1975).\nIn the next section, we showcase how these metrics can be applied in order to evaluate the development of pragmatic abilities of an image captioner through fine-tuning in an interactive setting." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b14", "b15", "b8" ], "table_ref": [], "text": "The multi-agent communication setting wherein the image captioner is trained as the sender agent together with an artificial receiver agent to complete a communicative task (e.g., reference game) allows to fine-tune the sender's captioning behavior based directly on task performance, e.g., via deep reinforcement learning (e.g., Lazaridou et al., 2020;Lazaridou and Baroni, 2020;Lazaridou et al., 2016;Havrylov and Titov, 2017), without making use of a supervised task specific dataset. Applied to the reference task, the idea is that the sender agent will learn to produce more contrastive descriptions which are helpful for the receiver to complete the task. Lazaridou et al. 
(2020) compare sender agent architectures in terms of their task-specific improvement, but they do not investigate properties like overinformativity that might have emerged during the multi-agent training.\nTo investigate these potential effects, following the \"multi-task learning\" training regime from Lazaridou et al. (2020), we pretrained a baseline image captioner (B) on 150,000 image-exhaustive caption pairs constructed from 30,000 images sampled from A3DS. It was then fine-tuned on another 150,000 pairs on a reference game together with a listener agent. In the reference game, both agents received concatenated pairs of images i = [i 1 ; i 2 ], where i t , t ∈ {1, 2} was the target known only to the sender. The sender was trained to produce a description of the target, so that the listener guesses the target correctly, given the same images in randomized order. The sender received the reward r = 1 if the guess was correct, and r = -1 otherwise. Both the sender and the listener consisted of a pretrained ResNet-50 image encoder which was not fine-tuned during the reference game, and a trainable linear layer projecting the ResNet image features to 512-dimensional features. These were input into one-layer LSTM language modules with the hidden layer size h = 512. Further architectural and training details followed Lazaridou et al. (2020); the weight λs for the speaker loss was set to 0.75. We trained two sender-agent pairs in the reference game setting: in the random pairs setting (RP), the agents saw pairs of (distinct) images selected at random. In the similar pairs setting (SP), they received images which had at least three overlapping features (e.g., target and distractor depicted the same shape of the same color with background of the same color).4 " }, { "figure_ref": [ "fig_1" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The agents were evaluated on three categories of test sets, each set containing 7500 image pairs. In the one-feature category, six sets were constructed, one per feature, where test pairs matched at least on the respective feature. The two-features category included three sets of pairs matched on at least two object features and a set with two random matching features. The three-features category included sets where at least all object features, all background features, or three randomly sampled features matched. These sets allowed us to evaluate in which conditions it was more difficult for the sender to produce appropriate captions. In the following, the fine-tuned sender models (RP and SP) are compared to the baseline model (B), which is the pretrained task-neutral image captioner. The average number of falsely named features was 0.720 for the baseline, 0.139 (RP) and 0.316 (SP). Table 1 shows listener test accuracies on all test splits, showing that the agents successfully learned the reference task (0.5 is chance). In terms of discriminativity d, it was more difficult for the fine-tuned models to identify the correct feature when two or three features were identical across the pair (Table 1). These average difficulties were driven by the failure on test sets where the non-contrastive features included shape (e.g., a pair showing a red vs. a blue block), indicating that the shape was easiest to pick up on for the models, although all features were mentioned in all training captions. 
For instance, d was 0.750 for SP on the object color-scale matched test set, and 0.724 on the random two-feature test set, but 0.501 on the shape-object color matched set. The discriminativity on random and background feature matched three-feature test sets was 0.618 | 0.875 (RP) and 0.854 | 0.605 (SP), while it was only 0.087 (RP) and 0.164 (SP) on the object feature matched test set.\nThe better contrastive performance of the baseline came at a cost of generally overmodifying the messages with contrastive features (see low contrastive efficiency, Table 1). Low relevance scores also show that the baseline did not identify functionally appropriate features well. In contrast, both fine-tuned models showed higher contrastive efficiency and relevance, indicating that the task-based fine-tuning might have helped the models to learn contrastiveness. The fine-tuned models also showed higher optimal contrastivity, which is, however, still far from perfect. In general, no qualitative differences between the two- and three-feature datasets or RP and SP settings are apparent.\nFigure 2 shows how frequently the models' predictions mentioned a specific feature when it was contrastively irrelevant (i.e., it zooms in on predictions where r < 1). For the fine-tuned models, it suggests potential biases towards redundantly producing object-related features (shape, scale, color of object), matching human biases (see Section 1), as opposed to background descriptions. The proportions slightly increase for object color and scale in the two- and three-feature test sets, potentially hinting at overmodification as the model's loophole behavior in a more complex setting. The SP model has a stronger redundancy propensity than RP. The apparent trend towards mentioning shape is in line with the pattern of discriminativity results described above where models relied on the shape being the discriminative feature between target and distractor." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b19" ], "table_ref": [], "text": "We provide the A3DS dataset alongside evaluation metrics for investigating referential pragmatic abilities acquired by grounded language models on this dataset. With this dataset, we identify that an image captioner fine-tuned interactively via reinforcement learning developed a strikingly human-like shape bias, while being less overinformative than a task-neutral model. Future research could expand such evaluations by including metrics which investigate additional aspects that might matter to human referential expression generation (e.g., the current metrics are agnostic to the surface order of discriminative features, while humans have preferences towards certain adjective ordering; Scontras et al. (2017)). Although these results are specific to the given architecture, with this work we hope to inspire research opening up black-box language models, an important task in the age of LLMs." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The identified tendencies towards mentioning object-related features and the reliance on the shape as a contrastive feature might be driven by the grammatical structure of the annotations, mostly presenting object features in sentence-initial subject position, although 40% of exhaustive captions mention either the scale or the object color as the last word in the sentence. 
Therefore, these results call for investigating the biases of model architectures less sensitive to sentence length than LSTMs, as well as extending the annotations with additional grammars. Further, this evaluation provides descriptive results of the models' pragmatic abilities, leaving the question of whether it is indeed a pragmatic inductive bias or, e.g., structural language drift (Lazaridou et al., 2020) causing the observed patterns, unanswered. Finally, since the evaluation pertains to the surface form of the predictions, applying decoding schemes other than greedy decoding used in this work might provide different patterns, indicating to which degree potential biases are due to model mechanics in opposition to sampling parameters." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank Elia Bruni for his support of the work which led to this paper, and Xenia Ohmer and Leon Schmid for helpful discussions. We also acknowledge support by the state of Baden-Württemberg through the computing resources provided by bwHPC and the German Research Foundation (DFG) through grant INST 35/1597-1 FUGG. Michael Franke is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 -Project number 39072764." } ]
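For reference, the four pragmatic metrics defined in the Evaluation Metrics section can be computed from three per-caption counts, as in the sketch below. The function signature and the guard for the degenerate case in which all six features are contrastive are assumptions made for illustration; the sketch mirrors the definitions rather than any released implementation.

def pragmatic_scores(c: int, k: int, z: int, total_features: int = 6):
    # c: mentioned contrastive features, k: all mentioned features,
    # z: ground-truth number of contrastive features in the context.
    d = 1.0 if c > 0 else 0.0                                  # discriminativity
    if d == 1.0:                                               # contrastive efficiency
        e = 1.0 if k == c == 1 else 1.0 - (c - 1) / (k - 1)
    else:
        e = None                                               # only defined for d = 1
    # Guard for z == total_features is an assumption: with no non-contrastive
    # features available, nothing redundant can be mentioned.
    r = 1.0 if z == total_features else 1.0 - (k - c) / (total_features - z)
    od = 1.0 if c == 1 else 0.0                                # optimal discriminativity
    return {"d": d, "e": e, "r": r, "od": od}

# Example: k = 3 mentioned features, of which c = 1 is contrastive, with z = 2
# contrastive features available in the context -> d = 1, e = 1, r = 0.5, od = 1.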
Evaluating grounded neural language model performance with respect to pragmatic qualities like the trade off between truthfulness, contrastivity and overinformativity of generated utterances remains a challenge in absence of data collected from humans. To enable such evaluation, we present a novel open source image-text dataset "Annotated 3D Shapes" (A3DS) comprising over nine million exhaustive natural language annotations and over 12 million variable-granularity captions for the 480,000 images provided by Burgess and Kim (2018). We showcase the evaluation of pragmatic abilities developed by a taskneutral image captioner fine-tuned in a multiagent communication setting to produce contrastive captions. The evaluation is enabled by the dataset because the exhaustive annotations allow to quantify the presence of contrastive features in the model's generations. We show that the model develops human-like patterns (informativity, brevity, over-informativity for specific features (e.g., shape, color biases)).
Evaluating Pragmatic Abilities of Image Captioners on A3DS
[ { "figure_caption": "Figure 1 :1Figure 1: Example image pair matching on five features (left: red, right: purple ball), left image is target. Example exhaustive ground truth caption for target: \"A tiny red ball near the right corner in front of a light green wall on green floor.\" Example short ground truth caption: \"A ball on green floor.\" Contrastive caption predicted by RP model: \"A tiny red ball green near the floor in green of\". 1", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Generation proportions of each feature (x-axis) when it was non-contrastive for each model (color) by test category (facets). Generation proportions of all features for the baseline (not shown) are at ceiling on all test sets, except for the scale category being at around 0.9 due to a tokenization glitch.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Discriminativity 0.999 0.822 0.824 0.997 0.576 0.586 0.984 0.527 0.541 Contrastive efficiency 0.198 0.879 0.875 0.203 0.963 0.955 0.251 0.856 0.875 Relevance 0.150 0.668 0.640 0.162 0.522 0.521 0.149 0.684 0.665 Optimal contrastivity 0.014 0.457 0.452 0.039 0.485 0.476 0.148 0.335 0.367 Mentioned features # 5.880 2.944 3.125 5.871 2.950 3.133 5.876 2.955 3.135 Pragmatic evaluation results by test set category for each model (B: pretrained baseline, RP: random pairs fine-tuning, SP: similar pairs fine-tuning), averaged across test sets within category. Bold numbers indicate best performance across models and test sets.", "figure_data": "one featuretwo featuresthree featuresScoreBRPSPBRPSPBRPSPListener accuracy-0.919 0.895 -0.887 0.900 -0.862 0.860", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Polina Tsvilodub; Michael Franke
[ { "authors": "Jacob Andreas; Dan Klein", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Reasoning about pragmatics with neural listeners and speakers", "year": "2016" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Chris Burgess; Hyunjik Kim", "journal": "", "ref_id": "b2", "title": "3d shapes dataset", "year": "2018" }, { "authors": "Reuben Cohn-Gordon; Noah Goodman; Christopher Potts", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Pragmatically informative image captioning with character-level inference", "year": "2018" }, { "authors": "Judith Degen; Robert D Hawkins; Caroline Graf; Elisa Kreiss; Noah D Goodman", "journal": "Psychological Review", "ref_id": "b4", "title": "When redundancy is useful: A Bayesian approach to \"overinformative\" referring expressions", "year": "2020" }, { "authors": "D Noah; Michael C Goodman; Frank", "journal": "Trends in cognitive sciences", "ref_id": "b5", "title": "Pragmatic language interpretation as probabilistic inference", "year": "2016" }, { "authors": " Herbert P Grice", "journal": "", "ref_id": "b6", "title": "Logic and conversation", "year": "1975" }, { "authors": " Brill", "journal": "", "ref_id": "b7", "title": "", "year": "" }, { "authors": "Serhii Havrylov; Ivan Titov", "journal": "Advances in neural information processing systems", "ref_id": "b8", "title": "Emergence of language with multi-agent games: Learning to communicate with sequences of symbols", "year": "2017" }, { "authors": "Andrej Karpathy; Armand Joulin; Li F Fei-Fei ", "journal": "Advances in neural information processing systems", "ref_id": "b9", "title": "Deep fragment embeddings for bidirectional image sentence mapping", "year": "2014" }, { "authors": "Hyunjik Kim; Andriy Mnih", "journal": "", "ref_id": "b10", "title": "Disentangling by factorising", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b11", "title": "", "year": "" }, { "authors": "Emiel Kramer; Kees Van Deemter", "journal": "Computational Linguistics", "ref_id": "b12", "title": "Computational generation of referring expressions: A survey", "year": "2012" }, { "authors": "M Brenden; Lake; D Tomer; Joshua B Ullman; Samuel J Tenenbaum; Gershman", "journal": "Behavioral and Brain Sciences", "ref_id": "b13", "title": "Building machines that learn and think like people", "year": "2017" }, { "authors": "Angeliki Lazaridou; Marco Baroni", "journal": "", "ref_id": "b14", "title": "Emergent multi-agent communication in the deep learning era", "year": "2020" }, { "authors": "Angeliki Lazaridou; Alexander Peysakhovich; Marco Baroni", "journal": "", "ref_id": "b15", "title": "Multi-agent cooperation and the emergence of (natural) language", "year": "2016" }, { "authors": "Angeliki Lazaridou; Anna Potapenko; Olivier Tieleman", "journal": "", "ref_id": "b16", "title": "Multi-agent communication meets natural language: Synergies between functional and structural language learning", "year": "2020" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b18", 
"title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Gregory Scontras; Judith Degen; Noah D Goodman", "journal": "Open Mind", "ref_id": "b19", "title": "Subjectivity predicts adjective ordering preferences", "year": "2017" }, { "authors": " John R Searle", "journal": "Cambridge university press", "ref_id": "b20", "title": "Speech acts: An essay in the philosophy of language", "year": "1969" }, { "authors": "Sheng Shen; Daniel Fried; Jacob Andreas; Dan Klein", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Pragmatically informative text generation", "year": "2019" }, { "authors": "Ramakrishna Vedantam; Samy Bengio; Kevin Murphy; Devi Parikh; Gal Chechik", "journal": "", "ref_id": "b22", "title": "Context-aware captions from context-agnostic supervision", "year": "2017" }, { "authors": "Ramakrishna Vedantam; Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b23", "title": "Cider: Consensus-based image description evaluation", "year": "2015" }, { "authors": "Oriol Vinyals; Alexander Toshev; Samy Bengio; Dumitru Erhan", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b24", "title": "Show and tell: Lessons learned from the 2015 MSCOCO image captioning challenge", "year": "2016" }, { "authors": "Sina Zarrieß; Hendrik Buschmeier; Ting Han; Simeon Schüz", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Decoding, fast and slow: A case study on balancing trade-offs in incremental, character-level pragmatic reasoning", "year": "2021" }, { "authors": "Luowei Zhou; Hamid Palangi; Lei Zhang; Houdong Hu; Jason Corso; Jianfeng Gao", "journal": "", "ref_id": "b26", "title": "Unified vision-language pre-training for image captioning and VQA", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 83.89, 575.21, 138.45, 12.1 ], "formula_id": "formula_0", "formula_text": "• Relevance r: r = 1 -k-c" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b39", "b16", "b2", "b11", "b13", "b5", "b35", "b18", "b31", "b32", "b29", "b31", "b4", "b17", "b36", "b18", "b19", "b32", "b29", "b31", "b37", "b33", "b30", "b28", "b40" ], "table_ref": [], "text": "Attribute-based controllable generation aims to generate text that exhibits desired attributes in certain aspects (Zhang et al., 2022). Early work focused on single-aspect control tasks and involved re-training or fine-tuning language models (LMs) using well-labeled data, which resulted in good performance (Keskar et al., 2019;Chan et al., 2021;Hao et al., 2021;Hu et al., 2017;Ficler and Goldberg, 2017;Xiao et al., 2021). Recent studies focus on a more challenging and practical setting, Figure 1: A comparison of existing methods and our approach. Top: optimization-based methods perform iteration / searching in the distorted text space. Middle: prefix-based methods fuse multiple prefixes to obtain interpolation or average of these distribution centers. Bottom: our framework estimates compact latent space for better controllability and performs efficient sampling with a fast ODE-based sampler. multi-aspect controllable text generation1 (Kumar et al., 2021;Qian et al., 2022;Qin et al., 2022). For instance, a dialogue system may require the control of emotions, persona, politeness, etc, at the same time. However, training multi-aspect controllers directly is difficult due to the limited availability of sentences with multi-attribute annotations. Thus, recent works focus on training separate single-aspect discriminators or controllers for each aspect and combining them for multi-aspect controllable text generation (Mireshghallah et al., 2022;Qian et al., 2022).\nAs illustrated in Figure 1, recent works on multi-aspect controllable text generation task can be primarily categorized into two types. Firstly, optimization-based methods either apply extra attribute classifiers to adjust the conditional probability distributions of language model at every generation step (Dathathri et al., 2020;Krause et al., 2021;Yang and Klein, 2021), or regard the decoding process as an optimization objective and search for optimal soft-representations that satisfy multi-objective constraints (Kumar et al., 2021(Kumar et al., , 2022;;Qin et al., 2022;Mireshghallah et al., 2022). However, from a distributional perspective, optimization-based methods often conduct complicated gradient-descent iterations or searching in the distorted text space, and the discrete nature makes it difficult to find high-quality texts, leading to poor linguistic quality and slow inference speeds. Secondly, prefix-based methods are introduced to guide conditional generation using lightweight continuous task-specific vectors (Qian et al., 2022;Yang et al., 2022). They typically train single-aspect prefixes separately and suffer from text quality degeneration when combining them for multi-aspect control due to the mutual interference between multiple prefixes. As depicted in Figure 1, prefix-based methods combine multiple prefixes to obtain the interpolation or average of these distribution centers appraised by prefixes. However, there could be a mismatch between interpolation points and target intersection regions when the distribution centers of different aspects are far away, leading to the degradation of textual fluency. 
Therefore, an ideal method for multi-aspect controllable generation should enhance controllability and textual quality, while enabling rapid inference speeds.\nIn this paper, we introduce a new technique for multi-aspect controllable text generation, dubbed MacLaSa, which estimates a compact space containing latent representations of various attributes and performs effective sampling using a fast sampler based on ordinary differential equations (ODEs). To eliminate the domain discrepancies between different aspects, we initially employ a VAE encoder network to map attribute-related sentences into latent representations and penalize the distance between each pair of aspect distribution centers. The acquired compact latent space aids in formulating joint latent-space energy-based models (EBMs) and allows us to integrate arbitrary attribute discriminators to satisfy multi-aspect combinations. Subsequently, we utilize an efficient ODE-based sampler (Song et al., 2021;Nie et al., 2021) to draw latent samples possessing desired attributes from the distribution formed by multiple attribute classifiers. Ultimately, the selected latent vectors are input into a VAE decoder to generate target text sequences. In short, our approach improves controllability and textual quality by estimating a compact latent space to mitigate mutual interference among various aspects, and the fast ODE-based sampler contributes to efficient sampling.\nWe conduct experiments on the multi-aspect control task with two attributes from the sentiment aspect and four attributes from the topic aspect, with datasets IMDb movie reviews (Maas et al., 2011) and AGNews (Zhang et al., 2015), respectively. Experimental results of both automatic and human evaluation demonstrate that our method achieves encouraging improvements in attribute relevance and text quality compared to previous strong baselines. Our work also exhibits significant advantages in inference speed over existing baselines2 ." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b4", "b17", "b36", "b24", "b10", "b20", "b0", "b29", "b32", "b18", "b19", "b23", "b38", "b41", "b31", "b37", "b15", "b31", "b37", "b25", "b25" ], "table_ref": [], "text": "In this section, we discuss the related work on multi-aspect control. Recent researches on multiaspect can be divided into two types: optimizationbased methods and prefix-based methods.\nOptimization-based Methods Existing efforts on multi-aspect control typically combine many attribute controllers in the decoding stage to bias the language model for desired directions. Weighteddecoding methods focus on decomposing conditional probability through Bayesian factorization into a language model and a classifier (Dathathri et al., 2020;Krause et al., 2021;Yang and Klein, 2021;Liu et al., 2021;Gu et al., 2022a;Hallinan et al., 2023). Other approaches define controllable text generation as a multi-objective optimization problem and find the optimal soft-representation sequences by specific sampling schemes or other gradient-based samplers (Lample et al., 2018;Bhattacharyya et al., 2021;Mireshghallah et al., 2022;Qin et al., 2022;Kumar et al., 2021Kumar et al., , 2022)). 
These optimization-based methods often require complicated iteration / search in the high-dimensional text space, leading to slow inference speed.\nPrefix-based Methods Recent work leverages the learned continuous task-specific vectors, which are called prefixes, as a lightweight alternative to guide the language model to generate desired attribute text (Li and Liang, 2021;Yu et al., 2021;Zhao et al., 2020;Qian et al., 2022;Yang et al., 2022;Huang et al., 2023). Contrastive Prefixes (Qian et al., 2022) utilize the opposite relationship between different attributes to help to train single-aspect prefixes and combine them for multi-aspect control. Tailor (Yang et al., 2022) provides a multi-aspect prefix mask and a re-indexing position-ids sequence to bridge the gap between single and multi-aspect control. Nevertheless, these learned controllers in prefix-based methods may prefer different language habits, resulting in textual quality degeneration when combining them for multi-aspect control.\nThere is also a line of work that manipulates latent variables in the latent space (Gu et al., 2022c,b;Liu et al., 2022). Gu et al. (2022c) map attributerelated sentences to the latent space and then designs a heuristic searching algorithm to approach intersection regions of the different attributes for generation. Despite their efficiency, they still suffer from the unstable controllability due to the rare intersections of different attributes. LatentOps (Liu et al., 2022) executes composable control operations within the low-dimensional continuous latent space. However, it does not adequately consider the discrepancy between various aspects, resulting in suboptimal performance when controlling multiple attributes simultaneously." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we first present the task definition of multi-aspect controllable text generation ( §3.1). Next, we describe how to build the compact latent space ( §3.2), how to define the joint EBMs on the latent space ( §3.3), and how to sample from the EBMs to generate the final results ( §3.4).\nThe overall structure of MacLaSa is illustrated in Figure 2. Our approach primarily relies on the variational autoencoder architecture for manipulating latent spaces. To weaken the mutual interference among different aspects, we initially employ the VAE encoder to estimate a continuous lowdimensional latent space, incorporating additional losses to ensure its compactness. Subsequently, we establish joint latent-space energy-based models, which allow us to integrate multiple constraint functions for guiding sophisticated multi-aspect control. Finally, we utilize a fast ODE-based sampler to draw samples from the EBMs and input them into the VAE decoder to generate the desired multi-aspect sequences." }, { "figure_ref": [], "heading": "Task Definition", "publication_ref": [], "table_ref": [], "text": "First, we present the task definition of multi-aspect controllable text generation. Suppose we have N aspects, represented by\nA = {A 1 , • • • , A N }, where each aspect A n contains |A n | attributes, given by {a 1 n , • • • , a |An| n }.\nThe goal of multi-aspect control is to generate sentences that possess multiple attributes a = {a * 1 , • • • , a * N } simultaneously. For instance, we may expect our model to produce a sentence with attribute a 2 1 (from aspect A 1 ) and attribute a 4 2 (from aspect A 2 ). 
Our training samples are organized and labeled according to their corresponding aspects and attributes. $S_n^j$ denotes the index set of sentences with attribute $a_n^j$. As a result, we have $S_n = \bigcup_{j=1}^{|A_n|} S_n^j$, which represents the index set containing all sentences within aspect $A_n$. Likewise, $S = \bigcup_{n=1}^{N} S_n$ signifies the indices encompassing our entire training dataset. We use $x$ to represent an arbitrary sentence and $z$ to indicate its latent representation.\nIt is worth noting that our training corpus contains only single-aspect labeled sentences, making it infeasible to directly train a multi-aspect controllable text generative model." }, { "figure_ref": [], "heading": "Building Latent Space", "publication_ref": [], "table_ref": [], "text": "To estimate a compact, continuous latent space that outlines the latent distribution of interest and facilitates subsequent sampling processes, we utilize a VAE network equipped with pre-trained language models to encode any single-aspect sentence $x$ into its hidden representation $z$ via $z = \mathrm{Encoder}_\phi(x)$. The encoded latent representations constitute the estimated attribute space.\nWe expect the latent space to be sufficiently compact while ensuring that latent representations from various aspects maintain their semantic meanings.\nTo accomplish this, we propose the following three training objectives: ELBO Loss $\mathcal{L}_E$ We adopt the basic Evidence Lower Bound (ELBO) objective to learn a smooth latent space and force the decoder to map any given latent vector $z$ back to its original text $x$:\n$\mathcal{L}_E = -\mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] + \mathrm{KL}(q_\phi(z|x) \,\|\, p_{\mathrm{prior}}(z)), \quad (1)$\nwhere $p_{\mathrm{prior}}(z)$ is a standard Gaussian distribution serving as the prior, and $\mathrm{KL}(\cdot\|\cdot)$ is the Kullback-Leibler divergence. The first term encourages $z$ to encode more relevant content information for reconstructing the original text $x$ with the VAE decoder $p_\theta$. The KL divergence forces the variational distribution $q_\phi(z|x)$ to match the prior.\nClassification Loss $\mathcal{L}_C$ We propose the classification loss $\mathcal{L}_C$ to force the mapped representations to preserve their original attribute information and to help the model distinguish representations of different attributes from the same aspect. We introduce independent classification layers for each aspect and train them by minimizing the negative log-likelihood of the corresponding attribute $a_n^j$:\n$\mathcal{L}_C = -\sum_{n=1}^{N}\sum_{j=1}^{|A_n|}\sum_{i \in S_n^j} \log p_{\pi_n}(a_n^j \mid z_i), \quad (2)$\nwhere $p_{\pi_n}$ is a classifier with parameters $\pi_n$ that distinguishes the attributes $\{a_n^*\}$ of aspect $A_n$.\nAspect Discrepancy Loss $\mathcal{L}_D$ To reduce the distribution discrepancy between different aspects, we introduce the aspect discrepancy loss (Gu et al., 2022c) to penalize the distance between the distribution centers of each pair of aspects:\n$\mathcal{L}_D = \sum_{1 \le n_1 < n_2 \le N} \Big\| \frac{1}{|S_{n_1}|}\sum_{i \in S_{n_1}} z_i - \frac{1}{|S_{n_2}|}\sum_{j \in S_{n_2}} z_j \Big\|_2, \quad (3)$\nwhich calculates the Euclidean distance between two distribution centers. In practice, we use a batch-level approximation by taking the average representations of each aspect in each mini-batch as the estimated centers and calculating the distances to the centers of the other aspects. Minimizing $\mathcal{L}_D$ allows the model to reduce the discrepancy between different aspects and helps to eliminate the mutual interference among them.\nIn total, our learning objective is:\n$\mathcal{L} = w_1\mathcal{L}_E + w_2\mathcal{L}_C + w_3\mathcal{L}_D. \quad (4)$\nWe update the parameters $\phi$, $\theta$, and $\{\pi_n\}$ of the encoder, decoder, and classifier layers."
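To make the three objectives above concrete, the following is a minimal PyTorch-style sketch of how the total loss in Eq. (4) could be assembled for one mini-batch. The tensor shapes, the per-aspect classifier heads, and the helper name are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def maclasa_vae_loss(mu, logvar, recon_logits, target_ids,
                     z, aspect_ids, attr_labels, aspect_heads,
                     w1=1.0, w2=1.0, w3=1.0):
    """Combine the ELBO (Eq. 1), classification (Eq. 2) and aspect
    discrepancy (Eq. 3) terms into the total objective of Eq. (4).

    mu, logvar   : [B, d]    Gaussian posterior parameters from the encoder
    recon_logits : [B, T, V] decoder logits for reconstructing the input
    target_ids   : [B, T]    token ids of the original sentences
    z            : [B, d]    reparameterised latent samples
    aspect_ids   : [B]       which aspect each sentence is annotated for
    attr_labels  : [B]       attribute label within that aspect
    aspect_heads : list of nn.Linear classification heads, one per aspect
    """
    # ELBO: token-level reconstruction + KL to the standard Gaussian prior.
    rec = F.cross_entropy(recon_logits.flatten(0, 1), target_ids.flatten())
    kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    loss_e = rec + kl

    # Classification loss: each aspect uses its own head on z (Eq. 2); the
    # per-aspect batch means double as estimated centers for Eq. (3).
    loss_c = z.new_zeros(())
    centers = []
    for n, head in enumerate(aspect_heads):
        mask = aspect_ids == n
        if mask.any():
            loss_c = loss_c + F.cross_entropy(head(z[mask]), attr_labels[mask])
            centers.append(z[mask].mean(dim=0))

    # Aspect discrepancy loss: Euclidean distance between every pair of centers.
    loss_d = z.new_zeros(())
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            loss_d = loss_d + torch.norm(centers[i] - centers[j], p=2)

    return w1 * loss_e + w2 * loss_c + w3 * loss_d
```

In the paper's setup, $w_1 = w_2 = w_3 = 1$, and the cyclical KL weight schedule with KL thresholding mentioned in Section 4.1 would additionally scale or clamp the `kl` term above.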
}, { "figure_ref": [], "heading": "Formulating Joint Latent-Space EBMs", "publication_ref": [], "table_ref": [], "text": "In order to satisfy the requirement of controlling multiple attributes simultaneously, we leverage the compositionality of EBMs and formulate the joint distribution of the latent representation and the target attributes by incorporating any constraint (e.g., attribute classifiers) into the energy function $E(\cdot)$.\nTo begin with, we define the following joint distribution over both the latent representation $z$ and the desired attributes $a$:\n$p(z, a) := p_{\mathrm{prior}}(z)\, p(a|z) = p_{\mathrm{prior}}(z) \cdot e^{-E(a|z)} / Z, \quad (5)$\nwhere $Z = \int e^{-E(a|z)}\, dz$ is the normalization term, $p_{\mathrm{prior}}(z)$ is the Gaussian prior distribution, and $p(a|z)$ follows a Boltzmann distribution. In this work, we assume the target attributes are independent of each other. We then formulate $E(a|z)$ as an energy-based model that can combine arbitrary attribute classifiers as needed:\n$E(a|z) = \sum_{n=1}^{N} \lambda_n E_n(a_n^* \mid z). \quad (6)$\n$\lambda_n \in \mathbb{R}$ is a weight that balances the performance among attributes from different aspects.\nThe energy function $E_n(a_n^*|z)$ is defined as the negative log-probability of the target attribute $a_n^j$:\n$E_n(a_n^*|z) = -f_n(z)[a_n^j] + \log \sum_k \exp f_n(z)[a_n^k], \quad (7)$\nwhere $f_n(z)$ is the multi-class attribute classifier trained on the frozen latent space, and $f_n(z)[a_n^*]$ is the unnormalized output logit for attribute $a_n^*$. After training the VAE, we fix the entire VAE encoder, map the attribute-annotated input text into the latent space, and then train the classifier to predict the target attribute label given the latent vector. Training the attribute classifiers $f_n(z)$ in the frozen low-dimensional latent space is efficient, which enables us to plug in different attribute classifiers to guide complex multi-aspect control."
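As a small illustration of how Eqs. (6)-(7) compose frozen per-aspect classifiers into a single energy over the latent vector, consider the sketch below; the classifier list, weights, and function names are assumptions made for exposition rather than the paper's code.

```python
import torch

def aspect_energy(logits, target_attr):
    """E_n(a*_n | z) from Eq. (7): negative log-probability of the target
    attribute, i.e. -logit[target] + logsumexp over all attribute logits."""
    return -logits[:, target_attr] + torch.logsumexp(logits, dim=-1)

def joint_energy(z, classifiers, target_attrs, lambdas):
    """E(a | z) from Eq. (6): weighted sum of per-aspect energies.

    z            : [B, d] latent vectors
    classifiers  : frozen per-aspect classifiers f_n, each mapping z to logits
    target_attrs : desired attribute index a*_n for each aspect
    lambdas      : balancing weights lambda_n
    """
    energy = z.new_zeros(z.size(0))
    for f_n, a_n, lam_n in zip(classifiers, target_attrs, lambdas):
        energy = energy + lam_n * aspect_energy(f_n(z), a_n)
    return energy
```

Because each $f_n$ acts on the frozen, low-dimensional latent space, adding or removing an aspect only means adding or removing one entry in `classifiers`, which is the plug-and-play property this section describes.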
Then z(0) is fed to the VAE decoder p θ to produce target text sequences that possess multiple attributes simultaneously.\nTo narrow the inevitable gap between the prior distribution p prior (z) and the learned VAE posterior q ϕ (z|x) on Z, following previous work (Li et al., 2020;Hu and Li, 2021;Liu et al., 2022), we fit a simple single-layer generative adversarial network (GAN) (Goodfellow et al., 2014), p GAN (z), on the learned latent space and draw z(T ) from p GAN (z).\nWe study the impact of p GAN in §4.5." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we demonstrate the effectiveness of our proposed MacLaSa in the multi-aspect control setting through both automatic and human evaluations. Additionally, we provide further analysis and visualization on efficiency, and case studies." }, { "figure_ref": [], "heading": "Experimental Setups", "publication_ref": [ "b28", "b40", "b31", "b4", "b17", "b36", "b24", "b1", "b27" ], "table_ref": [], "text": "Datasets We conduct experiments for controlling two aspects: sentiment and topic, simultaneously. We adopt the IMDb movie reviews (positive and negative) (Maas et al., 2011) for sentiment control and AGNews dataset (World, Sports, Business and Sci./Tech) (Zhang et al., 2015) for topic control. Following previous work (Qian et al., 2022;Gu et al., 2022c), we randomly sample 20k sentences from each dataset for each attribute to train our method. For evaluation, consistent with previous work (Dathathri et al., 2020;Krause et al., 2021;Yang and Klein, 2021;Liu et al., 2021;Gu et al., 2022a), we choose the same 15 attribute-unrelated prompts and ask the model to complete 50 sentences with the desired attributes starting with each prompt.\nMacLaSa Settings For the proposed MacLaSa, we employ BERT-base and GPT-2 medium to initialize the encoder and decoder networks in VAE, respectively. The dimension of the latent space is 128. We also apply a cyclical schedule for KL weight and a KL thresholding scheme to alleviate the notorious KL vanishing issue (Bowman et al., 2016). During the training stage, we use the AdamW (Loshchilov and Hutter, 2017) optimizer with a learning rate of 8e-5. The number of training epochs is 50. We also randomly select 10k / 1k examples to train / validate attributes classifiers in the latent-space EBMs. In our experiments, w 1 , w 2 and w 3 are set to 1. During the inference stage, we set β min = 0.1 and β max = 20 for the time-variant diffusion coefficient β t . We also manually tune the weight λ n of different attributes to balance them. All experiments are conducted on a single NVIDIA V100 32GB GPU." }, { "figure_ref": [], "heading": "Baseline Models", "publication_ref": [ "b4", "b24", "b10", "b29", "b31", "b25" ], "table_ref": [], "text": "We compare with three types of baseline models:\n(1) optimization-based methods: PPLM (Dathathri et al., 2020) back-propagates gradients of extra attribute classifiers to guide conditional generation at every decoding step. DEXPERTS (Liu et al., 2021) reweights the predictions of language models based on expert (and anti-expert) opinions for effective attribute control. MaRCo (Hallinan et al., 2023) achieves controllable generation using likelihoods under a expert LM and a anti-expert LM to find candidate words to mask and replace. Mix&Match (Mireshghallah et al., 2022) uses a Metropolis-Hastings sampling scheme to draw samplers from an energy-based model that combines multiple attribute discriminators. 
(2) Prefix-based methods: Contrastive Prefixes (abbreviated as Contrastive) (Qian et al., 2022) trains prefixes for each aspect while the combination of them can achieve multi-aspect control. We also compare with recent approaches that manipulate the latent space, including: LatentOps (Liu et al., 2022) performs composable text operations in the low-dimensional latent space, and Distribution (Gu et al., 2022c) searches for the intersection areas of multiple attribute distributions for generation." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b26", "b22" ], "table_ref": [], "text": "Automatic Evaluations We adopt three automatic evaluations metrics to measure the performance on the two-aspect control task. Correctness evaluates the success rate of controlling the two aspects simultaneously. We finetune two RoBERTa-Large (Liu et al., 2019) discriminators on the IMDb dataset for sentiment aspect, and the AGNews dataset for topic aspect. We use the two attribute discriminators to compute the fraction of sentences that contain pre-specified attributes. Perplexity (PPL) is an automatic metric of text fluency. We feed generated test sentences to a GPT2-Large model and report the perplexity score. Distinctness (Li et al., 2016) is a n-gram-based metric for evaluating textual diversity, we report Distinct-1 and Distinct-2 in our paper.\nHuman Evaluations In addition to automatic evaluations, we conduct human evaluations to compare our method's performance with that of the baseline models. We enlist four annotators with high-level language skills to carry out the human evaluation. Annotators are instructed to assess attribute relevance, fluency, and diversity on a scale of 1-5, with 1 denoting \"very low\" and 5 representing \"very high.\" Moreover, we direct the annotators not to consider linguistic quality when evaluating attribute alignment and vice versa. We randomly select 800 generated sentences (100 for each combination) and shuffle them for evaluation with each method. The scores are then averaged to derive the final human evaluation results." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "Automatic Evaluations We conduct experiments in the two-aspect control setting and compare our method with several strong baselines. The results of automatic evaluation are depicted in Table 1. We calculate the average correctness scores of eight attribute combinations as the final results for each method. We also report the standard deviations, which stand for the stability of models among different runs. Moreover, we assess the average inference time required to generate a single sentence for each method.\nWe note that existing baselines excel in individual evaluation metrics but struggle to concurrently achieve good controllability and superior linguistic quality, which is essential for multi-aspect control. PPLM and MaRCo can generate fluent sentences but fall short in attribute accuracy. In contrast, Mix&Match demonstrates strong attribute controllability, yet the text quality is subpar. Moreover, optimization-based methods, including PPLM and Mix&Match, exhibit severe slow inference speeds due to their complex iterations or searching in the high-dimensional text space. The Contrastive method attains a high correctness score in multiaspect control by training separate continuous prefix vectors for each aspect. However, the mutual interference of different prefixes results in diminished text quality. 
LatentOps has average performance over baseline models. The Distribution method generates highly fluent texts with good attribute correctness scores but lacks textual diversity.\nMacLaSa showcases a notable performance boost in average correctness scores, achieving an 11.62% improvement compared to the strongest baseline. This result highlights our superiority in multi-aspect controllability. Additionally, MacLaSa displays good linguistic quality compared to previous method, emphasizing the benefits of learning a compact latent space. Our approach also exhibits substantial advantages in generation efficiency. Compared to the parameterefficient prefix-based Contrastive method, our method demonstrates a remarkable 5.9× faster in inference speeds. In summary, MacLaSa surpasses existing baselines in attribute correctness and textual quality while keeping high inference speeds." }, { "figure_ref": [], "heading": "Human Evaluations", "publication_ref": [], "table_ref": [], "text": "The human evaluation results for the multi-aspect control task can be found in evaluation results, annotators favor our approach as it delivers the highest text quality among the baselines. Overall, our model demonstrates superior performance in both attribute correctness and textual quality. Both automatic and human evaluations demonstrate that our proposed MacLaSa outperforms other baseline models in terms of attribute correctness and linguistic quality, while maintaining a high inference speed." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Effects of VAE Losses", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We conduct an ablation study to verify the effects of the classification loss L C and aspect discrepancy loss L D . The results are shown in Table 1. Removing L C causes the latent space to collapse completely. The correctness scores drop drastically as the model can hardly distinguish between representations of different attributes within the same aspect. Removing L D degrades attribute correctness since we cannot alleviate domain gaps between different data sources. Interestingly, without L D , the distance between sam- ple points from different aspects increases, leading our model to generate sentences mapped from sparser regions. This results in a minor decrease in fluency while slightly increasing diversity." }, { "figure_ref": [], "heading": "Effects of Samplers", "publication_ref": [ "b19", "b32", "b30" ], "table_ref": [ "tab_3" ], "text": "To demonstrate the superiority of our ODE-based sampler, we compare it with other standard samplers. For fair comparison, we fix the parameters of VAE and choose different samplers for multi-aspect control text generation. We first implement a random sampler by directly drawing samples from the latent space using p GAN (described in §3.4). We also compared it with a gradient-based sampler using Langevin Dynamics (Kumar et al., 2022;Qin et al., 2022). The automatic evaluation results are shown in Table 3. Random sampling directly from the latent space can only generate representations with single attributes, highlighting the necessity of using a specific sampler. While the LD-based sampler can generate high-quality sentences, it sacrifices attribute alignment, resulting in low attribute relevance. This may be because LD is sensitive and unrobust to hyperparameters (Nie et al., 2021). 
In contrast, our ODE-based sampler outperforms LD in terms of attribute alignment and textual diversity.\nTo investigate the impact of p GAN , we conduct experiments by removing the GAN network and directly drawing latent representations from the standard Gaussian distribution N (0, I). As shown in Table 3, without the GAN, our model cannot accurately estimate the attribute space, resulting in decreased attribute relevance and textual quality." }, { "figure_ref": [], "heading": "Visualization of Latent Space", "publication_ref": [], "table_ref": [], "text": "To provide an intuitive impression of the estimated latent space," }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "Generated Sentences" }, { "figure_ref": [], "heading": "Contrastive", "publication_ref": [], "table_ref": [], "text": "Once upon a time, not so bad ASTRONAUT Tragic end-Mariners collapse to sweep from the cliff in the AL wildgoose division.\nThe country turns its tail WESTMINSTER is in many ways and SOUTHAMPTON FALL seems to have the same boring name of it all NBA names that is.\nThe president of the country not to be? Your unk will not like your unk, your unk says the singer, doesn." }, { "figure_ref": [], "heading": "Distribution", "publication_ref": [], "table_ref": [], "text": "The last time was bad. The first time is bad. The third time is bad. And the fourth is worse.\nThe horse is bad. The horse was bad in the first round of contest, showing a loss to rival South Korea after an earlier victory.\nThe road is bad. The road was bad in the first round of competition, ending up with a record-breaking 30-year drought in the U.S. Open..." }, { "figure_ref": [ "fig_1" ], "heading": "MacLaSa", "publication_ref": [ "b34" ], "table_ref": [], "text": "The president of the country can't hang on to results. After Sunday's debacle in Philadelphia, there is little hope that Tedford University can maintain its ranking in the top 10 in the country.\nThe horse was all wrong: Rossi Causes world championship leader Valentino Rossi suffered from an unusual form of ... The last time they clashed, they failed to meet expectations for this matchup to be made at the WNBA Finals and they have not been... we use the t-SNE technique (Van der Maaten and Hinton, 2008) to visualize part of our estimated latent space with four attributes: positive, negative, world and sports in Figure 3. As shown, (1) attribute distributions within the same aspect are well separated due to the classification loss L C that helps our model distinguish mutually exclusive attributes.\n(2) The distribution centers of sentiment and topic aspects are close to each other because we introduced L D to penalize the distance between them to eliminate domain gaps, which helps generating high-quality multi-aspect sentences. We also notice that the combination of negative-world is tighter than that of negative-sports because world news often covers negative events such as war, disease, and famine. This observation aligns with our experimental results in Appendix A." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "To better understand the benefits of learning a compact latent space for generative models, we randomly present generated examples in Table 4. 
When generating sentences with the attribute combination negative and sports, the Contrastive method can generate attribute-related words like \"tragic\" and \"NBA\"; however, the semantic coherence of the sentences is insufficient. This observa-tion is consistent with the results of both automatic and human evaluations (see § 4.4). One possible explanation is that the prefixes used for sentiment and topic control are trained independently, causing the two learned prefixes to exhibit different language habits and leading to incoherent expressions when combined for multi-aspect control. Conversely, the Distribution method can generate fluent sentences that display multiple attributes but struggles with varying expressions. For instance, Distribution tends to use the word \"bad\" to convey negative emotions, and its sentence structure is often repetitive, such as \"The [noun] was bad in the first round of \". Our proposed MacLaSa can generate numerous attribute-related content, such as \"there is little hope\" and \"world championship leader\", in a fluent manner. By minimizing the discrepancy between sentiment and topic representations in the latent space, we merge high-quality representations related to attribute information, resulting in more coherent expression." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this study, we introduce a novel method, namely MacLaSa, for multi-aspect controllable text generation that estimates a compact, low-dimensional latent space and employs a fast ODE-based sampler for efficient sampling. Our experiments on the two-aspect control task demonstrate the effectiveness and efficiency of our approach. Additionally, we carry out in-depth analytical experiments to emphasize the impact of each module and visualize the estimated latent space. In the future, we aim to expand our work by incorporating arbitrary attribute discriminators into the diffusion process using a plug-and-play approach. Furthermore, we plan to explore more powerful models to enhance the linguistic quality of generated sentences." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "One of the limitations of the current MacLaSa approach is that when a new aspect or attribute is introduced, the entire VAE framework needs to be retrained to accommodate the unseen attributes. This retraining process can often be time-consuming and computationally expensive, posing a significant challenge in dynamic environments where new aspects may frequently emerge. Moreover, due to the notorious KL vanishing issue, the training process of the VAE framework is not stable and requires a significant amount of skill and experience to address. The KL vanishing problem refers to the situation where, during the training process, the KL divergence term may approach zero. This can lead to a poorly constrained latent space, resulting in the model generating samples that lack diversity and are not representative of the true data distribution. To tackle this issue, we adopt several techniques, which are described in § 4.1." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "We honor and support the EMNLP code of Ethics. The paper focuses on controlled text generation, which aims to generate text with desired aspects. We recognize that controlled text generation may be misused to generate harmful texts, e.g., fake news. 
However, our method can also help to eliminate toxic information in pre-trained language models by introducing specific attribute classifiers. Overall, it is meaningful to continue research into this work based on predecessors. Besides, the datasets used in this paper are all from previously published work and do not involve privacy or ethical issues." }, { "figure_ref": [], "heading": "A Detailed Results of Multi-aspect Control", "publication_ref": [ "b26" ], "table_ref": [ "tab_7" ], "text": "We exhibit the detailed results of eight combinations (two sentiment attributes × four topic attributes) on multi-aspect control in Table 5. We compare with three types of baselines:\n(1) optimization-based methods, PPLM and Mix&Match.\n(2) prefix-based method Contrastive, and (3) methods that manipulate the latent space, for example, LatentOps and the Distribution method. For automatic evaluation metrics, we finetune two RoBERTa-Large (Liu et al., 2019) discriminators to assess the attribute accuracy scores for both sentiment and topic aspects simultaneously. Perplexity (PPL) is employed to gauge the linguistic quality of the generated sentences. Additionally, we compute the Distinctness score to appraise the textual diversity, reporting Distinct-1 and Distinct-2 in our paper. We also report the standard deviations, which stand for the stability of models among different runs.\nWe observe that PPLM demonstrates strong controllability in specific combinations, such as the Positive-Sci./Tech pairing. However, the performance of each combination varies significantly, resulting in subpar average results. This phenomenon also exists for DEXPERTS and MaRCo. While Mix&Match and the Contrastive method excel at attribute alignment, their linguistic quality leaves much to be desired. We postulate that this is due to Mix&Match employing a Metropolis-Hastings sampling scheme for high-dimensional text space sampling, which is hindered by the discrete nature of text space and prevents smooth text generation. The Contrastive method posits that contrasting relationships between individual attributes within each aspect aid in training attribute controllers, but it neglects the differences between aspects, compromising overall textual quality. Regarding the two latent space manipulation methods, LatentOps exhibits moderate performance in both attribute relevance and textual quality, while the Distribution method generates fluent sentences with the desired attributes but lacks diversity.\nOur method attains a remarkable average accuracy of 59.18% across the eight combinations, boasting a 11.62% improvement compared to the most powerful baseline and showcasing the exceptional controllability of our approach. Additionally, our technique excels in both linguistic quality and textual diversity. MacLaSa delivers well-rounded " }, { "figure_ref": [], "heading": "B Distribution of Attribute Space", "publication_ref": [], "table_ref": [], "text": "In Figure 4, we employ the t-SNE technique to project hidden representations from four attributes into 2D for visualization: positive, negative, business, and sci./tech. This offers insight into a portion of the estimated latent space. We observe that, on one hand, the two sentiment attributes are distinctly separated due to the classification loss L C , which also applies to the topic aspects. Conversely, the distribution centers of the two aspects are situated closely together, as a result of the aspect discrepancy loss penalty L D . 
Overall, the observed attribute space distribution aligns with our expectations. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Key R&D Program of China (2022YFB3103700, 2022YFB3103704), the National Natural Science Foundation of China (NSFC) under Grants No. 62276248, and the Youth Innovation Promotion Association CAS under Grants No. 2023111. Liang Pang is also supported by Beijing Academy of Artificial Intelligence (BAAI)." } ]
Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously. Traditional methods either require expensive iteration / searching within the discrete text space during the decoding stage, or train separate controllers for each aspect, resulting in a degradation of text quality due to the discrepancy between different aspects. To address these limitations, we introduce a novel approach for Multi-aspect control, namely MacLaSa, that estimates compact Latent space for multiple aspects, and performs efficient Sampling with a fast sampler. To eliminate the domain discrepancies between different aspects, we first utilize a variational autoencoder (VAE) network to map text sequences from various data sources into close latent representations. The estimated latent space enables the formulation of joint energy-based models and the plugging in of arbitrary attribute discriminators to achieve multiaspect control. Afterwards, we draw latent samples with a fast sampler based on ordinary differential equations and feed sampled examples to the VAE decoder to produce target text sequences. Experimental results demonstrate that MacLaSa outperforms strong baselines on both attribute relevance and textual quality while maintaining a high inference speed.
MacLaSa: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space
[ { "figure_caption": "Figure 2: An overview of MacLaSa. Left: Build latent space for MacLaSa. We utilize the VAE framework with two additional losses to build a compact latent space. Top Right: Formulate joint EBMs. We formulate the latent-space EBMs of latent representation and attribute to facilitate plugging in multiple attribute constraint classifiers. Bottom Right: Sample with ODE. We adopt a fast ODE-based sampler to perform efficient sampling from the EBMs, and feed samples to the VAE decoder to output desired multi-aspect sentences.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3: Projection of four attributes of two aspects from latent space via t-SNE.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" } ]
The experimental findings indicate that MacLaSa maintains a high inference speed as well.", "figure_data": "PositiveNegativeBusinessSci./TechSentiment CenterTopic CenterFigure 4: Projection of part of estimated attribute spacewith t-SNE.", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Detailed results of each combination on multi-aspect control.", "figure_data": "MethodsCombinationCorrectness (%) Senti. & Topic Acc. ↑Text Quality PPL ↓Diversity Distinct-1 ↑ Distinct-2 ↑Positive-World20.36 ± 1.6925.47 ± 1.700.230.64Positive-Sports16.53 ± 1.1325.78 ± 1.300.230.63Positive-Business25.24 ± 2.9626.66 ± 1.260.240.64Positive-Sci./Tech61.73 ± 0.6625.06 ± 1.530.240.66PPLMNegative-World3.87 ± 1.9925.27 ± 1.230.230.64Negative-Sports2.27 ± 0.5725.96 ± 1.540.230.63Negative-Business1.78 ± 1.2626.11 ± 1.200.230.64Negative-Sci./Tech13.29 ± 1.8224.40 ± 1.110.240.66Average18.14 ± 0.4525.59 ± 1.090.230.64Positive-World34.22 ± 4.2437.36 ± 3.460.240.72Positive-Sports8.40 ± 2.6637.36 ± 3.460.240.72Positive-Business10.98 ± 1.6737.36 ± 3.460.240.72Positive-Sci./Tech45.02 ± 4.3137.36 ± 3.460.240.72DEXPERTSNegative-World9.47 ± 2.6840.03 ± 2.350.210.68Negative-Sports8.17 ± 2.2740.03 ± 2.350.210.68Negative-Business10.98 ± 1.5040.03 ± 2.350.210.68Negative-Sci./Tech63.64 ± 8.7340.03 ± 2.350.210.68Average23.93 ± 1.1138.70 ± 2.510.230.70Positive-World36.22 ± 8.0417.13 ± 1.510.180.57Positive-Sports37.11 ± 25.2318.16 ± 1.470.170.55Positive-Business38.89 ± 8.3419.43 ± 2.130.190.59Positive-Sci./Tech50.00 ± 5.2117.91 ± 1.390.180.57MaRCoNegative-World8.22 ± 4.9118.79 ± 1.880.190.59Negative-Sports10.89 ± 0.3819.94 ± 2.850.170.57Negative-Business22.89 ± 20.0720.51 ± 2.450.190.59Negative-Sci./Tech18.22 ± 5.3919.06 ± 1.910.180.59Average27.81 ± 1.9418.87 ± 1.850.180.58Positive-World58.89 ± 0.8361.27 ± 0.790.360.84Positive-Sports58.89 ± 5.0666.58 ± 2.520.350.84Positive-Business39.78 ± 1.6665.89 ± 1.770.350.84Positive-Sci./Tech65.33 ± 2.4969.07 ± 2.170.360.84Mix&MatchNegative-World41.55 ± 1.6669.49 ± 1.140.350.84Negative-Sports47.33 ± 8.1372.72 ± 1.330.360.84Negative-Business31.56 ± 5.1571.61 ± 3.870.350.84Negative-Sci./Tech58.00 ± 4.7573.08 ± 2.060.370.84Average50.17 ± 2.0768.72 ± 0.970.360.84Positive-World67.87 ± 1.1348.15 ± 15.740.230.72Positive-Sports70.31 ± 5.5552.36 ± 8.740.210.70Positive-Business53.16 ± 5.0056.13 ± 14.350.220.72Positive-Sci./Tech51.96 ± 3.0945.03 ± 12.270.230.71ContrastiveNegative-World40.94 ± 4.2651.27 ± 15.520.220.70Negative-Sports40.71 ± 10.6559.77 ± 8.870.210.71Negative-Business48.84 ± 6.9561.91 ± 15.140.200.70Negative-Sci./Tech50.40 ± 3.9545.86 ± 9.810.230.71Average53.02 ± 1.5252.56 ± 11.970.220.71Positive-World57.96 ± 5.0724.79 ± 3.340.170.56Positive-Sports63.47 ± 11.0128.01 ± 1.800.160.55Positive-Business61.73 ± 9.3625.73 ± 1.840.140.52Positive-Sci./Tech39.64 ± 22.0726.49 ± 1.730.170.55LatentOpsNegative-World34.62 ± 1.5924.98 ± 1.560.160.55Negative-Sports40.41 ± 9.7225.14 ± 1.480.140.52Negative-Business25.74 ± 2.4127.30 ± 2.110.150.54Negative-Sci./Tech31.56 ± 2.5326.49 ± 0.990.160.57Average44.41 ± 5.7226.11 ± 1.460.160.55Positive-World37.42 ± 4.3813.34 ± 0.130.090.30Positive-Sports71.60 ± 4.3914.67 ± 0.530.090.29Positive-Business72.80 ± 6.4511.23 ± 1.000.070.25Positive-Sci./Tech72.80 ± 11.0712.41 ± 0.640.080.28DistributionNegative-World46.80 ± 10.8911.89 ± 1.120.070.28Negative-Sports35.91 ± 7.8412.99 ± 0.570.080.28Negative-Business26.09 ± 5.6011.03 ± 0.110.070.25Negative-Sci./Tech34.86 ± 6.2512.25 ± 0.930.080.27Average49.79 ± 1.9912.48 ± 
0.520.080.28Positive-World59.47 ± 6.6626.26 ± 0.200.190.65Positive-Sports87.93 ± 4.2028.69 ± 1.780.160.57Positive-Business82.87 ± 3.2727.67 ± 1.550.150.57Positive-Sci./Tech76.34 ± 0.4628.77 ± 2.030.160.60MacLaSaNegative-World56.54 ± 1.4726.28 ± 1.260.160.59Negative-Sports38.00 ± 2.6732.23 ± 0.200.170.61Negative-Business31.40 ± 4.0729.06 ± 1.120.150.59Negative-Sci./Tech44.74 ± 0.3431.95 ± 0.480.170.62Average59.18 ± 0.8128.19 ± 1.260.160.60", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
Hanxing Ding; Liang Pang; Zihao Wei; Huawei Shen; Xueqi Cheng; Tat-Seng Chua
[ { "authors": "Sumanta Bhattacharyya; Amirmohammad Rooshenas; Subhajit Naskar; Simeng Sun; Mohit Iyyer; Andrew Mccallum", "journal": "", "ref_id": "b0", "title": "Energy-based reranking: Improving neural machine translation using energybased models", "year": "2021" }, { "authors": "R Samuel; Luke Bowman; Oriol Vilnis; Andrew M Vinyals; Rafal Dai; Samy Józefowicz; Bengio", "journal": "", "ref_id": "b1", "title": "Generating sentences from a continuous space", "year": "2016-08-11" }, { "authors": "Alvin Chan; Yew-Soon Ong; Bill Pung; Aston Zhang; Jie Fu", "journal": "", "ref_id": "b2", "title": "Cocon: A self-supervised approach for controlled text generation", "year": "2021-05-03" }, { "authors": "Tian Qi; Chen ; Yulia Rubanova; Jesse Bettencourt; David Duvenaud", "journal": "", "ref_id": "b3", "title": "Neural ordinary differential equations", "year": "2018-12-03" }, { "authors": "Sumanth Dathathri; Andrea Madotto; Janice Lan; Jane Hung; Eric Frank; Piero Molino; Jason Yosinski; Rosanne Liu", "journal": "", "ref_id": "b4", "title": "Plug and play language models: A simple approach to controlled text generation", "year": "2020-04-26" }, { "authors": "Jessica Ficler; Yoav Goldberg", "journal": "", "ref_id": "b5", "title": "Controlling linguistic style aspects in neural language generation", "year": "2017" }, { "authors": "Ian J Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron C Courville; Yoshua Bengio", "journal": "", "ref_id": "b6", "title": "Generative adversarial nets", "year": "2014-12-08" }, { "authors": "Yuxuan Gu; Xiaocheng Feng; Sicheng Ma; Jiaming Wu; Heng Gong; Bing Qin; ; ", "journal": "", "ref_id": "b7", "title": "Improving controllable text generation with position-aware weighted decoding", "year": "2022-05-22" }, { "authors": "Yuxuan Gu; Xiaocheng Feng; Sicheng Ma; Lingyuan Zhang; Heng Gong; Bing Qin", "journal": "", "ref_id": "b8", "title": "Controllable text generation via probability density estimation in the latent space", "year": "2022" }, { "authors": "Yuxuan Gu; Xiaocheng Feng; Sicheng Ma; Lingyuan Zhang; Heng Gong; Bing Qin", "journal": "", "ref_id": "b9", "title": "A distributional lens for multi-aspect controllable text generation", "year": "2022-12-07" }, { "authors": "Skyler Hallinan; Alisa Liu; Yejin Choi; Maarten Sap", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Detoxifying text with marco: Controllable revision with experts and anti-experts", "year": "2023-07-09" }, { "authors": "Changying Hao; Liang Pang; Yanyan Lan; Yan Wang; Jiafeng Guo; Xueqi Cheng", "journal": "AAAI Press", "ref_id": "b11", "title": "Sketch and customize: A counterfactual story generator", "year": "2021-02-02" }, { "authors": "Zhiting Hu; Li Erran; Li ", "journal": "", "ref_id": "b12", "title": "A causal lens for controllable text generation", "year": "2021-12-06" }, { "authors": "Zhiting Hu; Zichao Yang; Xiaodan Liang; Ruslan Salakhutdinov; Eric P Xing", "journal": "", "ref_id": "b13", "title": "Toward controlled generation of text", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "Xuancheng Huang; Zijun Liu; Maosong Sun Peng; Tao Li; Yang Li; Liu", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "An extensible plugand-play method for multi-aspect controllable text generation", "year": "2023" }, { "authors": "Nitish Shirish Keskar; Bryan Mccann; Lav R Varshney; Caiming Xiong; Richard Socher", "journal": "", 
"ref_id": "b16", "title": "CTRL: A conditional transformer language model for controllable generation", "year": "2019" }, { "authors": "Ben Krause; Akhilesh Deepak Gotmare; Bryan Mccann; Nitish Shirish Keskar; R Shafiq; Richard Joty; Nazneen Socher; Rajani Fatema", "journal": "", "ref_id": "b17", "title": "Gedi: Generative discriminator guided sequence generation", "year": "2021-11" }, { "authors": "Sachin Kumar; Eric Malmi; Aliaksei Severyn; Yulia Tsvetkov", "journal": "", "ref_id": "b18", "title": "Controlled text generation as continuous optimization with multiple constraints", "year": "2021-12-06" }, { "authors": "Sachin Kumar; Biswajit Paria; Yulia Tsvetkov", "journal": "", "ref_id": "b19", "title": "Constrained sampling from language models via langevin dynamics in embedding spaces", "year": "2022" }, { "authors": "Guillaume Lample; Myle Ott; Alexis Conneau; Ludovic Denoyer; Marc'aurelio Ranzato", "journal": "", "ref_id": "b20", "title": "Phrasebased & neural unsupervised machine translation", "year": "2018" }, { "authors": "Chunyuan Li; Xiang Gao; Yuan Li; Baolin Peng; Xiujun Li; Yizhe Zhang; Jianfeng Gao", "journal": "", "ref_id": "b21", "title": "Optimus: Organizing sentences via pre-trained modeling of a latent space", "year": "2020" }, { "authors": "Jiwei Li; Michel Galley; Chris Brockett; Jianfeng Gao; Bill Dolan", "journal": "", "ref_id": "b22", "title": "A diversity-promoting objective function for neural conversation models", "year": "2016" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "", "ref_id": "b23", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Alisa Liu; Maarten Sap; Ximing Lu; Swabha Swayamdipta; Chandra Bhagavatula; Noah A Smith; Yejin Choi", "journal": "", "ref_id": "b24", "title": "Dexperts: Decoding-time controlled text generation with experts and anti-experts", "year": "2021-08-01" }, { "authors": "Guangyi Liu; Zeyu Feng; Yuan Gao; Zichao Yang; Xiaodan Liang; Junwei Bao; Xiaodong He; Shuguang Cui; Zhen Li; Zhiting Hu", "journal": "", "ref_id": "b25", "title": "Composable text controls in latent space with odes", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b26", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b27", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Andrew L Maas; Raymond E Daly; Peter T Pham; Dan Huang; Andrew Y Ng; Christopher Potts", "journal": "", "ref_id": "b28", "title": "Learning word vectors for sentiment analysis", "year": "2011" }, { "authors": "Fatemehsadat Mireshghallah; Kartik Goyal; Taylor Berg-Kirkpatrick", "journal": "", "ref_id": "b29", "title": "Mix and match: Learningfree controllable text generationusing energy language models", "year": "2022" }, { "authors": "Weili Nie; Arash Vahdat; Anima Anandkumar", "journal": "", "ref_id": "b30", "title": "Controllable and compositional generation with latent-space energy-based models", "year": "2021-12-06" }, { "authors": "Jing Qian; Li Dong; Yelong Shen; Furu Wei; Weizhu Chen", "journal": "", "ref_id": "b31", "title": "Controllable natural language generation with contrastive prefixes", "year": "2022" }, { "authors": "Lianhui Qin; Sean Welleck; Daniel Khashabi; Yejin Choi", "journal": "CoRR", "ref_id": "b32", "title": "COLD decoding: 
Energy-based constrained text generation with langevin dynamics", "year": "2022" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; Diederik P Kingma; Abhishek Kumar; Stefano Ermon; Ben Poole", "journal": "", "ref_id": "b33", "title": "Score-based generative modeling through stochastic differential equations", "year": "2021-05-03" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b34", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Fei Xiao; Liang Pang; Yanyan Lan; Yan Wang; Huawei Shen; Xueqi Cheng", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Transductive learning for unsupervised text style transfer", "year": "2021-07-11" }, { "authors": "Kevin Yang; Dan Klein", "journal": "", "ref_id": "b36", "title": "FUDGE: controlled text generation with future discriminators", "year": "2021-06-06" }, { "authors": "Kexin Yang; Dayiheng Liu; Wenqiang Lei; Baosong Yang; Mingfeng Xue; Boxing Chen; Jun Xie", "journal": "CoRR", "ref_id": "b37", "title": "Tailor: A prompt-based approach to attributebased controlled text generation", "year": "2022" }, { "authors": "Dian Yu; Zhou Yu; Kenji Sagae", "journal": "", "ref_id": "b38", "title": "Attribute alignment: Controlling text generation from pretrained language models", "year": "2021" }, { "authors": "Hanqing Zhang; Haolin Song; Shaoyu Li; Ming Zhou; Dawei Song", "journal": "", "ref_id": "b39", "title": "A survey of controllable text generation using transformer-based pre-trained language models", "year": "2022" }, { "authors": "Xiang Zhang; Junbo Zhao; Yann Lecun", "journal": "Advances in neural information processing systems", "ref_id": "b40", "title": "Character-level convolutional networks for text classification", "year": "2015" }, { "authors": "Mengjie Zhao; Tao Lin; Fei Mi; Martin Jaggi; Hinrich Schütze", "journal": "", "ref_id": "b41", "title": "Masking as an efficient alternative to finetuning for pretrained language models", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 305.75, 142.73, 220.02, 40.73 ], "formula_id": "formula_0", "formula_text": "A = {A 1 , • • • , A N }, where each aspect A n contains |A n | attributes, given by {a 1 n , • • • , a |An| n }." }, { "formula_coordinates": [ 3, 312.24, 700.1, 212.77, 18.66 ], "formula_id": "formula_1", "formula_text": "LE = -E q ϕ (z|x) [log p θ (x|z)] + KL(q ϕ (z|x)∥pprior (z)),(1)" }, { "formula_coordinates": [ 4, 123.95, 144.33, 370.5, 123.45 ], "formula_id": "formula_2", "formula_text": "' VAE Encoder Sample from 𝑝 #$% Multi- aspect Sentence ODE solver" }, { "formula_coordinates": [ 4, 103.24, 525.75, 186.5, 30.97 ], "formula_id": "formula_3", "formula_text": "LC = - N n=1 |An| j=1 i∈S j n log pπ n a j n | zi ,(2)" }, { "formula_coordinates": [ 4, 79.79, 676.35, 209.94, 27.58 ], "formula_id": "formula_4", "formula_text": "LD = 1≤n 1 <n 2 ≤N i∈Sn 1 zi |Sn 1 | - j∈Sn 2 zj |Sn 2 | 2 , (3)" }, { "formula_coordinates": [ 4, 358.24, 434.62, 166.77, 8.31 ], "formula_id": "formula_5", "formula_text": "L = w1LE + w2LC + w3LD.(4)" }, { "formula_coordinates": [ 4, 316.47, 630.56, 208.54, 10.13 ], "formula_id": "formula_6", "formula_text": "p(z, a) := pprior(z)p(a|z) = pprior(z) • e -E(a|z) /Z, (5)" }, { "formula_coordinates": [ 4, 361.49, 749.86, 163.53, 26.81 ], "formula_id": "formula_7", "formula_text": "E(a|z) = N n=1 λnEn (a * n |z) .(6)" }, { "formula_coordinates": [ 5, 74.61, 133.42, 215.13, 28.97 ], "formula_id": "formula_8", "formula_text": "En (a * n |z) = -fn(z) a j n + log k exp fn(z) a k n ,(7)" }, { "formula_coordinates": [ 5, 106.86, 470.96, 182.87, 19.74 ], "formula_id": "formula_9", "formula_text": "dx = - 1 2 β(t) [x + ∇x log pt(x, a)] dt,(8)" }, { "formula_coordinates": [ 5, 81.26, 580.95, 208.48, 41.39 ], "formula_id": "formula_10", "formula_text": "dz = - 1 2 β(t) [z + ∇z log pt(z, a)] dt = - 1 2 β(t) [z -∇zEt(a|z) + ∇z log pt(z)] dt.(9)" }, { "formula_coordinates": [ 5, 84.29, 687.26, 205.45, 69.16 ], "formula_id": "formula_11", "formula_text": "dz = - 1 2 β(t) z -∇zE(a|z) - 1 2 ∇z∥z∥ 2 2 dt = 1 2 β(t)∇zE (a|z) dt = 1 2 β(t) n ∇zλnEn (a * n |z) dt.(10)" } ]
10.18653/v1/N19-1388
2023-10-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b0", "b7", "b11", "b7", "b3", "b37", "b37", "b9", "b17", "b12", "b28", "b24", "b23", "b29", "b1", "b8" ], "table_ref": [], "text": "Multilingual neural machine translation (MNMT) makes it possible to train a single model that supports translation from multiple source languages into multiple target languages. This has attracted a lot of attention in the field of machine translation (Johnson et al., 2017;Aharoni et al., 2019;Fan et al., 2021). MNMT is appealing for two reasons: first, it can transfer the knowledge learned by the model from high-resource to low-resource languages, especially in zero-shot scenarios (Gu et al., 2019;Zhang et al., 2020a); second, it uses 1 Our source code is available at https://github.com/ lavine-lmu/Bi-ACL Figure 1: Method Overview. Our approach mainly consists of three parts: online constrained beam search, bidirectional autoencoder and bidirectional contrastive learning. Our approach explores the scenarios of using only target-side monolingual data and a bilingual dictionary to simultaneously alleviate the data imbalance and representation degeneration issues in large-scale MNMT model. only one unified model to translate between multiple language pairs, which saves on training and deployment costs.\nAlthough significant improvements have been made recently, we argue that there are still two major challenges to be addressed: i) MNMT models suffer from poor performance on long-tail languages (i.e., very low-resource languages), for which parallel corpora are insufficient or nonexisting. We call this the data imbalance problem. For instance 2 , 21% of the language pairs in the m2m_100 model (Fan et al., 2021) have a BLEU score of less than 1 and more than 50% have a BLEU score of less than 5. Only 13% have a BLEU score over 20. For example, the average BLEU score for the language pairs with Irish as the target language is only 0.09. ii) Degeneration of MNMT models stems from the anisotropic distribution of token representations, i.e., their representations reside in a narrow subset of the entire space (Zhang et al., 2020b). This is called the representation degeneration problem. It can lead to a prevalent issue in large-scale MNMT: the model copies sentences from the source sentence or translates them into the wrong language (off-target problem; Zhang et al., 2020a).\nTo address the data imbalance problem, prior work has attempted to improve the performance of a machine translation model without using any parallel data. On the one hand, unsupervised machine translation (Lample et al., 2018a,b) attempts to learn models relying only on monolingual data. On the other hand, bilingual dictionaries have shown to be helpful for machine translation models (Duan et al., 2020;Wang et al., 2022). What these approaches have in common is that they only require data that is both more accessible and cheaper than parallel data. As an example, 70% of the languages in the world have bilingual lexicons or word lists available (Wang et al., 2022).\nRepresentation degeneration is a prevalent problem in text generation (Gao et al., 2018) and machine translation models (Kudugunta et al., 2019). Contrastive learning (Hadsell et al., 2006) aims to bring similar sentences in the model close together and dissimilar sentences far from each other in the representation space. This is an effective solution to the representation problem in machine translation (Pan et al., 2021;Li et al., 2022). 
However, the naïve contrastive learning framework that utilizes random non-target sequences as negative examples is suboptimal, because they are easily distinguishable from the correct output (Lee et al., 2020).\nTo address both problems mentioned above, we present a novel multilingual NMT approach which leverages plentiful data sources: target-side monolingual data and a bilingual dictionary. Specifically, we start by using constrained beam search (Post and Vilar, 2018) to construct pseudo-parallel data in an online mode. To overcome the data imbalance problem, we propose training a bidirectional autoencoder, while to address representation degeneration, we use bidirectional contrastive learning. Finally, we use a curriculum learning (Bengio et al., 2009) sampling strategy. This uses the score given by token coverage in the bilingual dictionary to rearrange the order of training examples, such that sentences with more tokens in the dictionary are seen earlier and more frequently during training.\nIn summary, we make the following contributions: i) We propose a novel approach that uses only target-side monolingual data and a bilingual dictionary to improve MNT performance. ii) We define two modules, bidirectional autoencoder and bidirectional contrastive learning, to address the data imbalance and representation degeneration prob-lem. iii) We show that our method demonstrates zero-shot domain transfer and language transfer capability. iv) We also show that our method is an effective solution for both the repetition (Fu et al., 2021) and the off-target (Zhang et al., 2020a) problems in large-scale MNMT models." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b7", "b31", "b2", "b41", "b4", "b42", "b28", "b34" ], "table_ref": [], "text": "Multilingual Neural Machine Translation. MNMT is rapidly moving towards developing large models that enable translation between an increasing number of language pairs. Fan et al. (2021) proposed m2m_100 model that enables translation between 100 languages. Siddhant et al. (2022) and Costa-jussà et al. (2022) extend the current MNMT models to support translation between more than 200 languages using supervised and self-supervised learning methods.\nAutoencoder. An autoencoder (AE) is a generative model that is able to generate its own input. There are many variants of AE that can be useful for machine translation. Zhang et al. (2016) and Eikema and Aziz (2019) propose using a variational autoencoder to improve the performance of machine translation models. A variant of the same generative model is the denoising autoencoder, which is an important component of unsupervised machine translation models (Lample et al., 2018a). However, the utility of autoencoders has not been fully explored for MNMT. To the best of our knowledge, we are the first to propose training an autoencoder using only target-side monolingual data and a bilingual dictionary to improve low-resource MNMT. Contrastive Learning. Contrastive learning is a technique that clusters similar data together in a representation space while it simultaneously separates the representation of dissimilar sentences. It is useful for many natural language processing tasks (Zhang et al., 2022). Recently, Pan et al. (2021) and Vamvas and Sennrich (2021) used contrastive learning to improve machine translation and obtained promising results. 
However, these methods use the random replacing technique to construct the negative examples, which often leads to a significant divergence between the semantically similar sentences and the ground-truth sentence in the model representation space. This large changes makes the model more difficult to distinguish correct sentence from incorrect ones. We use small perturbations to construct negative examples, ensuring their proximity to the ground-truth sentence within the semantic space, which significantly mitigates the aforementioned issue." }, { "figure_ref": [ "fig_1" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Our goal is to overcome the data imbalance and representation degeneration issues in the MNMT model. We aim to improve the performance of MNMT without using parallel data, instead relying only on target-side monolingual data and a bilingual dictionary. Our approach contains three parts: online pseudo-parallel data construction (Section 3.1), bidirectional autoencoder (Section 3.2) and bidirectional contrastive learning (Section 3.3). Figure 1 illustrates the overview of our method. The architectures of the bidirectional autoencoder (left) and bidirectional contrastive learning (right) are presented in Figure 2." }, { "figure_ref": [], "heading": "Online Pseudo-Parallel Data Construction", "publication_ref": [ "b29" ], "table_ref": [], "text": "Let us assume that we want to improve performance when translating from source language ℓ s to target language ℓ t . We start with a monolingual set of sentences from the target language, denoted as D ℓ t mono , a bilingual dictionary, denoted as D ℓt→ℓs dict , and a target monolingual sentence with tt tokens, denoted as\nX ℓt i = {x 1 , ..., x tt }, X ℓt i ∈ D ℓ t\nmono . We use lexically constrained decoding (i.e., constrained beam search; Post and Vilar, 2018) to generate a pseudo source language sentence X ℓs i = {x 1 , ..., x ss } in an online mode:\nX ℓs i = gen(X ℓs i |θ, X ℓt i , D ℓt→ℓs dict )(1)\nwhere gen(•) is the lexically constrained beam search function and θ denotes the parameters of the model. It is worth noting that parameters θ will not be updated during the generation process, but will be updated in the following steps (Section 3.2 and Section 3.3)." }, { "figure_ref": [], "heading": "Bidirectional Autoencoder", "publication_ref": [ "b35" ], "table_ref": [], "text": "An autoencoder (Vincent et al., 2008) first aims to learn how to efficiently compress and encode data, then to reconstruct the data back from the reduced encoded representation to a representation that is as close as possible to the original input.\nWe propose performing autoencoding using only target-side monolingual data. This is different from prior work on UNMT, which uses both source and target-side data (Lample et al., 2018a). Our bidirectional autoencoder contains two parts: backward autoencoder (Section 3.2.1) and forward autoencoder (Section 3.2.2)." }, { "figure_ref": [], "heading": "Backward Autoencoder", "publication_ref": [], "table_ref": [], "text": "After we obtain X ℓs i from Eq. 1, we have the pseudo-parallel pairs (X ℓt i , X ℓs i ) ∈ D pse . Then, we feed X ℓt i to the MNMT model to get the contextual output embedding Z ℓs bkd . 
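For concreteness, the sketch below illustrates the online construction step of Section 3.1 that yields these pseudo-parallel pairs. It is a minimal, hedged example rather than our released implementation: it assumes a HuggingFace m2m_100 checkpoint, uses the force_words_ids argument of generate() as the lexically constrained beam search mechanism, and the dictionary entry shown is purely illustrative.
```python
# Minimal sketch of the online pseudo-parallel construction (Section 3.1, Eq. 1).
# Assumptions (not taken from the released code): a HuggingFace m2m_100 checkpoint
# and `force_words_ids` as the constrained-decoding mechanism; the dictionary
# below is a hypothetical toy example.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

def build_pseudo_source(target_sentence, tgt_lang, src_lang, dictionary):
    """Generate a pseudo source sentence X^ls for a target sentence X^lt."""
    tokenizer.src_lang = tgt_lang  # we decode from the target language
    inputs = tokenizer(target_sentence, return_tensors="pt")
    # Dictionary translations of covered target tokens act as hard constraints.
    constraints = [
        tokenizer(dictionary[w], add_special_tokens=False).input_ids
        for w in target_sentence.split() if w in dictionary
    ]
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.get_lang_id(src_lang),
        force_words_ids=constraints if constraints else None,
        num_beams=5,
        max_new_tokens=64,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# Example with a hypothetical one-entry dictionary (German target, English source).
pseudo_src = build_pseudo_source(
    "ein kleines Beispiel", tgt_lang="de", src_lang="en",
    dictionary={"Beispiel": "example"},
)
```
As in Eq. 1, the model parameters are frozen during this generation step; only the subsequent autoencoding and contrastive objectives update them.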
Formally, the encoder generates a contextual embedding H ℓt i given X ℓt i and ℓ t as input, which is in turn given as input to the decoder (together with ℓ s ) to generate Z ℓs bkd :\nH_i^{\ell_t} = \mathrm{Encoder}(X_i^{\ell_t}, \ell_t), \quad Z_{\mathrm{bkd}}^{\ell_s} = \mathrm{Decoder}(H_i^{\ell_t}, \ell_s) \quad (2)\nFinally, the backward autoencoder loss is formulated as follows:\n\mathcal{L}_{\mathrm{AE\_bkd}} = -\log P_\theta\big(X_i^{\ell_s} \mid X_i^{\ell_t}, Z_{\mathrm{bkd}}^{\ell_s}\big) \quad (3)" }, { "figure_ref": [], "heading": "Forward Autoencoder", "publication_ref": [], "table_ref": [], "text": "Given Z ℓs bkd from Eq. 2, we feed it to the MNMT model and get the contextual output denoted as Z ℓt fwd :\nH_i^{\ell_s} = \mathrm{Encoder}(Z_{\mathrm{bkd}}^{\ell_s}, \ell_s), \quad Z_{\mathrm{fwd}}^{\ell_t} = \mathrm{Decoder}(H_i^{\ell_s}, \ell_t) \quad (4)\nThe forward autoencoder loss is given by:\n\mathcal{L}_{\mathrm{AE\_fwd}} = -\log P_\theta\big(X_i^{\ell_t} \mid Z_{\mathrm{bkd}}^{\ell_s}, Z_{\mathrm{fwd}}^{\ell_t}\big) \quad (5)" }, { "figure_ref": [], "heading": "Bidirectional Contrastive Learning", "publication_ref": [ "b28" ], "table_ref": [], "text": "The main challenge in contrastive learning is to construct the positive and negative examples. Naive contrastive learning (Pan et al., 2021) takes the ground-truth sentence pair as the positive example and a random non-target sentence pair in the same batch as the negative example. Instead, we build both examples from perturbed hidden representations. Specifically, to generate a negative example, we add a small perturbation δ i = {δ 1 . . . δ T } to H i , which is the hidden representation of the source-side sentence. We construct positive examples by adding a perturbation ζ i = {ζ 1 . . . ζ T } to H i , which is the hidden state of the target-side sentence. Different from Pan et al. (2021), who use a random replacing technique that may result in meaningless negative examples (already well discriminated in the embedding space), we add the perturbation to ensure that the resulting embedding is neither already in close proximity to nor far apart from the original embedding space. More details on how to generate the perturbations δ i and ζ i can be found in Appendix A." }, { "figure_ref": [], "heading": "Backward Contrastive Learning", "publication_ref": [], "table_ref": [], "text": "Given pseudo-parallel pairs (X ℓt i ′ , X ℓs i ′ ) ∈ D pse from Eq. 1, we first feed X ℓt i ′ to the MNMT model to generate the contextual embedding H ℓt i ′ . Then, we add a small perturbation δ (i ′ ) bkd to H ℓt i ′ to form the negative example, denoted as H -ℓt i ′ . Finally, the contextual output of the decoder C ℓs bkd is generated by feeding H -ℓt i ′ to the decoder, and the positive example H +ℓs i ′ is generated by adding another small perturbation ζ (i ′ ) bkd :\nH_{i'}^{\ell_t} = \mathrm{Encoder}(X_{i'}^{\ell_t}, \ell_t), \quad H_{i'}^{-\ell_t} = H_{i'}^{\ell_t} + \delta_{\mathrm{bkd}}^{(i')}, \quad C_{\mathrm{bkd}}^{\ell_s} = \mathrm{Decoder}(H_{i'}^{-\ell_t}, \ell_s), \quad H_{i'}^{+\ell_s} = \mathrm{Decoder}(C_{\mathrm{bkd}}^{\ell_s}, \ell_s) + \zeta_{\mathrm{bkd}}^{(i')} \quad (6)\nFinally, the backward contrastive learning loss is formulated as follows:\n\mathcal{L}_{\mathrm{CL\_bkd}} = -\log \frac{e^{\mathrm{sim}^{+}(H_{i'}^{\ell_t}, H_{i'}^{+\ell_s})/\tau}}{\sum_{H_{i'}^{-\ell_t}} e^{\mathrm{sim}^{-}(H_{i'}^{\ell_t}, H_{i'}^{-\ell_t})/\tau}} \quad (7)" }, { "figure_ref": [], "heading": "Forward Contrastive Learning", "publication_ref": [], "table_ref": [], "text": "After we get C ℓs bkd from Eq. 6, we feed C ℓs bkd and the small perturbation δ (i ′ ) fwd to the MNMT model to obtain the contextual output denoted as C ℓt fwd and the negative example H -ℓs i ′ .
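Before completing the forward direction, we give a minimal PyTorch-style sketch of the backward contrastive term in Eq. 7. It is an illustrative simplification under our assumptions (mean-pooled sentence representations, cosine similarity for both sim+ and sim-, and a single perturbed negative per anchor), not the exact training code.
```python
# Minimal sketch of the backward contrastive term (Eq. 7) under simplifying
# assumptions: mean-pooled hidden states as sentence representations, cosine
# similarity for sim+ and sim-, and one perturbed negative per anchor.
import torch
import torch.nn.functional as F

def backward_contrastive_loss(h_anchor, h_positive, h_negative, tau=0.1):
    """h_*: tensors of shape (batch, seq_len, dim); returns a scalar loss."""
    a = F.normalize(h_anchor.mean(dim=1), dim=-1)    # H^{lt}_{i'}
    p = F.normalize(h_positive.mean(dim=1), dim=-1)  # H^{+ls}_{i'}
    n = F.normalize(h_negative.mean(dim=1), dim=-1)  # H^{-lt}_{i'}
    sim_pos = (a * p).sum(dim=-1) / tau              # sim+ / tau
    sim_neg = (a * n).sum(dim=-1) / tau              # sim- / tau
    # Eq. 7 with a single negative: -log( exp(sim+) / exp(sim-) )
    return (sim_neg - sim_pos).mean()
```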
Then, we feed C ℓ t fwd and another small perturbation ζ\n(i ′ )\nf wd to generate a positive example denoted as H +ℓ t i ′ .\nH ℓs i ′ = Encoder(C ℓs bkd , ℓ s ), H -ℓs i ′ = H ℓs i ′ + δ (i ′ ) f wd , C ℓ t fwd = Decoder(H -ℓs i ′ , ℓ t ), H +ℓ t i ′ = Decoder(C ℓ t fwd , ℓ t ) + ζ (i ′ ) f wd ,(8)\nFinally, the forward contrastive learning loss is given by the following equation:\nL CL_fwd = - log e sim + (H ℓs i ′ ,H +ℓ t i ′ )/τ H -ℓs i ′ e sim -(H ℓs i ′ ,H -ℓs i ′ )/τ (9)" }, { "figure_ref": [], "heading": "Curriculum Learning", "publication_ref": [ "b1", "b32", "b43" ], "table_ref": [], "text": "Curriculum Learning (Bengio et al., 2009) suggests starting with easier tasks and progressively gaining experience to process more complex tasks, which has been proved to be useful in machine translation (Stojanovski and Fraser, 2019;Zhang et al., 2019;Lai et al., 2022b). In our training process, we first compute token coverage for each monolingual sentence using the bilingual dictionary. This score is used to determine a curriculum to sample the sentences for each batch, so that higher-scored sentences are selected early on during training." }, { "figure_ref": [], "heading": "Training Objective", "publication_ref": [], "table_ref": [], "text": "The model can be trained by minimizing a composite loss from Eq. 3, 5, 7 and 9 as follows:\nL * =λ(L AE_bkd + L AE_f wd )+ (1 -λ)(L CL_bkd + L CL_f wd ) (10\n)\nWhere λ is the balancing factor between the autoencoder and contrastive learning component." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b5", "b7", "b16", "b14", "b37", "b38", "b10", "b36" ], "table_ref": [], "text": "Datasets. We conduct three group of experiments: bilingual setting, multilingual setting and highresource setting. In the bilingual setting, we focus on improving the performance on a specific long-tail language pair. We choose 10 language pairs at random that have BLEU < 2.5 in the original m2m_100 model and a considerable amount of monolingual data in the target language in newscrawl. 4 The language pairs cover the following languages (ISO 639-1 language code5 ): en, ta, kk, ar, ca, ga, bs, ko, ka, tr, af, hi, jv, ml. In the multilingual setting, we aim to improve the performance on long-tail language pairs, which share the same target language. We randomly select 10 languages where the average BLEU score on the language pairs with the same target language is less than 2.5. For the languages not covered from news-crawl, we use the monolingual data from CCAligned6 (El- Kishky et al., 2020). The languages we use are: ta, hy, ka, be, kk, az, mn, gu. For the high-resource setting, we aim to validate whether our proposed method also works for high-resource languages.\nWe randomly select 6 language pairs that cover the following language codes: en, de, fr, cs.\nDictionaries. We extract bilingual dictionaries using the wiktextextract7 tool. For pairs not involving English, we pivot through English. Given a source language ℓ s and a target language ℓ t , the intersection of the two respective bilingual dictionaries with English creates a bilingual dictionary D ℓs→ℓt dict from ℓ s to ℓ t . The statistics of the dictionaries can be seen in Appendix D.1. Data Preprocessing. For the monolingual data, we first use a language detection tool8 (langid) to filter out sentences with mixed language. We proceed to remove the sentences containing at least 50% punctuation and filter out duplicated sentences. 
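A minimal sketch of this filtering step is shown below; it assumes the langid.py package for language identification and treats the 50% threshold as the ratio of punctuation characters to sentence length (our reading; the actual preprocessing scripts may differ in detail).
```python
# Minimal sketch of the monolingual preprocessing described above.
import string
import langid

def clean_monolingual(sentences, expected_lang):
    seen, kept = set(), []
    for sent in sentences:
        sent = sent.strip()
        if not sent or sent in seen:                    # drop empty lines and duplicates
            continue
        if langid.classify(sent)[0] != expected_lang:   # drop mixed / wrong-language lines
            continue
        punct_ratio = sum(ch in string.punctuation for ch in sent) / len(sent)
        if punct_ratio >= 0.5:                          # drop punctuation-heavy lines
            continue
        seen.add(sent)
        kept.append(sent)
    return kept
```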
To control the influence of corpus size on our experimental results, we limit the monolingual data of all languages to 1M. For dictionaries, we also use langid to filter out the wrong languages both on the source and target side.\nBaselines. We compare our methods to the following baselines: i) m2m: Using the original m2m_100 model (Fan et al., 2021) to generate translations. ii) pivot_en: Using English as a pivot language, we leverage m2m_100 to translate targetside monolingual data to English and then translate English to the source language. Following this method, we finetune the m2m_100 model using the pseudo-parallel data. iii) BT: Back-Translate (Sennrich et al., 2016) target-side monolingual data using m2m_100 model to generate the pseudo sourcetarget parallel dataset, then finetune the m2m_100 model using this data. iv) wbw_lm: Use a bilingual dictionary, cross-lingual word embeddings and a target-side language model to translate word-byword and then improve the translation through a (Koehn, 2004).\ntarget-side denoising model (Kim et al., 2018). v) syn_lexicon: Replace the words in the target monolingual sentence with the corresponding source language words in a bilingual dictionary and use the pseudo-parallel data to finetune the m2m_100 model (Wang et al., 2022).\nImplementation. We use m2m, released in the HuggingFace repository 9 (Wolf et al., 2020). For the wbw_lm baseline, monolingual word embeddings are directly obtained from the fasttext website 10 and cross-lingual embeddings are trained using a bilingual dictionary as a supervision signal.\nWe set λ = 0.7 in all our experiments (the effect of different λ can be find in Appendix E.5).\nEvaluation. We measure case-sensitive detokenized BLEU and statistical significant testing as implemented in SacreBLEU 11 All results are computed on the devtest dataset of Flores101 12 (Goyal et al., 2022). To evaluate the isotropy 13 of the MNMT model, we adopt the I 1 and I 2 9 github.com/huggingface/transformers 10 https://fasttext.cc/docs/en/crawl-vectors. html 11 github.com/mjpost/sacrebleu 12 github.com/facebookresearch/flores 13 The representation in MNMT model is not uniformly distributed in all directions but instead occupying a narrow cone in the semantic space, we call this 'anisotropy'. isotropy measures from Wang et al. (2019), with I 1 (W) ∈ [0, 1] and I 2 (W) ≥ 0, where W is the model matrix from the whole model parameter θ. Larger I 1 (W) and smaller I 2 (W) indicate a more isotropic embedding space in the MNMT model. Please refer to Appendix B for more details on I 1 and I 2 ." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_3", "tab_2", "tab_1" ], "text": "Table 1 shows the main results on low-resource language pairs in a bilingual and multilingual setting. Table 3 shows results on high-resource languagepairs in a bilingual setting, while Table 2 presents an isotropic embedding space analysis for the bilingual setting. Low-Resource Language Pairs in a Bilingual Setting. As shown in Table 1, the baselines perform poorly and several of them are worse than the original m2m_100 model. This can be attributed to the fact that their performance depends on the translation quality in the direction of source language to English and English to target language (pivot_en), the quality in the reverse direction (BT), the quality of cross-lingual word-embeddings (wbw_lm) and the token coverage in bilingual dictionary (syn_lexicon). 
Our method outperforms the baselines across all language pairs, even when the performance of the language pair is poor in the original m2m_100 model. In addition, using the curriculum learning sampling strategy further improves our model's performance." }, { "figure_ref": [], "heading": "Low-Resource Language Pairs in a Multilingual", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Setting. In the middle part of Table 1, we show the average BLEU scores of all language pairs with the same target language. Our approach consistently shows promising results across all languages.\nBased on the results shown in the lower part of the same table, we notice that the BLEU scores obtained in the multilingual setting on a specific language pair outperform the scores obtained in the bilingual setting. For example, we get 3.68 BLEU points for af→ta in the bilingual setting, while we get 4.16 in the multilingual setting. This confirms our intuition that knowledge transfer between different languages in the MNMT model when using a multilingual setting is beneficial (see more details in Section 6.2)." }, { "figure_ref": [], "heading": "High-Resource Language Pairs in a Bilingual", "publication_ref": [ "b25", "b19", "b15" ], "table_ref": [ "tab_3", "tab_1", "tab_1", "tab_2", "tab_4" ], "text": "Setting. As shown in Table 3, baseline systems do not perform well on all high-resource pairs due to the same reasons as in the long-tail languages setting. Our approach outperforms the baselines on all high-resource pairs. In addition, curriculum learning takes full advantage of the original model in the high-resource setting, with stronger gains in performance than in the low-resource setting. Interestingly, our findings reveal that back-translation does not yield optimal results in either the low- or the high-resource setting. In low-resource languages, the performance of the language pair and its reverse direction in the original m2m_100 model is significantly poor (i.e., nearly zero). Consequently, the use of back-translation results in a performance that is inferior to that of m2m_100. For high-resource languages, the language pairs already exhibit strong performance in the original m2m_100 model. This makes it challenging to demonstrate that the incorporation of additional pseudo-parallel data can outperform the non-utilization of the pseudo-corpus. Another potential concern is that the large amount of monolingual data we employ, coupled with the substantial amount of pseudo-parallel data derived from back-translation, may disrupt the pre-trained model. This observation aligns with the findings of Liao et al. (2021) and Lai et al. (2021).\nStatistical Significance Tests. The use of BLEU in isolation as the single metric for evaluating the quality of a method has recently received criticism (Kocmi et al., 2021). Therefore, we conduct statistical significance testing in the low-resource setting to demonstrate the difference as well as the superiority of our method over other baseline systems. As can be seen in Table 1, our method outperforms the baselines by statistically significant margins, which is even more evident in the case study in Table 10. This is because the baseline systems face the serious problems of generating duplicate words (repetition problem) and translating into the wrong language (off-target problem), while our method avoids these two problems.\nIsotropy Analysis.
It is clear from Table 2 that the embedding space on the encoder side is more isotropic than on the decoder side. This is because we only use the target-side monolingual data to improve the decoder of the MNMT model. Compared to other baseline systems, we get a higher I 1 and lower I 2 score, which shows a more isotropic embedding space in our methods. An interesting finding is that the difference in isotropy between high-resource language pairs is not significant. This is because the original m2m_100 model already performs very well on high-resource language pairs and the representation degeneration is not substantial for those language pairs. In addition, the phenomenon is consistent with the findings in Table 4." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct additional experiments to better understand the strengths of our proposed methods. We first investigate the impact of the four components on the results through an ablation study (Section 6.1). Then, we evaluate the zero-shot domain transfer ability and language transfer ability of our method (Section 6.2). Finally, we evaluate two impact factors (the quality of the bilingual dictionary and the amount of monolingual data) on our proposed method (Section 6.3) and present a case study to show the strengths of our approach in solving the repetition problem and off-target issues in MNMT models." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Our training objective function, shown in Eq. 10, contains four loss functions. We perform an ablation study on en→ta, ta→ar and en→de translation tasks to understand the contribution of each loss function. The experiments in Table 4 are divided into four groups, each group representing the number of loss functions. We have the following findings: i) #1 clearly shows that the bidirectional autoencoder losses (L AE_bkd and L AE_f wd ) play a more critical role than the bidirectional contrastive learning losses (L CL_bkd and L CL_f wd ) in terms of BLEU score. However, bidirectional contrastive losses are more important than bidirectional autoencoder losses in terms of I 1 and I 2 score. This could be the case because contrastive learning aims to improve the MNMT model's isotropic embedding space rather than the translation from source language to target language. ii) Using forward direction losses results in a better translation quality compared to backward direction losses (#1). This is because our goal is to improve the performance from source language to target language, which is the forward direction in the loss functions. iii) The more loss functions there are, the better the performance. The combination of all four loss functions yields the best performance. iv) We show that the I 1 and I 2 scores in high-resource language pairs (en→de) do not change significantly, as the original embedding space is already isotropic." }, { "figure_ref": [], "heading": "Domain Transfer and Language Transfer", "publication_ref": [], "table_ref": [], "text": "Motivated by recent work on the domain and language transfer ability of MNMT models (Lai et al., 2022a), we conduct a number of experiments with extensive analysis to validate the zero-shot domain transfer ability, as well as the language transfer ability of our proposed method.
We have the following findings: i) Our proposed method works well not only on the Flores101 datasets (domains similar to training data of the original m2m_100 model), but also on other domains. This supports the domain transfer ability of our proposed method. ii) We show that the transfer ability is more obvious in the multilingual setting than in the bilingual setting, which is consistent with the conclusion from Table 6 in the multilingual setting. More details can be found in Appendix E.3 and E.4." }, { "figure_ref": [], "heading": "Further Investigation", "publication_ref": [], "table_ref": [], "text": "To investigate two other important factors in our proposed methods, we conducted additional experiments to evaluate the impact of the quality of the dictionary and the amount of monolingual data. In general, we observe that better performance can be obtained by utilizing a high-quality bilingual dictionary. In addition, the size of the monolingual data used is not proportional to the performance improvement. More details can be found in Appendix E.1 and E.2. Also, compared with the baseline models, our method has strengths in solving repetition and off-target problems, which are two common issues in large-scale MNMT models. More details can be found in Appendix E.6." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "To address the data imbalance and representation degeneration problem in MNMT, we present a framework named Bi-ACL which improves the performance of MNMT models using only target-side monolingual data and a bilingual dictionary. We employ a bidirectional autoencoder and bidirectional contrastive learning, which prove to be effective both on long-tail languages and high-resource languages. We also find that Bi-ACL shows language transfer and domain transfer ability in zeroshot scenarios. In addition, Bi-ACL provides a paradigm that an inexpensive bilingual lexicon and monolingual data should be fully exploited when there are no bilingual parallel corpora, which we believe more researchers in the community should be aware of." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b39", "b26" ], "table_ref": [], "text": "This work has two main limitations. i) We only evaluated the proposed method on the machine translation task, however, Bi-ACL should work well on other NLP tasks, such as text generation or question answering task, because our framework only depends on the bilingual dictionary and monolingual data, which can be easily found on the internet for many language pairs. ii) We only evaluated Bi-ACL using m2m_100 as a pretrained model. However, we believe that our approach would also work with other pretrained models, such as mT5 (Xue et al., 2021) and mBART (Liu et al., 2020). Because the two components (bidirectional autoencoder and bidirectional contrastive learning) we proposed can be seen as plugins, they could be easily added to any pretrained model." }, { "figure_ref": [], "heading": "A Contrastive Learning", "publication_ref": [ "b23" ], "table_ref": [], "text": "Our approach is different from traditional contrastive learning, which takes a ground-truth sentence pair as a positive example and a random nontarget sentence pair in the same batch as a negative example. Motivated by Lee et al. (2020), we construct positive and negative examples automatically." 
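As a preview of the constructions detailed in A.1 and A.2, the following simplified PyTorch sketch shows one way the two perturbations could be instantiated. It is an illustration under our assumptions only: `nll` and `kl_to_original` are hypothetical helper closures (they are not part of the released code), and a single gradient step with an L2-norm budget is used for brevity.
```python
# Simplified one-step sketch of the perturbations in A.1 and A.2.
# `nll(h)` is assumed to return the decoder NLL of X^{ls} given hidden states h;
# `kl_to_original(h)` the KL divergence between the original and the perturbed
# output distributions. Both are hypothetical closures for this illustration.
import torch

def negative_perturbation(h, nll, eps=1.0):
    """delta that maximizes the NLL (i.e., minimizes log-likelihood) within an L2 ball."""
    h = h.detach().requires_grad_(True)
    loss = nll(h)
    grad, = torch.autograd.grad(loss, h)
    delta = grad / (grad.norm() + 1e-12) * eps        # step uphill on the NLL
    return delta.detach()

def positive_perturbation(h, kl_to_original, eps=1.0, init_scale=1e-3):
    """zeta that keeps the perturbed output distribution close to the original one."""
    zeta = (init_scale * torch.randn_like(h)).requires_grad_(True)
    kl = kl_to_original(h + zeta)
    grad, = torch.autograd.grad(kl, zeta)
    zeta = zeta - eps * grad / (grad.norm() + 1e-12)  # step downhill on the KL
    return zeta.detach()
```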
}, { "figure_ref": [], "heading": "A.1 Negative Example Formulation", "publication_ref": [], "table_ref": [], "text": "As described in Section 3.3, to generate a negative example, we add a small perturbation δ i = {δ 1 . . . δ T } to the H i , which is the hidden representation of the source-side sentence. As seen in Eq. 6, the negative example is denoted as H -ℓ t i ′ , and is formulated as the sum of the original contextual embedding H ℓ t i ′ of the target language sentence X ℓt i ′ and the perturbation δ\n(i ′ ) bkd . Finally, δ (i ′ )\nbkd is formulated as the conditional log likelihood with respect to δ. δ\n(i ′ )\nbkd is semantically very dissimilar to X ℓt i ′ , but very close to the hidden representation H -ℓ t i ′ in the embedding space.\nH ℓ t i ′ = Encoder(X ℓt i ′ , ℓ t ), H -ℓ t i ′ = H ℓ t i ′ + δ (i ′ ) bkd δ (i ′ ) bkd = arg min δ,∥δ∥ 2 ≤ϵ log p θ X ℓs i ′ | X ℓt i ′ ; H ℓ t i ′ + δ\nwhere ϵ ∈ (0, 1] is a parameter that controls the perturbation and θ denotes the parameters of the MNMT model." }, { "figure_ref": [], "heading": "A.2 Positive Example Formulation", "publication_ref": [], "table_ref": [], "text": "As shown in Eq. 6, we create a positive example of the target sentence by adding a perturbation\nζ i = {ζ 1 . . . ζ T } to H i ,\nwhich is the hidden state of the target-side sentence. The objective of the perturbation ζ\n(i ′ )\nbkd is to minimize the KL divergence between the perturbed conditional distribution and the original conditional distribution as follows:\nζ (i ′ ) bkd = arg min D KL p θ * X ℓs i ′ | X ℓt i ′ ∥p θ Xℓs i ′ | Xℓt i ′ , (11\n) where θ * is the copy of the model parameter θ. As a result, the positive example is semantically similar to X ℓs i ′ and dissimilar to the contextual embedding of the target sentence in the embedding space." }, { "figure_ref": [], "heading": "B Evaluation of Isotropy", "publication_ref": [ "b36", "b7", "b10" ], "table_ref": [ "tab_2" ], "text": "We use I 1 and I 2 scores from Wang et al. (2019) to characterize the isotropy of the output embedding Figure 3: BLEU score statistics of the m2m_100 model (Fan et al., 2021) on Flores101 dataset (Goyal et al., 2022) for 102 × 101 = 10302 language pairs. Each bar denotes the number of language pairs in the interval of the BLEU score.\nspace.\nI 1 (W ) = min s∈S Z(s) max s∈S Z(s) I 2 (W ) = s∈S (Z(s) -Z(s)) 2 |S| Z(s) 2\nwhere Z(s) = n i=1 exp s T w i is close to some constant with high probability for all unit vectors s if the embedding matrix W is isotropic (w i ∈ W ). S is the set of eigenvectors of W ⊤ W. I 2 is the sample standard deviation of Z(s) normalized by its average Z(s)). We have I 1 (W ) ∈ [0, 1] and I 2 (W ) ≥ 0. Larger I 1 (W ) and smaller I 2 (W ) indicate more isotropic for word embedding space.\nIn this work, we randomly select 128 sentences from Flores101 benchmark to compute these two criteria. The results are shown in Table 2." }, { "figure_ref": [], "heading": "C Model Configuration", "publication_ref": [ "b27" ], "table_ref": [], "text": "We use the m2m_100 model with 418MB parameters implemented in Huggingface. In our experiments, we use the AdamW (Loshchilov and Hutter, 2018) optimizer and the learning rate are initial to 2e -5 with a dropout probability 0.1. We trained our models on one machine with 4 NVIDIA V100 GPUs. The batch size is set to 8 per GPU during training. To have a fair comparison, all experiments are trained for 3 epochs." 
}, { "figure_ref": [], "heading": "D Statistics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1 Statistics of BLEU scores in m2m_100", "publication_ref": [], "table_ref": [], "text": "Figure 3 shows the BLEU scores of the m2m_100 model on all 10302 supported language pairs. We see that 21% of the langauge pairs have a BLEU score of almost 0 and more than 50% have a BLEU score of less than 5." }, { "figure_ref": [], "heading": "D.2 Statistics of Bilingual Dictionaries", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "Table 7 shows the size of the bilingual dictionaries used in a bilingual setting. For the multilingual setting, we will publish our code to generate the bilingual dictionary for any language pair." }, { "figure_ref": [], "heading": "E Further Analysis E.1 Quality of the Bilingual Dictionary", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "To investigate whether the quality of bilingual dictionary affects the performance of our method, we conduct additional experiments using the Panlex dictionary 14 , a big dataset that covers 5,700 lan- guages. We evaluate the performance on en→ta, ca→ta, ga→bs and ta→tr translation tasks.\nAs seen in Table 8, using the dictionary mined from wikitionary results in a better performance than using the panlex dictionary. The reason for this is that, while Panlex supports bilingual dictionaries for many language pairs, we discovered that the quality of them is quite low, especially when English is not one of the two languages in the language pair." }, { "figure_ref": [], "heading": "E.2 Amount of Monolingual Data", "publication_ref": [ "b6", "b30" ], "table_ref": [], "text": "As described in Section 3.4, we use the bilingual dictionary coverage ϕ as the curriculum to order the training batch. In this section, we aim to investigate how the number of monolingual data affects the experimental results. A smaller ϕ means a larger number of monolingual data. We conduct experiments on en→ta, en→kk, ar→ta and ca→ta translation tasks with a different ϕ.\nAs seen in of monolingual data is not proportional to the experimental performance. This is because a large percentage of words in a sentence are not covered by lexicons in the bilingual dictionaries, the performance of constrained beam search is limited. This phenomenon is consistent with the conclusion that the effect of the size of the pseudo parallel corpus in data augmentation (Fadaee et al., 2017) and back-translation (Sennrich et al., 2016) on the experimental results, i.e., that the performance of machine translation is not proportional to the size of the pseudo parallel corpus." }, { "figure_ref": [], "heading": "E.3 Domain Transfer", "publication_ref": [], "table_ref": [ "tab_5", "tab_5", "tab_1" ], "text": "To investigate the domain transfer ability of our approach, we first conduct experiments on en→ta, ka→ar, ta→tr translation tasks, then evaluate the performance in a zero-shot setting on three different domains (TED, QED and KDE) which are publicly available datasets from OPUS15 (Tiedemann, 2012) and on the Flores101 benchmark. The results is shown in Table 5.\nAccording to Table 5, the performance of the baseline systems is even worse than the original m2m_100 model, which suggests that they do not show domain robustness nor domain transfer ability due to poor performance (see Table 1). 
In contrast, our proposed method works well not only on the Flores101 datasets (domains similar to training data of the original m2m_100 model), but also on other domains." }, { "figure_ref": [], "heading": "E.4 Language Transfer", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "To investigate the language transfer ability, we use the model trained on a specific language (pair) to generate text for another language (pair) both in the bilingual and multilingual settings. For the bilingual setting, we run experiments to assess the language transfer ability between en→ta and ar→ta translation tasks. For the multilingual setting, we focus on translation scores between ta and be. The results is shown in Table 6.\nAs indicated in Table 6, we observe that the performance in our method outperforms the other baselines both in the bilingual and in the multilingual setting. We also discover that the transfer ability is more obvious in the multilingual setting than in the bilingual setting. This phenomenon is consistent with the conclusion from Table 1 in the multilingual setting. We believe that this can be attributed to the fact that in a multilingual setting, the language is used for all language pairs that share Example 1 (Repetition Problem) Source (English): \"We now have 4-month-old mice that are non-diabetic that used to be diabetic,\" he added.\nReference (Kazakh): «Қазір бізде диабетпен ауырған, бірақ қазір диабеті жоқ 4 айлық тышқандар бар», -деп қосты ол." }, { "figure_ref": [], "heading": "m2m", "publication_ref": [], "table_ref": [], "text": "«Жеңіске үлес қосқандар», «Жеңіске үлес қосқандар», «Жеңіске үлес қосқандар», «Жеңіске үлес қосқандар».\npivot_en \" Біз қазір Жеңіске бізге айлық Жеңіске келетін диабетті бізге пайдаланылып Жеңіске диабеттік,\" деді қосады." }, { "figure_ref": [], "heading": "BT", "publication_ref": [], "table_ref": [], "text": "Сондай-ақ, бүгінгі таңда Жеңіске пайдаланатын 4 Жеңіске мешіт не диабет емес, деп Жеңіске ҚазАқпарат.\nwbw_lm \" Біз қазір екенін айлық Мұқағали бізге келетін диабетті бізге пайдаланылып жатқанымыз диабеттік,\" деді қосады.\nsyn_lexicon \" біз қазір болу 4-ай бұрынғы мысал сол болу емес диабетик сол қосық болу диабетик\", ол қосу." }, { "figure_ref": [], "heading": "Bi-ACL w/o Curriculum", "publication_ref": [], "table_ref": [], "text": "Біз бізде диабетпен қосқандар, сол қазір диабеті айлық 4 айлық емес бар , ҚазАқпарат." }, { "figure_ref": [], "heading": "Bi-ACL (ours)", "publication_ref": [], "table_ref": [], "text": "Біз бізде диабетпен қосқандар, сол қазір келетін айлық жоқ 4 айлық емес бар , ҚазАқпарат.\nExample 2 (Off-target Problem)\nSource (English): 'Their thermal behavior is not as steady as large caves on Earth that often maintain a fairly constant temperature, but it is consistent with these being deep holes in the ground,\" said Glen Cushing of the United States Geological Survey (USGS) Astrogeology Team and of Northern Arizona University located in Flagstaff, Arizona.'\nReference (Tamil): அதிகா&க' வா)காள&+ அைடயாள/ைத0 ச& பா3/த ப4ற6, வா)காள3 அ7த வா)68 ெப:;ய4< உைறைய8 ேபா@வா3 மBCD வா)காள3 பதிேவ:;< ைகெயா8பD இ@வா3. ப4ெரGH ேத3த< ச:டD, நடவ;)ைககைள க@ைமயாக ெசய<ப@/JகிறJ." }, { "figure_ref": [], "heading": "m2m", "publication_ref": [], "table_ref": [], "text": "ஆLகில/தி< இைத Single Orgasm, Multiple Orgasm எ+CD _Cகிறா3க'.\npivot_en ெபaDபாbD இ+cடாகிராமி< உ'ள இ7த Glen Cushing of the United States Geological Survey (USGS) Astrogeology Team and of Northern Arizona University located in Flagstaff, Arizona." 
}, { "figure_ref": [], "heading": "BT", "publication_ref": [], "table_ref": [], "text": "Their thermal behavior is not as steady as large caves on பா3/த ப4ற6, வா)காள3 அ7த வா)68 அ7த வா)68 ெப:;ய4< உைறைய8\nGlen Cushing of the United States Geological Survey (USGS) Astrogeology Team and of Northern Arizona University located in Flagstaff, Arizona.\nwbw_lm Their thermal behavior is not as steady as large caves அைடயாளD, அ7த வா)காள3 வ4uD அ7த but it is consistent with these being deep holes in the ground, ப4ரா+c ேத3த< ச:டD ேபா< க:@8பா@கvட+ நைடwைற8ப@/JD இ7த வழ)6.\nsyn_lexicon Their thermal behavior is not as steady as large caves on Earth that அ;)க; பராம&8y ஒa மிக நிர7தர ெவ8பநிைல, ஆனா< அJ ஒ+றாக _ட இ7த இa' holes in the ground,\" எ+ Glen nited States Geological Survey (USGS) Astrogeology Team and of Northern Arizona University located in Flagstaff, Arizona.\nBi-ACL w/o Curriculum அதிகா&க' வா)காள&+ வா)காள3 ச& பா3/த ப4ற6, வா)காள3 அ7த வா)68 ெபள3 வ4uD அ7த வ:டா @வா3 மBCD வா)காள3 பதிேவ:;< அ7த வா)6. ப4ெரGH ேத3த< ச:டD, நடவ;)ைககைள க@ைமயாக ெசய<ப@/JகிறJ." }, { "figure_ref": [], "heading": "Bi-ACL (ours)", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "அதிகா&க' அ7த வா)காள3 அைடயாளD, அ7த ப4ற6, வா)காள3 அ7த வா)68 ெப:;ய4< உைறைய8 ேபா@வா3 மBCD வா)காள3 பதிேவ:;< மBCD ைகெயu/J அ7த வா)6 ச:டD, நடவ;)ைககைள wைற8ப@/JD இ7த வழ)6.\nTable 10: Case study the same target language, which can be seen as common information for all language pairs. E.5 The effect of λ\nIn Section 3.5, we set a λ to balance the importance of both autoencoding and contrastive loss to our model. From Figure 4, we show that the autoencoding loss plays a more important role than contrastive loss in terms of BLEU. When λ = 0.7, we got the best performance both in long-tail language pair and high-resource langauge pair." }, { "figure_ref": [], "heading": "E.6 Case Study", "publication_ref": [], "table_ref": [], "text": "We now present qualitative results on how our method addresses the repetition and off-target problems. For the first example in Figure 10, we find that other baseline systems suffer from a severe repetition problem. This is attributed to a poor decoder.\nIn contrast, our method does not have a repetition problem, most likely because we enhanced the representation of the decoder through a bidirectional contrastive loss. For the second example, we show that while the off-target problem is prevalent in baseline systems, our method seems to provide an effective solution to it." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was supported by funding from China Scholarship Council (CSC). This work has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program (grant agreement #640550). This work was also supported by the DFG (grant FR 2829/4-1)." } ]
Despite advances in multilingual neural machine translation (MNMT), we argue that there are still two major challenges in this area: data imbalance and representation degeneration. The data imbalance problem refers to the imbalance in the amount of parallel corpora for all language pairs, especially for long-tail languages (i.e., very low-resource languages). The representation degeneration problem refers to the problem of encoded tokens tending to appear only in a small subspace of the full space available to the MNMT model. To solve these two issues, we propose Bi-ACL, a framework which only requires target-side monolingual data and a bilingual dictionary to improve the performance of the MNMT model. We define two modules, named bidirectional autoencoder and bidirectional contrastive learning, which we combine with an online constrained beam search and a curriculum learning sampling strategy. Extensive experiments show that our proposed method is more effective than strong baselines both in long-tail languages and in high-resource languages. We also demonstrate that our approach is capable of transferring knowledge between domains and languages in zero-shot scenarios 1 .
Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation
[ { "figure_caption": "Figure 2 :2Figure 2: Model architecture: bidirectional autoencoder (left) and bidirectional contrastive learning (right). The symbols are consistent with the description in Section 3.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "which is the hidden state of the target-side sentence. Different from Pan et al. (2021) who use a random replacing technique that may result in meaningless negative examples (already well-discriminated in the embedding space),", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Main Results: BLEU scores for low-resource language pairs in the bilingual, multilingual setting, and 10 randomly selected language pairs in the multilingual setting. Language pairs with", "figure_data": "Models", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Main Results: Isotropic embedding space analysis in ar→ta and ta→tr translation task. The definitions of I 1 and I 2 can be found in Appendix B.", "figure_data": "042 20.017 0.012 26.639 0.036 20.408 0.006 26.901 0.058 16.521 0.016 24.695pivot_en0.034 22.852 0.008 24.472 0.019 22.889 0.007 25.977 0.056 16.843 0.016 24.763BT0.011 25.825 0.007 25.797 0.028 22.009 0.009 27.492 0.074 14.774 0.015 24.878wbw_lm0.023 23.485 0.015 24.746 0.038 19.389 0.010 26.320 0.037 19.099 0.015 24.935syn_lexicon0.059 17.513 0.015 25.694 0.028 20.640 0.013 26.475 0.020 23.859 0.014 24.137Bi-ACL w/o Curriculum 0.074 16.174 0.017 24.176 0.039 19.139 0.018 24.712 0.078 14.165 0.017 24.128Bi-ACL (ours)0.086 15.714 0.020 23.251 0.043 18.672 0.021 22.716 0.086 13.666 0.017 24.067Modelsen→de en→fr en→cs de→fr de→cs fr→csm2m22.7932.5021.6528.5320.73 20.30pivot_en---11.687.096.60BT24.0827.7121.5219.4517.41 17.00wbw_lm7.5211.539.048.389.44 10.15syn_lexicon5.3512.5610.905.908.398.89Bi-ACL w/o Curriculum25.1735.5223.9129.4322.71 22.04Bi-ACL (ours)27.7637.8425.8930.6623.80 23.56∆+4.97+5.34+4.24+2.13+3.07 +3.26", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Main Results: BLEU scores for high-resource language pairs in the bilingual setting.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study of four loss functions on en→ta and ta→ar translation task. \" √ \" means the loss function is included in the training objective while \"×\" means it is not. 
Both I 1 and I 2 score are computed in the decoder side.", "figure_data": "51 0.005 30.7373.34 0.004 32.378 23.14 0.011 24.876", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Domain transfer: BLEU scores on en→ta, ka→ar, ta→tr in different domains.", "figure_data": "en→taka→arta→trFlores TED QED KDE Flores TED QED KDE Flores TED QED KDEm2m2.122.191.160.572.144.471.00 13.351.412.031.328.10pivot_en----0.150.210.080.531.381.340.863.93BT0.760.260.010.360.060.240.090.642.050.890.632.81wbw_lm2.761.970.290.672.860.170.126.242.261.531.198.70syn_lexicon1.331.190.080.200.851.650.45 10.640.570.410.557.08Bi-ACL w/o Curriculum 4.572.621.240.753.924.611.20 13.824.182.391.68 11.75Bi-ACL5.142.841.411.054.764.931.57 14.624.972.811.96 12.74∆+3.02 +0.65 +0.25 +0.48 +2.62 +0.46 +0.57 +1.27 +3.56 +0.78 +0.64 +4.64Bilingual Settingen2ta⇒ar2ta ar2ta⇒en2tam2m0.342.12pivot_en-0.34BT0.280.55wbw_lm1.072.68syn_lexicon0.431.86Bi-ACL w/o curriculum2.084.53Bi-ACL transfer2.745.78Φ+0.42+0.64Multilingual Settingta⇒bebe⇒tam2m1.951.46Bi-ACL2.732.31Bi-ACL transfer3.673.42Φ+0.55+0.88Table 6: Language transfer: BLEU scores on langauge(pair) transfer ablity both on bilingual setting and multi-lingual setting. 'A⇒B' means from language (pair) Atransfer to language (pair) B. Φ denotes the improve-ment on our proposed methods.", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Statistics of bilingual dictionaries.", "figure_data": "14 https://panlex.org", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The effect of bilingual dictionary quality on experimental performance in terms of BLEU score.", "figure_data": "", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Table 9, we observe that the amount", "figure_data": "BLEU2.5 3.0 3.5 4.0 4.5 5.02.53 3.612.76 2.83 3.75 3.92 3.97 3.153.87 4.194.29 4.414.83 4.785.14 5.02 4.97 4.634.58 4.394.03 4.12 en2ta ta2trBLEU25 26 27 28 29 3024.73 24.82 24.95 27.15 27.36 27.6925.76 28.2725.98 28.8826.48 29.2527.26 29.8727.76 30.6627.03 29.8425.62 29.42 29.24 26.61 en2de de2fr0.00.20.40.60.81.00.00.20.40.60.81.0(a) Long-Tail Language Pair(b) High-Resource Language PairFigure 4: The effect of different λen→ta en→kk ar→ta ca→tam2m2.120.260.341.75ϕ = 0.53.700.841.222.28ϕ = 0.63.600.751.072.26ϕ = 0.73.640.231.152.24ϕ = 0.83.820.461.282.52ϕ = 0.95.142.592.323.50ϕ = 1.03.270.071.012.15", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The effect of monolingual corpus size in experimental results in terms of BLEU score. The smaller the value of ϕ (bilingual dictionary coverage) the larger the monolingual corpus.", "figure_data": "", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" } ]
Wen Lai; Alexandra Chronopoulou; Alexander Fraser
[ { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Massively multilingual neural machine translation", "year": "2019" }, { "authors": "Yoshua Bengio; Jérôme Louradour; Ronan Collobert; Jason Weston", "journal": "", "ref_id": "b1", "title": "Curriculum learning", "year": "2009" }, { "authors": "James Marta R Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Maillard", "journal": "", "ref_id": "b2", "title": "No language left behind: Scaling human-centered machine translation", "year": "2022" }, { "authors": "Baijun Xiangyu Duan; Hao Ji; Min Jia; Min Tan; Boxing Zhang; Weihua Chen; Yue Luo; Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Bilingual dictionary based neural machine translation without using parallel sentences", "year": "2020" }, { "authors": "Bryan Eikema; Wilker Aziz", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Auto-encoding variational neural machine translation", "year": "2019" }, { "authors": "Ahmed El-Kishky; Vishrav Chaudhary; Francisco Guzmán; Philipp Koehn", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "CCAligned: A massive collection of cross-lingual web-document pairs", "year": "2020" }, { "authors": "Marzieh Fadaee; Arianna Bisazza; Christof Monz", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Data augmentation for low-resource neural machine translation", "year": "2017" }, { "authors": "Angela Fan; Shruti Bhosale; Holger Schwenk; Zhiyi Ma; Ahmed El-Kishky; Siddharth Goyal; Mandeep Baines; Onur Celebi; Guillaume Wenzek; Vishrav Chaudhary", "journal": "Journal of Machine Learning Research", "ref_id": "b7", "title": "Beyond english-centric multilingual machine translation", "year": "2021" }, { "authors": "Zihao Fu; Wai Lam; Anthony ; Man-Cho So; Bei Shi", "journal": "", "ref_id": "b8", "title": "A theoretical analysis of the repetition problem in text generation", "year": "2021" }, { "authors": "Jun Gao; Di He; Xu Tan; Tao Qin; Liwei Wang; Tieyan Liu", "journal": "", "ref_id": "b9", "title": "Representation degeneration problem in training natural language generation models", "year": "2018" }, { "authors": "Naman Goyal; Cynthia Gao; Vishrav Chaudhary; Peng-Jen Chen; Guillaume Wenzek; Da Ju; Sanjana Krishnan; Marc'aurelio Ranzato; Francisco Guzmán; Angela Fan", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "The Flores-101 evaluation benchmark for low-resource and multilingual machine translation", "year": "2022" }, { "authors": "Jiatao Gu; Yong Wang; Kyunghyun Cho; O K Victor; Li", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Improved zero-shot neural machine translation via ignoring spurious correlations", "year": "2019" }, { "authors": "Raia Hadsell; Sumit Chopra; Yann Lecun", "journal": "IEEE", "ref_id": "b12", "title": "Dimensionality reduction by learning an invariant mapping", "year": "2006" }, { "authors": "Melvin Johnson; Mike Schuster; Quoc V Le; Maxim Krikun; Yonghui Wu; Zhifeng Chen; Nikhil Thorat; Fernanda Viégas; Martin Wattenberg; Greg Corrado; Macduff Hughes; Jeffrey Dean", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b13", "title": "Google's multilingual neural machine translation system: Enabling 
zero-shot translation", "year": "2017" }, { "authors": "Yunsu Kim; Jiahui Geng; Hermann Ney", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Improving unsupervised word-by-word translation with language model and denoising autoencoder", "year": "2018" }, { "authors": "Tom Kocmi; Christian Federmann; Roman Grundkiewicz; Marcin Junczys-Dowmunt; Hitokazu Matsushita; Arul Menezes", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "To ship or not to ship: An extensive evaluation of automatic metrics for machine translation", "year": "2021" }, { "authors": "Philipp Koehn", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Statistical significance tests for machine translation evaluation", "year": "2004" }, { "authors": "Sneha Kudugunta; Ankur Bapna; Isaac Caswell; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Investigating multilingual NMT representations at scale", "year": "2019" }, { "authors": "Wen Lai; Alexandra Chronopoulou; Alexander Fraser", "journal": "", "ref_id": "b18", "title": "a. m 4 adapter: Multilingual multidomain adaptation for machine translation with a meta-adapter", "year": "2022" }, { "authors": "Wen Lai; Jindřich Libovický; Alexander Fraser", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "The LMU Munich system for the WMT 2021 large-scale multilingual machine translation shared task", "year": "2021" }, { "authors": "Wen Lai; Jindřich Libovický; Alexander Fraser", "journal": "International Committee on Computational Linguistics", "ref_id": "b20", "title": "Improving both domain robustness and domain adaptability in machine translation", "year": "2022" }, { "authors": "Guillaume Lample; Alexis Conneau; Ludovic Denoyer; Marc'aurelio Ranzato", "journal": "", "ref_id": "b21", "title": "Unsupervised machine translation using monolingual corpora only", "year": "2018" }, { "authors": "Guillaume Lample; Myle Ott; Alexis Conneau; Ludovic Denoyer; Marc'aurelio Ranzato", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Phrasebased & neural unsupervised machine translation", "year": "2018" }, { "authors": "Seanie Lee; Dong Bok Lee; Sung Ju Hwang", "journal": "", "ref_id": "b23", "title": "Contrastive learning with adversarial perturbations for conditional text generation", "year": "2020" }, { "authors": "Yaoyiran Li; Fangyu Liu; Nigel Collier; Anna Korhonen; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Improving word translation via two-stage contrastive learning", "year": "2022" }, { "authors": "Baohao Liao; Shahram Khadivi; Sanjika Hewavitharana", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Back-translation for large-scale multilingual machine translation", "year": "2021" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b26", "title": "Multilingual denoising pretraining for neural machine translation", "year": "2020" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b27", "title": "Decoupled weight decay regularization", "year": "2018" }, { "authors": "Xiao Pan; Mingxuan Wang; Liwei Wu; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Contrastive 
learning for many-to-many multilingual neural machine translation", "year": "2021" }, { "authors": "Matt Post; David Vilar", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Fast lexically constrained decoding with dynamic beam allocation for neural machine translation", "year": "2018" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Improving neural machine translation models with monolingual data", "year": "2016" }, { "authors": "Aditya Siddhant; Ankur Bapna; Orhan Firat; Yuan Cao; Mia Xu Chen; Isaac Caswell; Xavier Garcia", "journal": "", "ref_id": "b31", "title": "Towards the next 1000 languages in multilingual machine translation: Exploring the synergy between supervised and self-supervised learning", "year": "2022" }, { "authors": "Dario Stojanovski; Alexander Fraser", "journal": "European Association for Machine Translation", "ref_id": "b32", "title": "Improving anaphora resolution in neural machine translation using curriculum learning", "year": "2019" }, { "authors": "Jörg Tiedemann", "journal": "European Language Resources Association (ELRA", "ref_id": "b33", "title": "Parallel data, tools and interfaces in OPUS", "year": "2012" }, { "authors": "Jannis Vamvas; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Contrastive conditioning for assessing disambiguation in MT: A case study of distilled bias", "year": "2021" }, { "authors": "Pascal Vincent; Hugo Larochelle; Yoshua Bengio; Pierre-Antoine Manzagol", "journal": "", "ref_id": "b35", "title": "Extracting and composing robust features with denoising autoencoders", "year": "2008" }, { "authors": "Lingxiao Wang; Jing Huang; Kevin Huang; Ziniu Hu; Guangtao Wang; Quanquan Gu", "journal": "", "ref_id": "b36", "title": "Improving neural language generation with spectrum control", "year": "2019" }, { "authors": "Xinyi Wang; Sebastian Ruder; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Expanding pretrained models to thousands more languages via lexicon-based adaptation", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Biao Zhang; Philip Williams; Ivan Titov; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Improving massively multilingual neural machine translation and zero-shot translation", "year": "2020" }, { "authors": "Biao Zhang; Deyi Xiong; Jinsong Su; Hong Duan; Min Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Variational neural machine translation", "year": "2016" }, { "authors": "Rui Zhang; Yangfeng Ji; Yue Zhang; Rebecca J Passonneau", 
"journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Contrastive data and learning for natural language processing", "year": "2022" }, { "authors": "Xuan Zhang; Pamela Shapiro; Gaurav Kumar; Paul Mcnamee; Marine Carpuat; Kevin Duh", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Curriculum learning for domain adaptation in neural machine translation", "year": "2019" }, { "authors": "Zhong Zhang; Chongming Gao; Cong Xu; Rui Miao; Qinli Yang; Junming Shao", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Revisiting representation degeneration problem in language modeling", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 123.9, 410.6, 149.3, 21.33 ], "formula_id": "formula_0", "formula_text": "X ℓt i = {x 1 , ..., x tt }, X ℓt i ∈ D ℓ t" }, { "formula_coordinates": [ 3, 108.38, 485.21, 181.48, 20.67 ], "formula_id": "formula_1", "formula_text": "X ℓs i = gen(X ℓs i |θ, X ℓt i , D ℓt→ℓs dict )(1)" }, { "formula_coordinates": [ 3, 355.55, 196.64, 169.59, 33.71 ], "formula_id": "formula_2", "formula_text": "H ℓ t i = Encoder(X ℓt i , ℓ t ), Z ℓs bkd = Decoder(H ℓ t i , ℓ s )(2)" }, { "formula_coordinates": [ 3, 317.27, 277.52, 207.87, 30.13 ], "formula_id": "formula_3", "formula_text": "L AE_bkd = - log P θ X ℓs i | X ℓt i , Z ℓs bkd (3)" }, { "formula_coordinates": [ 3, 351.91, 388.74, 173.23, 33.66 ], "formula_id": "formula_4", "formula_text": "H ℓs i = Encoder(Z ℓs bkd , ℓ s ), Z ℓ t fwd = Decoder(H ℓs i , ℓ t )(4)" }, { "formula_coordinates": [ 3, 314.62, 458.32, 194.3, 20.72 ], "formula_id": "formula_5", "formula_text": "L AE_fwd = - log P θ X ℓt i | Z ℓs bkd , Z ℓ t fwd" }, { "formula_coordinates": [ 4, 74.61, 75.37, 450.08, 302.24 ], "formula_id": "formula_6", "formula_text": "x t 1 x t 2 … x t m x t 1 x t 2 … x t q h t 1 h t 2 … h t m z s 1 z s 2 … z s n Z s bkd h s 1 h s 2 … h s n Z t f wd z s 1 z s 2 … z s n Encoder Decoder x t 1 x t 2 … x t m h t 1 h t 2 … h t m C s bkd δ 1 δ 2 δ m h -t 1 h -t 2 … h -t m c s 1 c s 2 … c s n ζ 1 ζ 2 ζ n c + s 1 c + s 2 … c + s n h s 1 h s 2 … h s n h -s 1 h -s 2 … h -s n C t fwd c t 1 c t 2 … c t q c + t 1 c + t 2 … c + t q c s 1 c s 2 … c s n δ n … + + + + + ζ 1 ζ 2 ζ n … + + + + + … + + + δ 1 δ 2 … + + +" }, { "formula_coordinates": [ 4, 70.87, 501.02, 99.73, 18.93 ], "formula_id": "formula_7", "formula_text": "ζ i = {ζ 1 . . . ζ T } to H i ," }, { "formula_coordinates": [ 4, 199.01, 713.79, 12.15, 8.44 ], "formula_id": "formula_8", "formula_text": "(i ′ )" }, { "formula_coordinates": [ 4, 334.16, 445.54, 190.98, 101.88 ], "formula_id": "formula_9", "formula_text": "(i ′ ) bkd . 
H ℓ t i ′ = Encoder(X ℓt i ′ , ℓ t ), H -ℓ t i ′ = H ℓ t i ′ + δ (i ′ ) bkd , C ℓs bkd = Decoder(H -ℓ t i ′ , ℓ s ), H +ℓs i ′ = Decoder(C ℓs bkd , ℓ s ) + ζ (i ′ ) bkd ,(6)" }, { "formula_coordinates": [ 4, 312.14, 604.89, 213.01, 52.53 ], "formula_id": "formula_10", "formula_text": "L CL_bkd = - log e sim + (H ℓ t i ′ ,H +ℓs i ′ )/τ H -ℓ t i ′ e sim -(H ℓ t i ′ ,H -ℓ t i ′ )/τ (7)" }, { "formula_coordinates": [ 4, 407.12, 695.79, 11.98, 8.44 ], "formula_id": "formula_11", "formula_text": "(i ′ )" }, { "formula_coordinates": [ 4, 449.92, 741.97, 12.15, 8.44 ], "formula_id": "formula_12", "formula_text": "(i ′ )" }, { "formula_coordinates": [ 5, 98.54, 92.83, 191.33, 74.59 ], "formula_id": "formula_13", "formula_text": "H ℓs i ′ = Encoder(C ℓs bkd , ℓ s ), H -ℓs i ′ = H ℓs i ′ + δ (i ′ ) f wd , C ℓ t fwd = Decoder(H -ℓs i ′ , ℓ t ), H +ℓ t i ′ = Decoder(C ℓ t fwd , ℓ t ) + ζ (i ′ ) f wd ,(8)" }, { "formula_coordinates": [ 5, 76.57, 212.57, 213.29, 51.05 ], "formula_id": "formula_14", "formula_text": "L CL_fwd = - log e sim + (H ℓs i ′ ,H +ℓ t i ′ )/τ H -ℓs i ′ e sim -(H ℓs i ′ ,H -ℓs i ′ )/τ (9)" }, { "formula_coordinates": [ 5, 93.38, 502.5, 191.94, 36.96 ], "formula_id": "formula_15", "formula_text": "L * =λ(L AE_bkd + L AE_f wd )+ (1 -λ)(L CL_bkd + L CL_f wd ) (10" }, { "formula_coordinates": [ 5, 285.32, 513.63, 4.54, 9.46 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 7, 77.48, 103.67, 440.33, 18.82 ], "formula_id": "formula_17", "formula_text": "I 1 ↑ I 2 ↓ I 1 ↑ I 2 ↓ I 1 ↑ I 2 ↓ I 1 ↑ I 2 ↓ I 1 ↑ I 2 ↓ I 1 ↑ I 2 ↓ m2m 0." }, { "formula_coordinates": [ 8, 77.58, 75.51, 440.12, 52.83 ], "formula_id": "formula_18", "formula_text": "L AE_bkd L AE_f wd L CL_bkd L CL_f wd en→ta ta→tr en→de BLEU I 1 ↑ I 2 ↓ BLEU I 1 ↑ I 2 ↓ BLEU I 1 ↑ I 2 ↓ #1 √ × × × 2." }, { "formula_coordinates": [ 13, 169.79, 308.45, 75.62, 17.71 ], "formula_id": "formula_19", "formula_text": "(i ′ ) bkd . Finally, δ (i ′ )" }, { "formula_coordinates": [ 13, 101.62, 337.24, 12.15, 8.44 ], "formula_id": "formula_20", "formula_text": "(i ′ )" }, { "formula_coordinates": [ 13, 78.66, 390.91, 195.76, 62.96 ], "formula_id": "formula_21", "formula_text": "H ℓ t i ′ = Encoder(X ℓt i ′ , ℓ t ), H -ℓ t i ′ = H ℓ t i ′ + δ (i ′ ) bkd δ (i ′ ) bkd = arg min δ,∥δ∥ 2 ≤ϵ log p θ X ℓs i ′ | X ℓt i ′ ; H ℓ t i ′ + δ" }, { "formula_coordinates": [ 13, 70.87, 558.45, 103.36, 18.93 ], "formula_id": "formula_22", "formula_text": "ζ i = {ζ 1 . . . ζ T } to H i ," }, { "formula_coordinates": [ 13, 131.32, 583.05, 12.15, 8.44 ], "formula_id": "formula_23", "formula_text": "(i ′ )" }, { "formula_coordinates": [ 13, 71.96, 635.22, 213.84, 26.29 ], "formula_id": "formula_24", "formula_text": "ζ (i ′ ) bkd = arg min D KL p θ * X ℓs i ′ | X ℓt i ′ ∥p θ Xℓs i ′ | Xℓt i ′ , (11" }, { "formula_coordinates": [ 13, 335.53, 317.97, 157.79, 68.3 ], "formula_id": "formula_25", "formula_text": "I 1 (W ) = min s∈S Z(s) max s∈S Z(s) I 2 (W ) = s∈S (Z(s) -Z(s)) 2 |S| Z(s) 2" } ]
2023-05-22
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "named as ChatGenImage. The core idea behind it is to leverage the complementary strengths of diverse models to establish a highly effective and user-friendly pipeline for interactive data augmentation. In this work, we extensively study how LLMs communicate with AIGC model to achieve more controllable image generation and make the first attempt to collaborate them for automatic data augmentation for a variety of downstream tasks. Finally, we present fascinating results obtained from our ChatGenImage framework and demonstrate the powerful potential of our synthetic data for systematic vision adaptation. Our codes are available at https://github.com/Yuqifan1117/ Labal-Anything-Pipeline." }, { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b11", "b13", "b12", "b22", "b4", "b5" ], "table_ref": [], "text": "In the past decade, deep learning techniques have demonstrated promising performance across diverse tasks, owing to the availability of large-scale annotated data [10,12,19]. However, it is time-consuming and expensive to manually collect a large-scale annotated dataset containing every possible domain for robust training. Besides, the problem of cross-domain and long-tail distributions within existing datasets have a detrimental effect on the performance and robustness of vision models, thereby impeding their generalization ability to novel categories or unseen domains. This promotes us to explore a less labor-intensive way to harvest labeled data containing multiple domains in one step for robust vision tasks.\nOne effective strategy to improve generalization and robustness is to enlarge the scale of training data by intricate augmentations [14]. There are several GAN-based models [7,17] generating images for vision tasks, but their applicability remains constrained by their narrow focus on specific settings or small scales. Recently, AIGC models [27][28][29] have emerged as promising candidates for generating high-quality synthetic data, with the ability to address the limitations of the existing dataset. There are several early attempts at exploring synthetic data from generative models for data augmentation [3, 13,23,37]. Albeit promising, early works usually produce simple scenarios or object-centric images only by global constraints (i.e., \"airplane\" or \"a white airplane hovering over a beach and a city\".), which limits downstream models' perception of intricate scenes and fine-grained attributes. Additionally, these methods concentrate on generating images under typical scenarios (e.g., daylight, field), while neglecting less common but predictable circumstances (e.g., snow, forest, night). This limitation may impede the ability of deep learning models to generalize when deployed in real-world environments that exhibit unseen test distributions.\nIn this paper, we present a novel approach named Chat-GenImage that facilitates more controllabel data augmentation. ChatGenImage harnesses the collaborative power of the LLM and AIGC models, enabling iterative communication between them in a cost-effective and controllable manner. This automatically iterative process facilitates the generation of high-quality synthetic images depicting complex scenes and diverse domains, along with fine-grained annotations. 
Our fundamental intuition is that large language models have remarkable capabilities to perform new tasks in a zero-shot manner when presented with well-crafted instruction prompts [11,[34][35][36]. We discover that these LLMs, like ChatGPT, possess the capability to autonomously navigate image editing processes. By strategically designing appropriate prompts, LLMs can leverage the inherent knowledge within the system and effectively guide the AIGC models to produce highly controllable and intricate images. While ChatGPT contains diverse world knowledge for simulating the human brain's efficient processing, it is nontrivial to elicit this knowledge from it for data augmentation with automatic labeling, because ChatGPT is a pure language model that lacks the ability to visually perceive any information. We explore this issue in the context of generative data augmentation, showing that language can act as a bridge connecting LLMs and AIGC models, producing elaborate images for downstream tasks via globally controllable prompts and iterative local editing instructions.\nTo this end, we demonstrate three key findings. First, we find that an LLM such as ChatGPT contains a wealth of conceptual knowledge and can imagine vivid descriptions even with only one label word (e.g., A dog playing in a lush green park, with a frisbee in its mouth. The dog should have a shiny coat of fur.) [6,33]. We further observe that the existing AIGC models can only generate simple images with few objects and backgrounds, which are not diverse enough for domain generalization [20]. Thus, we establish an iterative pipeline to repair missing details and refine generated images with the help of label foundation toolkits and local editing prompts. Finally, we demonstrate our method flow to produce large amounts of high-quality synthetic data with fine-grained labels in a scalable manner for data augmentation in data scarcity scenarios." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Large Language Models", "publication_ref": [ "b20", "b25" ], "table_ref": [], "text": "In recent years, the field of natural language processing has been revolutionized by the emergence of large language models (LLMs), exemplified by models such as PaLM [8], ChatGPT, and LLaMa [33]. Moreover, the remarkable performance of LLMs in zero-shot and few-shot generalization has sparked a growing trend in utilizing autoregressive language models for vision-language tasks [21,26]. However, the generalization of LLMs does not translate well to visual tasks [9,20]. Unlike previous works, we utilize LLMs to enrich training data for fine-tuning in downstream tasks instead of directly transferring by contrastive learning." }, { "figure_ref": [], "heading": "Text-to-Image Diffusion Models", "publication_ref": [], "table_ref": [], "text": "Recently, diffusion models have become a promising generative modeling framework, achieving state-of-the-art performance on image generation tasks [27][28][29]. GLIDE [25] studies diffusion models for text-conditional image synthesis with classifier-free guidance strategies. InstructPix2Pix [5] proposes an effective framework to edit images with human instructions, which opens up new opportunities for controllable image creation by user-written instructions. However, existing SOTA text-to-image models require longer and more complex prompts to yield impressive outcomes, which is less user-friendly. Thus, we provide a powerful and user-friendly pipeline to generate more elaborate images through iterative refinement with the aid of large language models." }, { "figure_ref": [], "heading": "Synthetic Data for Visual Tasks", "publication_ref": [ "b3", "b12", "b22", "b30", "b12", "b22" ], "table_ref": [], "text": "Recently, there has been an increasing interest in using high-quality synthetic data to augment training data for downstream tasks [3,4,13,23]. PET [31] primarily focuses on a semi-supervised situation to automatically generate abundant labeled data for augmentation. [13] use GLIDE to generate abundant class-conditioned images and explore the effectiveness of synthetic data for image recognition tasks in data-scarce settings. For the task of few-shot object detection, a method proposed in [23] involves selecting representative samples from a large-scale synthetic dataset to potentially enhance the performance of FSOD models. Here we present a novel approach for generating high-quality synthetic data by leveraging state-of-the-art text-to-image models and LLMs. Our method eliminates the need for expensive prompt engineering by introducing a unified framework that produces abundant and elaborate images with annotations in a single pipeline." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "ChatGenImage is a labeling collaboration framework that involves a mentor large language model (LLM), numerous AIGC models as controllable image creators, and labeling foundation models for executing the labeling task. The workflow of Label Anything consists of two stages: Language Enhancement Image Initialization and Iteratively Local Refinement and Labeling, as shown in Figure 2. 1) During the first stage, an LLM (e.g., ChatGPT) analyzes the label word from the user input and generates complex scene descriptions and object-centric descriptions in Global Prompts Brainstorming. Then, AIGC generators initialize controllable images based on the global constraints from the LLM. 2) During the second stage, the LLM produces Label Editing Prompts based on the high-quality pseudo labels automatically obtained from the Label Foundation Toolkit and employs them to iteratively control the process of local image editing. Based on that, AIGC models can perform Controllable Image Editing from both background and foreground to obtain more diversified synthetic images. Through our approach, the generated images are modified to align with the complex annotations, resulting in high-quality synthetic data suitable for data augmentation in downstream tasks." },
{ "figure_ref": [ "fig_1" ], "heading": "Global Prompts Brainstorming", "publication_ref": [ "b12" ], "table_ref": [], "text": "Due to the limited knowledge of the large language model about the AIGC model, it is not capable of providing appropriate prompts for the AIGC model. Therefore, ChatGenImage utilizes a hybrid approach that combines both specification-based instruction and demonstration-based learning to effectively guide the AIGC model in generating high-quality synthetic data. Specification-based Instruction. The prompt specification can serve as a standard template for large language models to comprehend visual attributes of the specific concept, thereby facilitating sensible scene imagination for a given word through slot filling. However, using category names alone in AIGC models may limit their ability to perceive visual features, leading to ambiguous image generation [13]. To help the large language model imagine effective scene descriptions, ChatGenImage prompts focus on descriptive features rather than broad categories. In the first stage of ChatGenImage, the large language model takes the Label Word from the user and constructs several relevant descriptions as its Visual Feature for global prompt brainstorming. Moreover, we propose to automatically obtain appropriate visual features by prompting the LLM to describe the visual features that distinguish that category in a photograph. We demonstrate the prompt process for visual feature descriptions and controllable generation in Table 1. Demonstration-based Learning. ChatGenImage utilizes the in-context learning capability of LLMs and injects several demonstrations into the prompt learning, helping large language models to better understand the parameter criteria for conditional image generation. Each demonstration is a group of input and output on scene prompts brainstorming--the user's request in standard templates and the expected image descriptions for AIGC models. Furthermore, these demonstrations consist of complex scene descriptions and object-centric descriptions, as shown in Figure 2, effectively aiding ChatGenImage in understanding the given label's attributes in various environments and imagining reasonable prompts for high-quality image synthesis." }, { "figure_ref": [], "heading": "Label Foundation Toolkit", "publication_ref": [ "b23", "b21", "b23", "b21" ], "table_ref": [], "text": "Since ChatGPT is a pure language model and cannot \"see\" any visual information, we present the initialized images to several powerful label foundation toolkits (i.e., Segment Anything Model [18], Grounding DINO [24], and BLIP2 [22]) and serve them as sensors in the system to provide perceptual information to ChatGPT. Segment Anything Model (SAM) is a large ViT-based model trained on the large visual corpus (SA-1B) [18], which has demonstrated promising zero-shot segmentation capabilities in various scenarios and great potential for data labeling in visual tasks. However, it needs precise prompts (like boxes/points) to generate accurate masks and lacks category predictions or annotations for each mask. Grounding DINO is a strong zero-shot detector which is capable of generating high-quality boxes and labels from free-form text [24] and can also serve as a box prompt generator for SAM. Our approach combines the strengths of Grounding DINO and SAM to detect and segment comprehensive regions in each synthetic image. This builds a powerful pipeline for complex visual scene labeling and produces abundant fine-grained pseudo labels for training. BLIP2 is a language-vision model that seamlessly integrates visual input into text sequences to facilitate overall visual perception for LLMs [22]. By combining BLIP2 with the aforementioned visual models, our approach can automatically generate high-quality text descriptions for synthetic images. The LLMs then use these descriptions to understand image regions and return local editing prompts for controllable and diverse image refinement." },
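As a rough illustration of this box-prompted labeling step, the sketch below feeds detector boxes into SAM. The SamPredictor calls follow the segment-anything package, while detect_boxes is only a stand-in for a Grounding DINO inference call and is an assumption, not the authors' code.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry


def detect_boxes(image_rgb: np.ndarray, text_prompt: str) -> np.ndarray:
    """Placeholder for a Grounding DINO call returning (N, 4) boxes in pixel XYXY format.
    The concrete call depends on the groundingdino package and is treated as an assumption."""
    raise NotImplementedError


def pseudo_label(image_rgb: np.ndarray, text_prompt: str, sam_checkpoint: str) -> list:
    """Text prompt -> boxes (open-set detector) -> masks (SAM), one record per region."""
    sam = sam_model_registry["vit_h"](checkpoint=sam_checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image_rgb)                        # expects an RGB uint8 HxWx3 array

    records = []
    for box in detect_boxes(image_rgb, text_prompt):      # e.g. "dog . bench . cat"
        masks, scores, _ = predictor.predict(box=box, multimask_output=False)
        records.append({"box": box.tolist(), "mask": masks[0], "score": float(scores[0])})
    return records
```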
{ "figure_ref": [], "heading": "Local Editing Prompt", "publication_ref": [], "table_ref": [], "text": "Despite the ability of AIGC models to generate labeled images through prompt-based techniques with the aid of LLMs, their effectiveness in depicting intricate scenes and fine-grained attributes remains limited due to their inclination to create object-centric images with only global constraints. Besides, we observe that the generated images contain fewer objects, which poses a challenge in constructing complex scenes for demanding downstream tasks (e.g., scene graph generation and visual question answering). Thus we further introduce local editing prompts in the iterative pipeline for fine-grained image refinement.\nIn detail, we design an iterative communication that encourages ChatGPT to provide a series of informative feedback based on the generated images from AIGC models and the corresponding labels from label foundation toolkits. Since language models are blind to the initialized image, ChatGPT cannot partially edit the initial image directly. Hence, we employ a predefined prompt template and populate the slots with the corresponding caption and object box coordinates identified by the visual foundation models. This template is subsequently utilized by the LLMs to produce novel scenes that comprise new backgrounds and additional objects. It is worth noting that ChatGPT can voluntarily select a reasonable location for editing based on human instructions and its underlying knowledge, autonomously generating accurate local editing prompts. Then the AIGC model uses the resulting prompts to edit the images and improve their quality by adding missing details." }, { "figure_ref": [], "heading": "Controllable Image Editing", "publication_ref": [ "b0" ], "table_ref": [], "text": "To efficiently generate a significant amount of images with complex scenes and rich annotations in a low-resource manner, it is necessary to collaborate with various controllable editing models that can perform controllable image editing based on both global and local prompts. Background Imagination. We notice that retrieved or directly generated images are usually restricted to a single domain, thereby leading to a constraint in the development of robust visual models [9]. Furthermore, obtaining labeled data for all possible anticipated settings at once is often impractical due to the significant expense involved. However, acquiring linguistic knowledge of the anticipated domain shift is a more cost-effective and accessible approach.\nHence, we leverage ChatGPT to generate novel backgrounds for the original image and employ InstructPix2Pix [5] to substitute different backgrounds, generating a vast collection of composite images across various domains in a cost-effective manner. To preserve the foreground semantics of images after background modification, we perform target detection on both the original and modified images, and apply filter rules to exclude images with missing objects.
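A minimal sketch of this background-substitution step with the diffusers StableDiffusionInstructPix2PixPipeline is given below; the checkpoint id, resolution, and guidance values are illustrative defaults rather than the exact settings used in the paper.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Loaded once; "timbrooks/instruct-pix2pix" is the public InstructPix2Pix checkpoint.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")


def edit_background(image: Image.Image, background: str) -> Image.Image:
    """Replace the scene background following an LLM-proposed description (e.g. 'forest')."""
    instruction = f"change the background to a {background}"
    result = pipe(
        instruction,
        image=image.convert("RGB").resize((512, 512)),
        num_inference_steps=20,
        image_guidance_scale=1.5,   # how strongly the edit should stay close to the input image
        guidance_scale=7.5,
    )
    return result.images[0]
```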
Foreground Object Filling. To avoid altering the semantic information of the source image, we propose a method to increase the complexity of the image scene by filling foreground objects. The necessary object labels, coordinates, and their interactions with the scene (e.g., {label: 'rocks', box: [200, 50, 300, 150], relationship: 'near the cabin and surrounded by trees'}) can be obtained by filtering the local editing prompts automatically. Once sufficient candidate boxes are collected through ChatGPT, we can use Blended Latent Diffusion [1,2] to fill novel objects at specific positions in the foreground. It is worth noting that we filter out the overlapping bounding boxes to ensure that the original semantics are preserved. In this way, we greatly enrich the spatial interaction of objects in the generated image." }, { "figure_ref": [], "heading": "Image Filtering Rules", "publication_ref": [ "b15" ], "table_ref": [], "text": "Although most state-of-the-art AIGC methods generate astonishing images, they may have several visual and textual issues: 1) boundary incoherence, 2) semantic mismatch and 3) object missing. It is essential to establish robust image filtering rules that can effectively evaluate the synthetic images and filter out those low-quality results. To address the above challenges, we introduce a Pixel Checking (PC) and a Semantic Checking (SC) strategy for the generated images from the perspective of visual pixels and textual semantics. Pixel Checking. To ensure the boundary consistency of the edited image, we evaluate the fidelity of the generated image. IS [30], FID [15], and SceneFID [32] are common metrics to evaluate the fidelity of general images at different scales. However, all of these metrics rely on ground truth labels, which are not suitable for assessing images generated by stable diffusion models [27]. Therefore, we exploit SSIM and PSNR [16] to measure the structural similarity and pixel similarity between the locally edited image and the original image for pixel checking. We employ a threshold strategy on the PSNR and SSIM between the original and edited images, minimizing artifacts and incoherence at the editing boundary to preserve the global coherence of the image during the local editing process. Semantic Checking. Considering that local image editing may introduce undesired items that destroy the semantics of the original image, we evaluate the semantics and object detection of the generated image during semantic checking. Specifically, we generate a set of image candidates based on scene descriptions of specific label words during both global initialization and local image editing. Then, we use the CLIP similarity score [26] to evaluate the semantic alignment between the image candidates and textual constraints. We rank the image candidates based on the score and filter out the low-confidence images to obtain the most matching ones as the final result. Besides, we employ open vocabulary object detection on the images after background editing. We only retain those images in which the novel background and the original foreground objects can both be identified, so as to keep the original semantics and enhance the downstream utility of the edited images."
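The sketch below illustrates one way to implement the two checks. The PSNR/SSIM thresholds are illustrative assumptions (the paper does not report exact values), the metric functions come from scikit-image, and the semantic score uses the public openai/clip-vit-base-patch32 checkpoint from transformers.

```python
import numpy as np
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from transformers import CLIPModel, CLIPProcessor

clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def pixel_check(original: np.ndarray, edited: np.ndarray,
                psnr_thr: float = 15.0, ssim_thr: float = 0.5) -> bool:
    """Pixel Checking: reject edits whose pixels diverge too much from the source image."""
    psnr = peak_signal_noise_ratio(original, edited)
    ssim = structural_similarity(original, edited, channel_axis=-1)
    return psnr >= psnr_thr and ssim >= ssim_thr


def semantic_rank(candidates, text_constraint: str):
    """Semantic Checking: rank candidate PIL images by CLIP image-text similarity."""
    inputs = clip_processor(text=[text_constraint], images=candidates,
                            return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = clip_model(**inputs).logits_per_image.squeeze(-1)  # one score per candidate
    order = torch.argsort(scores, descending=True)
    return [(int(i), float(scores[i])) for i in order]
```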
}, { "figure_ref": [], "heading": "Can you give me complex scene images and object-centric images with rich annotations about dog for data-augmentation?", "publication_ref": [], "table_ref": [], "text": "Query: {'Label word': dog}" }, { "figure_ref": [], "heading": "AIGC Image Initialization:", "publication_ref": [], "table_ref": [], "text": "Label Foundation Toolkit:\n{\"caption\": \"there is a dog sitting on a bench in a field\", \"mask\": [{\"value\": 0, \"label\": \"background\"} , {\"value\": 1, \"label\": \"bench\", \"logit\": 0. " }, { "figure_ref": [], "heading": "Local Background Editing:", "publication_ref": [], "table_ref": [], "text": "'background': 'forest', 'description': 'A dog sits on a cozy bench in a lush forest.' 'background': 'park', 'description': 'A dog sits on a wooden bench in a peaceful park.' 'background': 'desert', 'description': 'A dog rests on a weathered bench in the arid desert.'" }, { "figure_ref": [], "heading": "AIGC Image Iterative Imagination", "publication_ref": [ "b25" ], "table_ref": [], "text": "A dog sits on a wooden bench in a peaceful park.\nA dog sits on a cozy bench in a lush forest.\nA dog rests on a weathered bench in the arid desert.\nFine-grained caption: A dog sits on a wooden bench in a peaceful park. A butterfly is flying in the air. A white cat is sitting beside the dog. In the background, there is a lush forest and a big lawn the original image, we evaluate the semantics and object detection of the generated image during semantic checking. Specifically, we generate a set of image candidates based on scene descriptions of specific label words during both global initialization and local image editing. Then, we use the CLIP similarity score [26] to evaluate the semantic alignment between the image candidates and textual constraints. We rank the image candidates based on the score and filter out the low-confidence images to obtain most matching ones as the final result. Besides, as the we employ open vocabulary object detection on the images after background editing. We only retain those images that can identify the novel background and original foreground objects to keep the original semantics and enhance the downstream utility of the edited images." }, { "figure_ref": [], "heading": "AIGC Objects Iterative Refinement", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this version, we discuss our main experiments setup and several results. We demonstrate the effectiveness of our ChatGenImage on interactive data synthesis through qualitative results, showing the potential of using ChatGenImage for systematic vision adaptation. In the next release, we will further explore how to better leverage the synthetic data obtained from our ChatGenImage framework for better downstream task generalization. " }, { "figure_ref": [], "heading": "Setting", "publication_ref": [], "table_ref": [], "text": "In our experiments, we employed the gpt-3.5-turbo variants of the GPT models as the large language models (i.e., ChatGPT), which are publicly accessible through the Ope-nAI API 1 . To make the LLM output more stable, we set the decoding temperature to 0. For AIGC models, we uniformly set the pixel of the picture to 512×512 to save the memory overhead. Also to adapt to the whole system, we use stable diffusion v1.5 as the AIGC base model with the same default parameters as the original setting [28]. 
We provide the detailed prompts designed for the Visual Descriptor, AIGC Creator, Scene Imagination, and Box Candidates Generation in the step of Global Prompts Brainstorming and Local Editing Prompts in Table 1, where {variable} indicates that the slot needs to be populated with the corresponding text before the prompt can be fed into the LLM. This label pipeline runs on a single Nvidia RTX 3090 GPU, which is affordable for most people." }, { "figure_ref": [ "fig_4", "fig_5", "fig_6" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "We evaluate the generated images and fine-grained annotations of our ChatGenImage in two cases (i.e., complex scene description and object-centric image generation). In Figure 4, we show several dialogue demonstrations and qualitative analyses for the above two cases, respectively. We collect several label words of rare and endangered species for testing, which have few photos on the web and are unfamiliar to most image classifiers. We compare our approach with LLM descriptions and original images generated by naive AIGC models. The result is shown in Figure 5. The experimental result indicates that the proposed ChatGenImage is both general and controllable, effectively creating robust images even for unfamiliar and rare concepts. Through interactive communication with the LLM, the AIGC can learn the specific descriptions of novel concepts and complete controllable generation in different domains via iterative image editing. Besides, we explore the effectiveness of ChatGenImage for complex scene descriptions and whether the LLM helps AIGC models iteratively fill accurate objects in the foreground. In Figure 6, we show that the LLM provides information comprising object coordinates and relation interactions, which is then used by Stable Diffusion to generate diverse backgrounds (e.g., mountain, forest) and incorporate relevant objects (e.g., snow trees, stream and rocks) to produce a rich image depicting a complex scene." }, { "figure_ref": [], "heading": "Applications", "publication_ref": [], "table_ref": [], "text": "ChatGenImage, as an interactive data synthesis framework, provides two generation modes and fine-grained images across various domains to easily meet the requirements of different fields. Moreover, the usage of synthetic data from ChatGenImage enables better generalization of downstream tasks in complex application scenarios.\nSince ChatGenImage can iteratively generate large amounts of diverse images via LLM-AIGC collaboration, it can provide extra unseen data domains for systematic vision adaptation. We will evaluate ChatGenImage on several domain adaptation benchmarks to investigate how to enrich existing datasets with synthetic data and construct adaptive visual perception systems in a cost-less manner." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "ChatGenImage is a versatile tool that combines the capabilities of LLMs, AIGC models and powerful label foundation toolkits. Based on the collaboration of diverse models, ChatGenImage is able to generate fine-grained images with rich annotations for data augmentation. In the future, we will further develop our approach to support more complex and challenging scenarios, like fine-grained human-object interaction, action editing, etc., and apply it to more realistic applications for systematic vision adaptation." 
}, { "figure_ref": [], "heading": "Visual Descriptor", "publication_ref": [], "table_ref": [], "text": "Prompt \"role\": \"system\", \"content\": \"You are an expert in the field of vision and graphics, please fully consider the input concept or topic, give me the most important fine-grained visual features of the input concept or category based on the Wikipedia. Only give me several phrases or keywords as more as possible.\" \"role\": \"user\", \"content\": \"Q: What are useful visual features for distinguishing a {category name} in a photo? A: There are several useful visual features to tell there is a {category name} in a photo:\" AIGC Creator Prompt \"role\": \"system\", \"content\": \"The AI assistant is a professional data specialist. The AI assistant can imagine different real scenes according to the input caption and return the background and closely related foreground objects in a specific format. Note that the background should be described with a single word or phrase, and each background contains five main objects.\" {Demonstration} \"role\": \"user\", \"content\": \"Give me {prompt number} high quality prompts for text-to-image models about the amazing close-up realistic scene containing the {label word}. Scene prompts should also contain other objects as more as possible. The {label word} is with {visual feature}.\" Demonstration Sure, here are 1 high quality prompts for text-to-image models about the amazing close-up realistic scene containing the red panda: Bring a photo-realistic close-up scene of a red panda to life, set against a backdrop of a rustic temple in a mountainous region. The panda should be situated in front of the temple, with its round ears, short legs, and bushy tail with alternating red and white rings visible. The temple should have ornate carvings and architecture, with stone lanterns and a small pond nearby. The panda's white paws with black claws should be visible as it sits atop a stone pedestal, perhaps with some cherry blossom petals scattered around it. The overall scene should have a peaceful and serene atmosphere.\nVisual Feature Label word: \"Lemur\" -Large, forward-facing eyes with binocular vision; Soft, thick fur covering the body -Long, bushy tail; Striking coloration patterns such as black and white rings -Slender fingers and toes with opposable thumbs for grasping and climbing; Prominent ears that can be pointed or rounded with tufts of fur -Wet, reflective nose; Rounded head with a shortened snout and large ears -Relatively small body size, typically weighing between 2 and 5 kilograms; Distinctive vocalization or call that can vary between species and subspecies Scene Imagination Prompt \"role\": \"system\", \"content\": \"The AI assistant is a professional data specialist. The AI assistant can imagine different real scenes according to the input caption and return the background and closely related foreground objects in a specific format. Note that the background should be described with a single word or phrase, and each background contains five main objects.\" \"role\": \"user\", \"content\": \"Give me scene number real scene descriptions based on the context caption. The scene objects should consist exist objects, and also contain five additional objects associated with the background. Each scene description is a complex sentence containing the above objects. Return the result in the following format: 'background':[], 'objects':[], 'description':[]. 
Only return the result.\"" }, { "figure_ref": [], "heading": "Box Candidates Generation", "publication_ref": [], "table_ref": [], "text": "Prompt \"Please make {prompt number} possible prediction of the remaining box coordinates with different box size based on the dense description \"{caption}\". Note that the image size is (512,512), and the existing box coordinates are [{existing boxes info}]. Based on the layout of the objects, predict the possible number reasonable box coordinates of the following objects {target objects}. The size of the {target objects} box should be based on the category and the size of other object boxes, and the width and height of the box should be greater than 75 and less than 300. Only return each result in the following format: \"label\":, \"box\":, \"relationship\":\" Demonstration \"{caption}\": 'there is a dog sitting on a bench in a field.' \"{existing boxes info}\": \"value\": 1, \"label\": \"bench\", \"logit\": 0.84, \"box\": [33. 93, 224.34, 463.20, 491.01], \"value\": 2, \"label\": \"dog\", \"logit\": 0.43, \"box\": [175. 71, 116.29, 311.58, 367.13] Return Results: \"label\": 'cat', \"box\": [343. 23, 176.29, 467.23, 353.13], \"relationship\": 'sitting next to the dog.' Table 1. The details of the prompt design in ChatGenImage. There are injectable slots in the prompts, such as Caption, Visual Feature, and Existing Boxes Info. These slots imply visual perceptions that help LLM building multimodal awareness and are uniformly replaced with the contents from visual foundation models before being fed into the LLM." } ]
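As a usage illustration of the Visual Descriptor prompt in Table 1, the sketch below sends it to gpt-3.5-turbo with temperature 0 through the pre-1.0 openai Python client; the wrapper name and response handling are ours, not part of the released pipeline.

```python
import openai  # pre-1.0 client interface, matching the gpt-3.5-turbo setup described in the Setting

SYSTEM_PROMPT = (
    "You are an expert in the field of vision and graphics, please fully consider the input "
    "concept or topic, give me the most important fine-grained visual features of the input "
    "concept or category based on the Wikipedia. Only give me several phrases or keywords "
    "as more as possible."
)


def visual_descriptor(category: str) -> str:
    """Query the Table 1 Visual Descriptor prompt for a single label word."""
    user_prompt = (
        f"Q: What are useful visual features for distinguishing a {category} in a photo? "
        f"A: There are several useful visual features to tell there is a {category} in a photo:"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": user_prompt}],
        temperature=0,  # deterministic output, as in the Setting section
    )
    return response["choices"][0]["message"]["content"]
```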
Scene Imagination: A colorful rug anchors a stylish bedroom with artwork and curtains. A colorful rug anchors a stylish bedroom with a comfortable sofa and a coffee table with vases. Color-coordinated throw pillows and artwork on the walls as well as chandeliers, curtains and dressers tie the rooms together.
Interactive Data Synthesis for Systematic Vision Adaptation via LLMs-AIGCs Collaboration
[ { "figure_caption": "Caption: In a peaceful park with tall trees, a white dog rests on a bench while a beautiful butterfly flies above. A nearby cat soaks up the sun lazily.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "➢Figure 2 .2Figure 2. Language as a bridge for LLMs (e.g. ChatGPT) and AIGC models (e.g. Stable Diffusion) can iteratively control image generation and automatic labeling. The LLM first generates global prompts to guide AIGC models in generating initialization images, then iteratively refines them using automatically generated fine-grained annotations as local constraint prompts to produce diverse and complex scenes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 11ChatGenImage Pipeline 1: Initialization: Label w 0 , Init Image I 0 = SD(w 0 ) 2: Caption c = BLIP(I 0 ) 3: Object Boxes b = GroundingDINO(I 0 ) 4: Segment Masks m = SAM(I 0 ), iterative i 5: Image = {(I 0 , c, b, m)} 6: repeat 7: i ← i -1 8: Scenes = ChatGPT(c, b, m), n ←length(Scenes) I= Editing(I 0 , Scene[ background ]) 12: Update I= Filling(I 0 , Scene[ objects ]) 13: Update c, b, m by 'Label Foundation Toolkit' 14: Update Image = Image ∪ {(I, c, b, m)} if not Filter(I) 15: end for 16: until i == 0 Output: Image = {(I i , c i , b i , m i )} editing based on both global and local prompts. The total process is shown in Algorithm 1. Besides, we use image filtering rules to figure out those representative samples as valid results and utilize the label foundation toolkit to get high-quality annotations for downstream tasks.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Visualization results of Iteratively Local Refinement and Labeling. It contains three steps: 1) Background Imagination, 2) Iteratively Object Filling, 3) Label anything in the image via visual foundation models.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Qualitative analysis of object-centric image generation and complex scene description with multiple background and objects.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization results of ChatGenImage for object-centric image generation.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Visualization results of ChatGenImage for complex scene imagination.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Please make {prompt number} possible prediction of the remaining box coordinates with different box size based on the dense description \"{caption}\". Note that the image size is (512,512), and the existing box coordinates are [{existing boxes info}]. 
Based on the layout of the objects, predict the possible number reasonable box coordinates of the following objects {target objects}……Only return each result in the following", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Simple Caption: there is a red cabin in the woods with a wooden deck.", "figure_data": "Human InputLabel word: houseChatGPT Imagination Scene InitializationAICG Iteratively Local Editing & Toolkit LabelingA charming red cabin perched onmountains in the shadow snow-capped trees, with a sweepingview of the surroundingevergreen forest. A rushing riverwinds its way through the valleybelow, and a small rock in anearby meadow with cloud soarin a Mountainoverhead.Add SnowAdd Snow-capped treeAdd CloudAdd RockLabels & BoxesA cozy red cabin nestled in theheart of a dense forest,surrounded by towering trees andthe gentle murmur of a nearbyriver. The wooden deck offers aperfect spot to relax, with a stonepath leading deep into thepeaceful setting. Close by, therein a Forestwas a moss-covered rock.Add LeafAdd Cobbled RoadAdd StreamAdd Green TreesLabels & BoxesA cozy red cabin nestled in apeaceful meadow, surrounded bycolorful wildflowers andfluttering butterflies. Rolling hillsstretch out in the distance, and aflock of sheep grazes contentedlynearby. A rustic windmill adds tothe pastoral charm of the scene.in a MeadowAdd CowsAdd Butterfly & SheepAdd WildflowersAdd WindmillLabels & BoxesA rustic red cabin situated on theedge of a tranquil lake, with awooden deck extending out overthe water. Boats bob gentlynearby, and fish occasionallybreak the surface. Birds flit about,and a small dock offers a spot tocast a line or simply enjoy thein a Lakepeaceful scenery.Add RockAdd BirdsAdd DucksAdd BoatsLabels & Boxes", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Qifan Yu; Juncheng Li; Wentao Ye; Siliang Tang; Yueting Zhuang
[ { "authors": "Omri Avrahami; Ohad Fried; Dani Lischinski", "journal": "", "ref_id": "b0", "title": "Blended latent diffusion", "year": "2022" }, { "authors": "Omri Avrahami; Dani Lischinski; Ohad Fried", "journal": "", "ref_id": "b1", "title": "Blended diffusion for text-driven editing of natural images", "year": "2022" }, { "authors": "Hritik Bansal; Aditya Grover", "journal": "", "ref_id": "b2", "title": "Leaving reality to imagination: Robust classification via generated datasets", "year": "2023" }, { "authors": "Yasser Benigmim; Subhankar Roy; Slim Essid; Vicky Kalogeiton; Stéphane Lathuilière", "journal": "", "ref_id": "b3", "title": "One-shot unsupervised domain adaptation with personalized diffusion models", "year": "2023" }, { "authors": "Tim Brooks; Aleksander Holynski; Alexei A Efros", "journal": "", "ref_id": "b4", "title": "Instructpix2pix: Learning to follow image editing instructions", "year": "2022" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg", "journal": "", "ref_id": "b5", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "Jaehoon Choi; Taekyung Kim; Changick Kim", "journal": "", "ref_id": "b6", "title": "Selfensembling with gan-based data augmentation for domain adaptation in semantic segmentation", "year": "2019" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b7", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Lisa Dunlap; Clara Mohri; Devin Guillory; Han Zhang; Trevor Darrell; Joseph E Gonzalez; Aditi Raghunanthan; Anja Rohrbach", "journal": "", "ref_id": "b8", "title": "Using language to extend to unseen domains", "year": "2022" }, { "authors": "Ross Girshick", "journal": "", "ref_id": "b9", "title": "Fast r-cnn", "year": "2015" }, { "authors": "Tanmay Gupta; Aniruddha Kembhavi", "journal": "", "ref_id": "b10", "title": "Visual programming: Compositional visual reasoning without training", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b11", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Ruifei He; Shuyang Sun; Xin Yu; Chuhui Xue; Wenqing Zhang; Philip Torr; Song Bai; Xiaojuan Qi", "journal": "", "ref_id": "b12", "title": "Is synthetic data from generative models ready for image recognition", "year": "2022" }, { "authors": "Dan Hendrycks; Steven Basart; Norman Mu; Saurav Kadavath; Frank Wang; Evan Dorundo; Rahul Desai; Tyler Zhu; Samyak Parajuli; Mike Guo", "journal": "", "ref_id": "b13", "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "year": "2021" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "Advances in neural information processing systems", "ref_id": "b14", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Alain Hore; Djemel Ziou", "journal": "IEEE", "ref_id": "b15", "title": "Image quality metrics: Psnr vs. 
ssim", "year": "2010" }, { "authors": "Ali Jahanian; Xavier Puig; Yonglong Tian; Phillip Isola", "journal": "", "ref_id": "b16", "title": "Generative models as a data source for multiview representation learning", "year": "2021" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b17", "title": "Segment anything", "year": "2023" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "Communications of the ACM", "ref_id": "b18", "title": "Imagenet classification with deep convolutional neural networks", "year": "2017" }, { "authors": "Ananya Kumar; Aditi Raghunathan; Robbie Jones; Tengyu Ma; Percy Liang", "journal": "", "ref_id": "b19", "title": "Fine-tuning can distort pretrained features and underperform out-of-distribution", "year": "2022" }, { "authors": "Juncheng Li; Xin He; Longhui Wei; Long Qian; Linchao Zhu; Lingxi Xie; Yueting Zhuang; Qi Tian; Siliang Tang", "journal": "", "ref_id": "b20", "title": "Fine-grained semantically aligned vision-language pre-training", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b21", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Shaobo Lin; Kun Wang; Xingyu Zeng; Rui Zhao", "journal": "", "ref_id": "b22", "title": "Explore the power of synthetic data on few-shot object detection", "year": "2023" }, { "authors": "Shilong Liu; Zhaoyang Zeng; Tianhe Ren; Feng Li; Hao Zhang; Jie Yang; Chunyuan Li; Jianwei Yang; Hang Su; Jun Zhu", "journal": "", "ref_id": "b23", "title": "Grounding dino: Marrying dino with grounded pre-training for open-set object detection", "year": "2023" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b24", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b25", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b26", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b27", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Tim Salimans; Ian Goodfellow; Wojciech Zaremba; Vicki Cheung; Alec Radford; Xi Chen", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Improved techniques for training gans", "year": "2016" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "", 
"ref_id": "b30", "title": "It's not just size that matters: Small language models are also few-shot learners", "year": "2020" }, { "authors": "Tristan Sylvain; Pengchuan Zhang; Yoshua Bengio; Devon Hjelm; Shikhar Sharma", "journal": "", "ref_id": "b31", "title": "Object-centric image generation from layouts", "year": "2021" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b32", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b33", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b34", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Chenfei Wu; Shengming Yin; Weizhen Qi; Xiaodong Wang; Zecheng Tang; Nan Duan", "journal": "", "ref_id": "b35", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "Yuxuan Zhang; Huan Ling; Jun Gao; Kangxue Yin; Jean-Francois Lafleche; Adela Barriuso; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b36", "title": "Datasetgan: Efficient labeled data factory with minimal human effort", "year": "2021" } ]
[]
2023-05-22
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b6", "b7", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26" ], "table_ref": [], "text": "Iris recognition systems have been widely used due to the uniqueness and stability of iris patterns. However, the vulnerability to presentation attacks brings security threats to these systems. To tackle this issue, iris presentation attack detection (PAD) has been introduced to ensure the reliability of the recognition systems [1]. [2][3][4][5] divide iris images into multiple patches to enhance patch-wise attack information, in order to exploit the intricate texture patterns of the iris. More recently, attention mechanism [6][7][8] has been introduced for a more interpretable focus on attack-related features.\nDespite the remarkable success, most iris PAD works focus on intra-dataset evaluations [7,8], which assume training and testing samples are from similar distributions. This assumption may not always hold in practical scenarios. The potential discrepancies are from capturing equipment, illuminations, background, etc. To solve this problem, domain adaptation (DA) [8][9][10] and domain generalization (DG) [11][12][13] techniques have been extensively studied in the PAD field. DA aims to minimize the domain discrepancy between a labeled source domain and an unlabeled target domain, while DG learns to generalize to an unknown target domain with multiple source domains. Conventional DG methods align features to a domain-agnostic space [14,15] so that the shared space can generalize well even on the unseen domains. For a more realistic scenario single domain generalization (Single-DG), data generation is used to enlarge the distribution shifts and learn the domaininvariant representations [16,17]. Although domain-invariant features are common among different domains, the neglected domainspecific features could still promote performance in each domain. Concretely, each domain, or each sample, has its unique characteristics and can be regarded as a latent space. Forced invariance among latent spaces strengthens the model's generalization ability on unseen domains, yet it inevitably discards discriminative information of individual latent spaces that could have facilitated visual tasks.\nIn general, we usually have images from only one single domain in real-world iris PAD applications. To fully utilize the complementary information for generalized detection with a single domain, we propose a novel Single Domain Dynamic Generalization (SDDG) framework to dynamically extract the domain-invariant and domainspecific features. To the best of our knowledge, this is the first work to exploit dynamic generalization for Single-DG. The domaininvariant part is achieved by a common convolution followed by Instance Normalization (IN) [18], which is invariant to appearance changes [19]. The domain-specific one is formed by multiple convolutions with a dynamic adaptor. The dynamic adaptor predicts the weights of each convolution in a sample-wise manner and is further combined with information maximization [20,21] to encourage disparate representations. 
Thus, our dynamic block is capable of adapting to the characteristics of each sample, and both domain-invariant and domain-specific features are used to improve the generalization.\nNevertheless, common DG paradigm requires the availability of data from multiple source domains, whereas Single-DG encounters a more realistic challenge with only one single source domain available [22]. Although PAD samples from different domains are not available in Single-DG, we can easily get access to a large number of natural images from common datasets, such as ImageNet [23] and COCO [24]. These natural images cover adequate diverse scenes and can benefit the generalization ability.\nIn order to best take advantage of diverse distributions from numerous natural images, we propose a meta-learning based paradigm and learn to generalize on the various unseen domains. We first conduct normal training based on source images in the meta-train phase. Then we learn to generalize to the perturbed images in the meta-test phase. Specifically, we perturb the original image based on Fourier transformation with natural images. Fourier transformation has the property that the phase component of Fourier spectrum preserves high-level semantics of the original signal, while the amplitude component contains low-level statistics [25]. Following [26], we conduct the perturbation through Mixup [27] in the amplitude spectrum and generate diverse images. Moreover, we aim to adaptively learn domain-invariant and domain-specific representations on unseen domains and correctly adjust the sample-wise weights with the dynamic adaptor. Hence, instead of directly applying vanilla " }, { "figure_ref": [ "fig_0" ], "heading": "PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "Given a training set from a single domain, the proposed Single Domain Dynamic Generalization (SDDG) framework adapts network parameters on a per-sample basis and generalizes to diverse domains with natural images. To leverage the domain-specific features as the complementation to the domain-invariant ones, we design a novel dynamic block, which contains two branches. The domain-invariant branch diminishes instance style with Instance Normalization (IN) and reduces domain discrepancy. The domain-specific branch has multiple convolutions and utilizes the dynamic adaptor to adjust the sample-wise weights for its convolutions. The whole network is composed of a feature extractor (denoted as F), a dynamic block (denoted as D), and a classifier (denoted as C), and it is integrated into the meta-learning paradigm. We generate the amplitude perturbed images through Fourier transformation and cover diverse domains with natural images. The various domains seen in the metatest phase facilitate the generalization ability of the network. The overall SDDG framework is illustrated in Fig. 1." }, { "figure_ref": [ "fig_1" ], "heading": "Dynamic Block", "publication_ref": [ "b18", "b28", "b19", "b20" ], "table_ref": [], "text": "Given a feature map F ∈ R C×H×W of the input image X, dynamic block decomposes it to the domain-invariant and domain-specific part at the same time. The whole diagram is shown in Fig. 2. It has two branches. The invariant branch is composed of a common convolution with IN, while the specific one contains multiple convolutions with a dynamic adaptor varying for different samples. Domain-Invariant Branch. 
We combine a common convolution with IN to remove instance-specific characteristics and increase generalization ability of domain-invariant branch. Instead of adopting BN to normalize activations in a mini-batch, IN discards instance-specific information and has demonstrated its capacity in domain generalization [19,29]. Thus, we learn the domain-invariant feature by:\nFinv = ReLU(IN(f 3×3 (F ))),(1)\nwhere f 3×3 represents a convolution operation with the filter size of 3 × 3 and ReLU is used as an activation function. Domain-Specific Branch. Vanilla convolution has a drawback that it has fixed parameters and easily degrades on the unseen domains. To tackle this problem, we introduce a dynamic adaptor to automatically adapt among multiple convolutions. Each sample is regarded as a latent domain and the dynamic adaptor adjusts the network parameters accordingly.\nTo be specific, the branch has K convolutions in total and the dynamic adaptor predicts the weights W ∈ R K for each convolution based on the input feature map:\nW = d(F ),(2)\nwhere d is the dynamic adaptor with Pooling-FC-ReLU-FC-Softmax structure. The domain-specific feature is the linear combination of K convolutions with dynamic weights:\nFspec = K k=1 w k • f 3×3 k (F ).(3)\nTo further increase the diversity of different convolutions, we adopt the Information Maximization (IM) loss LIM [20,21] to maximize the mutual information between inputs and dynamic weights. It is achieved by entropy minimization Lent and diversity regularization L div of the dynamic adaptor predictions:\nLent(θ F , θ D(X) ) = -E {W i } N i=1 K k=1 w i k log w i k , L div (θ F , θ D(X) ) = K k=1 ŵk log ŵk = DKL( ŵ, 1 K 1K ) -log K, LIM (θ F , θ D(X) ) = Lent(θ F , θ D(X) ) + L div (θ F , θ D(X) ),(4)\nwhere θ F and θ D(X) are the parameters of the feature extractor and dynamic block, W i is the K-dimensional weights of the input sample Xi from the dynamic adaptor, w k is the k-th dimension of W, and\nŵ = E {W i } N i=1 [w i ]\nis the mean weights of N samples. As a consequence, the domain-specific branch can dynamically adapt networks in a sample-wise manner and learn more disparate representations via information maximization.\nThe final dynamic feature is obtained by simply combing the above two features. Our dynamic block ensures domain invariance with IN and adjusts to sample-wise specificity with the dynamic adaptor. Therefore, both domain-invariant and domain-specific features are well represented to facilitate the generalization ability." }, { "figure_ref": [], "heading": "Meta-Learning Optimization", "publication_ref": [ "b22", "b24", "b25", "b26", "b29" ], "table_ref": [], "text": "Single domain generalization is a more realistic but challenging problem for iris PAD. The model needs to generalize to diverse domains with single domain samples. Although data from other domains are not available, we can easily get access to numerous nature images from ImageNet [23]. Thus, we perturb source images through Fourier transformation to imitate the distribution changes in different domains. To empower networks to generalize on the unseen domains, we propose a meta-learning based paradigm. The learning algorithm consists of two phases: meta-train and meta-test. We first conduct normal training based on the single source domain S in the meta-train phase. Then we learn to generalize to the diverse images from perturbed source domains S + in the meta-test phase. The whole process is summarized in Algorithm 1.\nMeta-Train. 
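To make the dynamic block concrete, the following is a minimal PyTorch sketch of the two branches and the IM loss described above. It assumes the invariant and specific features are combined by summation and uses a hidden width of channels/4 inside the adaptor; these details, as well as all class and function names, are our own illustration and are not taken from the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicBlock(nn.Module):
    """Sketch of the dynamic block: a domain-invariant branch (conv + IN) plus a
    domain-specific branch of K parallel convolutions mixed by a dynamic adaptor."""

    def __init__(self, channels: int, num_kernels: int = 3):
        super().__init__()
        # Domain-invariant branch: conv -> InstanceNorm -> ReLU (Eq. 1)
        self.inv_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.inv_norm = nn.InstanceNorm2d(channels)
        # Domain-specific branch: K parallel 3x3 convolutions (Eq. 3)
        self.spec_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_kernels)]
        )
        # Dynamic adaptor: Pooling-FC-ReLU-FC-Softmax predicting K sample-wise weights (Eq. 2)
        self.adaptor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, num_kernels),
            nn.Softmax(dim=1),
        )

    def forward(self, feat):
        f_inv = F.relu(self.inv_norm(self.inv_conv(feat)))
        w = self.adaptor(feat)                      # (N, K) weights per sample
        f_spec = sum(
            w[:, k].view(-1, 1, 1, 1) * conv(feat)
            for k, conv in enumerate(self.spec_convs)
        )
        return f_inv + f_spec, w                    # combined feature + dynamic weights


def information_maximization_loss(weights, eps: float = 1e-8):
    """IM loss (Eq. 4): per-sample entropy minimization plus a batch diversity term."""
    ent = -(weights * torch.log(weights + eps)).sum(dim=1).mean()
    mean_w = weights.mean(dim=0)
    div = (mean_w * torch.log(mean_w + eps)).sum()  # = KL(mean_w, uniform) - log K
    return ent + div
```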
During the meta-train phase, we randomly sample batches on the single source domain S and conduct classification based on cross-entropy loss:\nL Cls(S) (θ F , θ D , θ C ) = -E (X i ,Y i )∼S C c=1 Y i c log C(D(F(X i )))c,(5)\nwhere θ C is the parameters of the classifier. The gradient of θ D is calculated with respect to L Cls , and we update the parameters as follows:\nθ D = θ D -α∇ θ D L Cls(S) (θ F , θ D , θ C ).(6)\nBesides, the dynamic block D is optimized using Eq. 4. Note that we only use IM loss in the meta-train phase, therefore, the dynamic adaptor can adaptively combine different convolutions according to the characteristics of each sample in meta-test.\nMeta-Test. In order to imitate the domain shifts from various target domains, we perturb the source images with ImageNet natural images. As the phase component of Fourier spectrum has the semantic-preserving property [25,26], we conduct perturbation through Mixup [27] in the amplitude spectrum. Given a source image Xs and a natural image Xn, we first apply Fourier transformation F with the FFT algorithm [30] and get the corresponding amplitude components A(X) and phase components P(X). Then the amplitude spectrum of A(Xs) is perturbed by A(Xn) through linear interpolation:\nÂ(Xs) = (1 -λ)A(Xs) + λA(Xn),(7)\nwhere λ ∼ U (0, η) and η controls the strength of the perturbation. The perturbed amplitude spectrum Â(Xs) is combined with the original phase spectrum P(Xs). Finally, we generate the perturbed image X s + through inverse Fourier transformation F -1 .\nIn the meta-test phase, we first sample batches on the single source domain S and then perturb them to the extended domains S + . The meta-test evaluation simulates testing on domains with different distributions, and we encourage our model trained on a single domain to generalize well on the unseen domains. Thus, cross-entropy classification loss is minimized on the perturbed domain S + :\nL Cls(S + ) (θ F , θ D , θ C ) = -E (X i ,Y i )∼S + C c=1 Y i c log C(D(F(X i )))c.(8)\nMeta-Optimization. The meta-train and meta-test are optimized simultaneously. We jointly train the three modules in our network by: Evaluate L Cls(S + ) (θ F , θ D , θ C ) using Eq. 8" }, { "figure_ref": [], "heading": "7:", "publication_ref": [], "table_ref": [], "text": "Meta-optimization: Update θ F , θ D , θ C using Eq. 9 8: end while\nθ C ← θ C -β∇ θ C (L Cls(S) (θ F , θ D , θ C ) + L Cls(S + ) (θ F , θ D , θ C )), θ D ← θ D -β∇ θ D (L Cls(S) (θ F , θ D , θ C ) + µL IM (S) (θ F , θ D ) + L Cls(S + ) (θ F , θ D , θ C )), θ F ← θ F -β∇ θ F (L Cls(S) (θ F , θ D , θ C ) + µL IM (S) (θ F , θ D ) + L Cls(S + ) (θ F , θ D , θ C )).(9)\n3. EXPERIMENTS" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b30", "b25" ], "table_ref": [], "text": "Datasets. We evaluate our method on the LivDet-Iris 2017 dataset [31], which consists of 4 different datasets. However, the Warsaw dataset is no longer publicly available, so the experiments are based on the remaining Clarkson, Notre Dame, and IIITD-WVU datasets. Presentation attacks include printed iris images, patterned contact lenses, and printouts of patterned contact lenses. Implementation Details. The input image is grayscale and of 200 × 200 size. Random cropping is performed in the training phase. We adopt ResNet18 as backbone and initial it with ImageNet pre-trained model. The learning rates α, β are set as 1e-3. Following [26], the perturbation strength η is set to 1. 
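The amplitude perturbation of Eq. 7 can be sketched in a few lines of NumPy, assuming single-channel inputs as used in the experiments; the function name and the clipping to an 8-bit value range are our own choices.

```python
import numpy as np


def amplitude_mixup(x_src: np.ndarray, x_nat: np.ndarray, eta: float = 1.0) -> np.ndarray:
    """Perturb a source image by mixing its Fourier amplitude with that of a
    natural image while keeping the source phase (Eq. 7)."""
    lam = np.random.uniform(0.0, eta)

    fft_src = np.fft.fft2(x_src)
    fft_nat = np.fft.fft2(x_nat)

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_nat = np.abs(fft_nat)

    # Linear interpolation of the amplitude spectra, phase kept from the source
    amp_mix = (1.0 - lam) * amp_src + lam * amp_nat
    fft_mix = amp_mix * np.exp(1j * pha_src)

    x_pert = np.real(np.fft.ifft2(fft_mix))
    # Assuming 8-bit grayscale input; adjust the range if images are normalized
    return np.clip(x_pert, 0.0, 255.0)
```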
The convolution number K of the dynamic branch and the balance weight µ of the IM loss are 3 and 1.\nEvaluation Metrics. For single domain generalization, we use a single dataset for training and conduct evaluations in the rest. Half Total Error Rate (HTER) is employed to measure the performance." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Effect of different components. Our method contains three different components: dynamic block, IM loss LIM , and meta-learning. Table 1 shows the contribution of each component in the proposed SDDG. We can observe all the components promote the average HTER. Adding dynamic block to the vanilla backbone already boosts the performance both without meta-learning and with it. It verifies that the dynamic representations are effective by leveraging domain-invariant and domain-specific information. Further incorporation of LIM shows additional improvement with more disparate features. The best and the second best performance of each dataset are almost achieved under meta-learning optimization, which validates the importance of the meta-learning paradigm for Single-DG. The combination of all the components gives best average HTER 12.90%, outperforming baseline by 11.01%.\nSensitivity of hyperparameters. To validate the significance of the convolution number K in the dynamic branch and balance weight µ of the IM loss, we conduct sensitivity analysis of the hyperparameters. Fig. 3 shows the average HTER in single domain generalization by varying K ∈ {2, 3, 4, 5}, µ ∈ {0, 0.5, 1.0, 1.5, 2.0}.\nThe performance of varying convolution numbers is generally stable varying within 1%, and best performance is obtained with K = 3. While introducing LIM is beneficial for dynamic learning, the moderate µ provides more promising results." }, { "figure_ref": [], "heading": "Visualization", "publication_ref": [ "b31" ], "table_ref": [], "text": "In order to investigate how our dynamic block adapts according to samples, we visualize the dynamic weight W with t-SNE [32]. We can observe that it learns to differentiate attack classes even when the whole network is only trained with binary class labels. It confirms that our dynamic block is capable of adjusting based on the characteristics of each sample even in the challenging Single-DG scenario." }, { "figure_ref": [], "heading": "Comparison with State-of-the-Art Methods", "publication_ref": [ "b6", "b6", "b27" ], "table_ref": [ "tab_1" ], "text": "As shown in Table 2, our method outperforms all the state-of-the-art iris PAD methods. PBS [7] and A-PBS [7] method. Besides, we implement MLDG [28] to compare the vanilla meta-learning. With a single source domain available, we randomly sample batches on domain S for both meta-train and meta-test, and meta-optimization is conducted in the classifier. To match the number of parameters increased by the dynamic block, we add an additional convolution to backbone ResNet18 after the feature extractor. The performance is much worse than the proposed SDDG. We further adapt it with perturbed domains S + for meta-test, which is referred to as MLDG+. As we can see, MLDG learns to generalize much better with S + but is still inferior to the proposed SDDG, which also validates the efficacy of our meta-learning based dynamic framework for Single-DG." 
}, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel single domain dynamic generalization framework for iris presentation detection. Unlike previous methods, SDDG can dynamically adapt based on the characteristics of each sample and exploit domain-invariant and domain-specific features simultaneously. Further incorporation of information maximization loss encourages more disparate representations. Having images perturbed with numerous natural images, the meta-learning paradigm empowers the network to generalize on various unseen domains. Comprehensive experiments validate the effectiveness of the proposed method for single domain generalization." } ]
Iris presentation attack detection (PAD) has achieved great success under intra-domain settings but easily degrades on unseen domains. Conventional domain generalization methods mitigate the gap by learning domain-invariant features. However, they ignore the discriminative information in the domain-specific features. Moreover, we usually face a more realistic scenario with only one single domain available for training. To tackle the above issues, we propose a Single Domain Dynamic Generalization (SDDG) framework, which simultaneously exploits domain-invariant and domain-specific features on a per-sample basis and learns to generalize to various unseen domains with numerous natural images. Specifically, a dynamic block is designed to adaptively adjust the network with a dynamic adaptor, and an information maximization loss is further incorporated to increase diversity. The whole network is integrated into the meta-learning paradigm. We generate amplitude-perturbed images and cover diverse domains with natural images. Therefore, the network can learn to generalize to the perturbed domains in the meta-test phase. Extensive experiments show the proposed method is effective and outperforms the state of the art on the LivDet-Iris 2017 dataset.
SINGLE DOMAIN DYNAMIC GENERALIZATION FOR IRIS PRESENTATION ATTACK DETECTION
[ { "figure_caption": "Fig. 1 .1Fig. 1. Overview of the proposed SDDG framework.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Illustration of dynamic block.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 2 : 4 : 5 :1245Meta Learning of SDDG Require: Input: A single source domain S Initialization: Model parameters θ F , θ D , θ C . Hyperparameters: Learning rate α and β, perturbation strength η, loss balance µ. 1: while not done do Meta-train: Sample batch on the source domain S 3:Evaluate L Cls(S) (θ F , θ D , θ C ) using Eq. 5 Compute θ D using Eq. 6Meta-test: Sample batch on S and perturb it to S + 6:", "figure_data": "", "figure_id": "fig_2", "figure_label": "1245", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Evaluation of hyperparameters in Single-DG.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig.4. The t-SNE[32] visualization of dynamic weights for all samples from the LivDet-Iris 2017 dataset when SDDG is trained on (a) IIITD-WVU, (b) NotreDame, (c) Clarkson. Note that NotreDame dataset only has contact lens attack, so the corresponding dynamic weights between contact attack and print attack are less discriminative than the others.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Ablation study on different components of SDDG.", "figure_data": "Dynamic BlockL IMMeta-LearningIIITD-WVU NotreDame ClarksonNotreDame IIITD-WVU Clarkson IIITD-WVU NotreDame ClarksonAverage---7.3345.6920.9711.2329.4628.8323.92--10.8342.7515.6314.9526.7017.8121.44-7.9428.8915.8412.2225.9520.3918.54-9.0622.5017.8413.1617.4511.3315.226.0319.3616.6910.2016.688.4712.90", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison to existing SoTA methods on LivDet-Iris 2017 dataset under cross-dataset settings. † denotes the reimplemented results.", "figure_data": "Trained Dataset Tested DatasetIIITD-WVU NotreDame ClarksonNotreDame IIITD-WVU Clarkson IIITD-WVU NotreDame ClarksonAveragePBS [7]16.8647.1717.4945.3142.4832.4233.62A-PBS [7]27.6121.999.4922.4634.1723.0823.13D-NetPAD [6] †11.5843.6619.478.8627.4417.3121.39FAM+FMM [8]5.8126.0315.0710.5122.0620.9216.73MLDG [28] †8.9242.269.7811.3627.8725.3920.93MLDG+10.7817.1018.3318.3419.4110.3915.73SDDG6.0319.3616.6910.2016.688.4712.90", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Yachun Li; Jingjing Wang; Yuhui Chen; Di Xie; Shiliang Pu
[ { "authors": "Aidan Boyd; Zhaoyuan Fang; Adam Czajka; Kevin W Bowyer", "journal": "Pattern Recognition Letters", "ref_id": "b0", "title": "Iris presentation attack detection: Where are we now?", "year": "2020" }, { "authors": "Lingxiao He; Haiqing Li; Fei Liu; Nianfeng Liu; Zhenan Sun; Zhaofeng He", "journal": "", "ref_id": "b1", "title": "Multi-patch convolution neural network for iris liveness detection", "year": "2016" }, { "authors": "Ramachandra Raghavendra; Christoph Kiran B Raja; Busch", "journal": "", "ref_id": "b2", "title": "Contlensnet: Robust iris contact lens detection using deep convolutional neural networks", "year": "2017" }, { "authors": "Steven Hoffman; Renu Sharma; Arun Ross", "journal": "", "ref_id": "b3", "title": "Convolutional neural networks for iris presentation attack detection: Toward cross-dataset and cross-sensor generalization", "year": "2018" }, { "authors": "Meiling Fang; Naser Damer; Florian Kirchbuchner; Arjan Kuijper", "journal": "IJCB", "ref_id": "b4", "title": "Micro stripes analyses for iris presentation attack detection", "year": "2020" }, { "authors": "Renu Sharma; Arun Ross", "journal": "IJCB", "ref_id": "b5", "title": "D-netpad: An explainable and interpretable iris presentation attack detector", "year": "2020" }, { "authors": "Meiling Fang; Naser Damer; Fadi Boutros; Florian Kirchbuchner; Arjan Kuijper", "journal": "", "ref_id": "b6", "title": "Iris presentation attack detection by attention-based and deep pixel-wise binary supervision network", "year": "2021" }, { "authors": "Yachun Li; Ying Lian; Jingjing Wang; Yuhui Chen; Chunmao Wang; Shiliang Pu", "journal": "", "ref_id": "b7", "title": "Few-shot one-class domain adaptation based on frequency for iris presentation attack detection", "year": "2022" }, { "authors": "Jingjing Wang; Jingyi Zhang; Ying Bian; Youyi Cai; Chunmao Wang; Shiliang Pu", "journal": "", "ref_id": "b8", "title": "Self-domain adaptation for face antispoofing", "year": "2021" }, { "authors": "Qianyu Zhou; Ke-Yue Zhang; Taiping Yao; Ran Yi; Kekai Sheng; Shouhong Ding; Lizhuang Ma", "journal": "", "ref_id": "b9", "title": "Generative domain adaptation for face anti-spoofing", "year": "2022" }, { "authors": "Rui Shao; Xiangyuan Lan; Pong C Yuen", "journal": "", "ref_id": "b10", "title": "Regularized fine-grained meta face anti-spoofing", "year": "2020" }, { "authors": "Yunpei Jia; Jie Zhang; Shiguang Shan; Xilin Chen", "journal": "", "ref_id": "b11", "title": "Single-side domain generalization for face anti-spoofing", "year": "2020" }, { "authors": "Zhuo Wang; Zezheng Wang; Zitong Yu; Weihong Deng; Jiahong Li; Tingting Gao; Zhongyuan Wang", "journal": "", "ref_id": "b12", "title": "Domain generalization via shuffled style assembly for face anti-spoofing", "year": "2022" }, { "authors": "Eric Tzeng; Judy Hoffman; Ning Zhang; Kate Saenko; Trevor Darrell", "journal": "", "ref_id": "b13", "title": "Deep domain confusion: Maximizing for domain invariance", "year": "2014" }, { "authors": "Yaroslav Ganin; Evgeniya Ustinova; Hana Ajakan; Pascal Germain; Hugo Larochelle; Mario Franc ¸ois Laviolette; Victor Marchand; Lempitsky", "journal": "The journal of machine learning research", "ref_id": "b14", "title": "Domain-adversarial training of neural networks", "year": "2016" }, { "authors": "Lei Li; Ke Gao; Juan Cao; Ziyao Huang; Yepeng Weng; Xiaoyue Mi; Zhengze Yu; Xiaoya Li; Boyang Xia", "journal": "", "ref_id": "b15", "title": "Progressive domain expansion network for single domain generalization", "year": "2021" }, { "authors": "Zijian 
Wang; Yadan Luo; Ruihong Qiu; Zi Huang; Mahsa Baktashmotlagh", "journal": "", "ref_id": "b16", "title": "Learning to diversify for single domain generalization", "year": "2021" }, { "authors": "Dmitry Ulyanov; Andrea Vedaldi; Victor Lempitsky", "journal": "", "ref_id": "b17", "title": "Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis", "year": "2017" }, { "authors": "Xingang Pan; Ping Luo; Jianping Shi; Xiaoou Tang", "journal": "", "ref_id": "b18", "title": "Two at once: Enhancing learning and generalization capacities via ibn-net", "year": "2018" }, { "authors": "Weihua Hu; Takeru Miyato; Seiya Tokui; Eiichi Matsumoto; Masashi Sugiyama", "journal": "", "ref_id": "b19", "title": "Learning discrete representations via information maximizing self-augmented training", "year": "2017" }, { "authors": "Jian Liang; Dapeng Hu; Jiashi Feng", "journal": "", "ref_id": "b20", "title": "Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation", "year": "2020" }, { "authors": "Fengchun Qiao; Long Zhao; Xi Peng", "journal": "", "ref_id": "b21", "title": "Learning to learn single domain generalization", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b22", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b23", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Alan V Oppenheim; Jae S Lim", "journal": "Proceedings of the IEEE", "ref_id": "b24", "title": "The importance of phase in signals", "year": "1981" }, { "authors": "Qinwei Xu; Ruipeng Zhang; Ya Zhang; Yanfeng Wang; Qi Tian", "journal": "", "ref_id": "b25", "title": "A fourier-based framework for domain generalization", "year": "2021" }, { "authors": "Hongyi Zhang; Moustapha Cisse; David Yann N Dauphin; Lopez-Paz", "journal": "ICLR", "ref_id": "b26", "title": "mixup: Beyond empirical risk minimization", "year": "2018" }, { "authors": "Da Li; Yongxin Yang; Yi-Zhe Song; Timothy Hospedales", "journal": "", "ref_id": "b27", "title": "Learning to generalize: Meta-learning for domain generalization", "year": "2018" }, { "authors": "Seonguk Seo; Yumin Suh; Dongwan Kim; Geeho Kim; Jongwoo Han; Bohyung Han", "journal": "", "ref_id": "b28", "title": "Learning to optimize domain specific normalization for domain generalization", "year": "2020" }, { "authors": "J Henri; Nussbaumer", "journal": "Springer", "ref_id": "b29", "title": "The fast fourier transform", "year": "1981" }, { "authors": "David Yambay; Benedict Becker; Naman Kohli; Daksha Yadav; Adam Czajka; Kevin W Bowyer; Stephanie Schuckers; Richa Singh; Mayank Vatsa; Afzel Noore", "journal": "", "ref_id": "b30", "title": "Livdet iris 2017-iris liveness detection competition 2017", "year": "2017" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b31", "title": "Visualizing data using t-sne", "year": "2008" } ]
[ { "formula_coordinates": [ 2, 381.36, 253.3, 177.64, 10.33 ], "formula_id": "formula_0", "formula_text": "Finv = ReLU(IN(f 3×3 (F ))),(1)" }, { "formula_coordinates": [ 2, 415.19, 388.1, 143.8, 8.06 ], "formula_id": "formula_1", "formula_text": "W = d(F ),(2)" }, { "formula_coordinates": [ 2, 379.47, 436.99, 179.52, 17.62 ], "formula_id": "formula_2", "formula_text": "Fspec = K k=1 w k • f 3×3 k (F ).(3)" }, { "formula_coordinates": [ 2, 325.4, 516.57, 233.59, 72.39 ], "formula_id": "formula_3", "formula_text": "Lent(θ F , θ D(X) ) = -E {W i } N i=1 K k=1 w i k log w i k , L div (θ F , θ D(X) ) = K k=1 ŵk log ŵk = DKL( ŵ, 1 K 1K ) -log K, LIM (θ F , θ D(X) ) = Lent(θ F , θ D(X) ) + L div (θ F , θ D(X) ),(4)" }, { "formula_coordinates": [ 2, 317.1, 628.46, 69.43, 13.65 ], "formula_id": "formula_4", "formula_text": "ŵ = E {W i } N i=1 [w i ]" }, { "formula_coordinates": [ 3, 69.64, 267.44, 228.56, 30.86 ], "formula_id": "formula_5", "formula_text": "L Cls(S) (θ F , θ D , θ C ) = -E (X i ,Y i )∼S C c=1 Y i c log C(D(F(X i )))c,(5)" }, { "formula_coordinates": [ 3, 100.78, 346.7, 197.42, 9.51 ], "formula_id": "formula_6", "formula_text": "θ D = θ D -α∇ θ D L Cls(S) (θ F , θ D , θ C ).(6)" }, { "formula_coordinates": [ 3, 109.78, 518.93, 188.42, 10.33 ], "formula_id": "formula_7", "formula_text": "Â(Xs) = (1 -λ)A(Xs) + λA(Xn),(7)" }, { "formula_coordinates": [ 3, 66.83, 654.46, 231.37, 31.18 ], "formula_id": "formula_8", "formula_text": "L Cls(S + ) (θ F , θ D , θ C ) = -E (X i ,Y i )∼S + C c=1 Y i c log C(D(F(X i )))c.(8)" }, { "formula_coordinates": [ 3, 316.16, 242.75, 242.84, 73.21 ], "formula_id": "formula_9", "formula_text": "θ C ← θ C -β∇ θ C (L Cls(S) (θ F , θ D , θ C ) + L Cls(S + ) (θ F , θ D , θ C )), θ D ← θ D -β∇ θ D (L Cls(S) (θ F , θ D , θ C ) + µL IM (S) (θ F , θ D ) + L Cls(S + ) (θ F , θ D , θ C )), θ F ← θ F -β∇ θ F (L Cls(S) (θ F , θ D , θ C ) + µL IM (S) (θ F , θ D ) + L Cls(S + ) (θ F , θ D , θ C )).(9)" } ]
10.1093/icesjms/fsx216
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b52", "b3", "b32", "b18", "b21", "b17", "b33", "b38", "b14", "b7", "b9", "b2", "b46", "b39", "b45", "b5", "b23", "b22", "b46", "b27", "b40", "b10", "b45", "b48", "b0", "b19", "b1", "b0", "b46", "b47", "b1" ], "table_ref": [], "text": "Labeled data is the fuel of modern deep learning. However, the time-consuming manual labeling process is one of the main limitations of machine learning [53]. Therefore, current research efforts try to mitigate this issue by using unlabeled data [54,4,55] or forms of self-supervision [33,19,22,18]. Following the datacentric paradigm, another approach focuses on improving data quality rather than quantity [34,39,15]. This line of research concludes that one single annotation is not enough to capture ambiguous samples [8,10,3,47], where different annotators will provide different annotations for the same image. These cases are common in most real-world datasets [56, 40,46,6] and would require multiple annotations per image to accurately estimate its label distribution. Yet, established benchmarks such as ImageNet or CIFAR [24,23] are currently not considering this issue which significantly limits their use in the development of methods that generalize well for ambiguous real-world data. Fig. 1: Illustration of distribution shift -We are interested in the ground-truth label distribution (blue) which is costly to obtain due to multiple required annotations per image. Thus, we propose to use proposals as guidance during the annotation to approximate the distribution more cost efficiently (red). However, this distribution might be shifted toward the proposed class. We provide with CleverLabel (green) a method to improve the biased label distribution (red) to be closer to the original unbiased distribution (blue). Additionally, we provide with SPA an algorithm to simulate and analyze the distribution shift. The concrete effects are shown in the right example for the MiceBone dataset on a public benchmark [47] with the proposal marked by x.\nAcquiring multiple annotations per sample introduces an additional labeling effort, necessitating a trade-off between label quality and quantity. While semi-supervised learning potentially reduces the amount of labeled data, the issue of label quality still arises for the remaining portion of labeled data [28]. One possible solution for handling ambiguous data is using proposal guided annotations [41,11] which have been shown to lead to faster and more consistent annotations [46,49]. However, this approach suffers from two potential issues: (1) Humans tend towards deciding in favor of the provided proposal [20]. This default effect introduces a bias, since the proposed class will be annotated more often than it would have been without the proposal. Thus, an average across multiple annotation results in a skewed distribution towards the proposed class as shown in Figure 1. (2) Real human annotations are required during development which prevents rapid prototyping of proposal systems.\nWe provide with CleverLabel and SPA two methods to overcome these two issues. Regarding issue (1), we propose Cost-effective LabEling using Validated proposal-guidEd annotations and Repaired LABELs (CleverLabel) which uses a single class per image as proposal to speed-up the annotation process. As noted above, this might skew the label distribution towards the proposed class which can be corrected with CleverLabel. 
We evaluate the data quality improvement achieved by training a network on labels generated by CleverLabel by comparing the network's predicted label probability distribution to the ground truth label distribution, which is calculated by averaging labels across multiple annotations as in [47]. Improved data quality is indicated by a reduction in the difference between the predicted distribution and the ground truth distribution. In addi-tion, based on a previously published user study [48], we empirically investigate the influence of proposals on the annotator's labeling decisions. Regarding issue (2), we propose Simulated Proposal Acceptance (SPA), a mathematical model that mimics the human behavior during proposal-based labeling. We evaluate CleverLabel and SPA with respect to their technical feasibility and their benefit when applied to simulated and real-world proposal acceptance data. Finally, we evaluate these methods on a real-world benchmark and we provide general guidelines on how to annotate ambiguous data based on the gained insights.\nOverall, our contributions commit to three different areas of interest: (1) For improving label quality, we provide the novel method CleverLabel and show across multiple simulated and real world datasets a relative improvement of up to 29.8% with 30.0% reduced costs in comparison to the state of the art. ( 2) For annotating real-world ambiguous data, we provide annotation guidelines based on our analysis, in which cases to use proposals during the annotation. (3) For researching of countering the effect of proposals on human annotation behavior, we provide our simulation of proposal acceptance (SPA) as an analysis tool. SPA is motivated by theory and shows similar behavior to human annotators on realworld tasks. It is important to note that this research allowed us to achieve the previous contributions. We provide a theoretical justification for SPA and show that it behaves similarly to human annotators." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b37", "b46", "b35", "b23", "b4", "b42", "b0", "b44", "b50", "b31", "b7", "b36", "b7", "b9", "b15", "b2", "b29", "b30", "b34", "b1", "b40", "b10", "b28", "b49", "b11", "b45", "b47", "b49", "b8", "b16", "b13", "b20", "b51", "b43", "b0", "b41" ], "table_ref": [], "text": "Data and especially high-quality labeled data is important for modern machine learning [62,38]. Hence, the labeling process is most important in uncertain cases or in ambiguous cases as defined by [47]. However, labeling is also not easy in these cases as demonstrated by the difficulties of melanoma skin cancer classification [36]. The issue of data ambiguity still remains even in large datasets like ImageNet [24] despite heavy cleaning efforts [5,59]. The reasons for this issue can arise for example from image artifacts like low resolution [43], inconsistent definitions [58], uncertainty of the data or annotators [1,45] or subjective interpretations [51,32].\nIt is important to look at data creation as part of the problem task because it can greatly impact the results. Recent works have shown that differences can depend on the aggregation of labels between annotators [61,8], the selection of image data sources on the web [37], if soft or hard labels are used as label representation [8,10,16,3] or the usage of label smoothing [30,31,35]. In this work we concentrate on the labeling step issues only. Simply applying SSL only partially solves the problem as it tends to overfit [2]. 
Hence labeling is necessary and the goal should be to label better and more.\nA commonly used idea we want to focus on is proposal-based labeling. It is also known as verification-based labeling [41], label-spreading [11], semi-automatic labeling [29], or suggestion-based annotation [50]. [12] showed that proposalbased data labeling increases both accuracy and speed for their user study (n=54) which is in agreement with proof-of-concepts by [46,48]. The annotation suggestions for the segmentation and classification in diagnostic reasoning Fig. 2: Average annotation probability of a proposed class with the proposal unknown (GT, Unbiased) and known (Biased) to the annotators in four evaluated datasets. The proposal increases the probability in all observed cases, revealing a clear default effect in the investigated study. Its value is shown without any further processing (Biased) and with the contributed correction (CleverLabel) which consistently reduces the difference to the unbiased probabilities. texts had positive effects on label speed and performance without an introduction of a noteworthy bias [50]. We continue this research for the problem of image classification and show that a bias is introduced and how it can be modeled and reversed.\nAcceptance or rejection of a proposal was previously modeled e.g. for the review process of scientific publications [9]. They applied a Gaussian process model to simulate the impact of human bias on the acceptance of a paper, but rely on a per annotator knowledge. A simulation framework for instance-dependent noisy labels is presented in [17,14] by using a pseudo-labeling paradigm and [21] uses latent autoregressive time series model for label quality in crowd sourced labeling. Another aspect of labeling are annotation guidelines which can also have an impact on data quality as [52] demonstrate for app reviews. We do not consider guidelines as biases, instead they are a part of data semantics and use only real annotations per image.This has the benefit of avoiding unrealistic synthetic patterns as shown by [60] and simplifies the required knowledge which makes the process more easily applicable.\nNote that active learning [44] is a very different approach, in which the model in the loop decides which data-point is annotated next and the model is incrementally retrained. It is outside the scope of this article and it might not be suited for a low number of samples with high ambiguity as indicated by [57]. Consensus processes [1,42] where a joined statement is reached manually or with technical support are also out of scope." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b40", "b28", "b49", "b19", "b29", "b30", "b34", "b9" ], "table_ref": [], "text": "Previous research on proposal-based systems [41,29,50] suggests an influence of the default effect bias on the label distribution. While its impact is assessed as negligible in some cases, it circumvents the analysis of an unbiased annotation Algorithm 1 Simulated Proposal Acceptance (SPA)\nRequire: Proposal ρx; a x i ∈ {0} K Calculate acceptance probability A r ← random(0,1) if r ≤ A then Accept proposal a x i,ρx ← 1 else Sample from remaining classes k ← sampled from P (L x = k | ρx = k) a x i,k ← 1 end if\ndistribution [20] which can be desirable, e.g. in medical diagnostics. As we can identify a significant bias in our own proposal-based annotation pipeline for several datasets (see Fig. 
2), two questions arise: how to mitigate the observed default effect and how it was introduced?\nIn this section, we provide methods to answer both questions. Before we can mitigate the observed default effect, we have to understand how it was introduced. Thus, we introduce simulated proposal acceptance (SPA) with the goal of reproducing the human behavior for annotating images with the guidance of proposals. SPA can be used to simulate the labeling process and allow experimental analysis and algorithm development before conducting large scale human annotations with proposals. Building on this understanding, we propose Clever-Label which uses two approaches for improving the biased label distribution to mitigate the default effect: 1. a heuristic approach of class distribution blending (CB) 2. a theoretically motivated bias correction (BC). CleverLabel can be applied to biased distributions generated by humans or to simulated results of SPA.\nFor a problem with K ∈ N classes let L x and L x b be random variables mapping an unbiased or biased annotation of an image x to the selected class k. Their probability distributions P (L x = k) and P (L x b = k) describe the probability that image x is of class k according to a set of unbiased or biased annotations. As discussed in the literature [30,31,35,10], we do not restrict the distribution of L x further e.g. to only hard labels and instead assume, that we can approximate it via the average of N annotations by\nP (L x = k) ≈ N -1 i=0 a x i,k\nN with a x i,k ∈ {0, 1} the i-th annotation for the class k which is one if the class k was selected by the i-th annotator or zero, otherwise. The default effect can cause a bias, P (L x = k) = P (L x b = k) for at least one class k. Especially, for the proposed class ρ x it can be expected that P (L x = ρ x ) < P (L x b = ρ x )." }, { "figure_ref": [ "fig_2" ], "heading": "Simulated Proposal Acceptance", "publication_ref": [], "table_ref": [], "text": "Given both unbiased as well as biased annotations for the same datasets, we analyze the influence of proposals on an annotator's choice. We notice that a main characteristic is that the acceptance probability increases almost linearly with the ground truth probability of the proposal, P (L x = ρ x ), as shown in the main diagonal in Figure 3. If a proposal was rejected, the annotation was mainly influenced by the ground truth probability of the remaining classes. This observation leads to the following model: For a given proposal ρ x , we calculate the probability A that it gets accepted by an annotator as\nA = δ + (1 * -δ)P (L x = ρ x )(1)\nwith δ ∈ [0, 1]. 1 * is an upper-bound for the linear interpolation which should be close to one. The offset parameter δ can be explained due to the most likely higher probability for the proposed class. We also find that this parameter is dataset dependent because for example with a lower image quality the annotator is inclined to accept a more unlikely proposal. In subsection 2.3, we provide more details on how to calculate these values.\nWith this acceptance probability we can now generate simulated annotations a x i,k ∈ {0, 1} as in Algorithm 1 and describe the biased distribution similar to the unbiased distribution via\nP (L x b = k) ≈ N -1 i=0 a x i,k\nN with N describing the number of simulated annotations. The full source-code is in the supplementary and describes all corner cases e.g. P (L x ρ x ) = 1. An experimental validation of this method can be found in subsection 3.1." 
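Read together with Eq. 1, Algorithm 1 amounts to the following Python sketch. Variable names and the handling of the corner case P(L^x = ρ_x) = 1 are our own; the defaults δ = 0.1 and 1* = 0.99 follow the implementation details given below.

```python
import numpy as np


def simulate_proposal_acceptance(p: np.ndarray, proposal: int, n_annotations: int,
                                 delta: float = 0.1, upper: float = 0.99,
                                 seed=None) -> np.ndarray:
    """Simulate n_annotations one-hot annotations under a class proposal (Algorithm 1).

    p        -- unbiased ground-truth label distribution P(L^x = k), shape (K,)
    proposal -- index of the proposed class rho_x
    delta    -- dataset-dependent acceptance offset
    upper    -- upper bound 1* of the linear interpolation
    """
    rng = np.random.default_rng(seed)
    K = len(p)
    accept_prob = delta + (upper - delta) * p[proposal]    # Eq. 1
    annotations = np.zeros((n_annotations, K))

    for i in range(n_annotations):
        if rng.random() <= accept_prob:
            annotations[i, proposal] = 1.0                 # proposal accepted
        else:
            # Sample from the remaining classes, renormalized: P(L^x = k | k != rho_x)
            rest = p.copy()
            rest[proposal] = 0.0
            if rest.sum() == 0.0:
                # Corner case P(L^x = rho_x) = 1: nothing else to sample from.
                # The paper's supplementary defines its own handling; we keep the proposal.
                annotations[i, proposal] = 1.0
            else:
                k = int(rng.choice(K, p=rest / rest.sum()))
                annotations[i, k] = 1.0

    return annotations           # biased P(L_b^x = k) is approximated by annotations.mean(0)
```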
}, { "figure_ref": [], "heading": "CleverLabel", "publication_ref": [ "b6" ], "table_ref": [], "text": "Class distribution Blending (CB) A label of an image is in general sample dependent but [7] showed that certain classes are more likely to be confused than others. Thus, we propose to blend the estimated distribution P (L x b = k) with a class dependent probability distribution c( k, k) to include this information. This class probability distribution describes how likely k can be confused with any other given class k. These probabilities can either be given by domain experts or approximated on a small subset of the data as shown in subsection 2.3. The blending can be calculated as µP (L x b = k) + (1 -µ)c( k, k) with the most likely class k = argmax j∈{1,..,K} P (L x b = j) and blending parameter µ ∈ [0, 1]. This approach can be interpreted as a smoothing of the estimated distribution which is especially useful in cases with a small number of annotations.\nBias Correction (BC) In subsection 2.1, we proposed a model to use the knowledge of the unbiased distribution P (L x = k) to simulate the biased distribution P (L x b = k) under the influence of the proposals ρ x . In this section, we formulate the reverse direction for correcting the bias to a certain degree.\nAccording to Equation 1, for k = ρ x we can approximate\nB := P (L x = ρ x ) = A -δ 1 * -δ ≈ |Mρ x | N -δ 1 * -δ , with M ρx = {i | i ∈ N, i ≤ N , a i,ρx = 1}\nthe indices of the annotations with an accepted proposal. Note that we have to clamp the results to the interval [0, 1] to receive valid probabilities for numerical reasons. For k = ρ x we deduce the probability from the reject case of Algorithm 1\nP (L x = k | L x = ρ x ) = P (L x b = k | L x = ρ x ) ⇔ P (L x = k, L x = ρ x ) P (L x = ρ x ) = P (L x b = k | L x = ρ x ) ⇔ P (L x = ρ x ) = (1 -B)P (L x b = k | L x = ρ x ) ≈ (1 -B) • i ∈Mρ x a i,k N -|M ρx | .\nThis results in a approximate formula for the original ground truth distribution which relies only on the annotations with proposals. The joined distribution is deducted in the supplementary. It is important to note that the quality of these approximations relies on a large enough number of annotations N ." }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b47", "b26", "b46" ], "table_ref": [ "tab_0" ], "text": "We use a small user study which was proposed in [48] to develop / verify our proposal acceptance on different subsets. The original data consists of four dataset with multiple annotations per image. We focus on the no proposal and class label proposal annotation approaches but the results for e.g. specific DC3 cluster proposals are similar and can be found in the supplementary. We calculated the ground-truth dataset dependent offset δ with a light weight approximation described in the supplementary. An overview about the calculated offsets is given in Table 1 in combination with the values of the user study where applicable. Due to the fact, that it can not be expected, that this parameter can approximated in reality with a high precision we use for all experiment except otherwise stated, a balancing threshold µ = 0.75, 1 * = 0.99 and δ = 0.1. More details about the selection of these parameters are given in the supplementary.\nThe class distributions used for blending are approximated on 100 random images with 10 annotations sampled from the ground truth distribution. 
For a better comparability, we do not investigate different amounts of images and annotations for different datasets but we believe a lower cost solution is possible especially on smaller datasets such as QualityMRI. For this reason, we ignore this static cost in the following evaluations. If not otherwise stated, we use the method DivideMix [27] and its implementation in [47] to generate the proposals. With other methods the results are very similar and thus are excluded because they do not add further insights. We include the original labels which are used to train the method in the outputted label distribution by blending it with the output in proportion to the used number of annotations. Please see the supplementary for more details about the reproducibility." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "We show that SPA and our label improvements can be used to create / reverse a biased distribution, respectively. In three subsections, we show that both directions are technically feasible and are beneficial in practical applications. Each section initially gives a short motivation, describes the evaluation metrics and provides the actual results." }, { "figure_ref": [], "heading": "Simulated Proposal Acceptance", "publication_ref": [], "table_ref": [], "text": "We need to verify that the our proposed method SPA is a good approximation of the reality in comparison to other methods and that an implementation is technically feasible." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Metrics & Comparisons", "publication_ref": [ "b47" ], "table_ref": [ "tab_1" ], "text": "We know for the evaluation of every image x the proposed class ρ x , the annotated class a x and the soft ground truth distribution P (L x = k) for class k in the study [48]. For the evaluation , we calculate a matrix between the proposed class probability and the actually annotated class probability by aggregating the probabilities into uncertainty bins with a size of 0.2 and a special bin for 0.0, which results in total in 6 bins. Normalized examples are in Figure 3.\nOur proposed method is composed of two parts: ACCEPT with offset δ the proposal, otherwise use the GT distribution for selection as defined in subsection 2.1. Other possible components would be RANDOM ly annotate a label, use the most LIKELY class label or any combination of the before. We compare our proposed method (ACCEPT+GT) against 6 other methods: AC-CEPT+LIKELY, 2*ACCEPT+GT, 2*ACCEPT+RANDOM, RANDOM, GT, LIKELY. Two ACCEPTS mean that we use an offset acceptance of the proposal and then an offset acceptance of the most likely class if it was not the original proposal. As metric, we use half of the sum of differences (SOD) between the real matrix M r and the simulated matrix M s or as formula SOD(M r , M s ) = 0.5 * ( i,j abs(M ri,j -M si,j )) which is the number of differently assigned images to all bins asides from duplicates. We report the normalized SOD by the total number of entries in M r as the average with standard deviation across three repetitions and include more results e.g. proposals based on DC3 clusters in the supplementary. The δ of the User Study was used for the simulation. We developed our method and the comparisons on the datasets Turkey and Plankton and only verified on MiceBone and CIFAR10H.\nResults A visual comparison of the real and simulated results for all uncertainty bins can be seen in Figure 3. 
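For clarity, the SOD metric defined above reduces to the following two-line function, where the matrices hold image counts per (proposed, annotated) uncertainty bin; the function name is our own.

```python
import numpy as np


def sod(m_real: np.ndarray, m_sim: np.ndarray) -> float:
    """Half the sum of absolute differences between two binned count matrices,
    i.e. the number of differently assigned images aside from duplicates."""
    return 0.5 * np.abs(m_real - m_sim).sum()


# e.g. two 6x6 matrices of image counts; sod(m_real, m_sim) / m_real.sum()
# gives the normalized SOD reported in Table 2
```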
The main diagonal line contains the accepted proposals while the rest, especially the upper right corner are the rejected images. We see that the presented matrices are very similar, even in overlapping regions between accepted and rejected proposals as for uncertainty bin 0.41-0.6 of the proposed and annotated classes.\nIn Table 2, we compare the proposed method with six other possible algorithms. We see that our proposed method is for all datasets one of the best methods. However, some methods e.g. 2*ACCEPT+GT can sometimes even be better. This data allows two main conclusions. SPA is clearly better than some naive approaches like RANDOM or GT. SPA is not optimal. It can neither reproduce the real results completely nor is the best method across all datasets. However, it shows very strong performance and is less complex then e.g. 2*AC-CEPT + RANDOM. We conclude that SPA is at a sweet spot between simplicity and correctness." }, { "figure_ref": [], "heading": "Label Improvement", "publication_ref": [], "table_ref": [], "text": "We show that CB and BC lead to similar increased results on simulated and real biased distributions while the similarity illustrates the practical benefit of SPA." }, { "figure_ref": [], "heading": "Metrics & Comparison", "publication_ref": [ "b24", "b47" ], "table_ref": [ "tab_0" ], "text": "As a metric, we use the Kullback-Leibler divergence [25] between the soft ground truth P (L x = k) and the estimated distribution. We generate the skewed distributions either by our method SPA or use real proposal acceptance data from [48]. The reported results are the median performance across different annotation offsets or datasets for the synthetic and real data, respectively. For the real data, we used the calculated δ defined in Table 1 for the simulation but as stated above δ = 0.1 for the correction in CleverLabel. The method GT is the baseline and samples annotations directly from P (L x = k).\nThe full results are in the supplementary." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "If we look at the results on the synthetic data created by SPA in Figure 4a, we see the expected trend of improved results with more annotations. While using only CB is the best method with one annotation, the performance is surpassed by all other methods with enough annotations. The baseline (GT) is especially with the combination of blending (+CB) the best method for most number of annotations. Our label improvement (CleverLabel) is in most cases the second best method and blending is a major component (+CB). The bias correction (+BC) improves the results further for higher number of annotations at around 20+. Using the correct offset (+ δ GT) during the correction which was used in the simulation of SPA, is of lower importance. When we look at the full results in the supplementary, wee see benefits of a better δ at an offset larger than 0.4 and more annotations than 5. We conclude that label improvement is possible for synthetic and real data and that the combination of CB and BC with an offset of 0.1 is in most cases the strongest improvement. The real results in Figure 4b show the similar trends as in the synthetic data. However, the baseline method without blending performs stronger and some trends are not observable because we only have up to 12 annotations. 
The correct value for the offsets is even less important in the real data, most likely because the effect is diminished by the difference of the simulation and reality. It is important to note that we keep the same notation with CleverLabel but in this case SPA was not used to generate the biased distribution but real annotations with proposals. Overall, the results analysis on synthetic and real data is similar and thus SPA can be used as a valid tool during method development.\nIt should be pointed out that the cost of labeling is not equivalent to the number annotations as we can expect a speedup of annotations when using proposals. For example, CleverLabel often performs slightly worse than GT in Figure 4b. Considering a speedup of 2, we actually have to compare CleverLabel with 5 annotations to GT at around 3, as explained in the budget calculation in subsection 3.3." }, { "figure_ref": [], "heading": "Benchmark evaluation", "publication_ref": [ "b46" ], "table_ref": [], "text": "We show the results for CleverLabel on [47]." }, { "figure_ref": [], "heading": "Metrics & Comparison", "publication_ref": [ "b26", "b25", "b24", "b12", "b47", "b48" ], "table_ref": [ "tab_0" ], "text": "We compare against the top three benchmarked methods: Baseline, DivideMix and Pseudo v2 soft. Baseline just samples from the ground-truth but still performed the best with a high number of annotations. DivideMix was proposed by [27] and Pseudo v2 soft (Pseudo soft) uses Pseudo-Labels [26] of soft labels to improve the labels. We evaluate the Kullback-Leibler divergence (KL) [25] between the ground truth and the output of the second stage (the evaluation with a fixed model) and KL between ground truth and the input of the second stage ( KL). We also provide an additional ablation where we replaced the fixed model in the second sage with a visual transformer [13]. The hyperparameters of the transformer were not tuned for each dataset but kept to common recommendations.The speedup S which can be expected due to using proposals depends on the dataset and used approach. For this reason, we include this parameter in our comparison with the values of 1 (no speedup), 2.5 as in [48] or 10 as in [49]. S is used to calculate the budget as initial supervision per image (in. sup.) +( percentage annotated of X• number of annotations per image )/S). In. sup. describes the percentage of labeled data that is annotated once in phase one of the benchmark. For the skewed distribution generation which is correct by CleverLabel, we used SPA with the calculated δ in Table 1. For CleverLabel a heuristically chosen δ = 0.1 was used if not otherwise stated (+GT δ). The results are the median scores of all datasets of the averages across three folds. Full results including mean scores are in the supplementary. The marker and color define the method, while the line style to the next node on the x-axis visualizes the initial supervision, logarithmic scaled budgets" }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Results", "publication_ref": [ "b12" ], "table_ref": [ "tab_2" ], "text": "We present in Figure 5a a comparison of our method CleverLabel with previous state-of-the-art methods on the benchmark with an initial supervision of 100%. Even if we assume no speedup, we can achieve lower KL scores than all previous methods, regardless of the used number of annotations. 
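As an illustration of the budget definition above (our own reading of the formula), the budget of 0.7 quoted in the results arises as follows.

```python
def labeling_budget(initial_supervision: float, pct_annotated: float,
                    annotations_per_image: float, speedup: float) -> float:
    """Budget = initial supervision + (percentage annotated * annotations per image) / S."""
    return initial_supervision + (pct_annotated * annotations_per_image) / speedup


# CleverLabel with proposals also on the unlabeled data: 20% initial supervision,
# then 5 proposal-guided annotations for all images at speedup S = 10
print(labeling_budget(0.20, 1.00, 5, 10))   # -> 0.7
# Without any speedup the same annotation effort would cost 0.20 + 5.0 = 5.2
```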
Our proposed label improvement with class blending can also be applied to samples from the ground truth distribution (GT + CB) and achieves the best results in many cases. Due to the fact that it does not leverage proposals it can not benefit from any speedups S. If we take these speedups into consideration, CleverLabel can achieve the best results across all budgets except for outliers.\nWe investigate lower budgets where the initial supervision could be below 100% in Figure 5b. The full results can be found in the supplementary. If we compare our method to the combined Pareto front of all previous reported results, we see a clear improvement regardless of the expected speedup. Two additional major interesting findings can be taken from our results. Firstly, the percentage of labeled data which is equal to the initial supervision for CleverLabel (violet,blue,lightblue) is important as we see improved results from initial supervision of 20 to 50 to 100%. This effect is mitigated with higher speedups because then CleverLabel can achieve lower budget results not possible by other initial supervisions. Secondly, we can improve the results further by using proposals also on the unlabeled data (inc. un., red,orange,yellow) after this initialization. This increases the budget because the percentage of labeled data is 100% regardless of the initial supervision but results in improved scores. With S = 10 we can even improve the previous state of the art (Pseudo soft, in. sup 20%, 5 annotations) at the budget of 1.0 from 0.40/0.47 to 0.30/0.33 at a budget of 0.7 which is a relative improvement of 25%/29.8% with median/ mean aggregation.\nIn Table 3, we conduct several ablations to investigate the impact of individual parts of our method. Comparing KL and KL scores, we see similar trends between each other and to subsection 3.2. Class blending (CB) is an important part of improved scores but the impact is stronger for KL. A different blending threshold (µ = 0.25) which prefers the sample independent class distribution leads in most cases to similar or worse results than our selection of 0.75. Bias Correction (BC) and the correct GT offset have a measurable impact on the KL while on KL we almost no difference but a saturation at around 0.24 for all approaches most likely due the used network backbone. With a different backbone e.g. a transformer [13] we can verify that BC positively impacts the results." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b47", "b0" ], "table_ref": [], "text": "In summary, we analyzed the introduced bias during the labeling process when using proposals by developing a simulation of this bias and provided two methods for reducing the proposal-introduced bias. We could show that our methods outperform current state of the art methods on the same or even lower labeling budgets. For low annotation budgets, we have even surpassed our newly proposed baseline of class blending in combination with annotation without proposals.\nCost is already a limiting factor when annotating data and thus only results with a better performance for a budget of less than one (which equals the current annotation of every image once) can be expected to be applied in real world applications. We achieved this goal with CleverLabel with speedups larger than 4 with is reasonable based on previously reported values [48].\nBased on our research, how should one annotate ambiguous image classification data? 
While there currently is no strategy for every case, the problem can be broken down into the two major questions as depicted in Figure 6. Firstly, is a bias in the data acceptable? Be aware that in CleverLabel all labels are human validated and that many consensus process already use an agreement system [1] with multiple reviewers. If a small bias is acceptable you can directly Fig. 6: Flowchart about how to annotate ambiguous data based on the questions if an introduced bias is acceptable and if the expected speedup S is high (> 3) use proposals and an optional correction like CleverLabel. However, if a bias is not acceptable, the second major question is the expected speedup by using proposals for annotating your specific data. In case of a high expected speedup, the trade-off between the introduced bias and the ability to mitigate it with BC and CB favors CleverLabel. For a low speedup, we recommend avoiding proposals and to rely on class blending which is applicable to any dataset if you can estimate the class transitions as described in subsection 2.3. It is difficult to determine the exact trade-off point, because CB improves the results with fewer (10-) annotations, BC improves the results at above (20+) and both each other. Based on this research, we recommend a rough speedup threshold of around three for the trade-off." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We aim at a general approach for different datasets but this results in nonoptimal solutions for individual datasets. Multiple extensions for SPA like different kinds of simulated annotators would be possible but would require a larger user study for evaluation. In subsection 3.2, we compared our simulation with real data on four datasets, but a larger comparison was not feasible. It is important to note that SPA must not replace human evaluation but should be used for method development and hypothesis testing before an expensive human study which is needed to verify results. We gave a proof of concept about the benefit of bias correction with higher annotation counts with a stronger backbone like transformers. A full reevaluation of the benchmark was not feasible and it is questionable if it would lead to new insights because the scores might be lower but are expected to show similar relations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Data quality is important but comes at a high cost. Proposals can reduce this cost but introduce a bias. We propose to mitigate this issue by simple heuristics and a theoretically motivated bias correction which makes them broader applicable and achieve up to 29.8% relative better scores with reduced cost of 30%. This analysis is only possible due to our new proposed method SPA and results in general guidelines for how to annotate ambiguous data." }, { "figure_ref": [], "heading": "Ethical Statement", "publication_ref": [], "table_ref": [], "text": "While proposal-based labeling offers several benefits, it generally introduces a bias which might have ethical implications depending on the use case. We believe that it is important to take steps to improve proposal-guided annotations and introduce CleverLabel in order to enhance their quality by mitigating, however not eliminating, the effects of bias. Every operator must consciously decide whether the resulting reduced bias has a negative effect for their own application. CleverLabel aims at the utilization of all available data. 
Ambiguous labels can introduce additional data uncertainty into a model, which is why it is common to exclude such data from training. However, we expect that their consideration can also provide more nuanced information, potentially allowing for more accurate and fair decision-making. By this means, ambiguous labels may result in less overconfident models.
This work aims to facilitate and encourage further research in this field. While the contributions can be used to investigate a variety of research questions in order to improve the accuracy of predictions, we cannot identify any direct negative ethical impacts. " }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "A.1 Theory for bias correction
In the main paper, we calculated the approximated probabilities for the two cases in the if-clause of Algorithm 1. These probabilities are
B = P(L_x = \rho_x) \approx \frac{|M_{\rho_x}|/N - \delta}{0.99 - \delta}, \qquad C_k := P(L_x = k \mid L_x \neq \rho_x) \approx \frac{\sum_{i \notin M_{\rho_x}} a_{i,k}}{N - |M_{\rho_x}|} \quad (2)
with M_{\rho_x} = \{ i \in \mathbb{N} \mid i \leq N, \; a_{i,\rho_x} = 1 \} the indices of the N annotations that accepted the proposed class \rho_x.
We can then estimate the complete distribution by
P(L_x = k) = P(L_x = k, L_x = \rho_x) + P(L_x = k, L_x \neq \rho_x) = P(L_x = k \mid L_x = \rho_x)\,P(L_x = \rho_x) + P(L_x = k \mid L_x \neq \rho_x)\,P(L_x \neq \rho_x) = \mathbb{1}_{\rho_x}(k) \cdot B + C_k \cdot (1 - B) \quad (3)
with the indicator function \mathbb{1}_{\rho_x}(k)." }, { "figure_ref": [], "heading": "A.2 The selection of parameters for SPA", "publication_ref": [ "b47" ], "table_ref": [ "tab_4" ], "text": "In Table 4, we provide an overview of other investigated values for the upper bound of Equation 1. We call this upper bound 1*. We did not investigate the value 1* = 1 because we expect that proposals are always rejected by some annotators, either due to errors or due to different preferences. This table also shows the results of the simulation for certain and ambiguous predictions of DC3 as defined in [48]. Please note that the provided results are from a previous version with slightly different implementation details; thus, the results are not directly comparable to the ones in the main paper, but they are quite close. We see that a higher 1* is better across all datasets. Moreover, DC3 predictions result in a similar score, but predictions on certain data are a bit better, while on ambiguous data they are marginally worse. The parameters δ and µ were selected heuristically with the following motivations. With δ, we normally want to correct only a little. With µ, we weight the calculated distribution more than the class-based distribution. Both heuristically chosen values yielded robust results in preliminary studies. Based on the ablation in the main paper, we see that a more realistic δ (with regard to the GT δ) has a significant but small impact, and a lower value of µ yields an inferior result. " }, { "figure_ref": [], "heading": "A.3 Approximation of GT δ", "publication_ref": [], "table_ref": [], "text": "We estimate the dataset-dependent offset δ based on 20 images with 0.2 < P(L_x = ρ_x) ≤ 0.4 for all datasets. While this approach leads to the most robust results in comparison to the other methods reported below, we notice a constant underestimation of δ (see supplementary) and thus scale it by 1.3 to compensate. We attribute this issue to the fact that one author was the annotator and thus was unwillingly influenced by the known lower ground truth distribution.
Aside from the above presented method, we investigate two other non-working approaches for estimating δ. 
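To make the two building blocks above easier to follow, a minimal, self-contained sketch of the simulated proposal acceptance (Algorithm 1 with Eq. 1) and of the correction in Eqs. (2)-(3) is given below. The function names, the clipping of B to [0, 1], and the handling of the degenerate case where all ground-truth mass lies on the proposal are illustrative choices and are not taken verbatim from the released implementation; the upper bound 1* is fixed to 0.99 as in this appendix.

```python
import numpy as np

def simulate_proposal_acceptance(soft_gt, proposed_class, delta, n_annotations, seed=None):
    """Sketch of SPA (Algorithm 1): simulate annotations for one image under a proposal."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(soft_gt))
    # Acceptance probability of Eq. (1) with 1* = 0.99.
    accept_prob = delta + (0.99 - delta) * soft_gt[proposed_class]
    for _ in range(n_annotations):
        if rng.random() <= accept_prob:
            counts[proposed_class] += 1              # annotator accepts the proposal
        else:
            p = np.array(soft_gt, dtype=float)
            p[proposed_class] = 0.0                  # sample from the remaining classes
            if p.sum() == 0:                         # degenerate case discussed in A.4
                k = (proposed_class + 1) % len(p)
            else:
                k = rng.choice(len(p), p=p / p.sum())
            counts[k] += 1
    return counts

def bias_correct(counts, proposed_class, delta):
    """Sketch of Eqs. (2)-(3): estimate P(L_x = k) from proposal-biased annotation counts."""
    n = counts.sum()
    b = (counts[proposed_class] / n - delta) / (0.99 - delta)    # B from Eq. (2)
    b = float(np.clip(b, 0.0, 1.0))                              # safeguard, not part of the equations
    rest = np.array(counts, dtype=float)
    rest[proposed_class] = 0.0
    c = rest / rest.sum() if rest.sum() > 0 else np.zeros_like(rest)  # C_k from Eq. (2)
    corrected = c * (1.0 - b)                                    # Eq. (3)
    corrected[proposed_class] = b
    return corrected
```

In CleverLabel, the corrected distribution would additionally be blended with the class-level distribution (CB, µ = 0.75 in the main paper). The two non-working approaches for estimating δ mentioned above are described next.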
We report them here to let fellow researchers know what has not worked for us.\nThe first method is not using proposals with a ground truth probability of 0.2-0.4 but of 0. In theory, this approach is equal to the proposed one in the paper. However, due to the fact that only low quality proposals are shown and the fact the annotator was aware of the effect, no annotations (except for Plankton) have been accepted. Thus, a calculation of δ was not possible. This effect is still present in the data shown in the paper but does not decrease the result to zero which allows a rescaling as counter action.\nThe second method is based on the probability theory and our approximation of the introduced bias. We can estimate for a second, different proposal ρ = 0, we can conclude that δ = 0 because A should be greater or equal than δ. Analogously, we can conclude these results for A .\nx = ρ x C k := P (L x = k | L x = ρ x ) ≈ i ∈M ρ x a i,k N -|M ρ x | D k := P (L x = k, L x = ρ x | L x = ρ x ) = 1 ρ x (k) • P (L x = ρ x | L x = ρ x ) ≈1 ρ x (k) • C ρ x D k := P (L x = k, L x = ρ x | L x = ρ x ) ≈ 1 ρx (k) • C ρx E k := P (L x = k | L x = ρ x , L x = ρ x ) ≈ a∈M ρx ρ x a k |M ρxρ x | (4) with M ρ x = {i | i ∈ N, i ≤ N , a i,ρ x = 1}\nM ρxρ x = {a i | i ∈ N, i ≤ N , a i,ρx = 1 = a i,ρ x } ∪ {a i | i ∈ N, i ≤ N , a i,ρx = 1 = a i,ρ x }\nAssume none of the above cases are given and P (L x = ρ x ) = 0, then P (L x = ρ x ) = 1 and A ≈ 1 which is aside from unlikely cases a contradiction to the cases above. Thus, we can conclude most likely that P (L x = ρ x ) = 0 and\nP (L x = ρ x ) = 0.\nIf none of the above cases applies and E k = 0 for k, we can conclude that M ρxρ x = {} is empty and no annotations aside from ρ x and ρ x exist and we can assume P (L x ∈ {ρ x , ρ x }) = 0. We know from the definition\nC ρx ≈ P (L x = ρ x | L x = ρ x ) and C ρ x ≈ P (L x = ρ x | L x = ρ x ).\nIn combination with Equation 3, we have two formulas for P (L x = k) which are conditioned both on the unknown δ which allows the calculation of it. δ = 0 because even classes k with P (L x = k) should be annotated to some degree based if δ > 0.\nBased on the above conclusion, we can assume most likely for ρ x = k = ρ x , that P (L x = ρ x ) = 0 , P (L x = ρ x ) = 0 and E k = 0 or we could have already approximated δ.\nWe know that for ρ\nx = k = ρ x P (L x = k | L x = ρ x ) • P (L x = ρ x ) = P (L x = k, L x = ρ x ) = P (L x = k, L x = ρ x , L x = ρ x ) + P (L x = k, L x = ρ x , L x = ρ x ) = P (L x = k, L x = ρ x | L x = ρ x ) • P (L x = ρ x ) + P (L x = k | L x = ρ x , L x = ρ x ) • P (L x = ρ x , L x = ρ x ) = P (L x = k, L x = ρ x | L x = ρ x ) • P (L x = ρ x ) + P (L x = k | L x = ρ x , L x = ρ x ) • P (L x = ρ x | L x = ρ x ) • P (L x = ρ x )(5)\nThus, by dividing by P (L x = ρ x ) = 0, we get\nP (L x = k | L x = ρ x ) = P (L x = k, L x = ρ x | L x = ρ x ) + P (L x = k | L x = ρ x , L x = ρ x ) • P (L x = ρ x | L x = ρ x ) ⇔ C k = D k + E k • P (L x = ρ x | L x = ρ x )(6)\nAnalogously, we get also\nC k = D k +E k •P (L x = ρ x | L x = ρ x ) for P (L x = ρ x ) = 0\nWe also know that\nP (L x = ρ x , L x = ρ x ) = P (L x = ρ x | L x = ρ x ) • P (L x = ρ x ) = P (L x = ρ x | L x = ρ x ) • P (L x = ρ x ) ⇔ P (L x = ρ x ) = P (L x = ρ x ) P (L x = ρ x | L x = ρ x ) • P (L x = ρ x | L x = ρ x ) ⇔ P (L x = ρ x ) = P (L x = ρ x ) P (L x = ρ x | L x = ρ x ) • P (L x = ρ x | L x = ρ x )(7)\nIf we assume that the fractures are about one, we get\nP (L x = ρ x ) ≈ P (L x = ρ x | L x = ρ x ) P (L x = ρ x ) ≈ P (L x = ρ x | L x = ρ x ). 
(8\n)\nThis assumption is reasonable as a rough approximation because it is the same probability once with and once without the quite general inequality condition.\nFollowing the previous results we get\nP (L x = ρ x ) ≈ C k -D k E k P (L x = ρ x ) ≈ C k -D k E k(9)\nfor every ρ x = k = ρ x . We can estimate δ based on\nP (L x = ρ x ) = 1 -P (L x = ρ x ) with A ≈ δ + (0.99 -δ)P (L x = ρ x ) ⇔ A ≈ δ + 0.99 • P (L x = ρ x ) -δ • P (L x = ρ x ) ⇔ A ≈ δ(1 -P (L x = ρ x )) + 0.99 • P (L x = ρ x ) ⇔ δ ≈ A -0.99 • P (L x = ρ x ) P (L x = ρ x )(10)\nAnalogously, we can calculate δ for\nP (L x = ρ x ) = 1 -P (L x = ρ x ).\nTo summarize, we provided various approximations which can be used to estimate δ for every image x. Based, on the assumption that δ is fixed, we have taken all calculated δ below a threshold of 0.8 and used the median to approximate δ for the complete dataset. The threshold is reasonable to remove very unlikely candidates.\nWe can show that this median approximation generates reasonable values for δ on synthethic data. However, it is not robust and not reliably working on real data. We credit this issue to the fact that some of the above approximation are not exact enough and that the available real world data mostly include correlated proposals ρ x and ρ x . This correlation is based on the fact that the most likely and the second most likely predictions of a network have been used." }, { "figure_ref": [], "heading": "A.4 Implementation details of simulated proposal acceptance", "publication_ref": [], "table_ref": [], "text": "Below the main parts of the python source code are provided for SPA and BC.\nThis code especially explains how we sample from the remaining classes in the cases P (L x = ρ x ) = 1. It is important to note that these cases can only happen in up to 1% of the mentioned alternatives. Due the creation of the acceptance probability in SPA, the proposal ρ x is accepted in 99% of the cases for P (L x = ρ x ) = 1. We looked at our experiments and found at most several dozens of images where this issue occurred. Based on the provided code, the first class is then simulated to be annotated. In a reimplementation, it might be beneficial to change this behavior to a random class, but we do not estimate the major impact on the results based on the low number of samples. In this section, we report the used transition matrices c for class distribution blending (see subsection 2.2). The matrices are given as a python dictionary to indicate which class belongs to which row. Additionally we provide the expected Kullback-Leibler divergence when using only these blended probabilities as a ground truth distribution P (L x = k). # KL 0 . 4 2 8 6 2 5 7 1 2 4 6 6 4 7 5 5" }, { "figure_ref": [], "heading": "# s o f t g t : v e c t o r f o r s p e c i f i c image o f ground t r u t h # p r o b a b i l i t i e s # p r o p o s a l a c c e p t a n c e o f f s e t : o f f s e t f o r d a t a s e t , # c a l l e d \\ d e l t a i n paper # s i m u l a t i o n r e p e t i t i o n s : number o f r e p e t i t i o n s # aka t h e number o f s i m u l a t e d a n n o t a t i o n s", "publication_ref": [], "table_ref": [], "text": "' Benthic ' : { ' c o r a l ' : [ 0 . 8 1 4 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 5 7 , 0 . 1 1 4 , 0 . 0 1 4 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' c r u s t a c e a n ' :\n[ 0 . 0 4 3 , 0 . 8 4 3 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 1 1 4 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' cucumber ' :\n[ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 9 0 0 , 0 . 0 0 0 , 0 . 1 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 
0 0 0 ] , ' e n c r u s t i n g ' : [ 0 . 0 2 4 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 7 5 6 , 0 . 0 4 0 , 0 . 0 5 2 , 0 . 0 0 0 , 0 . 1 2 8 ] , ' o t h e r f a u n a ' : [ 0 . 0 2 1 , 0 . 0 1 6 , 0 . 0 0 0 , 0 . 0 3 7 , 0 . 8 0 5 , 0 . 0 4 2 , 0 . 0 0 0 , 0 . 0 7 9 ] , ' sponge ' :\n[ 0 . 0 0 0 , 0 . 0 1 9 , 0 . 0 0 0 , 0 . 0 6 2 , 0 . 0 4 4 , 0 . 8 4 4 , 0 . 0 2 5 , 0 . 0 0 6 ] , ' s t a r ' :\n[ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 3 0 , 0 . 0 0 0 , 0 . 9 7 0 , 0 . 0 0 0 ] , ' worm ' :\n[ 0 . 0 1 7 , 0 . 0 0 0 , 0 . 0 1 7 , 0 . 0 7 5 , 0 . 0 5 8 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 8 3 3 ] } , # KL: 0 . 1 8 3 0 4 3 5 5 0 4 7 0 6 8 0 0 8 'CIFAR10H ' : { ' a i r p l a n e ' : [ 0 . 9 5 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 1 3 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 3 7 , 0 . 0 0 0 ] , ' automobile ' :\n[ 0 . 0 0 0 , 0 . 9 7 8 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 2 2 ] , ' b i r d ' :\n[ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 9 2 5 , 0 . 0 0 8 , 0 . 0 3 3 , 0 . 0 1 7 , 0 . 0 0 0 , 0 . 0 1 7 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' cat ' :\n[ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 8 , 0 . 8 7 5 , 0 . 0 1 7 , 0 . 0 4 2 , 0 . 0 4 2 , 0 . 0 0 8 , 0 . 0 0 0 , 0 . 0 0 8 ] , ' deer ' :\n[ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 9 2 9 , 0 . 0 3 6 , 0 . 0 0 0 , 0 . 0 3 6 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' dog ' :\n[ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 4 4 , 0 . 0 0 0 , 0 . 9 5 6 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' f r o g ' :\n[ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 1 7 , 0 . 0 0 8 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 9 7 5 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' h o r s e ' :\n[ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 1 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' s h i p ' :\n[ 0 . 0 0 8 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 9 7 7 , 0 . 0 1 5 ] , ' truck ' :\n[ 0 . 0 1 4 , 0 . 0 2 9 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 9 ' Plankton ' : { ' bubbles ' : [ 0 . 9 5 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 5 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' c o l l o d a r i a b l a c k ' : [ 0 . 0 0 0 , 1 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' c o l l o d a r i a g l o b u l e ' : [ 0 . 0 0 0 , 0 . 0 3 3 , 0 . 9 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 6 7 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' cop ' : [ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 9 1 1 , 0 . 0 0 0 , 0 . 0 3 3 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 5 6 ] , ' det ' : [ 0 . 0 3 3 , 0 . 0 1 1 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 8 0 0 , 0 . 1 1 1 , 0 . 0 0 0 , 0 . 0 4 4 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' n o f i t ' : [ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 1 5 , 0 . 0 2 1 , 0 . 9 4 1 , 0 . 0 0 0 , 0 . 0 0 3 , 0 . 0 0 9 , 0 . 0 1 2 ] , ' p h y t o p u f f ' : [ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 2 5 , 0 . 0 0 0 , 0 . 9 0 0 , 0 . 0 7 5 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' p h y t o t u f t ' : [ 0 . 0 1 3 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 3 1 , 0 . 0 0 0 , 0 . 0 0 6 , 0 . 9 5 0 , 0 . 0 0 0 , 0 . 0 0 0 ] , ' p r o r h i z a r i a p h a e o d a r i a ' : [ 0 . 0 0 0 , 0 . 0 2 5 , 0 . 0 0 8 , 0 . 0 0 0 , 0 . 0 0 8 , 0 . 0 3 3 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 9 2 5 , 0 . 0 0 0 ] , ' shrimp ' : [ 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 1 7 , 0 . 1 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 
' T r e e v e r s i t y #1 ': { ' bark ' : [ 0 . 9 3 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 0 0 , 0 . 0 6 0 , 0 . 0 1 0 ] , ' bud ' :\n[ 0 . 0 0 5 , 0 . 8 1 9 , 0 . 0 6 7 , 0 . 0 6 7 , 0 . 0 3 8 , 0 . 0 0 5 ] , ' f l o w e r ' :\n[ 0 . 0 0 0 , 0 . 0 3 4 , 0 . 8 8 3 , 0 . 0 4 6 , 0 . 0 3 1 , 0 . 0 0 6 ] , ' f r u i t ' :\n[ 0 . 0 0 0 , 0 . 0 5 0 , 0 . 0 3 6 , 0 . 8 3 6 , 0 . 0 6 4 , 0 . 0 1 4 ] , ' l e a f ' :\n[ 0 . 0 0 0 , 0 . 0 3 3 , 0 . 0 0 0 , 0 . 0 3 3 , 0 . 8 8 9 , 0 . 0 4 4 ] , ' w h o l e p l a n t ' :\n[ 0 . 0 0 9 , 0 . 0 0 9 , 0 . 0 1 8 , 0 . 0 0 0 , 0 . 0 1 8 , 0 . 9 [ 0 . 0 6 0 , 0 . 7 8 0 , 0 . 1 6 0 ] , ' p l u m a g e i n j u r y ' :\n[ 0 . 0 1 2 , 0 . 0 3 9 , 0 . 9 4 9 ] } ," }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "A.6 Full result tables", "publication_ref": [], "table_ref": [ "tab_5", "tab_6", "tab_8", "tab_7", "tab_2", "tab_0", "tab_0", "tab_4", "tab_5", "tab_9", "tab_2" ], "text": "This subsection provides all mentioned extended and full results of the paper. They are best viewed digitally.\nLabel improvement Full results from subsection 3.2 in Table 5 and Table 6.\nBenchmark Full results from subsection 3.3 in Table 12, Table 7 and Table 8 for Figure 5a; in Table 13, Table 9 and Table 10 for Figure 5b; in Table 11,Table 14, Table 15, Table 16 for Table 3. Only CB -0.19 ± 0.01 -0.18 ± 0.01 --0.18 ± 0.01 QualityMRI CleverLabel (+ GT δ) -0.05 ± 0.00 -0.04 ± 0.00 --0.05 ± 0.00 QualityMRI Only CB -0.06 ± 0.00 -0.05 ± 0.01 --0.05 ± 0.01 Synthetic CleverLabel (+ GT δ) -0.07 ± 0.00 -0.06 ± 0.00 --0.04 ± 0.00 Synthetic Only CB -0.08 ± 0.00 -0.07 ± 0.00 --0.07 ± 0.00 Treeversity#1 CleverLabel (+ GT δ) -0.24 ± 0.01 -0.21 ± 0.01 --0.20 ± 0.01 Treeversity#1\nOnly CB -0.24 ± 0.01 -0.21 ± 0.01 --0.21 ± 0.01 Treeversity#6 CleverLabel (+ GT δ) -0.24 ± 0.01 -0.18 ± 0.01 --0.15 ± 0.00 Treeversity#6\nOnly CB -0.24 ± 0.01 -0.18 ± 0.01 --0.16 ± 0.01 Turkey CleverLabel (+ GT δ) -0.18 ± 0.02 -0.16 ± 0.01 --0.16 ± 0.01 Turkey Only CB -0.18 ± 0.00 -0.16 ± 0.01 --0.16 ± 0.01" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Baseline 0.69 ± 0.08( 1.00) 0.37 ± 0.04( 3.00) 0.39 ± 0.07( 5.00) 0.33 ± 0.04( 10.00) ---DivideMix 0.52 ± 0.06( 1.00) 0.56 ± 0.11( 3.00) 0.54 ± 0.05( 5.00) 0.64 ± 0.14( 10.00) ---Pseudo soft 0.52 ± 0.07( 1.00) 0.39 ± 0.04( 3.00) 0.36 ± 0.03( 5.00) 0.33 ± 0.02( 10.00) ---GT+CB 0.60 ± 0.29( 1.00) 0.32 ± 0.03( 3.00) 0.30 ± 0.01( 5.00) 0.30 ± 0.02( 10.00) ---CleverLabel (ours) 0.37 ± 0.02( 2.00) 0.33 ± 0.02( 4.00) 0.32 ± 0.03( 6.00) 0.30 ± 0.02( 11.00) 0.31 ± 0.03( 21.00) 0.29 ± 0.02( 51.00) 0.29 ± 0.02( 101.00) CleverLabel (ours) S=2.5 0.37 ± 0.02( 1.40) 0.33 ± 0.02( 2.20) 0.32 ± 0.03( 3.00) 0.30 ± 0.02( 5.00) 0.31 ± 0.03( 9.00) 0.29 ± 0.02( 21.00) 0.29 ± 0.02( 41.00) CleverLabel (ours) S=10 0.37 ± 0.02( 1.10) 0.33 ± 0.02( 1.30) 0.32 ± 0.03( 1.50) 0.30 ± 0.02( 2.00) 0.31 ± 0.03( 3.00) 0.29 ± 0.02( 6.00) 0.29 ± 0.02( 11.00) CleverLabel (ours) S=2.5 0.18 ± 0.01( 0.28) 0.18 ± 0.01( 0.44) 0.17 ± 0.01( 0.60) 0.17 ± 0.01( 1.00) 0.12 ± 0.01( 0.70) -0.12 ± 0.01( 1.10) 0.12 ± 0.01( 1.50) 0.12 ± 0.00( 2.50) 0.08 ± 0.01( 1.40) 0.07 ± 0.01( 2.20) 0.08 ± 0.00( 3.00) 0.08 ± 0.00( 5.00) Synthetic CleverLabel (ours) S=10 0.18 ± 0.01( 0.22) 0.18 ± 0.01( 0.26) 0.17 ± 0.01( 0.30) 0.17 ± 0.01( 0.40) 0.12 ± 0.01( 0.55) -0.12 ± 0.01( 0.65) 0.12 ± 0.01( 0.75) 0.12 ± 0.00( CleverLabel (ours) 0.90 ± 0.01 0.44 ± 0.00 0.32 ± 0.00 0.19 ± 0.00 0.12 ± 0.00 0.08 ± 0.00 0.07 ± 0.00 Synthetic CleverLabel (+ GT δ) 0.90 ± 0.01 0.49 ± 0.01 0.38 ± 0.00 0.21 ± 0.01 0.13 ± 
0.00 0.07 ± 0.00 0.06 ± 0.00 Synthetic Only CB 0.90 ± 0.01 0.42 ± 0.00 0.30 ± 0.01 0.19 ± 0.00 0.13 ± 0.00 0.09 ± 0.00 0.08 ± 0.00 Synthetic Only CB (µ=0.25) 0.64 ± 0.01 0.39 ± 0.00 0.34 ± 0.01 0.27 ± 0.00 ---Synthetic\nOnly BC -1.52 ± 0.01 1.16 ± 0.01 0.54 ± 0.01 ---Treeversity#1 CleverLabel (ours) 0.76 ± 0.04 0.27 ± 0.02 0.20 ± 0.01 0.12 ± 0.01 0.08 ± 0.00 0.06 ± 0.00 0.05 ± 0.00 Treeversity#1 CleverLabel (+ GT δ) 0.76 ± 0.04 0.27 ± 0.02 0.20 ± 0.01 0.12 ± 0.01 0.08 ± 0.00 0.05 ± 0.00 0.04 ± 0.00 Treeversity#1 Only CB 0.76 ± 0.04 0.27 ± 0.02 0.20 ± 0.01 0.12 ± 0.01 0.09 ± 0.00 0.07 ± 0.00 0.06 ± 0.00 Treeversity#1 Only CB (µ=0.25) 0.64 ± 0.03 0.30 ± 0.01 0.25 ± 0.00 0.20 ± 0.00 ---Treeversity#1\nOnly BC -0.68 ± 0.04 0.50 ± 0.02 0.23 ± 0.01 ---Treeversity#6 CleverLabel (ours) 1.11 ± 0.03 0.46 ± 0.01 0.30 ± 0.01 0.18 ± 0.01 0.11 ± 0.00 0.07 ± 0.00 0.06 ± 0.00 Treeversity#6 CleverLabel (+ GT δ) 1.11 ± 0.03 0.50 ± 0.01 0.36 ± 0.01 0.20 ± 0.00 0.12 ± 0.00 0.07 ± 0.00 0.06 ± 0.00 Treeversity#6\nOnly CB 1.11 ± 0.03 0.45 ± 0.01 0.29 ± 0.00 0.17 ± 0.00 0.12 ± 0.00 0.09 ± 0.00 0.08 ± 0.00 Treeversity#6 Only CB (µ=0.25) 0.74 ± 0.02 0.42 ± 0.01 0.33 ± 0.01 0.27 ± 0.01 ---Treeversity#6\nOnly BC -1.85 ± 0.01 1.38 ± 0.03 0.61 ± 0.02 ---Turkey CleverLabel (ours) 0.38 ± 0.03 0.14 ± 0.01 0.10 ± 0.01 0.06 ± 0.00 0.04 ± 0.00 0.03 ± 0.00 0.02 ± 0.00 Turkey CleverLabel (+ GT δ) 0.38 ± 0.03 0.14 ± 0.01 0.10 ± 0.01 0.06 ± 0.00 0.04 ± 0.00 0.03 ± 0.00 0.02 ± 0.00 Turkey Only CB 0.38 ± 0.03 0.14 ± 0.01 0.10 ± 0.01 0.06 ± 0.01 0.04 ± 0.00 0.03 ± 0.00 0.03 ± 0.00 Turkey Only CB (µ=0.25) 0.27 ± 0.02 0.14 ± 0.01 0.12 ± 0.01 0.10 ± 0.01 ---Turkey\nOnly BC -0.43 ± 0.03 0.28 ± 0.02 0.13 ± 0.01 ---" } ]
High-quality data is crucial for the success of machine learning, but labeling large datasets is often a time-consuming and costly process. While semi-supervised learning can help mitigate the need for labeled data, label quality remains an open issue due to ambiguity and disagreement among annotators. Thus, we use proposal-guided annotations as one option which leads to more consistency between annotators. However, proposing a label increases the probability of the annotators deciding in favor of this specific label. This introduces a bias which we can simulate and remove. We propose a new method CleverLabel for Cost-effective LabEling using Validated proposal-guidEd annotations and Repaired LABELs. CleverLabel can reduce labeling costs by up to 30.0%, while achieving a relative improvement in Kullback-Leibler divergence of up to 29.8% compared to the previous state-of-the-art on a multi-domain real-world image classification benchmark. CleverLabel offers a novel solution to the challenge of efficiently labeling large datasets while also improving the label quality.
Label Smarter, Not Harder: CleverLabel for Faster Annotation of Ambiguous Image Classification with Higher Quality
[ { "figure_caption": "Fig. 3 :3Fig. 3: Visual comparison the uncertainty bins for real vs. simulated proposal acceptance on the MiceBone dataset, Normalized per row / per proposal uncertainty bin, meaning that e.g. in Real M r that if a class with soft ground truth probability 0.21-0.4 is proposed, in 0.10 of cases a class with ground truth probability .041-0.6 is annotated. Hence, some cells are 0 by default.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Evaluation of label improvement on synthetic data created with SPA and real user study across different amounts of annotations. Results are clamped for visualization to the range 0 to 1.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) Comparisons of benchmark results with 100% in. sup. (b) Pareto front visualization of benchmark results with 20,50, 100% in. sup.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Left: Compares previous state-of-the-art (first three), new baseline (GT+CB) and our method (CleverLabel) including different speedups S. Right:The marker and color define the method, while the line style to the next node on the x-axis visualizes the initial supervision, logarithmic scaled budgets", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "the indices of the N annotations (a ) with an accepted proposal and", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "the annotations not accepting one of the two proposals. Firstly, we look at the corner cases A ≈ |Mρ x | N or A := |M ρ x | N are zero or one. In these cases, we can approximate δ directly. If A ≈ |Mρ x | N = 1, we can make the simplified conclusion that P (L x = ρ x ) = 1 and thus all other elements of the distribution are zero P (L x = ρ x ) = 0. Thus, we can approximate δ ≈ A because P (L x = ρ x ) = 0 and only δ can influence the result. If A ≈ |Mρ x | N", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 # c h e c k i f t h e c a l c u l a t e d d i s t r i b u t i o n # s h o u l d be c o r r e c t e d w i t h BC i f c o r r e c t w i t h o f f s e t < 0 :#10# p r o p o s e d c l a s s : t h e p r o p o s e d c l a s s l a b e l # aka \\ r h o x i n paper # e x e c u t e s i m u a l t i o n ( m u l t i p l e t i m e s ) s i m u l a t e d l a b e l = np . z e r o s ( ( len ( s o f t g t ) ) ) f o r j in range ( s i m u l a t i o n r e p e t i t i o n s ) : # c a l c u l a t e p r o p o s a l a c c e p t a n c e a c c e p t r a t e = p r o p o s a l a c c e p t a n c e o f f s e t +\\ ( ( 0 . 
9 9p r o p o s a l a c c e p t a n c e o f f s e t ) ) * \\ s o f t g t [ p r o p o s e d c l a s s ] s i m u l a t e d c l a s s = -1 # i d e a : a c c e p t r a t e i n c r e a s e s w i t h # r a i s i n g s o f t g t v a l u e # random g e n e r a t e d v a l u e b e t w e e n 0 and 1 i f random ( ) <= a c c e p t r a t e : s i m u l a t e d c l a s s = p r o p o s e d c l a s s e l s e : # i d e a 2 : s e l e c t b a s e d on s o f t g t # p r o p o s e d can not be s e l e c t e d anymore max value = 1s o f t g t [ p r o p o s e d c l a s s ] r a n d s e l e c t = random ( ) * max value sum gt = 0 # i n c r e a s e c o l l e c t i v e p r o b a b i l i t y # u n t i l r a n d s e l e c t i s s m a l l e r f o r k , g in enumerate ( s o f t g t ) : i f p r o p o s e d c l a s s != k : # i g n o r e p r o p o s e d e l e m e n t # u p d a t e c o l l e c t i v e p r o b a b i l i t y sum gt += g i f r a n d s e l e c t <= sum gt : s i m u l a t e d c l a s s = k break a s s e r t s i m u l a t e d c l a s s != -1 s i m u l a t e d l a b e l [ s i m u l a t e d c l a s s ] += No c o r r e c t i o n s return s i m u l a t e d l a b e l / s i m u l a t i o n r e p e t i t i o n s e l s e : # c o r r e c t w i t h e s t i m a t e d o f f s e t # need t o l o w e r t h i s s c o r e # b a s e d on c o n f i r m a t i o n o f f s e t pc = s i m u l a t e d l a b e l [ p r o p o s e d c l a s s ] / s i m u l a t i o n r e p e t i t i o n s c o r r e c t e d = ( pcc o r r e c t w i t h o f f s e t ) / ( 0 . 9 9c o r r e c t w i t h o f f s e t ) c o r r e c t e d = min( 1 , max( 0 , c o r r e c t e d ) ) m = max( 1 , s i m u l a t i o n r e p e t i t i o n ss i m u l a t e d l a b e l [ p r o p o s e d c l a s s ] ) p = s i m u l a t e d l a b e l / m p * = ( 1c o r r e c t e d ) # r e s c a l e w i t h l e f t o v e r s p [ p r o p o s e d c l a s s ] = c o r r e c t e d return p A.5 Used transition matrices", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": ": { ' h e a d i n j u r y ' :[ 0 . 8 3 3 , 0 . 0 4 7 , 0 . 
1 2 0 ] , ' n o t i n j u r e d ' :", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "20) 0.26 ± 0.02( 3.00) 0.25 ± 0.01( 5.00) 0.27 ± 0.02( 9.00) 0.25 ± 0.02( 21.00) 0.24 ± 0.01( 41.00) CleverLabel (ours) S=10 0.29 ± 0.02( 1.10) 0.28 ± 0.01( 1.30) 0.26 ± 0.02( 1.50) 0.25 ± 0.01( 2.00) 0.27 ± 0.02( 3.00) 0.25 ± 0.02( 6.00) 0.24 ± 0.01( 11.00) ", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Used offsets (δ) for proposal acceptance", "figure_data": "datasetBenthicCIFAR10HMiceBonePigPlanktonUser StudyN/A9.73%36.36%N/A57.84%Calculated40.17%0.00%41.03%25.72%64.81%datasetQualityMRISynthethicTreeversity#1 Treeversity#6 TurkeyUser StudyN/AN/AN/AN/A21.64%Calculated0.00%26.08%26.08%20.67%14.17%", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of investigated methods for the proposal acceptance, results within a one percent boundary of the best results are marked bold † datasets used only for validation,", "figure_data": "SOD in [%]CIFAR10H †MiceBone †PlanktonTurkeyACCEPT+GT (Ours)1.97 ± 0.06 2.57 ± 0.16 3.62 ± 0.05 4.77 ± 0.13ACCEPT+LIKELY1.20 ± 0.25 7.88 ± 0.065.50 ± 0.157.21 ± 0.322*ACCEPT+GT2.79 ± 0.223.38 ± 0.26 3.25 ± 0.13 3.78 ± 0.282*ACCEPT+RANDOM 3.03 ± 0.114.76 ± 0.114.00 ± 0.31 4.64 ± 0.28RANDOM86.37 ± 0.1350.92 ± 0.6481.51 ± 0.1854.83 ± 0.93GT5.00 ± 0.0310.34 ± 0.1910.59 ± 0.115.69 ± 0.30LIKELY4.08 ± 0.0016.41 ± 0.0011.85 ± 0.0011.47 ± 0.00", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Benchmark results ablation study, first block of rows KL result on normal benchmark, second block of rows KL on benchmark, last block of rows KL results on benchmark but with ViT as backbone, all results are median aggregations across the datasets ± 0.03 0.32 ± 0.01 0.25 ± 0.01 0.16 ± 0.00 0.11 ± 0.00 0.08 ± 0.00 0.07 ± 0.00 CleverLabel (+ GT δ) 0.68 ± 0.03 0.41 ± 0.01 0.29 ± 0.01 0.16 ± 0.00 0.10 ± 0.00 0.05 ± 0.00 0.04 ± 0.00", "figure_data": "method135102050100CleverLabel (ours)0.29 ± 0.02 0.28 ± 0.01 0.26 ± 0.02 0.25 ± 0.01 0.27 ± 0.02 0.25 ± 0.02 0.24 ± 0.01CleverLabel (+ GT δ) 0.30 ± 0.02 0.28 ± 0.01 0.27 ± 0.01 0.25 ± 0.02 0.24 ± 0.01 0.24 ± 0.01 0.24 ± 0.01Only CB0.34 ± 0.03 0.28 ± 0.01 0.27 ± 0.01 0.25 ± 0.02 0.25 ± 0.01 0.25 ± 0.02 0.25 ± 0.02Only CB (µ=0.25)0.33 ± 0.02 0.28 ± 0.01 0.33 ± 0.02 0.29 ± 0.01---Only BC-0.30 ± 0.02 0.29 ± 0.02 0.26 ± 0.02---CleverLabel (ours) 0.68 Only CB 0.68 ± 0.03 0.32 ± 0.01 0.25 ± 0.01 0.16 ± 0.01 0.12 ± 0.00 0.09 ± 0.00 0.08 ± 0.00Only CB (µ=0.25)0.55 ± 0.02 0.33 ± 0.01 0.29 ± 0.01 0.24 ± 0.01---Only BC-1.22 ± 0.04 0.78 ± 0.02 0.36 ± 0.02---CleverLabel (+ GT δ)-0.22 ± 0.01-0.18 ± 0.01--0.16 ± 0.01Only CB-0.20 ± 0.01-0.18 ± 0.01--0.17 ± 0.01", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "54. Singh, A., Nowak, R., Zhu, J.: Unlabeled data: Now it helps, now it doesn't. Wei, J., Zhu, Z., Cheng, H., Liu, T., Niu, G., Liu, Y.: Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations (2021) 61. Wei, J., Zhu, Z., Luo, T., Amid, E., Kumar, A., Liu, Y.: To Aggregate or Not?Learning with Separate Noisy Labels (jun 2022), http://arxiv.org/abs/2206. 07181 62. Yun, S., Oh, S.J., Heo, B., Han, D., Choe, J., Chun, S.: Re-Labeling ImageNet:", "figure_data": "Ad-vances in neural information processing systems 21 (2008)55. 
Sohn, K., Berthelot, D., Li, C.L., Zhang, Z., Carlini, N., Cubuk, E.D., Kurakin,A., Zhang, H., Raffel, C.: FixMatch: Simplifying Semi-Supervised Learning withConsistency and Confidence. Advances in Neural Information Processing Systems33 pre-proceedings (NeurIPS 2020) (2020)56. Tarling, P., Cantor, M., Clapés, A., Escalera, S.: Deep learning with self-supervisionand uncertainty regularization to count fish in underwater images pp. 1-22 (2021)57. Tifrea, A., Clarysse, J., Yang, F.: Uniform versus uncertainty sampling: Whenbeing active is less efficient than staying passive (i) (2022), http://arxiv.org/abs/2212.0077258. Uijlings, J., Mensink, T., Ferrari, V.: The Missing Link: Finding label relationsacross datasets (jun 2022), http://arxiv.org/abs/2206.0445359. Vasudevan, V., Caine, B., Gontijo-Lopes, R., Fridovich-Keil, S., Roelofs, R.: Whendoes dough become a bagel? Analyzing the remaining mistakes on ImageNet(2022), http://arxiv.org/abs/2205.0459660.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of different values 1 * for the proposal acceptance across normal and DC3 predicions.", "figure_data": "Prediction1 * CIFAR10H MiceBone Plankton TurkeyNormal0.816.6814.1515.5915.17Normal0.99.167.726.778.19Normal0.955.504.922.705.44Normal0.992.435.094.264.24DC3 Certain0.818.5217.6516.1015.58DC3 Certain0.99.839.147.678.94DC3 Certain0.955.665.383.555.92DC3 Certain0.991.952.523.214.77DC3 Ambiguous 0.816.9912.6416.5113.46DC3 Ambiguous 0.910.838.159.287.33DC3 Ambiguous 0.957.626.285.714.73DC3 Ambiguous 0.995.365.954.213.44", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Simulation of SPA on Synthetic data, full results for visualized median aggregation in Figure4a", "figure_data": "offsetmethod135 10 20 50 1000GT3.69 1.47 0.89 0.43 0.21 0.08 0.040GT+CB0.91 0.35 0.21 0.12 0.07 0.05 0.040SPA3.67 1.46 0.90 0.43 0.20 0.08 0.030Only BC3.72 1.49 0.90 0.42 0.20 0.07 0.040Only CB0.56 0.46 0.39 0.37 0.33 0.34 0.320CleverLabel (+ GT δ) 0.90 0.34 0.21 0.12 0.07 0.05 0.040CleverLabel0.93 0.36 0.23 0.14 0.09 0.06 0.060.1GT3.67 1.46 0.89 0.44 0.21 0.08 0.040.1GT+CB0.86 0.32 0.21 0.12 0.08 0.05 0.040.1SPA3.95 1.58 0.94 0.46 0.22 0.10 0.060.1Only BC3.92 1.56 0.96 0.51 0.25 0.10 0.050.1Only CB0.58 0.47 0.41 0.37 0.35 0.33 0.320.1 CleverLabel (+ GT δ) 0.94 0.36 0.24 0.14 0.09 0.06 0.040.1CleverLabel0.95 0.36 0.24 0.14 0.09 0.06 0.050.4GT3.71 1.48 0.89 0.45 0.20 0.08 0.040.4GT+CB0.87 0.34 0.22 0.12 0.08 0.05 0.040.4SPA4.75 2.22 1.44 0.82 0.48 0.31 0.250.4Only BC4.64 2.56 1.69 0.86 0.42 0.18 0.090.4Only CB0.75 0.61 0.54 0.45 0.39 0.33 0.340.4 CleverLabel (+ GT δ) 1.20 0.63 0.44 0.24 0.14 0.08 0.060.4CleverLabel1.21 0.62 0.46 0.31 0.26 0.21 0.190.6GT3.75 1.47 0.90 0.43 0.21 0.08 0.040.6GT+CB0.96 0.35 0.21 0.12 0.08 0.05 0.040.6SPA5.18 3.08 2.24 1.33 0.83 0.56 0.480.6Only BC5.19 3.25 2.63 1.37 0.67 0.28 0.150.6Only CB0.89 0.71 0.65 0.55 0.47 0.36 0.340.6 CleverLabel (+ GT δ) 1.47 0.92 0.64 0.37 0.22 0.11 0.070.6CleverLabel1.47 0.91 0.71 0.56 0.50 0.46 0.46", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Simulation of SPA on Real data, full results for visualized median aggregation in Figure4b", "figure_data": "datasetmethod135 12CIFAR10HGT0.71 0.25 0.21 0.13CIFAR10HGT+CB0.42 0.11 0.11 0.07CIFAR10HSPA0.81 0.53 0.37 0.30CIFAR10HOnly BC0.81 0.53 0.37 0.30CIFAR10HOnly CB0.42 0.34 0.28 0.20CIFAR10H CleverLabel (+ GT δ) 0.47 0.26 0.20 0.16CIFAR10HCleverLabel0.47 0.26 0.19 0.16TurkeyGT1.37 
0.51 0.27 0.11TurkeyGT+CB0.35 0.14 0.08 0.04TurkeySPA2.64 1.87 0.66 0.22TurkeyOnly BC2.64 1.86 0.65 0.22TurkeyOnly CB0.49 0.47 0.38 0.15TurkeyCleverLabel (+ GT δ) 0.77 0.61 0.25 0.08TurkeyCleverLabel0.77 0.61 0.26 0.08MiceBoneGT2.36 0.83 0.51 0.23MiceBoneGT+CB0.48 0.17 0.12 0.06MiceBoneSPA2.58 2.36 1.08 0.57MiceBoneOnly BC2.58 2.36 1.17 0.61MiceBoneOnly CB0.31 0.32 0.29 0.21MiceBone CleverLabel (+ GT δ) 0.56 0.54 0.29 0.15MiceBoneCleverLabel0.56 0.54 0.26 0.16PlanktonGT0.91 0.35 0.20 0.07PlanktonGT+CB0.54 0.20 0.12 0.04PlanktonSPA1.71 1.39 1.00 0.59PlanktonOnly BC1.71 1.65 1.28 0.77PlanktonOnly CB1.00 1.75 1.74 0.77Plankton CleverLabel (+ GT δ) 1.10 1.12 0.85 0.50PlanktonCleverLabel1.10 0.84 0.58 0.39", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Benchmark results with 100% in. sup. with median aggregation, metric: KL ± STD (budget), columns are the number of annotations used, S describes the expected speedup ± 0.02( 2.00) 0.28 ± 0.01( 4.00) 0.26 ± 0.02( 6.00) 0.25 ± 0.01( 11.00) 0.27 ± 0.02( 21.00) 0.25 ± 0.02( 51.00) 0.24 ± 0.01( 101.00) CleverLabel (ours) S=2.5 0.29 ± 0.02( 1.40) 0.28 ± 0.01( 2.", "figure_data": "method135102050100Baseline0.52 ± 0.04( 1.00) 0.34 ± 0.02( 3.00) 0.37 ± 0.03( 5.00) 0.29 ± 0.02( 10.00)---DivideMix0.44 ± 0.05( 1.00) 0.47 ± 0.03( 3.00) 0.41 ± 0.04( 5.00) 0.45 ± 0.04( 10.00)---Pseudo soft0.43 ± 0.07( 1.00) 0.37 ± 0.03( 3.00) 0.34 ± 0.02( 5.00) 0.30 ± 0.02( 10.00)---GT+CB0.44 ± 0.03( 1.00) 0.26 ± 0.01( 3.00) 0.25 ± 0.01( 5.00) 0.24 ± 0.01( 10.00)---CleverLabel (ours)0.29", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Benchmark results with 100% in. sup. without aggregation, metric: KL ± STD (budget), columns are the number of annotations used, S describes the expected speedup ± 0.03( 2.00) 0.76 ± 0.06( 4.00) 0.71 ± 0.05( 6.00) 0.67 ± 0.02( 11.00) 0.68 ± 0.02( 21.00) 0.67 ± 0.06( 51.00) 0.65 ± 0.03( 101.00) Benthic CleverLabel (ours) S=2.5 0.78 ± 0.03( 1.40) 0.76 ± 0.06( 2.20) 0.71 ± 0.05( 3.00) 0.67 ± 0.02( 5.00) 0.68 ± 0.02( 9.00) 0.67 ± 0.06( 21.00) 0.65 ± 0.03( 41.00) Benthic CleverLabel (ours) S=10 0.78 ± 0.03( 1.10) 0.76 ± 0.06( 1.30) 0.71 ± 0.05( 1.50) 0.67 ± 0.02( 2.00) 0.68 ± 0.02( 3.00) 0.67 ± 0.06( 6.00) 0.65 ± 0.03( 11.00)", "figure_data": "datasetmethod135102050100BenthicBaseline1.17 ± 0.04( 1.00) 0.81 ± 0.04( 3.00) 0.76 ± 0.04( 5.00) 0.75 ± 0.03( 10.00)---BenthicDivideMix0.87 ± 0.07( 1.00) 0.82 ± 0.00( 3.00) 0.84 ± 0.02( 5.00) 0.83 ± 0.05( 10.00)---BenthicPseudo soft1.00 ± 0.08( 1.00) 0.76 ± 0.06( 3.00) 0.71 ± 0.05( 5.00) 0.70 ± 0.02( 10.00)---BenthicGT+CB0.81 ± 0.04( 1.00) 0.68 ± 0.02( 3.00) 0.67 ± 0.00( 5.00) 0.65 ± 0.01( 10.00)---BenthicCleverLabel (ours)0.78", "figure_id": "tab_8", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Benchmark results ablation study, metric: KL ± STD but evaluated on a transformer backbone, columns are the number of annotations used GT δ) -0.50 ± 0.02 -0.46 ± 0.01 --0.43 ± 0.01 Benthic Only CB -0.47 ± 0.00 -0.46 ± 0.02 --0.45 ± 0.01 CIFAR10H CleverLabel (+ GT δ) -0.10 ± 0.00 -0.10 ± 0.00 --0.10 ± 0.00 CIFAR10H Only CB -0.10 ± 0.00 -0.10 ± 0.00 --0.10 ± 0.00 MiceBone CleverLabel (+ GT δ) -0.22 ± 0.02 -0.19 ± 0.02 --0.17 ± 0.02 MiceBone Only CB -0.21 ± 0.02 -0.19 ± 0.01 --0.19 ± 0.01 Pig CleverLabel (+ GT δ) -0.45 ± 0.01 -0.38 ± 0.01 --0.34 ± 0.01 Pig Only CB -0.44 ± 0.01 -0.39 ± 0.01 --0.36 ± 0.01 Plankton CleverLabel (+ GT δ) -0.22 ± 0.01 -0.19 ± 0.01 --0.16 ± 0.01 Plankton", "figure_data": 
"datasetmethod1351020 50100BenthicCleverLabel (+", "figure_id": "tab_9", "figure_label": "16", "figure_type": "table" } ]
Lars Schmarje; Vasco Grossmann; Tim Michels; Jakob Nazarenus; Monty Santarossa; Claudius Zelenka; Reinhard Koch
[ { "authors": "P F E E Addison; D J Collins; R Trebilco; S Howe; N Bax; P Hedge; G Jones; P Miloslavich; C Roelfsema; M Sams; R D Stuart-Smith; P Scanes; P Von Baumgarten; A Mcquatters-Gollop", "journal": "ICES Journal of Marine Science", "ref_id": "b0", "title": "A new wave of marine evidence-based management: Emerging challenges and solutions to transform monitoring, evaluating, and reporting", "year": "2018" }, { "authors": "E Arazo; D Ortego; P Albert; N E O'connor; K Mcguinness", "journal": "", "ref_id": "b1", "title": "Pseudolabeling and confirmation bias in deep semi-supervised learning", "year": "2020" }, { "authors": "V Basile; M Fell; T Fornaciari; D Hovy; S Paun; B Plank; M Poesio; A Uma", "journal": "BPPF", "ref_id": "b2", "title": "We Need to Consider Disagreement in Evaluation", "year": "2021" }, { "authors": "D Berthelot; N Carlini; I Goodfellow; N Papernot; A Oliver; C A Raffel", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Mixmatch: A holistic approach to semi-supervised learning", "year": "2019" }, { "authors": "L Beyer; O J Hénaff; A Kolesnikov; X Zhai; A Van Den Oord", "journal": "", "ref_id": "b4", "title": "Are we done with ImageNet?", "year": "2020" }, { "authors": "J Brünger; S Dippel; R Koch; C Veit", "journal": "Animal", "ref_id": "b5", "title": "Tailception': using neural networks for assessing tail lesions on pictures of pig carcasses", "year": "2019" }, { "authors": "M Collier; B Mustafa; E Kokiopoulou; R Jenatton; J Berent", "journal": "", "ref_id": "b6", "title": "Correlated Input-Dependent Label Noise in Large-Scale Image Classification", "year": "2021" }, { "authors": "K M Collins; U Bhatt; A Weller", "journal": "", "ref_id": "b7", "title": "Eliciting and Learning with Soft Labels from Every Annotator", "year": "2022-07" }, { "authors": "C Cortes; N D Lawrence", "journal": "", "ref_id": "b8", "title": "Inconsistency in conference peer review: revisiting the 2014 neurips experiment", "year": "2021" }, { "authors": "A M Davani; M Díaz; V Prabhakaran", "journal": "", "ref_id": "b9", "title": "Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations", "year": "2021" }, { "authors": "M Desmond; E Duesterwald; K Brimijoin; M Brachman; Q Pan", "journal": "PMLR", "ref_id": "b10", "title": "Semiautomated data labeling", "year": "2021" }, { "authors": "M Desmond; M Muller; Z Ashktorab; C Dugan; E Duesterwald; K Brimijoin; C Finegan-Dollak; M Brachman; A Sharma; N N Joshi; Q Pan", "journal": "Association for Computing Machinery", "ref_id": "b11", "title": "Increasing the speed and accuracy of data labeling through an ai assisted interface", "year": "2021" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly; J Uszkoreit; N Houlsby", "journal": "ICLR", "ref_id": "b12", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2021-10" }, { "authors": "Z Gao; F K Sun; M Yang; S Ren; Z Xiong; M Engeler; A Burazer; L Wildling; L Daniel; D S Boning", "journal": "", "ref_id": "b13", "title": "Learning from Multiple Annotator Noisy Labels via Sample-wise Label Fusion", "year": "2022" }, { "authors": "M L Gordon; K Zhou; K Patel; T Hashimoto; M S Bernstein", "journal": "ACM", "ref_id": "b14", "title": "The Disagreement Deconvolution: Bringing Machine Learning Performance Metrics In Line With Reality", "year": "2021" }, { "authors": "V Grossmann; L Schmarje; R Koch", "journal": "", 
"ref_id": "b15", "title": "Beyond Hard Labels: Investigating data label distributions", "year": "2022" }, { "authors": "K Gu; X Masotto; V Bachani; B Lakshminarayanan; J Nikodem; D Yin", "journal": "Machine Learning", "ref_id": "b16", "title": "An instance-dependent simulation framework for learning with label noise", "year": "2022" }, { "authors": "K He; X Chen; S Xie; Y Li; P Dollár; R Girshick", "journal": "", "ref_id": "b17", "title": "Masked Autoencoders Are Scalable Vision Learners", "year": "2021-11" }, { "authors": "D Hendrycks; M Mazeika; S Kadavath; D Song", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Using self-supervised learning can improve model robustness and uncertainty", "year": "2019" }, { "authors": "J M Jachimowicz; S Duncan; E U Weber; E J Johnson", "journal": "Behavioural Public Policy", "ref_id": "b19", "title": "When and why defaults influence decisions: A meta-analysis of default effects", "year": "2019" }, { "authors": "H Jung; Y Park; M Lease", "journal": "", "ref_id": "b20", "title": "Predicting next label quality: A time-series model of crowdwork", "year": "2014-09" }, { "authors": "A Kolesnikov; L Beyer; X Zhai; J Puigcerver; J Yung; S Gelly; N Houlsby", "journal": "", "ref_id": "b21", "title": "Big Transfer (BiT): General Visual Representation Learning", "year": "2020" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b22", "title": "Others: Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "S Kullback; R A Leibler", "journal": "Ann. Math. Statist", "ref_id": "b24", "title": "On Information and Sufficiency", "year": "1951" }, { "authors": "D H Lee", "journal": "", "ref_id": "b25", "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "year": "2013" }, { "authors": "J Li; R Socher; S C H Hoi", "journal": "", "ref_id": "b26", "title": "DivideMix: Learning with Noisy Labels as Semisupervised Learning", "year": "2020" }, { "authors": "Y F Li; D M Liang", "journal": "Frontiers of Computer Science", "ref_id": "b27", "title": "Safe semi-supervised learning: a brief introduction", "year": "2019" }, { "authors": "D Lopresti; G Nagy", "journal": "IEEE", "ref_id": "b28", "title": "Optimal data partition for semi-automated labeling", "year": "2012" }, { "authors": "M Lukasik; S Bhojanapalli; A K Menon; S Kumar", "journal": "", "ref_id": "b29", "title": "Does label smoothing mitigate label noise", "year": "2020-03" }, { "authors": "T Lukov; N Zhao; G H Lee; S N Lim", "journal": "", "ref_id": "b30", "title": "Teaching with Soft Label Smoothing for Mitigating Noisy Labels in Facial Expressions", "year": "2022" }, { "authors": "M Mazeika; E Tang; A Zou; S Basart; J S Chan; D Song; D Forsyth; J Steinhardt; D Hendrycks", "journal": "", "ref_id": "b31", "title": "How Would The Viewer Feel? 
Estimating Wellbeing From Video Scenarios (NeurIPS)", "year": "2022" }, { "authors": "I Misra; L V Maaten", "journal": "", "ref_id": "b32", "title": "Self-supervised learning of pretext-invariant representations", "year": "2020" }, { "authors": "M Motamedi; N Sakharnykh; T Kaldewey", "journal": "", "ref_id": "b33", "title": "A Data-Centric Approach for Training Deep Neural Networks with Less Data", "year": "2021" }, { "authors": "R Müller; S Kornblith; G Hinton", "journal": "", "ref_id": "b34", "title": "When Does Label Smoothing Help? Advances in neural information processing systems", "year": "2019-06" }, { "authors": "A Naeem; M S Farooq; A Khelifi; A Abid", "journal": "IEEE Access", "ref_id": "b35", "title": "Malignant melanoma classification using deep learning: datasets, performance measurements, challenges and opportunities", "year": "2020" }, { "authors": "T Nguyen; G Ilharco; M Wortsman; S Oh; L Schmidt", "journal": "", "ref_id": "b36", "title": "Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP", "year": "2022" }, { "authors": "C G Northcutt; A Athalye; J Mueller", "journal": "", "ref_id": "b37", "title": "Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks", "year": "2021" }, { "authors": "C G Northcutt; L Jiang; I L Chuang", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b38", "title": "Confident learning: Estimating uncertainty in dataset labels", "year": "2021" }, { "authors": "E A Ooms; H M Zonderland; M J C Eijkemans; M Kriege; B Mahdavian Delavary; C W Burger; A C Ansink", "journal": "The Breast", "ref_id": "b39", "title": "Mammography: Interobserver variability in breast density assessment", "year": "2007" }, { "authors": "D P Papadopoulos; E Weber; A Torralba", "journal": "", "ref_id": "b40", "title": "Scaling up instance annotation via label propagation", "year": "2021" }, { "authors": "B N Patel; L Rosenberg; G Willcox; D Baltaxe; M Lyons; J Irvin; P Rajpurkar; T Amrhein; R Gupta; S Halabi; C Langlotz; E Lo; J Mammarappallil; A J Mariano; G Riley; J Seekins; L Shen; E Zucker; M Lungren", "journal": "npj Digital Medicine", "ref_id": "b41", "title": "Human-machine partnership with artificial intelligence for chest radiograph diagnosis", "year": "2019" }, { "authors": "J Peterson; R Battleday; T Griffiths; O Russakovsky", "journal": "", "ref_id": "b42", "title": "Human uncertainty makes classification more robust", "year": "2019" }, { "authors": "P Ren; Y Xiao; X Chang; P Y Huang; Z Li; B B Gupta; X Chen; X Wang", "journal": "ACM computing surveys (CSUR)", "ref_id": "b43", "title": "A survey of deep active learning", "year": "2021" }, { "authors": "A Saleh; I H Laradji; D A Konovalov; M Bradley; D Vazquez; M Sheaves", "journal": "Scientific Reports", "ref_id": "b44", "title": "A realistic fish-habitat dataset to evaluate algorithms for underwater visual analysis", "year": "2020" }, { "authors": "L Schmarje; J Brünger; M Santarossa; S M Schröder; R Kiko; R Koch", "journal": "Sensors", "ref_id": "b45", "title": "Fuzzy Overclustering: Semi-Supervised Classification of Fuzzy Labels with Overclustering and Inverse Cross-Entropy", "year": "2021" }, { "authors": "L Schmarje; V Grossmann; C Zelenka; S Dippel; R Kiko; M Oszust; M Pastell; J Stracke; A Valros; N Volkmann; R Koch", "journal": "", "ref_id": "b46", "title": "Is one annotation enough? 
A data-centric image classification benchmark for noisy and ambiguous label estimation", "year": "2022" }, { "authors": "L Schmarje; M Santarossa; S M Schröder; C Zelenka; R Kiko; J Stracke; N Volkmann; R Koch", "journal": "", "ref_id": "b47", "title": "A data-centric approach for improving ambiguous labels with combined semi-supervised classification and clustering", "year": "2022" }, { "authors": "S M Schröder; R Kiko; R Koch", "journal": "Sensors", "ref_id": "b48", "title": "MorphoCluster: Efficient Annotation of Plankton images by Clustering", "year": "2020" }, { "authors": "C Schulz; C M Meyer; J Kiesewetter; M Sailer; E Bauer; M R Fischer; F Fischer; I Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "Analysis of automatic annotation suggestions for hard discourse-level tasks in expert domains", "year": "2019-07" }, { "authors": "P Schustek; R Moreno-Bote", "journal": "PLOS Computational Biology", "ref_id": "b50", "title": "Instance-based generalization for human judgments about uncertainty", "year": "2018-06" }, { "authors": "F A Shah; K Sirts; D Pfahl", "journal": "", "ref_id": "b51", "title": "The impact of annotation guidelines and annotated data on extracting app features from app reviews", "year": "2018" }, { "authors": "V S Sheng; F Provost", "journal": "", "ref_id": "b52", "title": "Get Another Label ? Improving Data Quality and Data Mining Using Multiple , Noisy Labelers Categories and Subject Descriptors", "year": "2008" } ]
[ { "formula_coordinates": [ 5, 134.77, 134.82, 345.83, 97.3 ], "formula_id": "formula_0", "formula_text": "Require: Proposal ρx; a x i ∈ {0} K Calculate acceptance probability A r ← random(0,1) if r ≤ A then Accept proposal a x i,ρx ← 1 else Sample from remaining classes k ← sampled from P (L x = k | ρx = k) a x i,k ← 1 end if" }, { "formula_coordinates": [ 5, 296.42, 518.98, 104.08, 16.8 ], "formula_id": "formula_1", "formula_text": "P (L x = k) ≈ N -1 i=0 a x i,k" }, { "formula_coordinates": [ 6, 245.9, 173.97, 234.69, 11.72 ], "formula_id": "formula_2", "formula_text": "A = δ + (1 * -δ)P (L x = ρ x )(1)" }, { "formula_coordinates": [ 6, 265.64, 292.26, 109.21, 16.8 ], "formula_id": "formula_3", "formula_text": "P (L x b = k) ≈ N -1 i=0 a x i,k" }, { "formula_coordinates": [ 6, 134.77, 596.46, 261.19, 46.49 ], "formula_id": "formula_4", "formula_text": "B := P (L x = ρ x ) = A -δ 1 * -δ ≈ |Mρ x | N -δ 1 * -δ , with M ρx = {i | i ∈ N, i ≤ N , a i,ρx = 1}" }, { "formula_coordinates": [ 7, 188.72, 266.14, 237.91, 90.4 ], "formula_id": "formula_5", "formula_text": "P (L x = k | L x = ρ x ) = P (L x b = k | L x = ρ x ) ⇔ P (L x = k, L x = ρ x ) P (L x = ρ x ) = P (L x b = k | L x = ρ x ) ⇔ P (L x = ρ x ) = (1 -B)P (L x b = k | L x = ρ x ) ≈ (1 -B) • i ∈Mρ x a i,k N -|M ρx | ." }, { "formula_coordinates": [ 20, 201.87, 210.29, 211.62, 59.31 ], "formula_id": "formula_6", "formula_text": "B = P (L x = k | L x = ρ x ) ≈ |Mρ x | N -δ 0.99 -δ C k := P (L x = k | L x = ρ x ) ≈ i ∈Mρ x a i,k N -|M ρx | ." }, { "formula_coordinates": [ 20, 472.1, 235.41, 8.49, 8.74 ], "formula_id": "formula_7", "formula_text": ")2" }, { "formula_coordinates": [ 20, 147.04, 330.76, 333.55, 56.56 ], "formula_id": "formula_8", "formula_text": "P (L x = k) = P (L x = k, L x = ρ x ) + P (L x = k, L x = ρ x ) = P (L x = k | L x = ρ x )P (L x = ρ x ) + P (L x = k | L x = ρ x )P (L x = ρ x ) = 1 ρx (k) • B + C k • (1 -B)(3)" }, { "formula_coordinates": [ 22, 134.77, 130.95, 345.83, 185.72 ], "formula_id": "formula_9", "formula_text": "x = ρ x C k := P (L x = k | L x = ρ x ) ≈ i ∈M ρ x a i,k N -|M ρ x | D k := P (L x = k, L x = ρ x | L x = ρ x ) = 1 ρ x (k) • P (L x = ρ x | L x = ρ x ) ≈1 ρ x (k) • C ρ x D k := P (L x = k, L x = ρ x | L x = ρ x ) ≈ 1 ρx (k) • C ρx E k := P (L x = k | L x = ρ x , L x = ρ x ) ≈ a∈M ρx ρ x a k |M ρxρ x | (4) with M ρ x = {i | i ∈ N, i ≤ N , a i,ρ x = 1}" }, { "formula_coordinates": [ 22, 134.77, 317.39, 345.83, 25.72 ], "formula_id": "formula_10", "formula_text": "M ρxρ x = {a i | i ∈ N, i ≤ N , a i,ρx = 1 = a i,ρ x } ∪ {a i | i ∈ N, i ≤ N , a i,ρx = 1 = a i,ρ x }" }, { "formula_coordinates": [ 22, 134.77, 497.14, 71.81, 12.19 ], "formula_id": "formula_11", "formula_text": "P (L x = ρ x ) = 0." }, { "formula_coordinates": [ 22, 134.77, 542.06, 345.83, 26.03 ], "formula_id": "formula_12", "formula_text": "C ρx ≈ P (L x = ρ x | L x = ρ x ) and C ρ x ≈ P (L x = ρ x | L x = ρ x )." 
}, { "formula_coordinates": [ 23, 149.71, 118.99, 330.88, 121.41 ], "formula_id": "formula_13", "formula_text": "x = k = ρ x P (L x = k | L x = ρ x ) • P (L x = ρ x ) = P (L x = k, L x = ρ x ) = P (L x = k, L x = ρ x , L x = ρ x ) + P (L x = k, L x = ρ x , L x = ρ x ) = P (L x = k, L x = ρ x | L x = ρ x ) • P (L x = ρ x ) + P (L x = k | L x = ρ x , L x = ρ x ) • P (L x = ρ x , L x = ρ x ) = P (L x = k, L x = ρ x | L x = ρ x ) • P (L x = ρ x ) + P (L x = k | L x = ρ x , L x = ρ x ) • P (L x = ρ x | L x = ρ x ) • P (L x = ρ x )(5)" }, { "formula_coordinates": [ 23, 185.06, 279.68, 295.53, 57.52 ], "formula_id": "formula_14", "formula_text": "P (L x = k | L x = ρ x ) = P (L x = k, L x = ρ x | L x = ρ x ) + P (L x = k | L x = ρ x , L x = ρ x ) • P (L x = ρ x | L x = ρ x ) ⇔ C k = D k + E k • P (L x = ρ x | L x = ρ x )(6)" }, { "formula_coordinates": [ 23, 240.22, 346.39, 240.38, 12.55 ], "formula_id": "formula_15", "formula_text": "C k = D k +E k •P (L x = ρ x | L x = ρ x ) for P (L x = ρ x ) = 0" }, { "formula_coordinates": [ 23, 200.81, 376.51, 279.78, 129.62 ], "formula_id": "formula_16", "formula_text": "P (L x = ρ x , L x = ρ x ) = P (L x = ρ x | L x = ρ x ) • P (L x = ρ x ) = P (L x = ρ x | L x = ρ x ) • P (L x = ρ x ) ⇔ P (L x = ρ x ) = P (L x = ρ x ) P (L x = ρ x | L x = ρ x ) • P (L x = ρ x | L x = ρ x ) ⇔ P (L x = ρ x ) = P (L x = ρ x ) P (L x = ρ x | L x = ρ x ) • P (L x = ρ x | L x = ρ x )(7)" }, { "formula_coordinates": [ 23, 226.55, 533.74, 249.8, 27.64 ], "formula_id": "formula_17", "formula_text": "P (L x = ρ x ) ≈ P (L x = ρ x | L x = ρ x ) P (L x = ρ x ) ≈ P (L x = ρ x | L x = ρ x ). (8" }, { "formula_coordinates": [ 23, 476.35, 543.38, 4.24, 8.74 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 23, 255.78, 615.19, 224.81, 49.09 ], "formula_id": "formula_19", "formula_text": "P (L x = ρ x ) ≈ C k -D k E k P (L x = ρ x ) ≈ C k -D k E k(9)" }, { "formula_coordinates": [ 24, 205.38, 129.37, 275.21, 90.92 ], "formula_id": "formula_20", "formula_text": "P (L x = ρ x ) = 1 -P (L x = ρ x ) with A ≈ δ + (0.99 -δ)P (L x = ρ x ) ⇔ A ≈ δ + 0.99 • P (L x = ρ x ) -δ • P (L x = ρ x ) ⇔ A ≈ δ(1 -P (L x = ρ x )) + 0.99 • P (L x = ρ x ) ⇔ δ ≈ A -0.99 • P (L x = ρ x ) P (L x = ρ x )(10)" }, { "formula_coordinates": [ 24, 291.03, 229.39, 134.76, 12.19 ], "formula_id": "formula_21", "formula_text": "P (L x = ρ x ) = 1 -P (L x = ρ x )." } ]
2024-02-08
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b31", "b7", "b33", "b37", "b8", "b19", "b6", "b4", "b6", "b13", "b22", "b26", "b16", "b5" ], "table_ref": [], "text": "To be an agent is to intentionally cause events to occur through one's own actions. Humans operate with Agency to proactively plan their activities, direct their interaction and collaboration with other humans, and achieve their outcomes and goals (Bandura, 2001).\nAI researchers have long strived to develop autonomous agents that can effectively mimic human behavior (Park et al., 2023). Such agents can serve as non-player characters in games and virtual environments (Bates et al., 1994;Riedl and Bulitko, 2012;Volum et al., 2022), simulate human behavior (Binz and Schulz, 2023;Horton, 2023), and provide signer is working on selecting a chair design for a room and seeks assistance from an AI agent that can offer ideas and perspectives (Figure 1). An LLM without Agency may rely solely on the human to determine the chair's design, asking questions like \"What type of legs should we design for the chair?\". Such a system resembles a flexible version of the traditional form-filling user interface, with the agent contributing little to the outcome. On the other hand, an LLM that operates with Agency might volunteer knowledge in the form of expressed preferences (e.g., \"Should we design a chair with wooden legs?\"), motivate its suggestions (e.g., \"...wood would go well with the brown carpet\"), assert self-belief in its judgments (e.g., \"I'm still leaning towards wooden legs...\"), or self-adjust its behavior based on new information (\"Medium wood brown sounds like a great idea!\"). LLMs that operate with Agency may facilitate creative interaction to the satisfaction of both parties. Since the human has their own Agency, however, to determine the right balance in any interaction, we need to measure and control the Agency of the agent itself.\nAccordingly, we investigate an approach intended to measure and control what seems to be a desirable function in LLMs intended to facilitate human creativity. First, adopting the socialcognitive theory of Bandura (2001), we develop a framework of four features through which Agency may be expressed -Intentionality, Motivation, Self-Efficacy, and Self-Regulation. For each feature, we differentiate between how strongly or weakly it is expressed in a dialogue (Section 3). As a testbed, we choose a collaborative task that involves discussing the interior design of a room (Section 4), and collect a prototype dataset of 83 English human-human collaborative interior design conversations comprising 908 conversational snippets, annotated for Agency and its features on these conversational snippets (Section 5). 1 We analyze this dataset to study the factors that contribute to high-and low-Agency and find that strong expressions of intentionality significantly impact Agency in conversations (Section 6).\nTo assess the agentic capabilities of conversational systems, we introduce two new tasks -(1) Measuring Agency in Dialogue and (2) Generating Dialogue with Agency (Section 7 and 8). Evalua- 1 Code and dataset can be found at github.com/microsoft/agency-dialogue. 
tion of baseline approaches on these tasks shows that models that manifest features associated with high motivation, self-efficacy, and self-regulation are better perceived as being highly agentive.\n2 Agency: Background and Definition Social cognitive theory defines Agency as one's capability to influence the course of events through one's actions. The theory argues that people are proactive and self-regulating agents who actively strive to shape their environment, rather than simply being passive responders to external stimuli (Bandura, 1989(Bandura, , 2001;;Code, 2020). Here, we ask: Can LLMs be active contributors to their environment? How can they operate with Agency?\nAgency is commonly defined in terms of freedom and free will (Kant, 1951;Locke, 1978;Emirbayer and Mische, 1998).A focus on AI with complete \"free will\" might result in unintended outcomes that may be undesirable and potentially disruptive. We focus on how AI systems may express Agency through dialogue and how this Agency may be shared when interacting with humans.\nAgency can take different forms depending on the context and environment -Individual, Proxy, or Shared (Bandura, 2000). Individual Agency involves acting independently on one's own. Proxy Agency involves acting on behalf of someone else. Shared Agency involves multiple individuals working together jointly towards a common goal. Here, we focus on Shared Agency between humans and AI and develop methods to measure and control Agency of AI vis-a-vis humans." }, { "figure_ref": [], "heading": "Framework of Agency Features", "publication_ref": [ "b6" ], "table_ref": [], "text": "Our goal is to develop a framework for measuring and controlling Agency in LLMs. Here, we adopt the perspective of Agency as defined in Bandura (2001)'s social cognitive theory. Bandura (2001)'s work highlights four features through which humans exercise Agency -Intentionality, Motivation, Self-Efficacy, and Self-Regulation. Here, we adapt and synthesize these features based on how they may manifest in dialogue. We take a top-down approach, starting with their higher-level definitions and iteratively refining the definitions and their possible levels (e.g., how strongly or weakly they are expressed) in the context of dialogue.\nIntentionality. What do you intend to do? High Agency requires a strong intention, that includes plans or preferences for a task. Low Agency, meanwhile, is characterized by not having a preference or merely agreeing to another's preferences.\nWe characterize strong intentionality as expressing a clear preference (e.g., \"I want to have a blue-colored chair\"), moderate intentionality as multiple preferences (e.g., \"Should we use brown color or blue?\") or making a selection based on the choices offered by someone else (e.g., \"Between brown and blue, I will prefer brown\"), and no intentionality as not expressing any preference or accepting someone else's preference (e.g., \"Yes, brown color sounds good\").\nMotivation. Did you motivate your actions? To have higher Agency, we motivate our intentions through reasoning and evidence. Without such motivation, intentions are simply ideas, often lacking the capability to cause a change.\nWe characterize strong motivation as providing evidence in support of one's preference (e.g., \"I think a blue-colored chair will complement the wall\"), moderate motivation as agreeing with another person's preference and providing evidence in their favor (e.g., \"I agree. 
The blue color would match the walls\") or disagreeing with the other person and providing evidence against (e.g., \"I wonder if brown would feel too dull for this room\"), and no motivation as not providing any evidence.\nSelf-Efficacy. Do you have self-belief in your intentions? Another factor that contributes to one's Agency is the self-belief one has in their intentions. When one has a strong sense of self-belief, they are more likely to be persistent with their intentions.\nWe characterize strong self-efficacy as pursuing a preference for multiple turns even after the other person argues against it (e.g., \"I understand your point of view, but I still prefer the blue color\"), moderate self-efficacy as pursuing a preference for only one additional turn before giving up (e.g., \"I feel like the beige color would complement the wall better\"), and no self-efficacy as not pursuing their preference for additional turns after the other person argues against it (e.g., \"Sure, brown should work too\").\nSelf-Regulation. Can you adjust and adapt your intentions? In situations when an individual's initial intentions may not be optimal, it is necessary to monitor, adjust, and adapt them. Such selfadjustment allows better control over one's goals.\nWe characterize strong self-regulation as chang-ing to a different preference on one's own (e.g., \"How about using the beige color instead?\") or compromising one's preference (e.g., \"Let's compromise and design a beige-colored chair with a brown cushion\")2 , moderate self-regulation as changing one's preference to what someone else prefers (e.g., \"Ok, let's use the brown color\"), and no self-regulation as not changing what they originally preferred even after the other designer argued.\n4 Testbed: Collaborative Interior Design" }, { "figure_ref": [], "heading": "Goals", "publication_ref": [ "b12", "b29", "b11" ], "table_ref": [], "text": "We seek a testbed in which (a) human and AI can share Agency and work together as a team, and (b) the manner in which they express Agency has a significant impact on the task outcome. We focus on the emerging field of collaborative AI-based creative tasks (Clark et al., 2018;Oh et al., 2018;Chilton et al., 2019) that present significant complexities in how the Agency is shared and managed." }, { "figure_ref": [], "heading": "Description", "publication_ref": [ "b10", "b0" ], "table_ref": [], "text": "Here, we propose a dialogue-based collaborative interior design task as a testbed. In this task, the goal is to discuss how to design the room interiors.\nInterior design tasks can be broad and may involve complex components (e.g., color palette, furniture, accessories) as well as a series of steps to be followed. To narrow down the scope of our task, we focus on furnishing a room with a chair (building upon work on richly-annotated 3D object datasets like ShapeNet (Chang et al., 2015) and ShapeGlot (Achlioptas et al., 2019); Appendix E). In this task, a human and an AI are provided with a room layout and asked to collaboratively come up with a chair design to be placed in the room through text-based dialogue. This task is influenced by two questions related to human and AI Agency: (1) What preferences do each of the human and AI have for the chair design?; (2) How do they propose, motivate, pursue, and regulate their preferences? 
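To make the annotation scheme concrete, the label space implied by the framework above can be summarized as a small data structure. The following is a minimal sketch in Python; the class and field names are illustrative and are not part of the released dataset.

```python
from dataclasses import dataclass
from enum import Enum

class AgencyLevel(Enum):      # who influenced the design component
    LOW = 0                   # influenced by the other designer
    MEDIUM = 1                # influenced in collaboration
    HIGH = 2                  # influenced by this designer

class FeatureLevel(Enum):     # strength of an Agency feature in a snippet
    NONE = 0
    MODERATE = 1
    STRONG = 2

@dataclass
class AgencyAnnotation:
    """Per-designer annotation of one conversational snippet."""
    designer_id: str
    agency: AgencyLevel
    intentionality: FeatureLevel
    motivation: FeatureLevel
    self_efficacy: FeatureLevel
    self_regulation: FeatureLevel
```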
5 Data Collection" }, { "figure_ref": [], "heading": "Human-Human Conversational Data", "publication_ref": [], "table_ref": [], "text": "To facilitate computational approaches for this task, we create a Wizard-of-Oz style English-language dialogue dataset in which two humans converse, exercise Agency by proposing, motivating, pursuing, and regulating their design preferences, and agreeing on a final chair design for a given room.\nRecruiting Interior Designers. Furnishing a room with a chair is a creative task that demands knowledge and/or expertise in interior design. We therefore leveraged UpWork (upwork.com), an online freelancing platform, to recruit 33 participants who self-reported as interior designers.\nCollaborative Design Procedure. In each data collection session, we randomly paired two interior designers. Before they began the dialogue, they were (1) shown a 3D layout of a room, designed with Planner5D (planner5d.com), (2) shown a few randomly selected chair examples from ShapeGlot, and (3) asked to write an initial preference for the chair design for the given room. Next, the two interior designers joined a chat room (through Chatplat (chatplat.com)). They were asked to collaboratively design a chair by proposing their preferences, motivating them based on evidence and reason, pursuing them over turns, and regulating them as needed.\nThe designers ended the chat on reaching a consensus on a design or if 30 minutes elapsed without full consensus. Next, they each individually wrote the design they came up with. Typically, the chair design consisted of different components of the chair, such as its overall style, color, legs, etc. Finally, they took an end-of-study questionnaire that asked:\n(1) Which design components were influenced by them? (High Agency);\n(2) Which design components were influenced in collaboration? (Medium Agency); (3) Which design components were influ-enced by the other designer? (Low Agency). We collected a total of 83 conversations." }, { "figure_ref": [], "heading": "Extracting Conversational Snippets", "publication_ref": [ "b25" ], "table_ref": [], "text": "To assess the degree of Agency exhibited by each designer, we need to determine who had the most influence on the chair design (Section 2) and what their Intentionality, Motivation, Self-Efficacy, and Self-Regulation were (Section 3). Because chair design involves multiple components, these notions are hard to quantify, as each may have been influenced by a different designer. Accordingly, we ask \"Who influenced a particular design component?.\"\nWe devise a mechanism to identify the design components being discussed (e.g., color, legs, arms) and extract the associated conversational turns.\nTo identify the design components, we use the final design written by the interior designers during data collection (Section 5.1). Using common list separators including commas, semi-colons, etc., we split each final design into several components. 3We observe that designers typically discuss these components one at a time (in no particular order).\nHere, we extract a contiguous sequence of utterances that represent the design element being discussed using embedding-based similarity of the design element and utterances. Let D i be a dialogue with utterances u i1 , u i2 , .... 
For a specific design component d ij in its final design (e.g., \"metal legs\"), we first retrieve the utterance u j that most closely matches with it (based on cosine similarity b/w RoBERTa embeddings (Liu et al., 2019)) -the conversational snippet associated with d ij should at least include u j . Next, we determine the contiguous utterances before and after this matched utterance that discuss the same higher-level design component (e.g., if d ij was \"metal legs\", the utterances may focus on discussion of the higher-level component \"legs\"). We create a simple k-means clustering method to infer the higher-level component being discussed in utterances through their \"design clusters\". Then, we extract all contiguous utterances before and after u j with the same design clusters as u j .\nUsing this method, we create a dataset of 454 conversational snippets, each paired with the discussed design component. For each snippet, we collect two Agency annotations (one for each designer; 454 * 2 = 908 total) as discussed next." }, { "figure_ref": [], "heading": "Annotating Agency Features", "publication_ref": [], "table_ref": [], "text": "Let C i be a conversational snippet b/w designers D i1 and D i2 . Then, for each D ij ∈ {D i1 , D i2 }, our goal is to annotate the Agency level and the expressed Intentionality, Motivation, Self-Efficacy, and Self-Regulation of D ij in C i .\nAnnotating Agency. To get annotations on Agency, we leverage the end-of-study questionnaire filled by the interior designers (Section 5.1). Based on this annotation, we assign labels of high agency (if influenced by self), medium agency (if influenced in collaboration), or low agency (if influenced by other).\nAnnotating Features of Agency. Agency and its features are conceptually nuanced, making crowdwork data collection approaches challenging. To ensure high inter-rater reliability of annotations, we hire a third-party annotation agency (TELUS International). Annotators were shown C i and asked to annotate the Agency features for each D ij based on our proposed framework. We collect three an-notations per snippet and observe an agreement of 77.09% (cohen's kappa of 0.53 for Intentionality, 0.52 for Motivation, 0.50 for Self-Efficacy, and 0.42 for Self-Regulation; data statistics in Appendix A)." }, { "figure_ref": [], "heading": "Insights into Agency in Conversations", "publication_ref": [], "table_ref": [], "text": "We use our dataset to investigate the factors that contribute to high-and low-Agency conversations." }, { "figure_ref": [ "fig_1" ], "heading": "Relationship b/w Agency and its Features", "publication_ref": [], "table_ref": [], "text": "Higher Agency is more likely with stronger expressions of Intentionality and Motivation. Figure 3 depicts the relationship between Agency and its features. Designers with strong Intentionality tend to exhibit higher Agency whereas those with lower Intentionality tend to exhibit lower Agency. Having a well-defined preference makes it easier to influence a task. Likewise with Motivation: higher Motivation correlates with higher Agency. However, designers express strong Motivation less often than Intentionality, irrespective of the Agency level.\nStrong Self-Efficacy and Self-Regulation are related to medium (collaborative) Agency. Interestingly, we find that expression of strong Selfefficacy is related to designs that are influenced equally by both designers, i.e. medium (collaborative) Agency. 
This may be because we characterize strong Self-Efficacy as the act of pursuing one's preference for multiple turns, which happens naturally when both designers have high influence, thus requiring more persuasion from both sides.\nWe see a similar pattern for Self-Regulationexpression of strong Self-Eegulation (i.e., open to updating preference via a compromise) is related to designs that are influenced equally by both designers. This highlights how collaboration often leads to increased openness to changing one's mind or compromising on mutual preferences.\nIntentionality significantly effects Agency. To assess which Agency features have the strongest effect on it, we conduct a mixed-effects regression analysis (Table 5). We find that Intentionality significantly effects Agency (p < 0.001)." }, { "figure_ref": [], "heading": "Agency and Task Satisfaction", "publication_ref": [], "table_ref": [], "text": "We collect annotations on the designs that designers were most/least satisfied with.\nLower Agency is associated with less satisfaction. We find that designers who are dissatisfied with a particular design component have less Agency over it. When a designer is dissatisfied, their Agency is 62.1% more likely to be low than to be high (42.7% vs. 26.3%; p < 0.05). This may be because individuals with less Agency are less likely to achieve their intention, motivation, and goals, resulting in lower levels of satisfaction." }, { "figure_ref": [], "heading": "Linguistic Attributes of High-and Low-Agency Conversations", "publication_ref": [ "b41" ], "table_ref": [], "text": "We use a simple GPT-4-based instruction prompting method (Ziems et al., 2023) to measure and compare the tentativeness (unsure or low on confidence), self-focus (focused solely on own arguments), reasoning strength (having strong arguments), and persuasion (trying to influence or convince) attributes of designers with high-and lowagency conversations (Figure 4; Appendix C).\nHigher tentativeness associated with low Agency. We find that designers who express higher tentativeness have low Agency in 44.04% of conversations, medium Agency in 31.77% of conversations, and high Agency in 24.19% of conversations. This suggests that a less decisive approach may lead to reduced influence or control in conversations.\nHigher self-focus, reasoning strength, and persuasiveness is associated with high agency. We find that designers who are more focused on self have high Agency in 41.97% of the conversations, those who have higher reasoning strength have higher Agency in 55.66% of the conversations, and those with higher persuasiveness have higher Agency in 51.21% of the conversations. This suggests that designers who emphasize their own intentions and motivations, exhibit sound reasoning, and effectively persuade others tend to have more influence or control in conversations 7 Task 1: Measuring Agency in Dialogue" }, { "figure_ref": [], "heading": "Task Formulation", "publication_ref": [ "b39" ], "table_ref": [], "text": "Our goal is to measure (a) Agency, (b) Intentionality, (c) Motivation, (d) Self-Efficacy, and (e) Self-Regulation of each user in a dialogue. We approach each of these five subtasks as multi-class classification problems. We experiment with two models -GPT-3 and GPT-4. We experiment with two prompting-based methods using Q/A (conversational question-answering) and chain-of-thought reasoning (Wei et al., 2022) (Appendix B) and with fine-tuning GPT-3 independently on each subtask." 
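As a rough illustration of the Q/A-style measurement baseline, the sketch below classifies one designer's Agency level for a conversational snippet with the legacy OpenAI completion interface (text-davinci-003 at temperature 0, as described in Appendix B). The prompt wording, the `demonstrations` string, and the label parsing are simplifications for illustration, not the exact prompts used.

```python
import openai  # assumes the legacy openai Python SDK (<1.0) interface

LABELS = ["low", "medium", "high"]

def measure_agency(snippet: str, designer: str, demonstrations: str) -> str:
    # Q/A-style prompt: k demonstration examples followed by the target snippet.
    prompt = (
        demonstrations
        + f"\nConversation:\n{snippet}\n"
        + f"Question: Did {designer} have low, medium, or high influence over the "
        + "final design of this component?\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,   # deterministic sampling, as in Appendix B
        max_tokens=5,
    )
    answer = response["choices"][0]["text"].strip().lower()
    return next((label for label in LABELS if label in answer), "medium")
```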
}, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "We create four random train-test splits (75:25) of our annotated dataset (Section 5.3) and report the mean performance on the test sets. Table 1 reports the macro-F1 values for the five subtasks (random baseline for each is 33% accurate as each has three distinct classes 4 ). GPT-3 (Q/A) struggles on all subtasks, with close to random performance on Agency, Motivation, and Self-Regulation. This highlights the challenging nature of these tasks, as they are hard to measure through simple inference or instructions. We find substantial gains using GPT-4 (CoT) and GPT-3 (CoT) over GPT-3 (Q/A).\nFine-tuned GPT-3 performs the best on all subtasks, demonstrating the utility of training on our entire dataset. Note that GPT-4 doesn't support finetuning.\n8 Task 2: Investigating Agency in Dialogue Systems\nWe investigate the feasibility of generating dialogues imbued with Agency and establish baseline performance of current large language models (LLMs). For a given LLM, the task is to have a conversation with a human or another LLM while exhibiting Agency and its features. We experiment with 4 different LLMs (Section 8.1) and 4 different prompting/finetuning methods (Section 8.2)\nProcedure. We facilitate dialogue between all possible pairs of models. We provide them with a common room description and a chair design element and individual design preferences (all three randomly chosen from our human-human conversation dataset (Section 5)). We let them talk to each other for 6 turns (90-percentile length value of conversational snippets in our dataset). For each pair of models, we generate 50 such conversations.\nEvaluation Metrics. We evaluate these LLMs on five metrics -(1) Agency; (2) Intentionality; (3) Motivation; (4) Self-Efficacy; (5) Self-Regulation. We apply the best-performing classification models from Section 7 to the generated dialogues to automatically measure these metrics. We report mean values with their level of significance." }, { "figure_ref": [], "heading": "Agency of LLMs", "publication_ref": [ "b9", "b36", "b15" ], "table_ref": [], "text": "We experiment with two commercial (GPT-4 (Ope-nAI, 2023) and GPT-3 (Brown et al., 2020)) and four research (Llama2-70b, Llama2-13b, Llama2-7b (Touvron et al., 2023), and Guanaco-65b (Dettmers et al., 2023)) LLMs ( GPT-4 demonstrates high Agency. Of the models tested, we find that GPT-4 demonstrates significantly higher Agency than others (p < 0.05). It particularly demonstrates the highest Intentionality which we found to have a strong correlation with Agency (Section 6.1). Also, both GPT-4 and GPT-3 demonstrate significantly higher Self-Efficacy, indicating effectiveness in pursuing preferences and arguments (p < 0.05).\nLlama2 demonstrates high Motivation, but low Self-Efficacy and Self-Regulation. We find that Llama2 variants demonstrate high Motivation, indicative of their reasoning capabilities that enable them to offer strong supportive evidence. However, they have lower Self-Efficacy and Self-Regulation indicating that it is relatively challenging to sustain their preferences and arguments, which may ultimately lead to lower agency. Guanaco similarly demonstrates significantly lower Self-Efficacy than other models (p < 0.05).\nLarger models demonstrate higher Intentionality, but lower Self-Efficacy. Llama2 variants with more parameters have higher Intentionality, but lower Self-Efficacy. 
This suggests that while a larger model size can enhance the expression of preferences, it might not necessarily facilitate the sustained pursuit of those preferences and reasons over multiple conversational turns. " }, { "figure_ref": [], "heading": "Variation in Agency based on Finetuning/Prompting Methods", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We investigate the variations in Agency based on four different finetuning/prompting methods. We use a single model in this experiment.5 \nFine-tuning. We use the dataset collected by us (Section 5) to fine-tune GPT-3 (Appendix B).\nInstruction Only. We prompt GPT-3 with the instruction used in Section 8.1.\nIn-Context Learning (ICL). We randomly retrieve k conversational snippets from our dataset and construct demonstration examples.\nIn-Context Learning w/ Agency Feature Examples (ICL-Agency). We retrieve k conversational snippets that score highly on our four Agency features and employ them as demonstration examples in a setup similar to the previous baseline.\nTable 2 shows the automatic evaluation results. The fine-tuned model struggles with this task. Qualitative analysis suggests that the generated responses from the fine-tuned model tend to be shorter, less natural, and less readable, potentially impacting its performance. In-Context Learning is better at expressing Intentionality and Motivation than the Instruction Only model, indicating that demonstration examples help. Finally, the highest value on all five metrics is achieved by In-Context Learning w/ Agency Feature Examples, highlighting the importance of incorporating examples related to these features in this task." }, { "figure_ref": [ "fig_2" ], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [], "text": "We evaluate the Agency of our best-performing method based on automatic evaluation, ICL-Agency, with human interior designers (Figure 5).\nProcedure. We recruit 13 interior designers from UpWork (upwork.com). In each evaluation ses-sion, we ask them to interact with two randomlyordered dialogue systems -ICL-Agency and one of the other three finetuning/prompting methodsone at a time. They were provided with a room description and a chair design element (e.g., material). After their interaction, we asked them to choose the chatbot that had the (1) higher Agency, (2) higher Intentionality, (3) higher Motivation, (4) higher Self-Efficacy, and (5) higher Self-Regulation. The designers conducted 231 comparative evaluations for a total of 231*2 = 462 interactions with LLM. 6Results. Consistent with the automatic evaluation results, ICL w/ Agency Features model is rated as having more Agency compared to other models and the Fine-tuning model is rated the worst. We do not observe significant differences in Intentionality between this model and the Instruction Only and In-Context Learning approaches. However, we find that this model is perceived as more effective in Motivation and Self-Efficacy, likely due to better access to relevant demonstration examples." }, { "figure_ref": [], "heading": "Further Related Work", "publication_ref": [ "b38", "b18", "b21", "b2", "b23", "b20", "b35", "b27", "b17", "b32", "b14", "b28" ], "table_ref": [], "text": "Previous dialogue research has studied personalized persuasive dialogue systems (Wang et al., 2019). Researchers have also built systems for negotiation tasks such as bargaining for goods (He et al., 2018;Joshi et al., 2021) and strategy games like Diplomacy (Bakhtin et al., 2022). 
Our work studies the broader concept of Agency and how dialogue systems may contribute to tasks through language. Research on creative AI has explored how collaboration b/w human and AI can be facilitated through dialogue in applications like collaborative drawing (Kim et al., 2019) and facial editing (Jiang et al., 2021). Here, we focus on the interior designing application as it presents significant complexity in terms of how Agency is shared.\nAgency has been studied in the context of undesirable biases in stories and narratives (Sap et al., 2017) and how controllable revisions can be used to portray characters with more power and agency (Ma et al., 2020). In other domains such as games, researchers have created frameworks of Agency between players (Harrell and Zhu, 2009;Pickett et al., 2015;Cole, 2018;Moallem and Raffe, 2020). Our work develops a framework for measuring Agency in dialogue and explores how dialogue systems can be imbued with Agency." }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [ "b17", "b40" ], "table_ref": [], "text": "The idea of AI systems with Agency stems from the discourse surrounding the development of autonomous intelligent agents capable of mimicking human-like behavior and decision-making (Harrell and Zhu, 2009;Wen and Imamizu, 2022). Agency drives how an agent contributes to a given task. In settings like games or AI-assisted teaching, AI may be the one guiding the task (e.g., as a non-character player). Also, in creative applications, engaging with a reactive AI without intention, motivation, and goals may be perceived as less meaningful.\nThe ideal Agency of an agent would be defined by the task/application. Moreover, varying degrees of Agency might need to be manifested at different points in the interaction with a human. Learning how to best modulate the Agency based on the task and the ongoing human-LLM interaction forms an important future direction of work. Developing methods that effectively elicit and model task preferences for Agency and adapt LLMs based on the degrees to which they should actively contribute to the task, could be helpful in achieving this goal. Such methods could make use of the datasets and methods that we develop for assessing Agency levels of LLMs.\nThe four features of Agency can be in conflict with each other, as well as with the Agency of the interlocutor. Thus, understanding how to detect and measure these features can help create agents who might converse more naturally and match the character of their human interlocutor. Importantly, our measurements of Agency and its features may be used to control the level of Agency in dialogue systems since different individuals may have different preferences on the desired amount of Agency across the four Agency features.\nAlthough our dataset is focused on the domain of interior design, the Agency-related constructs that we introduce in this paper (e.g., Intentionality) may be associated with domain-independent pragmatic features (e.g., \"I would prefer\") and potentially permit adaptation to a variety of domains." }, { "figure_ref": [], "heading": "Ethics Statements", "publication_ref": [], "table_ref": [], "text": "This study was reviewed and approved by our Institutional Review Board. No demographic or Personal Identifiable Information was collected. Participants were paid $20 per conversational session lasting no more than 30 minutes. Participants were based in US or Canada as reported through Up-Work. 
Participant consent was obtained before starting the data collection.\nAgency is a property with much potential to enhance collaborative interactions between human users and conversational agents. Nevertheless, full Agency may have unintended undesirable and potentially disruptive outcomes. In particular, the potential demonstrated in this work to control the degree of Agency may result in conversational agents being misapplied in disinformation campaigns or to manipulate for, e.g., financial gain." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our experiments are restricted to the English language. We note that our dataset is focused on the domain of interior design. However, the Agency-related constructs we introduce in this paper, such as Intentionality, may also rely on domainindependent \"stylistic\" features (e.g., \"I would prefer\") and could potentially be adapted to a variety of domains, which forms an interesting future direction of research. Also, our automatic measurements of Agency and its features are limited by the performance of the Agency prediction methods we tested. Future work may focus on designing more accurate automated Agency measurements. " }, { "figure_ref": [], "heading": "A Dataset Statistics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Low Medium High", "publication_ref": [], "table_ref": [], "text": "Agency 308 292 308 " }, { "figure_ref": [], "heading": "B Model Details", "publication_ref": [ "b9", "b39" ], "table_ref": [], "text": "We use text-davinci-003 for all of our GPT-3 models. For Agency measurement models (Section 7), we sample the highest probable next tokens by setting the temperature value to 0 (determinstic sampling). For dialogue generation models (Section 8), we use top-p sampling with p = 0.6. For in-context learning methods, we experimented with k = 5, 10, 15, and 20 and found k = 10 to be the most effective based on a qualitative assessment of 10 examples.\nGPT-3 (Q/A). We frame our measurement tasks as conversational question-answering. For a given conversational snippet, we ask GPT-3 (Brown et al., 2020) to answer the questions related to each of the five subtasks (same questions as asked during data collection (Section 5.3)). We present k = 10 demonstration examples, randomly sampled from our dataset (different examples for each of the five subtasks; Appendix G.1).\nGPT-3 (CoT) and GPT-4 (CoT). We use chain-ofthought (CoT) prompting (Wei et al., 2022) to reason about conversational snippets. We use k = 10 demonstration examples, randomly sampled from our dataset and manually write chain-of-thought prompts for each of the five subtasks Fine-tuning details. Since our goal is to simulate a dialogue agent with high Agency, for each conversational snippet, we label the designer who influenced the design (who had a higher agency) as \"AI\" and the other designer (who had a lower agency) as \"Human\". We fine-tune GPT-3 to generate AI utterances given all previous utterances in a conversational snippet and the instruction prompt developed for the Instruction Only baseline." }, { "figure_ref": [], "heading": "C Linguistic Attributes Measurement", "publication_ref": [], "table_ref": [], "text": "We compare the tentativeness, self-focus, reasoning, and persuasion of the designers using the following prompts. We randomly assign the names of Tom and Harry to the two designers.\nTentativeness. 
Your job is to assess tentativeness in a conversation between Tom and Harry about designing chairs. A tentaitve person will not be confident about their arguments.\nSelf-Focus. Your job is to assess self-focusedness in a conversation between Tom and Harry about designing chairs. A self-focused person will be more focused on their own arguments than the other person's arguments.\nReasoning. Your job is to assess reasoning strength in a conversation between Tom and Harry about designing chairs. A person with strong reasoning will have strong arguments.\nPersuasion. Your job is to assess persuasion in a conversation between Tom and Harry about designing chairs. A persuasive person will be able to convince the other person about their arguments." }, { "figure_ref": [], "heading": "D Human Evaluation Details", "publication_ref": [], "table_ref": [], "text": "We asked three evaluators to choose the chatbot that (1) had more influence over the final design (Agency); (2) was better able to express its design preference (Intentionality); (3) was better able to motivate their design preference (Motivation);\n(4) pursued their design preferences for a greater number of conversational turns (Self-Efficacy); (5) was better able to self-adjust their preference (Self-Regulation)." }, { "figure_ref": [], "heading": "E Why We Chose Collaborative Interior", "publication_ref": [ "b10", "b0", "b24" ], "table_ref": [], "text": "Designing as Our Testbed?\nHere, we propose a dialogue-based collaborative interior design task as a testbed. In this task, given a room setting, the goal is to discuss how to design the interiors of the room. We note that an interior design task can be broad and may involve a wide range of complex components (e.g., color palette, furniture, accessories) as well as a series of steps to be followed. Furthermore, due to a real-world room context, the task must be grounded with both vision and language components with an understanding of how threedimensional objects in a room (e.g., chairs, tables, plants, decor items) must be designed.\nHere, we build upon previous work on richlyannotated, large-scale datasets of 3D objects like ShapeNet (Chang et al., 2015) and subsequent works on understanding how fine-grained differences between objects are expressed in language like ShapeGlot (Achlioptas et al., 2019) and Part-Glot (Koo et al., 2022). Both ShapeGlot and PartGlot datasets provide us with richly annotated datasets of chairs. Therefore, we narrow down the scope of our task and specifically focus on furnishing a room with a chair. In this task, a human and an AI are provided with a room layout and asked to collaboratively come up with a design of a chair to be placed in the room through text-based interaction. " }, { "figure_ref": [], "heading": "F Analysis of Agency Features", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "H Reproducibility", "publication_ref": [], "table_ref": [], "text": "We release the code and datasets developed in this paper at github.com/microsoft/agency-dialogue.\nThe use of existing artifacts conformed to their intended use. We used the OpenAI library for GPT-3 and GPT-4 based models. We used A100 GPUs to perform inference on Llama2 and Guanaco. We use the scipy and statsmodel libraries for statistical tests in this paper." 
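A minimal sketch of how the Appendix C attribute prompts could be applied with a chat-style model is given below. It assumes the legacy OpenAI ChatCompletion interface; the output-format instruction and the parsing step are illustrative additions rather than the exact procedure used.

```python
import openai  # assumes the legacy openai Python SDK (<1.0) interface

ATTRIBUTE_PROMPTS = {
    "tentativeness": (
        "Your job is to assess tentativeness in a conversation between Tom and "
        "Harry about designing chairs. A tentative person will not be confident "
        "about their arguments."
    ),
    # analogous entries for "self-focus", "reasoning", and "persuasion"
}

def compare_attribute(conversation: str, attribute: str) -> str:
    # Ask GPT-4 which designer scores higher on the given attribute.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": ATTRIBUTE_PROMPTS[attribute]},
            {"role": "user", "content": conversation
             + "\nWho scores higher on this attribute, Tom or Harry? "
               "Answer with one name."},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()
```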
}, { "figure_ref": [], "heading": "I Human-Human Conversational Data", "publication_ref": [], "table_ref": [], "text": "Collection Instructions " }, { "figure_ref": [], "heading": "Instructions", "publication_ref": [], "table_ref": [], "text": "In this data collection study, you will plan to design an object in collaboration with another participant. You will access a website using a link that we provide. On the website, you will be paired with another participant, with whom you will interact, via a chat-like interface (text-only), to plan and negotiate what you collaboratively want to design." }, { "figure_ref": [], "heading": "Purpose of the Research", "publication_ref": [], "table_ref": [], "text": "The purpose of this research is to understand agency in human-human conversations and how to build a conversational AI agent with agency. Agency can be defined as the power one has to act upon their intrinsic motivation, preferences, and expertise. Here, we want to study how humans exercise agency in conversations, as well as, how AI agents can exercise agency through conversations.\nTowards this goal, we are collecting conversations around tasks involving two humans planning to collaboratively design an object (e.g., a chair). The conversational data would help us assess how humans use conversations to exercise their agency and how we can train AI agents to have agency, without becoming insensitive towards others or disregarding social norms." }, { "figure_ref": [], "heading": "The Setting", "publication_ref": [], "table_ref": [], "text": "You will be paired with another participant. You will both be shown a 3D model of a room. Here is an example room:\nWhat will you do?\nYou will be assigned an object (e.g., a chair). You will plan to design that object for the room, in collaboration with the other participant, through chat conversations.\nHere are the steps you will follow: a. The design you prefer might be different from the design which the other player prefers. b. Therefore, a key part of the collaborative designing process would be to communicate your individual preferences, negotiate, and find common ground. c. You will use the chatbox to plan, discuss and negotiate. d. You should try and convince the other player to agree on a design that is close to your preference.\ni. For example, you can try and explain why the design you prefer might be better. ii. At the same time, it is also important to understand the other player's preference. Knowing that can help you talk about the pros and cons of each design. iii. You can also discuss what adjustments can be made such that the final design satisfies the preferences of both the players. e. You should plan to spend ~30 minutes on the conversation.\nStep 2.2. Describe the final chair design: Both you and the other participant will be provided with a textbox, which you both will use to report the design that you agreed upon.\na. You should use this textbox to update the current design when you agree upon something (based on what is being discussed in the conversation). b. For example, if you are asked to design a chair, and if you are able to decide the high-level chair design first (e.g., a club chair), you can update it in the textbox, before proceeding to discuss the other characteristics (e.g., seat, arms, legs). c. Please be as specific as possible when describing your design.\nStep 3. 
Mark as finished and take a post-study questionnaire: When both you and the other player are done designing the object, you will mark the study as complete (using a provided option) and take a post-study questionnaire.\na. Note that you may not always reach an agreement with the other participant. But when you are done, you should still mark the task as finished and take the post-study questionnaire. b. You should plan to spend 15-20 minutes on the questionnaire." }, { "figure_ref": [], "heading": "Note:", "publication_ref": [], "table_ref": [], "text": "The conversations should only focus on object design. To keep the conversations natural, please do not discuss things related to these instructions directly in the conversation. For instance, you should not mention that you went through a process of selecting designs or writing a preference (e.g., do not say \"what is your preferred design?\" or \"my preferred design is…\"). Also, do not discuss any personal details." }, { "figure_ref": [], "heading": "Designer Utterance", "publication_ref": [], "table_ref": [], "text": "Designer 1: How about a desk chair for this area? Designer 2: There seems to be many possibilities for this space, would you agree? Yet I agree that some kind of chair for the desk is needed. Designer 1: The room has very clean lines with an Asian theme Designer 2: I think we need to support the minimalist lines of the overall space design. Not something too over-stuffed. Something with a contemporary feel. Designer 1: So maybe a more contemporary style of desk chair. Designer 1: Great minds! Designer 1: How do you feel about a tall back with tilt swivel and adjustable Designer 2: I believe so. Maybe one that is comfortable for sure -but not too closed in. There is the lovely background to consider. We don't want to block that. Designer 1: If not too tall, then maybe something mid back height? Designer 2: I think the height of the back should be carefully scaled -supportive but not so high that it obscures what is behind too much. Designer 1: Or shoulder height for support Designer 1: With arm support Designer 2: Agreed on shoulder height. Swiveling is good -also moving -like on casters may provide flexibility. Designer 1: Definitely casters Designer 2: I am concerned about tilting back since we do have some fragile decorative elements behind. Designer 1: Ok, so far... shoulder height desk chair with adjustable height, casters and arm rests Designer 2: I do agree that arm support is essential, especially if one is to feel comfortable while working.\nIt feels like this might be a consult room of sorts -so allowing the person to sit back in a more relaxed posture -resting arms off the table is good. Designer 1: Some tilts can be regulated and locked into place... not necessarily a full recline Designer 1: Perfect Designer 2: The materiality of the chair is something to consider. I see a lot of wood and timber detailing.\nIt might be nice to have the chair upholsterable -perhaps a nice leather back that would be shaped to lightly massage the back? Designer 1: Agree Designer 1: the leather would be a nice look in there Designer 2: Something that seems pillowy or wavy, but in a very restrained, minimalist sort of way Designer 1: Black would match the ottomans but a soft buttery cream/ ivory would add a soothing neutral to the aesthetic Designer 2: With the darker wood in the room and the leather chair -an accent material on the armrests might be nice to offsett -say a brushed steel or aluminum finish? 
Designer 1: I've seen the vertical channeling on a desk chair that is very classy looking Designer 1: The brushed steel frame would look nice in this room. I think wood would be a bit much. Designer 2: I think classic modern which always took a lot of inspiration from japanese design. The buttery cream is a lovely idea. Will provide a bright focal point and it will align with the colors of the fan. Designer 1: I think we have our chair! " }, { "figure_ref": [], "heading": "J Human Evaluation Experiment Instructions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Agency Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Study Goals", "publication_ref": [], "table_ref": [], "text": "The goal of this study is to interact with and evaluate chatbots." }, { "figure_ref": [], "heading": "Study Steps", "publication_ref": [], "table_ref": [], "text": "In the study, you will interact with two AI-based chatbots, one at a time. Each time, you will be provided with a room description and a speci c chair design component (e.g., the material to be used for a chair that will be placed in the room). Your task will be to collaborate with the chatbots to discuss and agree upon what the chair design component should be.\nIn the end, you will ll out a questionnaire in which you will be asked questions comparing the two chatbots. You will compare the chatbots based on whether they were able to pose, motivate, and stick to their own preferences and whether they were able to in uence the nal design.\nFew Important Things to Note Consent to the study By ticking this box, you are agreeing to be part of this data collection study. You also con rm that you understand what you are being asked to do. You may contact us if you think of a question later. You are free to release/quit the study at any time. Refusing to be in the experiment or stopping participation will involve no penalty or loss of bene ts to which you are otherwise entitled. To save a copy of the consent form and instructions, you can save/print this webpage (or nd the instructions here). You are not allowed to distribute these instructions and data for any purposes. You are also not allowed to use them outside this study." }, { "figure_ref": [], "heading": "Agree and Continue", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank Michael Xu for providing feedback on our study design and Alexandros Graikos for helping with ShapeNet and ShapeGlot datasets. We are also grateful to Ryen White and Nirupama Chandrasekaran for providing feedback through an initial pilot study. We also thank all the interior designers who contributed to data collection and human evaluation." } ]
Agency, the capacity to proactively shape events, is central to how humans interact and collaborate. While LLMs are being developed to simulate human behavior and serve as human-like agents, little attention has been given to the Agency that these models should possess in order to proactively manage the direction of interaction and collaboration. In this paper, we investigate Agency as a desirable function of LLMs, and how it can be measured and managed. We build on social-cognitive theory to develop a framework of features through which Agency is expressed in dialogue -indicating what you intend to do (Intentionality), motivating your intentions (Motivation), having self-belief in intentions (Self-Efficacy), and being able to self-adjust (Self-Regulation). We collect a new dataset of 83 human-human collaborative interior design conversations containing 908 conversational snippets annotated for Agency features. Using this dataset, we develop methods for measuring Agency of LLMs. Automatic and human evaluations show that models that manifest features associated with high Intentionality, Motivation, Self-Efficacy, and Self-Regulation are more likely to be perceived as strongly agentive.
Investigating Agency of LLMs in Human-AI Collaboration Tasks
[ { "figure_caption": "Figure 2 :2Figure 2: Overview of our data collection approach. (a) We start by collecting human-human conversations b/w interior designers. (b) We divide each conversation into snippets related to different chair features. (c) Finally, we collect annotations of Agency and its features on each conversational snippet.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The relationship between Agency and its features. (a) Designers with High Agency expressed strong Intentionality 26.5% more times than designers with Low Agency; (b) Designers with High Agency expressed strong motivation in support of their design preference 15.2% more times; (c), (d) Expression of strong Self-Efficacy and strong Self-Regulation was related with design elements that were influenced in collaboration.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Human Evaluation Results.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Instructions shown to the interior designers during the human-human conversational data collection. Continued on the next page (1/3).", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Instructions shown to the interior designers during the human-human conversational data collection. Continued on the next page (2/3).", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Instructions shown to the interior designers during the human-human conversational data collection (3/3).", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Instructions shown to the interior designers during the human evaluation experiment. Continued on the next page (1/2).", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "1.Aim to spend between 2 to 5 minutes per chatbot: You should aim to chat for around 2 to 5 minutes with each chatbot. 2. Chat only about the component you are assigned: Please chat only about the chair design component you are assigned. In some cases, the chatbot may try initiating a conversation about a different design component. However, that is not required, particularly after you have agreed on what the assigned design component should be. 3. Express your preferences: You may start by expressing your preference or by asking if the chatbot has any preference. 4. Negotiate what you don't like or agree with: If you do not agree with the preference of the chatbot, you should negotiate with it and try to convince it otherwise. 5. 
\"End Conversation and Continue\" once you are done: One both you and the chatbot have agreed upon what the design element should be, please use the \"End Conversation and Continue\" to proceed to the next step of the study.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Instructions shown to the interior designers during the human evaluation experiment (2/2).", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "MethodAgencyIMSESRLLMsGPT-41.111.461.591.970.83GPT-31.041.391.621.950.82Llama2-70b0.991.251.681.780.76Llama2-13b0.981.221.581.880.77Llama2-7b0.971.071.631.910.73Guanaco-65b0.911.231.531.490.83Finetuning/Prompting MethodsFine-tuning0.921.780.860.810.98Instruction0.961.621.711.630.97ICL0.981.811.781.350.98ICL-Agency1.221.901.981.980.98", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Each model/method is evaluated through simulated conversations with all other models/methods. For", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Agency distribution of the conversation snippets.", "figure_data": "Other Statistics. The conversations b/w interiordesigners in our dataset have 41.67 turns on av-erage. The extracted conversation snippets have4.21 turns on average. We find an average pairwiseagreement of 71.36% for Intentionality, 70.70% forMotivation, 85.21% for Self-Efficacy, and 81.09%for Self-Regulation.", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Example Human-Human Conversation in Our Dataset.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Ashish Sharma; Sudha Rao; Chris Brockett; Akanksha Malhotra; Nebojsa Jojic; Bill Dolan
[ { "authors": "Panos Achlioptas; Judy Fan; Robert Hawkins; Noah Goodman; Leonidas J Guibas", "journal": "", "ref_id": "b0", "title": "Shapeglot: Learning language for shape differentiation", "year": "2019" }, { "authors": "Daniel Adiwardana; Minh-Thang Luong; David R So; Jamie Hall; Noah Fiedel; Romal Thoppilan; Zi Yang; Apoorv Kulshreshtha; Gaurav Nemade; Yifeng Lu", "journal": "", "ref_id": "b1", "title": "Towards a human-like open-domain chatbot", "year": "2020" }, { "authors": "Anton Bakhtin; Noam Brown; Emily Dinan; Gabriele Farina; Colin Flaherty; Daniel Fried; Andrew Goff; Jonathan Gray; Hengyuan Hu", "journal": "Science", "ref_id": "b2", "title": "Humanlevel play in the game of diplomacy by combining language models with strategic reasoning", "year": "2022" }, { "authors": "Maryam Banaei; Ali Ahmadi; Abbas Yazdanfar", "journal": "Frontiers of Architectural Research", "ref_id": "b3", "title": "Application of ai methods in the clustering of architecture interior forms", "year": "2017" }, { "authors": "Albert Bandura", "journal": "American psychologist", "ref_id": "b4", "title": "Human agency in social cognitive theory", "year": "1989" }, { "authors": "Albert Bandura", "journal": "Current directions in psychological science", "ref_id": "b5", "title": "Exercise of human agency through collective efficacy", "year": "2000" }, { "authors": "Albert Bandura", "journal": "Annual review of psychology", "ref_id": "b6", "title": "Social cognitive theory: An agentic perspective", "year": "2001" }, { "authors": "Joseph Bates", "journal": "Communications of the ACM", "ref_id": "b7", "title": "The role of emotion in believable agents", "year": "1994" }, { "authors": "Marcel Binz; Eric Schulz", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b8", "title": "Using cognitive psychology to understand gpt-3", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "NeurIPS", "ref_id": "b9", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b10", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Savvas Lydia B Chilton; Maneesh Petridis; Agrawala", "journal": "", "ref_id": "b11", "title": "Visiblends: A flexible workflow for visual blends", "year": "2019" }, { "authors": "Elizabeth Clark; Anne Spencer Ross; Chenhao Tan; Yangfeng Ji; Noah A Smith", "journal": "", "ref_id": "b12", "title": "Creative writing with a machine in the loop: Case studies on slogans and stories", "year": "2018" }, { "authors": "Jillianne Code", "journal": "Frontiers in Genetics", "ref_id": "b13", "title": "Agency for learning: Intention, motivation, self-efficacy and self-regulation", "year": "2020" }, { "authors": "Alayna Cole", "journal": "", "ref_id": "b14", "title": "Connecting player and character agency in videogames", "year": "2018" }, { "authors": "Tim Dettmers; Artidoro Pagnoni; Ari Holtzman; Luke Zettlemoyer", "journal": "", "ref_id": "b15", "title": "Qlora: Efficient finetuning of quantized llms", "year": "2023" }, { "authors": "Mustafa Emirbayer; Ann Mische", "journal": "", "ref_id": "b16", "title": "What is agency? 
American journal of sociology", "year": "1998" }, { "authors": "Fox Harrell; Jichen Zhu", "journal": "", "ref_id": "b17", "title": "Agency play: Dimensions of agency for interactive narrative design", "year": "2009" }, { "authors": "He He; Derek Chen; Anusha Balakrishnan; Percy Liang", "journal": "", "ref_id": "b18", "title": "Decoupling strategy and generation in negotiation dialogues", "year": "2018" }, { "authors": "J John; Horton", "journal": "National Bureau of Economic Research", "ref_id": "b19", "title": "Large language models as simulated economic agents: What can we learn from homo silicus?", "year": "2023" }, { "authors": "Yuming Jiang; Ziqi Huang; Xingang Pan; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b20", "title": "Talk-to-edit: Fine-grained facial editing via dialog", "year": "2021" }, { "authors": "Rishabh Joshi; Vidhisha Balachandran; Shikhar Vashishth; Alan Black; Yulia Tsvetkov", "journal": "", "ref_id": "b21", "title": "Dialograph: Incorporating interpretable strategy-graph networks into negotiation dialogues", "year": "2021" }, { "authors": "Immanuel Kant", "journal": "Hafner", "ref_id": "b22", "title": "Critique of judgment", "year": "1951" }, { "authors": "Jin-Hwa Kim; Nikita Kitaev; Xinlei Chen; Marcus Rohrbach; Byoung-Tak Zhang; Yuandong Tian; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b23", "title": "Codraw: Collaborative drawing as a testbed for grounded goaldriven communication", "year": "2019" }, { "authors": "Juil Koo; Ian Huang; Panos Achlioptas; Leonidas J Guibas; Minhyuk Sung", "journal": "", "ref_id": "b24", "title": "Partglot: Learning shape part segmentation from language reference games", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b25", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "John Locke", "journal": "", "ref_id": "b26", "title": "Two treatises of government", "year": "1978" }, { "authors": "Xinyao Ma; Maarten Sap; Hannah Rashkin; Yejin Choi", "journal": "", "ref_id": "b27", "title": "Powertransformer: Unsupervised controllable revision for biased language correction", "year": "2020" }, { "authors": "Jonathan D Moallem; William L Raffe ", "journal": "IEEE", "ref_id": "b28", "title": "A review of agency architectures in interactive drama systems", "year": "2020" }, { "authors": "Changhoon Oh; Jungwoo Song; Jinhan Choi; Seonghyeon Kim; Sungwoo Lee; Bongwon Suh", "journal": "", "ref_id": "b29", "title": "I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence", "year": "2018" }, { "authors": " Openai", "journal": "", "ref_id": "b30", "title": "Gpt-4 technical report", "year": "2023" }, { "authors": "Sung Joon; Park; C O' Joseph; Carrie J Brien; Meredith Ringel Cai; Percy Morris; Michael S Liang; Bernstein", "journal": "", "ref_id": "b31", "title": "Generative agents: Interactive simulacra of human behavior", "year": "2023" }, { "authors": "Grant Pickett; Allan Fowler; Foaad Khosmood", "journal": "", "ref_id": "b32", "title": "Npcagency: conversational npc generation", "year": "2015" }, { "authors": "Mark Riedl; Vadim Bulitko", "journal": "", "ref_id": "b33", "title": "Interactive narrative: A novel application of artificial intelligence for computer games", "year": "2012" }, { "authors": "Stephen Roller; Emily Dinan; Naman Goyal; Da Ju; Mary Williamson; Yinhan Liu; 
Jing Xu; Myle Ott; Eric Michael Smith; Y-Lan Boureau", "journal": "", "ref_id": "b34", "title": "Recipes for building an open-domain chatbot", "year": "2021" }, { "authors": "Maarten Sap; Marcella ; Cindy Prasettio; Ari Holtzman; Hannah Rashkin; Yejin Choi", "journal": "", "ref_id": "b35", "title": "Connotation frames of power and agency in modern films", "year": "2017" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale", "journal": "", "ref_id": "b36", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Ryan Volum; Sudha Rao; Michael Xu; Gabriel Des-Garennes; Chris Brockett; Benjamin Van Durme; Olivia Deng; Akanksha Malhotra; William B Dolan", "journal": "", "ref_id": "b37", "title": "Craft an iron sword: Dynamically generating interactive game characters by prompting large language models tuned on code", "year": "2022" }, { "authors": "Xuewei Wang; Weiyan Shi; Richard Kim; Yoojung Oh; Sijia Yang; Jingwen Zhang; Zhou Yu", "journal": "", "ref_id": "b38", "title": "Persuasion for good: Towards a personalized persuasive dialogue system for social good", "year": "2019" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b39", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Wen Wen; Hiroshi Imamizu", "journal": "Nature Reviews Psychology", "ref_id": "b40", "title": "The sense of agency in perception, behaviour and human-machine interactions", "year": "2022" }, { "authors": "Caleb Ziems; William Held; Omar Shaikh; Jiaao Chen; Zhehao Zhang; Diyi Yang", "journal": "", "ref_id": "b41", "title": "Can large language models transform computational social science?", "year": "2023" } ]
[]
10.18653/v1/D19-1371
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b36", "b12", "b29", "b48", "b49", "b17", "b4", "b37", "b42", "b8", "b43", "b29", "b8", "b41", "b14", "b18", "b39", "b9", "b15", "b27", "b50", "b10", "b23" ], "table_ref": [ "tab_0" ], "text": "Pretrained language models (PTMs) (Peters et al., 2018;Devlin et al., 2019;Liu et al., 2019), trained on massive and heterogeneous corpora, have significantly improved the state-of-the-art across a variety of natural language processing tasks (Wang et al., 2022(Wang et al., , 2023)). Kaplan et al. (2020) found power laws relating cross entropy loss to the sizes of language models and their training datasets. As a result, the field has recently shifted toward larger models and large data (Brown et al., 2020;Rae et al., 2021;Smith et al., 2022;Chowdhery et al., 2022) in hopes of improving performance.\nHowever, training a state-of-the-art language model requires substantial computational resources which demand considerable energy, along with the associated financial and environmental costs (Strubell et al., 2019). For example, RoBERTa-Large (Liu et al., 2019), which was trained on 1000 V100 GPUs for approximately one day, has a computational cost of 4.36×10 21 FLOPs. Recently, Chowdhery et al. (2022) proposes PaLM, which consumes 580 times more FLOPs than RoBERTa-Large. PaLM was trained on 6144 TPU v4 chips for more than 1200 hours, which is unaffordable for most researchers. Therefore, finding ways to speed up pretraining is crucial for the development of pretrained model research.\nIn general, there are three main strategies used to speed up pretraining in NLP: parallel architectures, efficient model architectures, and novel pretraining tasks. The first one is to train a single model utilizing multiple GPUs distributed in many computational nodes (Wang et al., 2020b;Shazeer et al., 2018;Huang et al., 2019). Unfortunately, the gains in efficiency of this strategy depend entirely on the amount of computing hardware used. The second strategy is to improve model structures to reduce the computational complexity and therefore improve efficiency (Wang et al., 2020a;Katharopoulos et al., 2020;Roy et al., 2021). The last one explores more challenging pretraining tasks to accelerate a model's convergence (Clark et al., 2019;Joshi et al., 2020;Levine et al., 2020). However, their improvements are limited, with a reduction of less than an order of magnitude in computational expenses (measured in FLOPs).\nIn this paper, we aim to reduce the computational costs from data level (See Table 1). The PLMs are trained on the entire pretraining corpus D, which is task-agnostic. To take the downstream task into account, we hope to select the most relevant samples from the pretraining corpus based on the downstream data. Recently, Yao et al. (2022) proposes TLM, which retrieves data from a pretraining corpus using task data as queries. However, TLM remains task-agnostic, because it only considers text (i.e., X) similarities and ignores the label (i.e., Y) information.\nMotivated by influence function (Cook and Weisberg, 1982;Koh and Liang, 2017), we propose Influential Subset Selection (ISS) for language model, i.e. selecting the samples with the most positive influence on the downstream task. To calculate the label-aware influence value, ISS utilizes the derivation chain rule from a test objective to training samples. 
Nevertheless, directly applying the chain rule leads to computing the inverse of Hessian with the complexity of O(nq 2 + q 3 )(n is the number of examples and q is parameter size), which is computationally expensive and may run out-of-memory in neural networks. To address this problem, we propose a gradient matching based influence approximation method for selecting pretraining data, which estimates the influence score by matching the gradient values of pretraining samples and end-task samples. Our method avoids the computation of the inverse of Hessian and significantly speeds up the estimation time of influence.\nOur main contributions are summarized as follows:\n• We propose Influential Subset Selection for language model, which explicitly utilizes knowledge of the end-task to select the pretraining corpus.\n• We design a simple, efficient, gradient matching based method for influence estimation, which avoids the calculation of the inverse of Hessian and significantly speeds up the estimation time.\n• We evaluate the effectiveness of our method on eight tasks covering four domains. Notably, ISS outperforms PTMs (e.g. RoBERTa) with only 0.45% of the data and three orders of magnitude reduced FLOPS. Our code can be found at https://github.com/nitwtog/ISS." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Definition", "publication_ref": [], "table_ref": [], "text": "We assume an end-task dataset represented as T = (Z t ) where Z t = (x 1 t , y 1 t ), (x 2 t , y 2 t ), . . . , (x m t , y m t ) represents a set of texts with their ground truth labels. And we assume a large-scale pretraining corpus D = (Z p ), where Z p =\nx 1 p , x 2 p , . . . , x M p represents unlabeled data. We define f = f (head) • f (feat) , such that f (feat) (•; θ ∈ Θ) is a feature extractor that is transferable across learning stages (e.g. pretraining to finetuning) and f (head) (•; φ ∈ Φ) is a task-specific head that is not transferable. And we assume l p (z p , θ, φ p ) and l t (z t , θ, φ t ) are the loss functions of pretraining and end-task." }, { "figure_ref": [], "heading": "Influence Function", "publication_ref": [ "b10", "b23" ], "table_ref": [], "text": "Influence function (Cook and Weisberg, 1982;Koh and Liang, 2017) provides an efficient way to estimate the importance of a training sample. Considering a training sample z was weighted by a small during training, the empirical risk minimizer can be written as\nθ ,z = arg min θ∈Θ 1 n z i ∈D l (z i , θ) + • l(z, θ) (1)\nAssigning -1 n to is equivalent to removing the training example z p . Then, the influence of weighting z p on the parameters is given by\nI param (z) = d θ ,z d =0 = -H -1 θ ∇ θ l(z, θ) (2)\nwhere H θ = 1 n z i ∈D ∇ 2 θ l z i , θ is the Hessian and positive definite by assumption, I param (z) ∈ R N , N is the number of network parameters. Then, we can linearly approximate the parameter change due to removing z without retraining the model by computing θ-zθ ≈ -1 n I param (z)." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "We investigate an influence-based subset selection method to perform efficient pretraining while attempting to minimize accuracy loss on the end-task dataset (Section 3.1). Due to the high computational costs of influence function (Koh and Liang, 2017), we design an influence approximation strategy to speed up the calculation (Section 3.2)." 
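As a purely illustrative companion to Sections 3.1-3.2, the following NumPy sketch contrasts the exact Hessian-based influence of Eq. (2)-(3) with the gradient-matching estimate of Eq. (6) on a toy squared-loss model, where the Hessian is available in closed form; the toy data, damping term, and learning rate are placeholders and are not part of the released implementation.

```python
# Minimal sketch (not the released code): exact influence vs. gradient-matching estimate
# on a toy model where pretraining and end-task losses share the same squared-loss form.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5                                  # toy "pretraining" set and parameter dimension
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=n)
theta = rng.normal(size=d)                     # current parameters (e.g., mid-training)

def grad(x, y_, th):                           # gradient of l(z, th) = 0.5 * (x @ th - y)^2
    return (x @ th - y_) * x

# Exact influence, Eq. (3): I(z_p, z_t) = -grad_t^T H^{-1} grad_p (more negative => more helpful).
H = X.T @ X / n + 1e-3 * np.eye(d)             # empirical Hessian plus a small damping term (assumption)
x_t, y_t = rng.normal(size=d), 0.0             # a toy end-task point
g_t = grad(x_t, y_t, theta)

def exact_influence(i):
    return -g_t @ np.linalg.solve(H, grad(X[i], y[i], theta))

# Gradient-matching estimate, Eq. (6): eta * grad_t . grad_p, no Hessian inverse needed.
eta = 0.1
def matched_influence(i):
    return eta * g_t @ grad(X[i], y[i], theta)

scores_exact = np.array([exact_influence(i) for i in range(n)])
scores_match = np.array([matched_influence(i) for i in range(n)])
print(np.corrcoef(-scores_exact, scores_match)[0, 1])   # rough agreement between the two rankings
```

In practice, as described in Section 3.3, only the last-layer gradients of the encoder would enter the dot product, which keeps the selection cost linear in the size of the retrieved candidate corpus.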
}, { "figure_ref": [], "heading": "Influence of Pretraining Corpus", "publication_ref": [], "table_ref": [], "text": "PTMs used in previous works usually adopt language modeling as pretraining tasks, lacking taskspecific prior knowledge. However, we often know the end-task beforehand, so we can make specific choices about our pretraining regimen to improve end-task performance. Under this setting, we introduce Influential Subset Selection for language model, which measures the importance of pretraining samples by considering the X and Y information of the end-task simultaneously. Specifically, pretraining sample z p affects the prediction of end-task sample z t by influencing the parameters of the feature encoder θ. We can apply the chain rule to measure the influence of upweighting pretraining sample z p on the loss at end-task sample z t .\nI (z p , z t ) dl z t , θ ,z d =0 = ∇ θ l z t , θ d θ ,z d =0 = -∇ θ l z t , θ H -1 θ ∇ θ l(z p , θ)(3)\nThe more negative I (z p , z t ) is, the more positive influence z p can provide. However, computing the Hessian for the full training dataset is expensive, and inverting it is similarly prohibitive: with n training data points and p parameters, this computation requires O(n * p 2 + p 3 ) operations. It means that evaluating the influence of large-scale pretrained corpus is not achievable. Thus, we propose an influence approximation algorithm to speed up the estimation time." }, { "figure_ref": [], "heading": "Influence Approximation", "publication_ref": [], "table_ref": [], "text": "Motivated by calculus, the update of the model parameters is the result of cumulative updates over several training iterations. Similarly, the difference between the loss of test point z t at the end of training versus at the beginning of training can be decomposed along the path taken by the training process. Thus, we hypothesize that the influences of all training examples on a fixed test point z t is exactly the total reduction in loss on z t .\nAssume that we train the feature encoder by minimizing the pertaining loss l p (z p ; θ, φ), via an iterative optimization procedure (such as SGD) which utilizes one training example z p in iteration t. The parameters of the feature encoder before and after iteration t are θ t and θ t+1 respectively. The influence of z t on z p can be approximated in the following way. Figure 1: Illustration of gradient matching based influence approximation. g 1 and g 2 are the loss gradients of two different pretrained samples respectively, while g is the loss gradient of the end-task sample. The influence of a pretrained sample is measured by how a small step based on its gradient affects the loss on the end-task sample. Compared to g 1 , the update step of g 2 is more generalized.\nI (z p , z t ) = l t (z p , θ t ) -l t (z p , θ t+1 ) (4) 𝜃 𝑔 ! 𝑔 \" 𝑔 # 𝑔 \" # 𝑔 # > 𝑔 ! # 𝑔 #\nSuppose we are at point θ t , and we make a firstorder Taylor expansion of function l p (z p , θ t+1 ).\nl t (z p , θ t+1 ) =l t (z p , θ t ) + ∇ θ l t (z p , θ t ) • (θ t+1 -θ t ) + O θ t+1 -θ t 2\n(5) Assuming the model employs SGD as the optimizer, then the update in parameters is θ t+1 -θ t = -η t ∇ θ l p (z t , θ t ), where η t is the learning rate at iteration t. Eq. ( 5) guarantees approximation precision as long as the update magnitude of θ is sufficiently small. 
By substituting the parameter update formula and disregarding the higher-order term, we arrive at the following first-order approximation.\nlt z , θt -lt z , θt+1 ≈ ηt∇ θ lt z , θt • ∇ θ lp (zt, θt)(6)\nWe refer to this first-order approximation as gradient matching-based influence estimation. The full algorithm is provided in Algorithm 1.\nVisualisation We visualize our influence estimation method in Fig 1 . g 1 and g 2 are the loss gradients of two different pretrained samples respectively, while g is the loss gradient of the end-task sample. The influence of a pretrained sample can be viewed as the dot product of its gradient and the gradient of the end-task sample. Higher influence suggests that a network is learning parameters that generalize. \n(z i ) + l t (z i ) for z p ∈ D do Compute ∇ θ l p z p , θ, φp end for z ∈ T v do Compute ∇ θ l t z , θ, φt , for z p ∈ D do I (z p , z ) = ∇ θ l p z p , θ, φp • ∇ θ l t z , θ," }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "Based on the influence score, we select the most relevant samples from the pretraining corpus. Following TLM, we first select a subset via a BM25 retrieval method. Then, we compute the influence score based on this subset to make ISS scalable and efficient.\nMoreover, the number of parameters in largescale language models is very large, leading to very high dimensional gradients. To tackle this problem, we adopt a last-layer gradient approximation by only considering the last layer gradients of pretrained encoder. We select a subset of mini-batches by matching the weighted sum of mini-batch pretraining gradients to the mini-batch task gradients. Let B p and B t be the batch size of pretraining and end-task. The use of mini-batches considerably reduces the number of selection rounds during the ISS algorithm by a factor of B, resulting in B p * B t speed up." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "To evaluate the efficiency and generality of our approach, we conduct experiments in two settings: pretraining from scratch, and further pretraining." }, { "figure_ref": [], "heading": "Pretraining from Scratch", "publication_ref": [ "b13", "b50", "b25", "b11", "b16", "b31", "b20", "b52", "b33", "b32", "b50", "b50", "b12", "b29", "b13", "b50", "b50" ], "table_ref": [ "tab_3" ], "text": "Datasets. Following the setting of Gururangan et al. (2020); Yao et al. (2022), we conduct ex-periments on eight tasks covering four domains, including biomedical science, computer science, news, and reviews. The tasks represent both high-and low-resource (≤ 5K samples) settings, including CHEMPROT (Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017), ACL-ARC (Jurgens et al., 2018), SCIERC (Luan et al., 2018), HyPERPARTISAN (Kiesel et al., 2019), AGNEws (Zhang et al., 2015), HELPFULNESS (McAuley et al., 2015), IMDB (Maas et al., 2011). Table 2 reports the statistic results of various target datasets. Similar to TLM (Yao et al., 2022), we collect two pretraining corpora that respectively match the original corpora of BERT and RoBERTa. We name them C BERT and C RoBERT a , respectively.\nBaselines. We focus on comparison with general PLMs and TLM. Following Yao et al. (2022), we finetuned both BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) of base and large scales as our baselines. And we finetuned the released TLM models as baselines.\nEvaluation Strategy. 
The results of the experiment are the average performance of three random seeds with the standard deviations. Following Gururangan et al. (2020), we report the test micro-F1 for ChemProt and RCT, and macro-F1 for the rest of datasets. Following TLM (Yao et al., 2022), we set three pretraining scales, namely small, medium, and large scales. Differently, at the same scale, our method only utilizes 20% size of the TLM data. More detailed settings are shown in Table A.1 in Appendix.\nTraining Details. We utilize the randomly initialized BERT of base scale as our starter models. We mostly follow optimization, and hyperparameters choices used in Yao et al. (2022) Table 3: Evaluation results for ISS at three different training scales. For each task, we report the average F1 score across three random seeds with standard deviations as subscripts. We also show the number of parameters, the total training compute (FLOPs), and the size of training corpus for comparison." }, { "figure_ref": [], "heading": "Further Pretraining", "publication_ref": [ "b25", "b11", "b16", "b31", "b30", "b12", "b29", "b26", "b2", "b13" ], "table_ref": [], "text": "Datasets. We perform further pretraining in biomedical science and computer science domains. Specifically, we conduct experiments on four datasets, including CHEMPROT (Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017), ACL-ARC (Jurgens et al., 2018), SCIERC (Luan et al., 2018). For the pretraining stage, we collect the unlabeled datasets from S2ORC (Lo et al., 2020).\nBaselines. We select general PTMs (Devlin et al., 2019;Liu et al., 2019) and domain-specific further pretraining models (Lee et al., 2020;Beltagy et al., 2019;Gururangan et al., 2020) as our baselines. Finetuning on the end-task occurs after further pretraining on domain unlabeled corpora.\nEvaluation Strategy. Similar to pretraining from scratch, we report the average performance across three random seeds. And we report the micro-F1 for ChemProt and RCT, and macro-F1 for ACL-ARC and SCIERC.\nTraining Details. In this setting, we perform further pretraining on off-the-shelf pretrained models, such as BERT and RoBERTa. All experiments were conducted on 4 NVIDIA GeForce RTX 3090 GPUs. Detailed hyper-parameters are provided in Table A.2 in Appendix." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "In this section, we will discuss the results of comparing our methods against other baselines. " }, { "figure_ref": [], "heading": "Pretraining from Scratch", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "82.85", "publication_ref": [], "table_ref": [], "text": "Table 4: Evaluation results for ISS in further pretraining. We report the average F1 score across three random seeds with standard deviations as subscripts.\nsults that are better than or comparable to the PLM baselines with significant reductions in FLOPs and the size of training data. At the large scale, ISS achieves comparable results to RoBERTa-large, with an average of 0.19% of FLOPs and 0.45% of the training corpus. At the small and medium scales, ISS improves the performance by 0.29 and 0.74 points on average respectively; 2) At the same data scale, ISS significantly outperforms TLM, which indicates that task label information is crucial. And the influence-based subset selection can select more influential pertaining samples; 3) ISS could offer limited performance gains on highresource datasets. 
It demonstrates that the influence of the pretraining samples would be decreased as the task data grows sufficiently." }, { "figure_ref": [], "heading": "Further Pretraining", "publication_ref": [], "table_ref": [], "text": "We compared ISS with other domain-specific further pretraining methods. Differently, we initialize the network with off-the-shelf pretrained models to provide initialization and select influential subsets from the domain corpus. results. In conclusion, our method outperforms all the baselines, with significant reductions in FLOPs and the size of training data by one order of magnitude or more. It proves our approach is feasible." }, { "figure_ref": [], "heading": "Comparison of Pretraining Steps", "publication_ref": [], "table_ref": [], "text": "To validate the effect of pretraining steps, we compare the performance of ISS with TLM at different pretraining steps. The test results on the four tasks with different pretraining steps are shown in Figure 3. We observe that ISS could achieve the best performance with fewer steps on most of the datasets." }, { "figure_ref": [], "heading": "Subset Size for Pretraining", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "To compare the performance at different data scales, we extracted subsets from the TLM small-scale corpus at different scales via ISS and TLM, respectively. The results are shown in Table 5. We can observe that the performance of TLM becomes better as the dataset grows, but the best results are still lower than those of our method. In ISS, the F1-score would reach the top at the 20%-40% scale and gradually decrease as the data size grows. We believe that as the dataset expands, task-irrelevant or noisy data is added." }, { "figure_ref": [], "heading": "Last Better than First", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "As explained in Section 3.3, the last layer of gradients of the model encoder is only considered to speed up the computation. We have studied the relationship between the gradients at the different layers used in ISS and the corresponding performances. Table 3 shows the results on Chemprot and SciERC. We can observe that the closer the layer, to the task head, the better the selected subset works.\nThe phenomena suggest that different layers in the language model can capture different information, with layers closer to the task head learning more information about the task.\nTable 7 shows the times required by ISS calculating influences at the different layers. Overall, the time cost of selecting a subset is negligible compared to pretraining. In addition, the computational speed based on the last layer would be nearly double, compared to that at the embedding layer.\n6 Analysis" }, { "figure_ref": [ "fig_2" ], "heading": "Visualization of Pretrained Model", "publication_ref": [ "b45" ], "table_ref": [], "text": "We visualize the task data on ISS-small, BERT, and RoBERTa, using the t-SNE algorithm (Van der Maaten and Hinton, 2008). The results are shown in Figure 4. We can observe that the different classes of deep features in ISS-small formed tighter clusters, suggesting that ISS provides better initialization for downstream tasks. In contrast, the features learned by BERT and Roberta are distributed respectively in separate clusters with overlapping parts that could not be distinguished." 
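For reference, a Figure-4-style projection can be reproduced along the following lines with scikit-learn's t-SNE; the feature and label files named below are placeholders rather than artifacts released with this work.

```python
# Illustrative only: t-SNE projection of pooled sentence representations, coloured by class.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

sentence_features = np.load("chemprot_features.npy")   # assumed: N x d pooled encoder outputs
labels = np.load("chemprot_labels.npy")                # assumed: N integer class labels

emb2d = TSNE(n_components=2, perplexity=30, init="pca",
             random_state=0).fit_transform(sentence_features)

for c in np.unique(labels):                            # one colour per class, as in Figure 4
    m = labels == c
    plt.scatter(emb2d[m, 0], emb2d[m, 1], s=5, label=f"class {c}")
plt.legend(markerscale=3)
plt.title("t-SNE of sentence representations (sketch)")
plt.savefig("tsne_sketch.png", dpi=200)
```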
}, { "figure_ref": [], "heading": "Analyzing of Task-influential Words", "publication_ref": [ "b28" ], "table_ref": [ "tab_7" ], "text": "We compute the point-wise mutual information (PMI) (Levy and Goldberg, 2014) between words and their corresponding labels in the task dataset. Briefly, PMI is used to measure the likelihood of two events occurring together, so the higher the PMI a word has, the more likely it is to be taskinfluential. We select words with high PMI as taskinfluential words, and compare their frequency in ISS-small and TLM-small datasets, respectively. As shown in Table 6, the word frequency in the ISS-small dataset is higher than that in the TLMsmall dataset. Thus, ISS may focus more on taskinfluential words.\n7 Related Work" }, { "figure_ref": [], "heading": "Efficient Pretraining for PLMs", "publication_ref": [ "b41", "b9", "b27" ], "table_ref": [], "text": "Many attempts have been made to improve the efficiency of pretraining. Parallel architectures (Shazeer et al., 2018;Wang et al., 2020b) are commonly used in pretraining. However, parallelism (Clark et al., 2019) applies the replaced token detection which is more challenging. PMI-Masking (Levine et al., 2020) selectively masks tokens based on their importance. However, their improvements are limited, with less than an order of magnitude reduction in computational expenses (measured in FLOPs). Orthogonal to these works, ISS investigates reducing training data redundancy by the influence of pretraining data points." }, { "figure_ref": [], "heading": "Further Pretraning in NLP", "publication_ref": [ "b13", "b2", "b1", "b26" ], "table_ref": [], "text": "Continually pretraining can effectively improve PTMs' performance on new domains or downstream tasks (Gururangan et al., 2020). To achieve it, most previous works continually optimize the pretrained model parameters on a large number of corpora collected from the target domain (e.g., scientific (Beltagy et al., 2019), finance (Araci, 2019) and bio-media (Lee et al., 2020)). However, it is computationally expensive to further pretrain the model on a large amount of unlabeled data and it may not be feasible to collect such a large scale of unlabeled data on certain domains. In contrast, ISS does not need any additional domain data and only utilizes the general corpus. In addition, our approach can also be employed for further pretraining, as we demonstrate in our experiments." }, { "figure_ref": [], "heading": "Dataset Pruning", "publication_ref": [ "b34", "b0", "b21", "b38", "b44", "b38", "b40", "b44", "b50" ], "table_ref": [], "text": "Dataset pruning is closely related to the coreset selection methods (Mirzasoleiman et al., 2020;Agarwal et al., 2004), which try to identify the most representative training samples. Several works (Killamsetty et al., 2021;Rebuffi et al., 2017;Toneva et al., 2018) have studied dataset pruning for efficient training of deep learning models in supervised learning and active learning scenarios.\nDataset pruning methods typically rely on a predefined criterion to compute a scalar score for each training example, e.g. the compactness (Rebuffi et al., 2017), diversity (Sener and Savarese, 2017), and forgetfulness (Toneva et al., 2018), and then rank and select the training data according to the computed score. Recently, Yao et al. (2022) proposed TLM for transfer learning, which retrieves a subset from the pretraining corpus that is more similar to the task corpus. 
However, these methods are heuristic and lack of generalization guarantee, they also discard the influence interaction between the collected samples. Our proposed method overcomes these shortcomings." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose Influential Subset Selection for language model, which aims to reduce the computational costs of pretraining from data level. Specifically, we introduce influence function to measure the importance of each pretraining sample. Moreover, we design a simple, efficient, gradient matching-based method for influence estimation, which significantly speeds up the estimation time.\nExperiments on various datasets demonstrate that our method achieves comparable performance with PTMs, with a reduction of training FLOPs by three orders of magnitude." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "There are two potential risks with our method. First, ISS trades generality for efficiency by learning only task-specific representations. Consequently, it may not be suitable for other tasks. Secondly, our method is hardly practical for few-shot or zeroshot learning, as few or no task data are available as anchor points. These potential risks are left to future work." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b43", "b3", "b5" ], "table_ref": [], "text": "Pretraining from scratch and further pretraining such as DAPT need large-scale unlabeled corpus to learn general knowledge, which results in corresponding greenhouse emissions due to energy consumption (Strubell et al., 2019). However, as shown in Section 5, our new efficient algorithms greatly increase the data efficiency of PTMs, reducing these harms as well as the various harms associated with labor for data collection. Our work introduces a new subset selection algorithm but leverages pre-existing datasets and models. Overall, this work inherits some of the risks of the original work upon which it is implemented, (such as bias (Bender et al., 2021) or privacy leakage (Carlini et al., 2021). " }, { "figure_ref": [], "heading": "A Detailed Experiment Settings", "publication_ref": [], "table_ref": [], "text": "" } ]
Pretrained language models have achieved remarkable success in various natural language processing tasks. However, pretraining has recently shifted toward larger models and larger data, which has resulted in significant computational and energy costs. In this paper, we propose Influential Subset Selection (ISS) for language model, which explicitly utilizes end-task knowledge to select a tiny subset of the pretraining corpus. Specifically, ISS selects the samples that will provide the most positive influence on the performance of the end-task. Furthermore, we design a gradient-matching-based influence estimation method, which can drastically reduce the computation time of influence. With only 0.45% of the data and a three-orders-of-magnitude lower computational cost, ISS outperformed pretrained models (e.g., RoBERTa) on eight datasets covering four domains.
Farewell to Aimless Large-scale Pretraining: Influential Subset Selection for Language Model
[ { "figure_caption": ":loss landscape of pre-training : loss landscape of end-task : gradient of pre-training sample : gradient of end-task sample", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Visualization of sentence representation on Chemprot using the t-SNE algorithm (Van derMaaten and Hinton, 2008). Each color denotes a class.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Qualitative comparison between PLMs, TLM, and ISS(ours). X/Y-Dep means the pretraining data is X/Y dependent.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "φt ,", "figure_data": "endSort pretraining samples based on influenceAdd top k influential samples to SendReturn influential subset S", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": ". All experiments were conducted on 4 NVIDIA GeForce RTX 3090 GPUs. Detailed hyper-parameters are provided in Table A.1 in Appendix. Statistics of various target datasets. † indicates high-resource settings. FLOPs 2 AGNews Hyp. Help. IMDB ACL. SciERC Chem. RCT Avg.", "figure_data": "DomainTaskTrain Dev. Test ClassesBIOMEDCHEMPROT † RCT4169 2427 3469 18040 30212 3013513 5CSACL-ARC SCIERC1688 3219114 455139 9746 7NEWSHYPERPARTISAN † AGNEWS515 115000 5000 7600 65 652 4REVIEWS† HELPFULNESS 115251 5000 25000 † IMDB 20000 5000 250002 2", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Table3shows the main results of ISS with the according TLM and PLMs baselines at three different scales. The followings are the related comparison and analysis we conducted: 1) ISS could achieve re-Comparison of ISS and TLM at different pretraining steps. 
The experiments were conducted on the small-scale dataset, and notably, the data scale of TLM was five times larger than ours.", "figure_data": ")VFRUH)VFRUH)VFRUH)VFRUH,66,66,66,667/07/07/07/0SUHWUDLQVWHSVSUHWUDLQVWHSVSUHWUDLQVWHSVSUHWUDLQVWHSV(a) Helpfullness(b) Chemprot(c) SciERC(d) ACL-ARCFigure 2: ModelParam Data FLOPsBIOMEDCSAvg.RCT Chem ACL SciERCBERT-Base109M 16G 2.79E19 87.00 81.94 69.45 80.98 79.84RoBERTa-base125M 160G 1.54E21 87.23 82.60 68.34 81.35 79.88SciBERT109M 15G 2.65E19-83.64 70.98 79.97-BioBERT109M 96G 1.80E20-76.46---DAPT125M 47G 1.58E18 87.6 84.2 75.480.8 82.00DAPT+TAPT125M 47G 1.77E18 87.8 84.4 75.681.3 82.2887.3683.9076.0683.91ISS-DAPT(BERT)109M 1.7G 6.9E1782.81±0.02±0.10±0.70±0.3887.5784.8876.7082.23ISS-DAPT(RoBERTa) 125M 1.7G 7.9E17±0.06±0.10±0.25±0.30", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Table 4 shows the main", "figure_data": "AGNewsSciERCChemprotISSTLMISSTLMISSTLM10%94.34 ±0.0894.08 ±0.0780.82 ±0.4181.41 ±0.1680.80 ±0.3480.15 ±0.3220%94.40 ±0.0694.16 ±0.0983.70 ±0.3181.21 ±0.4482.82 ±0.4181.51 ±0.5540%94.14 ±0.0594.05 ±0.1883.16 ±0.0782.48 ±0.4381.98 ±0.1481.75 ±0.0460%94.08 ±0.0294.07 ±0.0982.51 ±0.2983.05 ±0.2082.08 ±0.2281.80 ±0.4180%94.17 ±0.0494.27 ±0.0981.71 ±0.2481.75 ±0.1581.83 ±0.3081.86 ±0.47", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results on the development set with different data scales.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of the frequency of task influential words in different subsets.", "figure_data": ")VFRUH)VFRUHHPEHGGLQJOD\\HU OD\\HU OD\\HU OD\\HU /D\\HUQDPHHPEHGGLQJOD\\HU OD\\HU OD\\HU OD\\HU /D\\HUQDPH(a) Chemprot(b) SciERCFigure 3: F1-score results of ISS with gradients of dif-ferent layers (i.e., Embedding layer, 3/6/9/12-th Trans-former Block) over Chemprot and SciERC.Related-Label PMIAGNewsISS(small) /% TLM(small) /%immigrationWorld1.3410.00720.0070policyWorld1.1870.04930.0401chinaWorld0.3820.08360.0695medalsSports1.4000.01390.0136goldsSports1.4000.00090.0008sportsSports1.2930.04590.0454financialBusiness1.0540.07170.0567commerceBusiness0.8440.00970.0081businessBusiness0.7100.11700.0952automationSci/Tech1.4200.00430.0028internetSci/Tech1.2240.07290.0524technologySci/Tech1.1150.08640.0661", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparison of the speed of computing influences using different layers. The experiments were conducted on Chemport dataset.", "figure_data": "Layer nameCost timesSmallLargeEmbedding2.0 hours 5.2 hours3-th Transformer1.8 hours 4.8 hours6-th Transformer1.6 hours 4.4 hours9-th Transformer1.4 hours 4.0 hours12-th Transformer 1.1 hours 3.6 hours", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" } ]
Xiao Wang; Weikang Zhou; Qi Zhang; Jie Zhou; Songyang Gao; Junzhe Wang; Menghan Zhang; Xiang Gao; Yunwen Chen; Tao Gui
[ { "authors": "Sariel Pankaj K Agarwal; Har-Peled; Kasturi R Varadarajan", "journal": "Journal of the ACM (JACM)", "ref_id": "b0", "title": "Approximating extent measures of points", "year": "2004" }, { "authors": "Dogu Araci", "journal": "", "ref_id": "b1", "title": "Finbert: Financial sentiment analysis with pre-trained language models", "year": "2019" }, { "authors": "Iz Beltagy; Kyle Lo; Arman Cohan", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "SciB-ERT: A pretrained language model for scientific text", "year": "2019" }, { "authors": "Emily M Bender; Timnit Gebru; Angelina Mcmillan-Major; Shmargaret Shmitchell", "journal": "", "ref_id": "b3", "title": "On the dangers of stochastic parrots: Can language models be too big?", "year": "2021" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Nicholas Carlini; Florian Tramer; Eric Wallace; Matthew Jagielski; Ariel Herbert-Voss; Katherine Lee; Adam Roberts; Tom Brown; Dawn Song; Ulfar Erlingsson", "journal": "", "ref_id": "b5", "title": "Extracting training data from large language models", "year": "2021" }, { "authors": "Rewon Child; Scott Gray; Alec Radford; Ilya Sutskever", "journal": "", "ref_id": "b6", "title": "Generating long sequences with sparse transformers", "year": "2019" }, { "authors": "Valerii Krzysztof Marcin Choromanski; David Likhosherstov; Xingyou Dohan; Andreea Song; Tamas Gane; Peter Sarlos; Jared Quincy Hawkins; Afroz Davis; Lukasz Mohiuddin; Kaiser", "journal": "", "ref_id": "b7", "title": "Rethinking attention with performers", "year": "2020" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b8", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b9", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "year": "2019" }, { "authors": "Dennis Cook; Sanford Weisberg", "journal": "Chapman and Hall", "ref_id": "b10", "title": "Residuals and influence in regression", "year": "1982" }, { "authors": "Franck Dernoncourt; Ji Young; Lee ", "journal": "Asian Federation of Natural Language Processing", "ref_id": "b11", "title": "PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "year": "2020" }, { "authors": "Yanping Huang; Youlong Cheng; Ankur Bapna; Orhan Firat; Dehao Chen; Mia Chen; Hyoukjoong Lee; Jiquan Ngiam; V Quoc; Yonghui Le; Wu", "journal": "Advances in neural information processing systems", "ref_id": "b14", 
"title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism", "year": "2019" }, { "authors": "Mandar Joshi; Danqi Chen; Yinhan Liu; Luke Daniel S Weld; Omer Zettlemoyer; Levy", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b15", "title": "Spanbert: Improving pre-training by representing and predicting spans", "year": "2020" }, { "authors": "David Jurgens; Srijan Kumar; Raine Hoover; Dan Mc-Farland; Dan Jurafsky", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b16", "title": "Measuring the evolution of a scientific field through citation frames", "year": "2018" }, { "authors": "Jared Kaplan; Sam Mccandlish; Tom Henighan; Tom B Brown; Benjamin Chess; Rewon Child; Scott Gray; Alec Radford; Jeffrey Wu; Dario Amodei", "journal": "", "ref_id": "b17", "title": "Scaling laws for neural language models", "year": "2020" }, { "authors": "Angelos Katharopoulos; Apoorv Vyas; Nikolaos Pappas; François Fleuret", "journal": "", "ref_id": "b18", "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b19", "title": "", "year": "" }, { "authors": "Johannes Kiesel; Maria Mestre; Rishabh Shukla; Emmanuel Vincent; Payam Adineh; David Corney; Benno Stein; Martin Potthast", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "SemEval-2019 task 4: Hyperpartisan news detection", "year": "2019" }, { "authors": "Krishnateja Killamsetty; Ganesh Durga; Abir Ramakrishnan; Rishabh De; Iyer", "journal": "", "ref_id": "b21", "title": "Grad-match: Gradient matching based data subset selection for efficient deep model training", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "Pang Wei; Koh ; Percy Liang", "journal": "", "ref_id": "b23", "title": "Understanding black-box predictions via influence functions", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "Jens Kringelum; Sonny Kim Kjaerulff; Søren Brunak; Ole Lund; Tudor I Oprea; Olivier Taboureau", "journal": "Database", "ref_id": "b25", "title": "Chemprot-3.0: a global chemical biology diseases mapping", "year": "2016" }, { "authors": "Jinhyuk Lee; Wonjin Yoon; Sungdong Kim; Donghyeon Kim; Sunkyu Kim; Chan Ho; So ; Jaewoo Kang", "journal": "Bioinformatics", "ref_id": "b26", "title": "Biobert: a pretrained biomedical language representation model for biomedical text mining", "year": "2020" }, { "authors": "Yoav Levine; Barak Lenz; Opher Lieber; Omri Abend; Kevin Leyton-Brown; Moshe Tennenholtz; Yoav Shoham", "journal": "", "ref_id": "b27", "title": "Pmi-masking: Principled masking of correlated spans", "year": "2020" }, { "authors": "Omer Levy; Yoav Goldberg", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Neural word embedding as implicit matrix factorization", "year": "2014" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b29", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Kyle Lo; Lucy Lu Wang; Mark Neumann; Rodney Kinney; Daniel Weld", "journal": "", "ref_id": "b30", "title": "S2ORC: The semantic scholar open research corpus", "year": "2020" }, { "authors": "Yi Luan; Luheng He; Mari 
Ostendorf; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction", "year": "2018" }, { "authors": "Andrew L Maas; Raymond E Daly; Peter T Pham; Dan Huang; Andrew Y Ng; Christopher Potts", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Learning word vectors for sentiment analysis", "year": "2011" }, { "authors": "Julian Mcauley; Christopher Targett; Qinfeng Shi; Anton Van Den; Hengel", "journal": "", "ref_id": "b33", "title": "Image-based recommendations on styles and substitutes", "year": "2015" }, { "authors": "Baharan Mirzasoleiman; Jeff Bilmes; Jure Leskovec", "journal": "", "ref_id": "b34", "title": "Coresets for data-efficient training of machine learning models", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b35", "title": "", "year": "" }, { "authors": "Matthew E Peters; Mark Neumann; Mohit Iyyer; Matt Gardner; Christopher Clark; Kenton Lee; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Deep contextualized word representations", "year": "2018" }, { "authors": "Sebastian Jack W Rae; Trevor Borgeaud; Katie Cai; Jordan Millican; Francis Hoffmann; John Song; Sarah Aslanides; Roman Henderson; Susannah Ring; Young", "journal": "", "ref_id": "b37", "title": "Scaling language models: Methods, analysis & insights from training gopher", "year": "2021" }, { "authors": " Sylvestre-Alvise; Alexander Rebuffi; Georg Kolesnikov; Christoph H Sperl; Lampert", "journal": "", "ref_id": "b38", "title": "icarl: Incremental classifier and representation learning", "year": "2017" }, { "authors": "Aurko Roy; Mohammad Saffar; Ashish Vaswani; David Grangier", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b39", "title": "Efficient content-based sparse attention with routing transformers", "year": "2021" }, { "authors": "Ozan Sener; Silvio Savarese", "journal": "", "ref_id": "b40", "title": "Active learning for convolutional neural networks: A core-set approach", "year": "2017" }, { "authors": "Noam Shazeer; Youlong Cheng; Niki Parmar; Dustin Tran; Ashish Vaswani; Penporn Koanantakool; Peter Hawkins; Hyoukjoong Lee; Mingsheng Hong; Cliff Young", "journal": "Advances in neural information processing systems", "ref_id": "b41", "title": "Mesh-tensorflow: Deep learning for supercomputers", "year": "2018" }, { "authors": "Shaden Smith; Mostofa Patwary; Brandon Norick; Patrick Legresley; Samyam Rajbhandari; Jared Casper; Zhun Liu; Shrimai Prabhumoye; George Zerveas; Vijay Korthikanti", "journal": "", "ref_id": "b42", "title": "Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model", "year": "2022" }, { "authors": "Emma Strubell; Ananya Ganesh; Andrew Mccallum", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Energy and policy considerations for deep learning in NLP", "year": "2019" }, { "authors": "Mariya Toneva; Alessandro Sordoni; Remi Tachet Des Combes; Adam Trischler; Yoshua Bengio; Geoffrey J Gordon", "journal": "", "ref_id": "b44", "title": "An empirical study of example forgetting during deep neural network learning", "year": "2018" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b45", "title": "Visualizing data using t-sne", "year": "2008" }, { 
"authors": "Sinong Wang; Belinda Z Li; Madian Khabsa; Han Fang; Hao Ma; ; ", "journal": "", "ref_id": "b46", "title": "Linformer: Selfattention with linear complexity", "year": "2020" }, { "authors": "Wenhui Wang; Furu Wei; Li Dong; Hangbo Bao; Nan Yang; Ming Zhou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b47", "title": "Minilm: Deep selfattention distillation for task-agnostic compression of pre-trained transformers", "year": "2020" }, { "authors": "Xiao Wang; Shihan Dou; Limao Xiong; Yicheng Zou; Qi Zhang; Tao Gui; Liang Qiao; Zhanzhan Cheng; Xuanjing Huang", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "MINER: Improving out-of-vocabulary named entity recognition from an information theoretic perspective", "year": "2022" }, { "authors": "Xiao Wang; Weikang Zhou; Can Zu; Han Xia; Tianze Chen; Yuansen Zhang; Rui Zheng; Junjie Ye; Qi Zhang; Tao Gui", "journal": "", "ref_id": "b49", "title": "Instructuie: Multitask instruction tuning for unified information extraction", "year": "2023" }, { "authors": "Xingcheng Yao; Yanan Zheng; Xiaocong Yang; Zhilin Yang", "journal": "", "ref_id": "b50", "title": "Nlp from scratch without largescale pretraining: A simple and efficient framework", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b51", "title": "", "year": "" }, { "authors": "Xiang Zhang; Junbo Zhao; Yann Lecun", "journal": "Advances in neural information processing systems", "ref_id": "b52", "title": "Character-level convolutional networks for text classification", "year": "2015" } ]
[ { "formula_coordinates": [ 2, 317.11, 339.41, 207.3, 30.47 ], "formula_id": "formula_0", "formula_text": "θ ,z = arg min θ∈Θ 1 n z i ∈D l (z i , θ) + • l(z, θ) (1)" }, { "formula_coordinates": [ 2, 315.67, 435.61, 208.74, 33.45 ], "formula_id": "formula_1", "formula_text": "I param (z) = d θ ,z d =0 = -H -1 θ ∇ θ l(z, θ) (2)" }, { "formula_coordinates": [ 3, 78.44, 246.8, 210.69, 100.27 ], "formula_id": "formula_2", "formula_text": "I (z p , z t ) dl z t , θ ,z d =0 = ∇ θ l z t , θ d θ ,z d =0 = -∇ θ l z t , θ H -1 θ ∇ θ l(z p , θ)(3)" }, { "formula_coordinates": [ 3, 100, 92.95, 349.64, 681.25 ], "formula_id": "formula_3", "formula_text": "I (z p , z t ) = l t (z p , θ t ) -l t (z p , θ t+1 ) (4) 𝜃 𝑔 ! 𝑔 \" 𝑔 # 𝑔 \" # 𝑔 # > 𝑔 ! # 𝑔 #" }, { "formula_coordinates": [ 3, 308.68, 403.09, 213.19, 29.11 ], "formula_id": "formula_4", "formula_text": "l t (z p , θ t+1 ) =l t (z p , θ t ) + ∇ θ l t (z p , θ t ) • (θ t+1 -θ t ) + O θ t+1 -θ t 2" }, { "formula_coordinates": [ 3, 309.29, 590.07, 215.12, 18.02 ], "formula_id": "formula_5", "formula_text": "lt z , θt -lt z , θt+1 ≈ ηt∇ θ lt z , θt • ∇ θ lp (zt, θt)(6)" }, { "formula_coordinates": [ 4, 81.78, 173.49, 182.1, 119.12 ], "formula_id": "formula_6", "formula_text": "(z i ) + l t (z i ) for z p ∈ D do Compute ∇ θ l p z p , θ, φp end for z ∈ T v do Compute ∇ θ l t z , θ, φt , for z p ∈ D do I (z p , z ) = ∇ θ l p z p , θ, φp • ∇ θ l t z , θ," } ]
2023-05-23
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b8", "b13", "b13", "b14", "b15", "b16", "b17", "b18" ], "table_ref": [], "text": "In this paper, we provide a modified version of conservative physics-informed neural networks (cPINN for short) to construct the weak solutions of hyperbolic conservation laws in non-conservative form. We use the generalized Buckley-Leverett (GBL for short ) equation with variable porosity in porous media to demonstrate our results. The GBL equation is read as\nϕũ t + f (ũ, x) x = 0, (x, t) ∈ R × R + ,(1.1)\nwhere ũ = ũ(x, t) is the saturation of the water, ϕ = ϕ(x) > 0 is the porosity of the medium, the positive constant M is the water over oil viscosity ratio, and the flux f (ũ, x) is defined as\nf (ũ, x) = ũ2 ũ2 + M (1 -ũ) 2 , 0 ≤ ũ ≤ 1, M > 0.\n(1.\n2)\nThe equation (1.1) models the motion of water flow in oil-water flows in a porous medium. The equation (1.1) is a scalar hyperbolic equation in non-conservative form. It can be re-written in a conservative form by the change of variable, as we let u(x, t) := ϕ(x)ũ(x, t).\n(1.3)\nThen we obtain the following equation in conservative form\nu t + f (u, ϕ(x)) x = 0, (x, t) ∈ R × R + ,(1.4)\nwhere the flux f (u, ϕ(x)) is defined as\nf (u, ϕ(x)) = u 2 u 2 + M [ϕ(x) -u] 2 , 0 ≤ u ≤ ϕ(x).\n(1.5)\nFor the derivation of the Buckley-Leverett equation, we refer the reader to [1,2]. In this paper, we focus on the case that the porosity ϕ is a piece-wise constant function so that ϕũ t is not defined in the sense of distribution in PDEs theory when u consists of discontinuities (or shock waves). We provide the application of cPINN to dedicate that the profiles of weak solutions to the Riemann problems of (1.1) and (1.4) are identical, which means that there is no difference for the machines whether the hyperbolic systems are in conservative form or not. Deep learning (DL for short) has clearly been integrated into many aspects of our daily lives. Among these are self-driving cars, face recognition technology, machine translation, and so forth. In addition, DL is employed in a fairly novel way to solve PDEs while adhering to any physics laws specified by the governed equations as the background knowledge. This concept is first materialized by [3] and is known as Physics-informed Neural Networks (PINN for short). As demonstrated in [4,5,6,7,8], PINN have been implemented auspiciously to solve a wide range of forward and inverse problems of PDEs. Nonetheless, the accuracy of the solution generated by machine is bounded below and the high training costs are some disadvantages of PINN [9]. Other than those two shortcomings, PINN's fundamental limitation is its inability to provide a satisfactory approximation to the PDEs with discontinuous solutions (for example, shock waves) [10].\nThe mathematical concept of using the neural networks to construct the solutions of hyperbolic systems of conservation laws is as follows. Consider a linear hyperbolic system with constant coefficients which is written as follows:\nu t + Au x = 0, (x, t) ∈ R × R + , u(x, 0) = u 0 (x), x ∈ R,(1.6)\nwhere u(x, t) is a vector-valued function in R p and A is a constant matrix of size p × p. 
Suppose that A has p distinct eigenvalues :\nλ 1 < λ 2 < • • • < λ p .\nThen the solution of the Cauchy problem (1.6) is given by\nu(x, t) = p k=1 l T k u 0 (x -λ k t)r k ,(1.7)\nwhere l k and r k are left and right eigenvectors of A for 1 ≤ k ≤ p respectively. Setting z k = l T k u 0 (x -λ k t). Then the formula (1.7) can be re-written as below\nu(x, t) = p k=1 l T k u 0 (x -λ k t)r k = p k=1 z k r k . (1.8)\nThus u(x, t) can be represented as a neural network with initial data being its activated function in Figure 1. In general, the activation functions in a neural network are hyperbolic tangent or sigmoid functions. With the results of the universal approximation theories in [11,12,13], the initial condition can be approximated by a neural network. Based on this fact, we believe that PINN can give an approximation solution to a Riemann problem of a nonlinear hyperbolic system. As we mentioned that the solution may be constructed by solving two stages of system (1.10), thus it is worth considering using different architectures of neural networks in each stage. More precisely, we divide the domain into two sub-domains, and we use one PINN to solve the problem in one subdomain. This strategy can be realized by applying conservative PINN. Conservative PINN (cPINN) is an extension of PINN where the algorithm's primary objective is to solve the conservation law. In fact, cPINN is an attempt to address the first two issues of PINN that were previously mentioned-the solution's accuracy and the high training cost. In cPINN, domains are divided into numerous non-overlapping subdomains. In each sub-domain, we consider different neural network architectures, such as ones with a different number of outputs (thus, scalar or system case), various numbers of hidden layers and neurons, distinct sets of hyper-parameters and parameters, different activation functions, different optimization techniques, distinct number of training and interior sample points, and so forth. This will increase our ability to select the best network for each sub-domain. In order to maintain the continuity, the solution in each subdomain is eventually pieced back together using the proper interface conditions. Despite the fact that the performance of cPINN on hyperbolic conservation laws was not thoroughly investigated in the original paper [9], we persist to use cPINN on our governed equations considering we have developed a completely distinct purpose for cPINN that allows us to implement various system and scalar architectures in each subdomain.\nOur goal is to simulate the Riemann problem of equation (1.4) by using a deep machine for the following Riemann problem\n   u t + f (u, ϕ(x)) x = 0, u(x, 0) = u L , x < 0, u R , x < 0,(1.9)\nwhere u L , u R are two constant states, and ϕ(x) = ϕ L when x < 0, and ϕ(x) = ϕ L when x > 0. Following the analysis in [14], we augment the equation (1.4) with ϕ t = 0. Then we obtain the following equivalent Riemann problem of 2 × 2 system of conservation laws\n   U t + F (U ) x = 0, U (x, 0) = U L , x < 0, U R , x < 0.\n(1.10)\nwhere\nU := (u, ϕ) T , F (U ) := (f (u, ϕ), 0) T , U L = (u L , ϕ L ) T ,U R = (u R , ϕ R ) T .\nThe solution of (1.10) consists of the elementary waves form each characteristic wave field, which is standing wave discontinuity from the linear degenerate field and the rarefaction, shock waves from the nonlinear genuinely field. Moreover, there is a path from U L to U R consisting of a sequence of corresponding wave curves in u -ϕ plane. 
Based on the results in [14], these waves are obtained by solving a two-by-two hyperbolic resonant system followed by a scalar hyperbolic equations with non-convex flux f (u, ϕ R ). When solving the Riemann problem (1.10), there is a time-independent wave in the solution, which is called the standing wave discontinuity, due to the fact that the first characteristic field is linear degenerate. For the second characteristic field which is genuinely nonlinear, the rarefaction curves are horizontal line in u -ϕ plane, thus ϕ stays constant on the rarefaction wave curves. Since the solution of (2.1) consists of a standing wave coming from the two-by-two system, and a time dependent nonlinear wave coming from the scalar equation, according to the construction of the solution, we propose to use two different neural networks, one for the case of system, and the other for the scalar equation respectively. These two neural networks are separated by an interface. From the observation of the standing shock and the Riemann data of ϕ, we are able to specify the location of interface, where is on the right hand side and close to the wall x = 0 (or t axis).\nIn this paper, we also consider the case that the initial data is critical. It means that the initial data is extremely close to zero. In this case, the weak solutions constructed by the original cPINN either have the wrong profile or the speeds are incorrect. To overcome the problem, we propose to re-scale the unknowns u and ϕ so that the new flux under the re-scaling becomes non-singular. As we observe, an entropy condition is required we deal with the discontinuous solutions in PINN and cPINN. The well-known entropy conditions that are frequently used in PINN are the ones invented by Oleinik [15,16], Kruzkhov [17], and the concept of entropy-entropy flux pair [18,19] in PDEs. In our framework, Oleinik's entropy condition is used and is needed to be modified to obtain the correct speeds of weak solutions in the critical case. In addition, the choice of the scaling parameters becomes an important issue. Under a suitable choice of re-scaling parameters and the re-scaling technique, we are able to construct the correct entropy solutions for the critical case.\nIn summary, the contributions of this paper are as follows:\n1. This study gives a general framework of constructing the weak solutions of the hyperbolic conservation laws in non-conservative form or the balance laws with discontinuous perturbations in the flux, for example, the generalized Buckley-Leverett equation with discontinuous porosity (1.1). To the best of our knowledge, PINN or cPINN have not been used for such kind of systems.\n2. The study of critical states-which, as far as we are aware, has not been taken into account in the equations that PINN and cPINN solved-are also covered in this study. To overcome the difficulty that emerges in critical states, we impose the re-scaling process on the unknowns.\nThe paper is organized as follows. The review of previous results for the theoretical analysis to the GBL equation is given in Section 2. The review of PINN and cPINN is addressed in Section 3. Our main results are given in Section 4, and the experimental results are in Section 5, followed by the conclusions in Section 6." 
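To make the representation (1.7)-(1.8) that motivates Figure 1 concrete, a minimal NumPy sketch of the eigen-expansion solution of the linear problem (1.6) is given below; the 2x2 matrix A and the Riemann-type initial data are illustrative choices only and are not taken from the examples of this paper.

```python
# Sketch of Eq. (1.7)-(1.8): u(x,t) is a superposition of the initial data transported
# along the characteristic speeds lambda_k of the constant matrix A.
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                    # illustrative matrix, eigenvalues -1 and +1

def u0(x):                                    # Riemann-type initial data u_0(x)
    return np.array([1.0, 0.0]) if x < 0.0 else np.array([0.0, 0.5])

lam, R = np.linalg.eig(A)                     # columns of R are right eigenvectors r_k
L = np.linalg.inv(R)                          # rows of L are left eigenvectors with l_k . r_j = delta_kj

def u(x, t):
    """u(x, t) = sum_k (l_k^T u0(x - lam_k t)) r_k."""
    val = np.zeros(2)
    for k in range(len(lam)):
        z_k = L[k] @ u0(x - lam[k] * t)       # transported coefficient z_k
        val += z_k * R[:, k]
    return val

print(u(0.3, 1.0), u(-0.3, 1.0))
```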
}, { "figure_ref": [ "fig_1" ], "heading": "Theoretical Results on Riemann problem of the GBL equation", "publication_ref": [ "b13", "b19" ], "table_ref": [], "text": "The existence and behaviour of the weak solution to the Riemann problem has been proven in the paper [14], thus in this section, we briefly review the result. To convert the equation (1.4) into a 2 × 2 system of conservation laws, we add ϕ t = 0. Then the Riemann problem of the GBL equation is given by\n   U t + F (U ) x = 0, U (x, 0) = U L := (u L , ϕ L ) T , x < 0, U R := (u R , ϕ R ) T , x > 0,(2.1)\nwhere\nU := (u, ϕ) T and F (U ) := (f (u, ϕ), 0) T and ϕ L , u L , ϕ R and u R are constant with 0 < u L ≤ ϕ L , 0 < u L ≤ ϕ L . We say a L 1 function u is a weak function of (1.4) if for any ψ ∈ C ∞ 0 × [0, ∞), u satisfies t>0 uψ t + f (u, x)ψ x dxdt + R u 0 (x)ψ(x, 0)dx = 0. (2.2)\nBy direct computation, we know that the eigenvalues and their corresponding right eigenvectors are λ 0 (U ) = 0 and r 0\n(U ) = (f ϕ , -f u ) T ,(2.3\n)\nλ 1 (U ) = f u (U ) = 2M ϕu(ϕ -u) D 2 (u, ϕ) and r 1 (U ) = (1, 0) T ,(2.4)\nwhere\nD(u, ϕ) = u 2 + M (ϕ -u) 2 .\nIt is clear that the 0-th characteristic field is linear degenerate due to the fact that λ 0 is zero. We have to notice that the first characteristic field is neither genuinely nonlinear nor linear degenerate because ∇λ 1 (U ) • r 1 (U ) = f uu (U ) has a unique root in (0, ϕ). More precisely,\n∇λ 1 (U ) • r 1 (U ) = f uu (U ) = 2M ϕ D 3 (u, ϕ) -u 3 + (ϕ -u)(-3u 3 + 3M u(ϕ -u) + M (ϕ -u) 2 ) . (2.5)\nAfter straightly forward computation, we know that there exist a unique m * such that (u, ϕ) satisfies\n0 < u < ϕ < m * u, f uu < 0. (2.6) Similarly, as (u, ϕ) satisfies 0 < u < m * u < ϕ, f uu > 0. (2.7)\nFor convenience, we define the following two open regions : The construction of the self-similar type of weak solution to (2.1) by Lax method requirs to study the element waves for 0-th and 1-th wave fields and their wave curves in u -ϕ plane. Due to the fact that 0-th characteristic field is linear degenerate, the contact discontinuity connecting two constant states occurs in the solution of (2.1). Denoting {(u, ϕ)} to be the constant states of such contact discontinuity connected to {(u L , ϕ L }. Then according to the Rankine-Hugoniot condition and the speed of jump is zero, we obtain\nΩ -= {(u, ϕ) | 0 < u < ϕ < m * u}, Ω + = {(u, ϕ) | 0 < u < m * u < ϕ}. (2.8) (a) (b) (c) (d)\nf (u, ϕ) = f (u L , ϕ L ).(2.9)\nIt is equivalent to\nu 2 u 2 + M (ϕ -u) 2 = u 2 L u 2 L + M (ϕ L -u L ) 2 .\n(2.10)\nThe 0-th shock curve which is also the 0-th rarefaction curve is obtained immediately by solving (2.10). That is,\n{(u, ϕ)} satisfy u = u L ϕ L ϕ.\n(2.11)\nBefore we study the 1 -th characteristic field, we point out the fact that if U L ∈ Ω ± then the curve defined in (2.11) is in Ω ± .\nNext, for the 1 -th characteristic field (λ 1 , r 1 ). For both rarefaction curve and shock wave, we have ϕ = ϕ R . It follows that generalized elementary wave solves\nu t + f (u, ϕ R ) x = 0.\n(2.12)\nFirstly, we consider\nf uu (u L , ϕ R )f uu (u R , ϕ R ) > 0.\nThe elementary wave is either shock wave or rarefaction wave since U L and U R are in the same region\nΩ + (or Ω -). Secondly, if f uu (u L , ϕ R )f uu (u R , ϕ R ) < 0.\nIn this case, U L and U R are located in different regions, so the solution wave will cross the region from Ω + to Ω -(or reversely). The difficulty we have in this case is that the flux in equation (2.12) is neither convex nor concave. 
To overcome this difficulty, we define u * by\nf u (u * , ϕ R ) = f (u * , ϕ R ) -f (u R , ϕ R ) u * -u R . (2.13)\nThe existence of u * is stated in A.\nAccording to the Gelfond's construction from Oleinik's work [20], we have the following two results. Given U L ∈ Ω + . If u L < u * < u R , then the solution wave is a rarefaction-shock wave. If u * < u L < u R , then only shock connects two states. For the case, U L ∈ Ω -, the results are symmetric. In Figure 2, we show the wave curves for four different cases." }, { "figure_ref": [], "heading": "Conservative PINN (cPINN)", "publication_ref": [ "b21" ], "table_ref": [], "text": "In conservative physics-informed neural networks (cPINN), the domain is divided into several subdomains. For instance, as the domain is divided into N sd subdomains, the output for the n-th subdomain, is denoted by ûθn (z) for n = 1, 2, . . . , N sd . Thus, the final output after we stitch back all the subdomains can be written as \nûθ (z) = N sd n=1 ûθn (z). (3\nL(θ n ) = ω un M SE un + ω fn M SE fn + ω In (M SE f luxn + M SE avgn ), n = 1, 2, . . . , N sd ,(3.3)\nwhere the notations ω un , ω fn , and ω In are the training, interior, and interface weights, respectively. Furthermore, the mean square errors (MSE) on the n-th subdomain can be described as\nM SE un = 1 N un Nu n i=1 u i n -ûi θn (x i un , t i un ) 2 (3.4) M SE fn = 1 N fn N fn i=1 f (x i fn , t i fn ) 2 (3.5) M SE f luxn = 1 N In N In i=1 f n (x i In , t i In ) • n -f n + (x i In , t i In ) • n 2 (3.6) M SE avgn = 1 N In N In i=1 ûi θn (x i In , t i In ) -ûi θn (x i In , t i In ) 2 (3.7)\nwhere the parameters of the n-th subdomain are represented by the subscript θ n . The symbol f denotes the governing equation's residual; as computing the residual necessitates employing the derivatives of the independent variables based on the governing equation, automatic differentiation (AD) [22] is required. Additionally, f n represents the flux in the n-th subdomain. The adjacent subdomains are indicated by the superscript +. Moreover, the average ûi θn value throughout the shared interface across the subdomains is indicated as \nûavg = ûi θn (x i In , t i In ) ≜ ûi θn + ûi θ n + 2 . (3" }, { "figure_ref": [ "fig_2" ], "heading": "cPINN for Generalized Buckley-Leverett Equations", "publication_ref": [], "table_ref": [], "text": "The domain was divided into two subdomains, with subdomain 1 (SD1) handling the system case and subdomain 2 (SD2) solving the scalar case. As a result of dealing with the system case in subdomain 1, the neural network outputs are ϕ and u. The output of subdomain 2, however, is merely u, with the variable ϕ fixed at ϕ R . This implementation adheres to the theoretical procedure for determining the generalized Buckley-Leverett solution (explained in section 2). In addition, an extra constraint, such as the Oleinik entropy condition, must be enforced. The Oleinik entropy condition is essential when dealing with a solution that involves shock, the entropy condition is therefore applied in the second subdomain. To illustrate, Figure 3 depicts the schematic representation of cPINN used to solve the generalized Buckley-Leverett equation.\nIn this study, we investigate a variety of examples involving conservative and non-conservative forms, as well as non-critical and critical states. As a result, each type of case affects the loss function." 
}, { "figure_ref": [], "heading": "Conservative Form", "publication_ref": [], "table_ref": [], "text": "The Conservative Form is based on the equation (1.10)." }, { "figure_ref": [], "heading": "Non-Critical States", "publication_ref": [], "table_ref": [], "text": "In order to incorporate cPINN into our generalized Buckley-Leverett equation (1.10), we adjust the cPINN loss function (3.3) in subdomain 1 into the following equation.\nloss 1 = ω u 1 M SE u 1 + ω f 1 M SE f 1 + ω I 1 (M SE f lux 1 + M SE avg 1 ) (4.1)\nwhere\nM SE u 1 = 1 N u 1 Nu 1 i=1 u i -ûi θ 1 (x i u 1 , t i u 1 ) 2 + 1 N u 1 Nu 1 i=1 ϕ i -φi θ 1 (x i u 1 , t i u 1 ) 2 (4.2) M SE f 1 = 1 N f 1 N f 1 i=1 φi θ 1 (x i f 1 , t i f 1 ) t 2 + 1 N f 1 N f 1 i=1 ûi θ 1 (x i f 1 , t i f 1 ) t + f ûi θ 1 (x i f 1 , t i f 1 ), φi θ 1 (x i f 1 , t i f 1 ) x (4.3) M SE f lux 1 = 1 N I 1 N I 1 i=1 f ûi θ 1 (x i I 1 , t i I 1 ), φi θ 1 (x i I 1 , t i I 1 ) -f ûi θ 2 (x i I 1 , t i I 1 ), ϕ R (4.4) M SE avg 1 = 1 N I 1 N I 1 i=1 ûi θ 1 (x i I 1 , t i I 1 ) -ûi θ 1 (x i I 1 , t i I 1 ) 2 + 1 N I 1 N I 1 i=1 φi θ 1 (x i I 1 , t i I 1 ) - φi θ 1 (x i I 1 , t i I 1 ) 2 (4.5)\nwhere the notations (x i u 1 , t i u 1 ), (x i f 1 , t i f 1 ), and (x i I 1 , t i I 1 ) follows the equation (3.2), they indicate the respective training, interior, and interface points are randomly selected from subdomain 1. Moreover, the outputs of neural network in SD1 are ûi θ 1 and φi θ 1 , and the initial condition provides the value of u i and ϕ i . In addition, the residual f follows the equation (1.5). Also, the notation {{•}} is represented in equation (3.8). The parameters θ 1 are trained to minimize loss 1 (equation (4.1)) and belong to subdomain 1. Meanwhile, the parameters θ 2 are associated with subdomain 2.\nThe loss function for subdomain 2 is provided below.\nloss 2 = ω u 2 M SE u 2 + ω f 2 M SE f 2 + ω I 2 (M SE f lux 2 + M SE avg 2 ) (4.6)\nwhere\nM SE u 2 = 1 N u 2 N θ 2 i=1 u i θ 2 (x i u 2 , t i u 2 ) -ûi θ 2 (x i u 2 , t i u 2 ) 2 (4.7) M SE f 2 = 1 N f 2 N f 2 i=1 ûi θ 2 (x i f 2 , t i f 2 ) t + f ûi θ 2 (x i f 2 , t i f 2 ), ϕ R x 2 (4.8) M SE f lux 2 = 1 N I 1 N I 1 i=1 f ûi θ 1 (x i I 1 , t i I 1 ), φi θ 1 (x i I 1 , t i I 1 ) -f ûi θ 2 (x i I 1 , t i I 1 ), ϕ R (4.9) M SE avg 2 = 1 N I 1 N I 1 i=1 ûi θ 2 (x i I 1 , t i I 1 ) -ûi θ 1 (x i I 1 , t i I 1 ) 2 + 1 N I 1 N I 1 i=1 ϕ R - φi θ 1 (x i I 1 , t i I 1 ) 2 (4.10)\nThe notation's interpretation is similar to the one in loss 1 (4.1). Since we are aware that cPINN cannot accommodate moving shocks in solutions, we need an extra constraint to make the solution more reasonable; thus, the Oleinik entropy condition is incorporated in the loss function. The Oleinik entropy condition is denoted by f (u, ϕ), where\nf (u, ϕ) =        f1 (u, ϕ), u M > u R and u M > u * su, u M > u R and u M < u * f1 (u, ϕ), u M < u R and u M < u * su, u M < u R and u M > u * (4.11) and f1 (u, ϕ) = su, for shock f (u, ϕ),\nfor rarefaction (4.12)\nand s is the speed determined by the Rankine-Hugoniot condition." 
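A small numerical helper makes the case split in (4.11)-(4.12) explicit: compute the Rankine-Hugoniot speed s between the intermediate state u_M and u_R on the ϕ = ϕ_R slice, then switch between the shock line s·u and the flux f. The snippet below is only a hedged sketch: u* is assumed to be available from the construction in Appendix A, M = 2 is an arbitrary sample value, and the remaining branches of the piecewise flux (shock part versus rarefaction part) are collapsed to f for brevity.

```python
def f(u, phi, M=2.0):
    """GBL flux on the phi = phi_R slice; M = 2 is an assumed sample value."""
    return u**2 / (u**2 + M * (phi - u)**2)

def rankine_hugoniot_speed(u_M, u_R, phi_R, M=2.0):
    """Shock speed s = [f] / [u] between u_M and u_R."""
    return (f(u_M, phi_R, M) - f(u_R, phi_R, M)) / (u_M - u_R)

def entropy_flux(u, u_M, u_R, u_star, phi_R, M=2.0):
    """Sketch of the Oleinik-modified flux in (4.11).
    The two middle branches reduce to the shock line s*u; in the other two
    branches the flux alternates between s*u (shock part) and f (rarefaction
    part), which is simplified here to f."""
    s = rankine_hugoniot_speed(u_M, u_R, phi_R, M)
    use_shock_line = (u_M > u_R and u_M < u_star) or (u_M < u_R and u_M > u_star)
    return s * u if use_shock_line else f(u, phi_R, M)
```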
}, { "figure_ref": [ "fig_2" ], "heading": "Critical States", "publication_ref": [], "table_ref": [], "text": "For the critical states, we must rescale equation (1.10) by applying the relationship ϕ = δ 1 φ and u = δ 2 ū (4.13) resulting in the following equation.\n     φt = 0 δ 2 ūt + ū2 ū2 +M δ 1 δ 2 φ-ū 2 x = 0 (4.14)\nWhen the critical state takes place in U L , all that needs to be adjusted is the loss function in subdomain 1; as a result, M SE f 1 in loss 1 (equation (4.1)) needs to be modified based on equation (4.14), resulting in the equation shown below.\nM SE f 1 = 1 N f 1 N f 1 i=1 φi 1 t 2 + 1 N f 1 N f 1 i=1 δ 2 ûi 1 t + f ûi 1 , φi 1 x 2 ,(4.15)\nwhere\nf (u, ϕ) = u 2 u 2 + M δ 1 δ 2 ϕ -u 2 .(4.16)\nThe notations ûi 1 and φi 1 refer to the output of the neural network SD1 in Figure 3 from the randomly chosen interior points, and afterward φi 1 and ûi 1 are obtained by equation (4.13). When the critical occurs in U R , we only need to modify the loss function in subdomain 2 (equation (4.6)). Therefore, only M SE f 2 needs to be altered, leading to the following equation.\nM SE f 2 = 1 N f 2 N f 2 i=1 ûi 2 t + f ûi 2 , φR , δ 1 , δ 2 x 2 ,(4.17)\nwhere\nf (u, ϕ, δ 1 , δ 2 ) =        f1 (u, ϕ, δ 1 , δ 2 ), u M > u R and u M > u * (s/δ 2 )u, u M > u R and u M < u * f1 (u, ϕ, δ 1 , δ 2 ), u M < u R and u M < u * (s/δ 2 )u, u M < u R and u M > u * (4.18) and f1 (u, ϕ, δ 1 , δ 2 ) =    s δ 2 u, for shock 1 δ 2 u 2 u 2 +M δ 1 δ 2 ϕ-u 2 , for rarefaction\nwhere ûi 2 is the notation after feeding the input from the residual points and implementing û2 = δ 2 û2 . Note that the traveling shock's speed has been altered from the original Oleinik entropy condition (4.11) in the rescaling case." }, { "figure_ref": [], "heading": "Non-Conservative Form", "publication_ref": [], "table_ref": [], "text": "We obtained the non-conservative form of the Generalized Buckley-Leverett equation below by applying u = ϕũ to the conservative form (equation (1.10)).\nϕ t = 0 ϕũ t + g(ũ) x = 0 (4.19)\nwhere\ng(ũ) = ũ2 ũ2 +M (1-ũ) 2 ." }, { "figure_ref": [], "heading": "Non-Critical States", "publication_ref": [], "table_ref": [], "text": "The modification of the loss function of the non-conservative form is relatively straightforward, by utilizing equation (4.19) as the governed equation." }, { "figure_ref": [], "heading": "Critical States", "publication_ref": [], "table_ref": [], "text": "The non-conservative form of the rescaling in a critical state can be altered in a straightforward manner, similarly to the conservative form, by employing the relationships ϕ = δ 1 φ and u = δ 2 ū solely in the interior point. As a consequence, the underlying equation in the non-conservative for rescaling in a critical state is as follows.\n     φt = 0 δ 1 δ 2 φū t + ū2 ū2 +M 1 δ 2 φ-ū 2 x = 0 (4.20)" }, { "figure_ref": [], "heading": "Numerical results", "publication_ref": [], "table_ref": [], "text": "In this section, we detail the results of numerous numerical experiments. The cases are classified into two categories: conservative and non-conservative. Equation (1.10) gives the generalized Buckley-Leverett equation's conservative form, while equation (1.1) gives its non-conservative form. We also conducted studies on non-critical and critical states in each category. Each of the numerical experiment was performed out three times, hence the average L 2 norm given in this paper is the average of three runs. In addition, some comparisons with WENO5 are offered in section 5.3. 
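For the critical-state cases reported below, the rescaling of (4.13) enters the training only through the interior residual. A minimal sketch of the rescaled residual (4.15), written with the Case 3b values δ_1 = 10^-2 and δ_2 = 10^-4 and an assumed M = 2, is the following; the subdomain-1 network is assumed to return the rescaled pair (ū, φ̄).

```python
import torch

def d(out, x):
    """First derivative via automatic differentiation."""
    return torch.autograd.grad(out, x, grad_outputs=torch.ones_like(out),
                               create_graph=True)[0]

def rescaled_residual_sd1(net1, x, t, delta1=1e-2, delta2=1e-4, M=2.0):
    """Interior term of (4.15): phi_bar_t = 0 and
    delta2*u_bar_t + [ u_bar^2 / (u_bar^2 + M*((delta1/delta2)*phi_bar - u_bar)^2) ]_x = 0."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u_bar, phi_bar = net1(x, t)
    flux = u_bar**2 / (u_bar**2 + M * ((delta1 / delta2) * phi_bar - u_bar)**2)
    r_phi = d(phi_bar, t)
    r_u = delta2 * d(u_bar, t) + d(flux, x)
    return (r_phi**2).mean() + (r_u**2).mean()
```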
The detail of the experimental settings is provided in B." }, { "figure_ref": [], "heading": "Non-Critical States", "publication_ref": [], "table_ref": [], "text": "We implement the loss in subdomain 1 (loss 1) and the loss in subdomain 2 (loss 2) for the conservative form in the non-critical cases by using the equations (4.1) and (4.6), respectively. Furthermore, all of the non-critical cases share the same domain, i.e., 0 ≤ t ≤ 3 and -1 ≤ x ≤ 10. The general initial conditions can be expressed as follows.\nU (x, 0) = U L , if x < 0 U R , else" }, { "figure_ref": [ "fig_1", "fig_4" ], "heading": "Case 1", "publication_ref": [], "table_ref": [], "text": "Following are the initial conditions for the first case in both conservative and non-conservative forms: respectively.\nU L = u L ϕ L = 0.6 0.7 and U R = u R ϕ R = 0.3 0.6 , (5\nThe theoretical illustration is available in Figure 2c. By the derivation of some algebraic calculations in equation (2.11), we obtain u M ≈ 0.51. The result of the conservative and the nonconservative form is depicted in Figure 4, with the average relative L 2 norm after three repetitions of the experiment being 8.96 • 10 -3 for the conservative, and 6.05 • 10 -3 for the non-conservative form. As demonstrated, both with initial condition in conservative (5.1) and non-conservative (5.2), we can successfully acquire the solution rarefaction and shock waves." }, { "figure_ref": [ "fig_1", "fig_5" ], "heading": "Case 2", "publication_ref": [], "table_ref": [], "text": "The following is the initial condition:\nU L = u L ϕ L = 0.45 0.8 and U R = u R ϕ R = 0.3 0.6 (5.3)\nfor the conservative form, and ŨL = ũL ϕ L = 0.5625 0.8 and ŨR = ũR ϕ R = 0.5 0.6 (5.4) for the non-conservative form. Figure 2d offers the theoretical illustration. Similar to Section 5.1.1, we apply equation (2.11) to obtain u M = 0.3375. The exact and predicted solution of cPINN in conservative and nonconservative forms are presented in Figure 5. The average relative L 2 norm is 1.11 • 10 -2 and 8.82 • 10 -3 , respectively for the conservative and non-conservative form. We notice that cPINN can accurately capture the solution in both conservative and non-conservative forms. In contrast to Case 5.1.1, the current case's solution consists only of shock waves. " }, { "figure_ref": [], "heading": "Critical States", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Case 3a", "publication_ref": [], "table_ref": [], "text": "This case has a domain of 0 ≤ t ≤ 3 and -1 ≤ x ≤ 10. The equations (4.1) and (4.6) are used for loss 1 and 2, respectively, where the loss function is equivalent to that of the non-critical state. The following is the initial condition for the conservative form of the critical state that occurs in U L :\nU L = u L ϕ L = 2 • 10 -4 0.1 and U R = u R ϕ R = 0.35 0.5 .(5.5)\nFurthermore, the initial condition for the non-conservative form where the critical state is in ŨL is as follows:\nŨL = ũL ϕ L = 2 • 10 -3 0.1 and ŨR = ũR ϕ R = 0.7 0.5 (5.6)\nThe theoretical illustration is available in Figure 2a. The outcome of the conservative form is shown in a red dashed line in Figure 6, with the average relative L 2 norm is 3.33 • 10 -1 . As one can see, the critical state is where cPINN in conservative form fails to function well. As a result, in the following case, we use the rescaling technique to address the problem in the conservative form. 
Surprisingly, in the critical state in ŨL , cPINN in a non-conservative form (marked by the cyan dashed line in Figure 6) works well. The average L 2 norm for the non-conservative form is 1.54 " }, { "figure_ref": [], "heading": "Case 3b", "publication_ref": [], "table_ref": [], "text": "This case has the same domain as Section 5.2.1, and equations (5.5) and (5.6) provide the initial condition in conservative and non-conservative forms, respectively. For the conservative form, we employ the rescaling technique to solve the issue when a critical state emerges in U L . We simply need to rescale in the first subdomain (loss 1) and alter the interior parts. Loss 1 (equation (4.1)) with the M SE f 1 (equation (4.15)) where δ 1 = 10 -2 and δ 2 = 10 -4 is therefore considered. We observe that rescaling improves cPINN's performance in the critical state as shown in orange-dots line in Figure 6. After three repetitions, we obtain the average relative L 2 norm for the conservative form as 1.73 • 10 -2 .\nAs shown in Section 5.2.1, the non-conservative form without rescaling has no trouble approximating the solution of the critical state in U L . However, we continue to implement the non-conservative form with rescaling experiment for research purposes. We used δ 1 = 10 -2 and δ 2 = 10 -3 as rescaling parameters to implement the rescaling method. Thus, the result can be seen as a pink-dashed line in Figure 6. As we suspect, non-conservative form with rescaling can handle the critical case in U L quite effectively. Furthermore, the relative L 2 norm of the non-conservative form with rescaling is 2.21•10 -2 , which is greater than the one without rescaling, indicating that, in this case, rescaling in the non-conservative form is unnecessary because it will not improve cPINN performance. 6: Comparisons are made between the exact and cPINN solutions for the GBL equation's critical case in U L in both conservative and non-conservative forms, also with and without rescaling. As one can observe, in conservative form, cPINN was unable to effectively handle the critical case (red dashed line); however, after applying the rescaling technique, the outcome is excellent (orange dots line). The average relative L 2 norm before rescaling is 3.33 • 10 -1 , whereas after rescaling, the relative L 2 norm is 1.73 • 10 -2 . Unexpectedly, the non-conservative form of the cPINN performs remarkably well when handling the critical case in U L (cyan dashed line). We also run the case where we implement the rescaling on the non-conservative form, and it works as expected (pink dashed line). Without and with rescaling, the average L 2 norm of the non-conservative form is 1.54 • 10 -3 and 2.21 • 10 -3 , respectively. Based on the average L 2 norm, it is not required to use the rescaling technique on the non-conservative form, in this case, as it will not enhance the performance of cPINN." }, { "figure_ref": [ "fig_8", "fig_8" ], "heading": "Case 4a", "publication_ref": [], "table_ref": [], "text": "For this case, we select the initial condition of the critical state occurs in U R in the conservative form as follows: non-conservative forms, as well as without and with rescaling applied to the GBL equation's critical state in U R are presented. In general, cPINN can perform pretty well when the critical state is in U R , regardless of conservative or non-conservative, without or with rescaling. The relative L 2 norm for the conservative form are 7.2 • 10 -2 and 8.82 • 10 -2 , for without and with rescaling, respectively. 
Additionally, for the non-conservative form, the relative L 2 norm are 6.09 • 10 -2 and 5.99 • 10 -2 for without and with rescaling, respectively. Based on the L 2 norm, the cPINN performance in the non-conservative form is slightly enhanced by rescaling. and (4.6), respectively, in SD1 and SD2. Figure 7 shows the results of the conservative and nonconservative forms without rescaling in red and cyan dashed lines, respectively. Based on Figure 7, both perform quite satisfactorily, with the conservative form's relative L 2 norm being 7.2 • 10 -2 and the non-conservative form's relative L 2 norm being 6.09 • 10 -2 .\nU L = u L ϕ L = 0.6 0.7 and U R = u R ϕ R = 4 • 10 -4 0.2 , (5" }, { "figure_ref": [ "fig_8" ], "heading": "Case 4b", "publication_ref": [], "table_ref": [], "text": "Given the initial conditions in equation (5.7) and (5.8), also the same domain as Case 5.2.3, we perform the rescaling technique. Take into account, for the conservative form, we simply need to adjust the loss in the second subdomain since the critical state currently exists in U R (in subdomain 2). As a consequence, for the conservative form, loss 2 is now represented by equation (4.6) with M SE f 2 in equation (4.17). In this particular case, for both the conservative and the nonconservative forms, we pick δ 1 = 1 and δ 2 = 0.8. The result for rescaling cPINN in the conservative form is depicted in Figure 7 by a cyan-dashed line, whereas the non-conservative form is shown by a pink-dashed line. The conservative and non-conservative forms' respective relative L 2 norms are 8.82 • 10 -2 and 5.99 • 10 -2 . As we discovered based on the relative L 2 norm, cPINN's performance is not improved by applying the rescaling technique in the conservative form. Rescaling in the non-conservative form, however, only slightly improves the performance." }, { "figure_ref": [ "fig_1", "fig_11" ], "heading": "Case 5a", "publication_ref": [], "table_ref": [], "text": "Another initial condition that we considered for the critical state occurs in U R is as follows:\nU L = u L ϕ L = 0.49 0.7 and U R = u R ϕ R = 4 • 10 -4 0.2 , (5.9)\nfor the conservative form. Additionally, the following is the initial condition for the non-conservative form when the critical state is in ŨR :\nŨL = ũL ϕ L = 0.7 0.7 and ŨR = ũR ϕ R = 2 • 10 -3 0.2 . (5.10)\nThe domain for this case is 0 ≤ t ≤ 3 and -1 ≤ x ≤ 25. Moreover, the theoretical illustration is available in Figure 2d. The outcomes of cPINN without the rescaling technique are shown in Figure 8; the red-dashed line and the cyan-dashed line, respectively, represent the results for the conservative and non-conservative forms. Both the conservative and non-conservative forms function sufficiently well in this situation. The evaluation of the exact and cPINN solutions is offered in both conservative and non-conservative forms, as well as without and with rescaling. For the conservative form, the relative L 2 norms are 7.87 • 10 -2 and 7.91 • 10 -2 , respectively for without and with rescaling. On the other hand, the relative L 2 norms for the non-conservative form are 8.94 • 10 -2 and 6.57 • 10 -2 , respectively, both without and with rescaling. As a result, only in its non-conservative form is rescaling able to improving cPINN performance." 
}, { "figure_ref": [ "fig_11" ], "heading": "Case 5b", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In this case, we use the rescaling technique to run cPINN in both the conservative and nonconservative forms using the identical domain and initial data from Section 5.2.5. In the conservative form, we choose the rescaling parameters as δ 1 = 1 and δ 2 = 0.8, whereas for the non-conservative form, δ 1 = 1 and δ 2 = 0.4 are selected. For the conservative and non-conservative forms, the relative L 2 norms are 7.91 • 10 -2 and 6.57 • 10 -2 , respectively. As we observed, rescaling causes the performance worse in the conservative form while improving it in the non-conservative form. The results also represented in Figure 8 as an orange-dots and pink-dashed line for the respective conservative and non-conservative forms.\nThe relative L 2 norm for each case, in both conservative and non-conservative forms, as well as for cases with and without the use of the rescaling approach, is contained in Table 1." }, { "figure_ref": [], "heading": "Non-Critical", "publication_ref": [], "table_ref": [], "text": "Critical \nU L U R" }, { "figure_ref": [], "heading": "Comparison with WENO5", "publication_ref": [ "b22", "b23" ], "table_ref": [], "text": "In this section, WENO5 and the performance of the conservative form of cPINN are examined.\nComponent-wisely, we implement the system (1.10) by coupling fifth-order WENO spatial discretization with third-order TVD Runge-Kutta time discretization [23,24]. The relative L 2 norm for both approaches is displayed in Table 2. In conservative form, WENO5 outperforms cPINN, with the exception of cases where critical occurs in U R (Case 4 and 5), as shown by Table 2. WENO5 for the non-conservative form, however, is yet unresolved, whereas cPINN is very capable of addressing the issue." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "Both cPINN in the conservative and non-conservative forms of the generalized Buckley-Leverett perform admirably for the non-critical cases. Nevertheless, certain rescaling modifications must be made in order for cPINN to function properly for the critical state in U L in conservative form. Surprisingly, the non-conservative form can approach the solution satisfactorily when a critical condition occurs in the U L without the requirement for a rescaling procedure. Furthermore, rescaling doesn't actually improve the performance of cPINN in the conservative form for the critical case in U R . In contrast, the rescaling improves the performance of cPINN in the critical case in U R in a non-conservative form. Finally, we assess the performance of WENO5 and cPINN. When cPINN is employed in its conservative form, WENO5 outperforms it. Meanwhile, WENO5 in its non-conservative form remains an open issue, whereas cPINN can manage it exceedingly well. Therefore, based on our studies, it is apparent that cPINN can effectively handle generalized Buckley-Leveret in both conservative and non-conservative forms, as well as in non-critical and critical (with some rescaling modifications) states." }, { "figure_ref": [], "heading": "A Existence of u *", "publication_ref": [], "table_ref": [], "text": "In this section, we solve the following equation which is the same as equation (2.13)\n, and the denominator of f is denoted by D(u, ϕ). For y ̸ = u R , equation (A.1) can be simplified\nAfter a routine computation, equation (A.2) can be written as follows\nwhere M = M/(M + 1). 
Due to the fact that u_R is a zero of the cubic polynomial (A.3), the cubic polynomial can be factorized. Therefore we obtain
If ϕ_R = 2u_R, then q(y) is reduced to a quadratic polynomial.
Clearly, the zero is
Note that the discriminant is non-negative, since
by the fact that M > 1. For the sub-case ϕ_R > 2u_R, we obtain that q(y) has only one positive zero u_+ and it is smaller than ϕ_R. For the remaining sub-case ϕ_R < 2u_R, q(y) has two positive zeros, with u_+ < ϕ_R < u_-. Hence, as long as ϕ_R ≠ 2u_R, u_+ is the only choice." }, { "figure_ref": [], "heading": "B Experiment Details", "publication_ref": [ "b24", "b25" ], "table_ref": [], "text": "From the initial condition, we randomly selected 101 and 499 training points from SD1 and SD2, respectively. Moreover, using Latin Hypercube Sampling, 3000 points from SD1 are randomly selected for the interior. Depending on the case, we sample 12500 or 17500 points for SD2: we use 12500 interior points for cases 1-3 and 17500 interior points for the remaining cases. Additionally, we arbitrarily selected 99 points from the interface, which is always placed at x = 0.01. We use the Glorot uniform distribution [25] to initialize the parameters, and we use tanh as the activation function in the hidden layers and sigmoid in the output layer.
The MLP architecture is as follows. Whereas the number of neurons per layer is the same in every case (40 neurons), the number of hidden layers varies by subdomain: we use eight hidden layers in subdomain 1 and ten in subdomain 2. For optimization, we deploy Adam [26] and train the networks for 100,000 epochs. The initial learning rate is set to 10^-3 and decreases linearly during training." } ]
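For reproducibility, a minimal PyTorch sketch of the per-subdomain network and optimizer described in Appendix B is given below; the zero bias initialization and the exact form of the linear learning-rate decay are assumptions of this sketch, while the layer counts, width, activations, Glorot initialization, Adam, and learning rate follow the stated configuration.

```python
import torch
import torch.nn as nn

def make_mlp(n_hidden, width=40, in_dim=2, out_dim=1):
    """MLP as described in Appendix B: tanh hidden layers, sigmoid output,
    Glorot (Xavier) uniform weight initialization; zero biases are an assumed choice."""
    layers, d = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers += [nn.Linear(d, out_dim), nn.Sigmoid()]
    net = nn.Sequential(*layers)
    for m in net:
        if isinstance(m, nn.Linear):
            nn.init.xavier_uniform_(m.weight)
            nn.init.zeros_(m.bias)
    return net

net_sd1 = make_mlp(8, out_dim=2)    # subdomain 1 predicts (u, phi)
net_sd2 = make_mlp(10, out_dim=1)   # subdomain 2 predicts u only

params = list(net_sd1.parameters()) + list(net_sd2.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
# linear decay of the learning rate over the 100,000 training epochs
sched = torch.optim.lr_scheduler.LambdaLR(opt, lambda e: max(0.0, 1.0 - e / 100_000))
```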
In this paper, a modified version of conservative physics-informed neural networks (cPINN for short) is provided to construct the weak solutions of the Riemann problem for hyperbolic scalar conservation laws in non-conservative form. To demonstrate the results, we use the model of the generalized Buckley-Leverett equation (GBL equation for short) with discontinuous porosity in porous media. By introducing a new unknown, the GBL equation is transformed into a two-by-two resonant hyperbolic system of conservation laws in conservative form. The modified cPINN method is designed to overcome the difficulties caused by the discontinuity of the porosity and by the appearance of critical states (near vacuum) in the Riemann data. We test our idea by using a deep learning algorithm to solve the GBL equation in both conservative and non-conservative forms, as well as for critical and non-critical states. The method combines two different neural networks and their corresponding loss functions: one for the two-by-two resonant hyperbolic system, and the other for the scalar conservation law with a discontinuous perturbation term in the non-convex flux. A re-scaling of the unknowns is adopted to avoid oscillations of the Riemann solutions in the cases of critical Riemann data. The solutions constructed by the modified cPINN match the exact solutions constructed by the theoretical analysis for hyperbolic conservation laws. In addition, the solutions are identical in the conservative and non-conservative cases. Finally, we compare the performance of the modified cPINN with the numerical method WENO5. Whereas WENO5 struggles with the strong oscillations of the approximate solutions for the Riemann problems of the GBL equation in non-conservative form, cPINN works admirably.
Conservative Physics-Informed Neural Networks for Non-Conservative Hyperbolic Conservation Laws Near Critical States
[ { "figure_caption": "Figure 1 :1Figure 1: Neural network representation of formula (1.7).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: In 2a and 2b, f uu (U R ) < 0 and f uu (U L ) > 0. The difference is the location of u * . In 2a, u * > u M so that the we have a rarefcation shock wave connecting U M and U R . But in 2b, u * < u M , in this case U M and U R are connected by a shock wave. It is similar for the cases 2c and 2d.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The schematic of cPINN to solve generalized Buckley-Leverett equation. The domain is divided into two subdomains by an interface: the first subdomain solves a system, while the second subdomain handles a scalar equation with an entropy condition.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of the exact and predicted solution in conservative and nonconservative forms at various time-step for the GBL equation with initial value (5.1) and (5.2). The average L 2 norm is 8.96 • 10 -3 for the conservative form, and 6.05 • 10 -3 for the non-conservative form.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Comparison of the exact and predicted solution in conservative and nonconservative forms at various time-step for the GBL equation with initial value (5.3) and (5.4). The average L 2 norm are 1.11 • 10 -2 and 8.82 • 10 -3 for the conservative and nonconservative forms, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "FigureFigure 6: Comparisons are made between the exact and cPINN solutions for the GBL equation's critical case in U L in both conservative and non-conservative forms, also with and without rescaling. As one can observe, in conservative form, cPINN was unable to effectively handle the critical case (red dashed line); however, after applying the rescaling technique, the outcome is excellent (orange dots line). The average relative L 2 norm before rescaling is 3.33 • 10 -1 , whereas after rescaling, the relative L 2 norm is 1.73 • 10 -2 . Unexpectedly, the non-conservative form of the cPINN performs remarkably well when handling the critical case in U L (cyan dashed line). We also run the case where we implement the rescaling on the non-conservative form, and it works as expected (pink dashed line). Without and with rescaling, the average L 2 norm of the non-conservative form is 1.54 • 10 -3 and 2.21 • 10 -3 , respectively. Based on the average L 2 norm, it is not required to use the rescaling technique on the non-conservative form, in this case, as it will not enhance the performance of cPINN.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ". 7 )7while the subsequent is the initial condition for the non-conservative form: their domain is 0 ≤ t ≤ 3 and -1 ≤ x ≤ 25. The theoretical illustration is available in Figure2c. 
The loss function that is employed in the conservative form is provided in the equations(4.1) ", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure7: Comparison of the exact solution and the cPINN solution in conservative and non-conservative forms, as well as without and with rescaling applied to the GBL equation's critical state in U R are presented. In general, cPINN can perform pretty well when the critical state is in U R , regardless of conservative or non-conservative, without or with rescaling. The relative L 2 norm for the conservative form are 7.2 • 10 -2 and 8.82 • 10 -2 , for without and with rescaling, respectively. Additionally, for the non-conservative form, the relative L 2 norm are 6.09 • 10 -2 and 5.99 • 10 -2 for without and with rescaling, respectively. Based on the L 2 norm, the cPINN performance in the non-conservative form is slightly enhanced by rescaling.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure8: The evaluation of the exact and cPINN solutions is offered in both conservative and non-conservative forms, as well as without and with rescaling. For the conservative form, the relative L 2 norms are 7.87 • 10 -2 and 7.91 • 10 -2 , respectively for without and with rescaling. On the other hand, the relative L 2 norms for the non-conservative form are 8.94 • 10 -2 and 6.57 • 10 -2 , respectively, both without and with rescaling. As a result, only in its non-conservative form is rescaling able to improving cPINN performance.", "figure_data": "", "figure_id": "fig_11", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "• 10 -2 .", "figure_data": "t = 1t = 1.50.440.440.420.420.400.40u(x, t)0.36 0.38u(x, t)0.36 0.380.340.340.320.320.300.30024x6810024x6810t = 2t = 2.50.440.44Exact Conservative Non-Conservative0.420.420.400.40u(x, t)0.36 0.38u(x, t)0.36 0.380.340.340.320.320.300.30024x6810024x6810", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Relative L 2 norm from a number of cases, including both conservative and nonconservative forms as well as cases with and without rescaling.", "figure_data": "Case 4a:7.2 • 10 -2Case 1:Case 3a:Case 4b:Conservative8.96 • 10 -3 Case 2:3.33 • 10 -1 Case 3b:8.82 • 10 -2 Case 5a:1.11 • 10 -21.73 • 10 -27.87 • 10 -2Case 5b:7.91 • 10 -2Case 4a:6.09 • 10 -2Case 1:Case 3a:Case 4b:Non-Conservative6.05 • 10 -3 Case 2:1.54 • 10 -2 Case 3b:5.99 • 10 -2 Case 5a:8.82 • 10 -32.21 • 10 -28.94 • 10 -2Case 5b:6.57 • 10 -2", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" } ]
Reyna Quita; Yu-Shuo Chen; Hsin-Yi Lee; Alex C Hu; John M Hong
[ { "authors": "C J Van Duijn; L A Peletier; I S Pop", "journal": "SIAM Journal on Mathematical Analysis", "ref_id": "b0", "title": "A new class of entropy solutions of the buckley-leverett equation", "year": "2007" }, { "authors": "Ying Wang; Chiu-Yen Kao", "journal": "", "ref_id": "b1", "title": "Central schemes for the modified buckley-leverett equation", "year": "2013" }, { "authors": "M Raissi; P Perdikaris; G E Karniadakis", "journal": "Journal of Computational Physics", "ref_id": "b2", "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "year": "2019" }, { "authors": "Zhiping Mao; Ameya D Jagtap; George Em Karniadakis", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b3", "title": "Physics-informed neural networks for high-speed flows", "year": "2020" }, { "authors": "Wei-Fan Hu; Yi-Jun Shih; Te-Sheng Lin; Ming-Chih Lai", "journal": "", "ref_id": "b4", "title": "A shallow physics-informed neural network for solving partial differential equations on surfaces", "year": "2022" }, { "authors": "D Ameya; Zhiping Jagtap; Nikolaus Mao; George Em Adams; Karniadakis", "journal": "Journal of Computational Physics", "ref_id": "b5", "title": "Physicsinformed neural networks for inverse problems in supersonic flows", "year": "2022" }, { "authors": "D Ameya; Kenji Jagtap; George Em Kawaguchi; Karniadakis", "journal": "Journal of Computational Physics", "ref_id": "b6", "title": "Adaptive activation functions accelerate convergence in deep and physics-informed neural networks", "year": "2020" }, { "authors": "Wei-Fan Hu; Te-Sheng Lin; Ming-Chih Lai", "journal": "Journal of Computational Physics", "ref_id": "b7", "title": "A discontinuity capturing shallow neural network for elliptic interface problems", "year": "2022" }, { "authors": "D Ameya; Ehsan Jagtap; George Em Kharazmi; Karniadakis", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b8", "title": "Conservative physicsinformed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems", "year": "2020" }, { "authors": "Olga Fuks; Hamdi A Tchelepi", "journal": "Journal of Machine Learning for Modeling and Computing", "ref_id": "b9", "title": "Limitations of physics informed machine learning for nonlinear two-phase transport in porous media", "year": "2020" }, { "authors": "G Cybenko", "journal": "Math. 
Control Signal Systems", "ref_id": "b10", "title": "Approximation by superpositions of a sigmoidal function", "year": "1989" }, { "authors": "Kurt Hornik", "journal": "Neural Networks", "ref_id": "b11", "title": "Approximation capabilities of multilayer feedforward networks", "year": "1991" }, { "authors": "Allan Pinkus", "journal": "Acta Numerica", "ref_id": "b12", "title": "Approximation theory of the mlp model in neural networks", "year": "1999" }, { "authors": "John Meng-Kai Hong; Jiahong Wu; Juan-Ming Yuan", "journal": "Journal of Mathematical Physics", "ref_id": "b13", "title": "The generalized buckley-leverett and the regularized buckley-leverett equations", "year": "2012" }, { "authors": "Cedric G Fraces; Hamdi Tchelepi", "journal": "D", "ref_id": "b14", "title": "Physics Informed Deep Learning for Flow and Transport in Porous Media", "year": "2021" }, { "authors": "Waleed Diab; Mohammed Al; Kobaisi ", "journal": "", "ref_id": "b15", "title": "Pinns for the solution of the hyperbolic buckleyleverett problem with a non-convex flux function", "year": "2021" }, { "authors": "Tim De Ryck; Siddhartha Mishra; Roberto Molinaro", "journal": "", "ref_id": "b16", "title": "wpinns: Weak physics informed neural networks for approximating entropy solutions of hyperbolic conservation laws", "year": "2022" }, { "authors": "Aidan Chaumet; Jan Giesselmann", "journal": "", "ref_id": "b17", "title": "Efficient wpinn-approximations to entropy solutions of hyperbolic conservation laws", "year": "2022" }, { "authors": "D Ameya; Zhiping Jagtap; Nikolaus Mao; George Em Adams; Karniadakis", "journal": "Journal of Computational Physics", "ref_id": "b18", "title": "Physicsinformed neural networks for inverse problems in supersonic flows", "year": "2022" }, { "authors": " Oa Oleinik", "journal": "Amer. Math. Soc. Transl", "ref_id": "b19", "title": "Discontinuous solutions of nonlinear differential equations", "year": "1963" }, { "authors": "L Michael; Stein", "journal": "Technometrics", "ref_id": "b20", "title": "Large sample properties of simulations using latin hypercube sampling", "year": "1987" }, { "authors": "Atilim Gunes Baydin; Barak A Pearlmutter; Alexey Andreyevich Radul; Jeffrey Mark Siskind", "journal": "Journal of Machine Learning Research", "ref_id": "b21", "title": "Automatic differentiation in machine learning: a survey", "year": "2018" }, { "authors": "Chi-Wang Shu", "journal": "SIAM Review", "ref_id": "b22", "title": "High order weighted essentially nonoscillatory schemes for convection dominated problems", "year": "2009" }, { "authors": "Chi-Wang Shu", "journal": "Springer", "ref_id": "b23", "title": "Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws", "year": "1998" }, { "authors": "Xavier Glorot; Yoshua Bengio", "journal": "PMLR", "ref_id": "b24", "title": "Understanding the difficulty of training deep feedforward neural networks", "year": "2010-05-15" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b25", "title": "Adam: A method for stochastic optimization", "year": "2014" } ]
[ { "formula_coordinates": [ 2, 216.48, 184.08, 323.52, 13.51 ], "formula_id": "formula_0", "formula_text": "ϕũ t + f (ũ, x) x = 0, (x, t) ∈ R × R + ,(1.1)" }, { "formula_coordinates": [ 2, 190.27, 245.78, 233.81, 24.43 ], "formula_id": "formula_1", "formula_text": "f (ũ, x) = ũ2 ũ2 + M (1 -ũ) 2 , 0 ≤ ũ ≤ 1, M > 0." }, { "formula_coordinates": [ 2, 211.33, 375.58, 328.67, 13.13 ], "formula_id": "formula_2", "formula_text": "u t + f (u, ϕ(x)) x = 0, (x, t) ∈ R × R + ,(1.4)" }, { "formula_coordinates": [ 2, 189.64, 421.39, 232.72, 26.38 ], "formula_id": "formula_3", "formula_text": "f (u, ϕ(x)) = u 2 u 2 + M [ϕ(x) -u] 2 , 0 ≤ u ≤ ϕ(x)." }, { "formula_coordinates": [ 2, 232.18, 721.42, 307.82, 26.13 ], "formula_id": "formula_4", "formula_text": "u t + Au x = 0, (x, t) ∈ R × R + , u(x, 0) = u 0 (x), x ∈ R,(1.6)" }, { "formula_coordinates": [ 3, 241.27, 112.52, 94.56, 10.63 ], "formula_id": "formula_5", "formula_text": "λ 1 < λ 2 < • • • < λ p ." }, { "formula_coordinates": [ 3, 236.13, 136.32, 303.87, 34.56 ], "formula_id": "formula_6", "formula_text": "u(x, t) = p k=1 l T k u 0 (x -λ k t)r k ,(1.7)" }, { "formula_coordinates": [ 3, 209.83, 208.97, 330.17, 34.56 ], "formula_id": "formula_7", "formula_text": "u(x, t) = p k=1 l T k u 0 (x -λ k t)r k = p k=1 z k r k . (1.8)" }, { "formula_coordinates": [ 4, 233.9, 123.33, 306.1, 39.57 ], "formula_id": "formula_8", "formula_text": "   u t + f (u, ϕ(x)) x = 0, u(x, 0) = u L , x < 0, u R , x < 0,(1.9)" }, { "formula_coordinates": [ 4, 232.1, 220.81, 135.44, 39.57 ], "formula_id": "formula_9", "formula_text": "   U t + F (U ) x = 0, U (x, 0) = U L , x < 0, U R , x < 0." }, { "formula_coordinates": [ 4, 103.55, 273.17, 326.59, 12.64 ], "formula_id": "formula_10", "formula_text": "U := (u, ϕ) T , F (U ) := (f (u, ϕ), 0) T , U L = (u L , ϕ L ) T ,U R = (u R , ϕ R ) T ." }, { "formula_coordinates": [ 5, 200.05, 322.9, 339.95, 39.57 ], "formula_id": "formula_11", "formula_text": "   U t + F (U ) x = 0, U (x, 0) = U L := (u L , ϕ L ) T , x < 0, U R := (u R , ϕ R ) T , x > 0,(2.1)" }, { "formula_coordinates": [ 5, 72, 369.56, 468, 64.23 ], "formula_id": "formula_12", "formula_text": "U := (u, ϕ) T and F (U ) := (f (u, ϕ), 0) T and ϕ L , u L , ϕ R and u R are constant with 0 < u L ≤ ϕ L , 0 < u L ≤ ϕ L . We say a L 1 function u is a weak function of (1.4) if for any ψ ∈ C ∞ 0 × [0, ∞), u satisfies t>0 uψ t + f (u, x)ψ x dxdt + R u 0 (x)ψ(x, 0)dx = 0. (2.2)" }, { "formula_coordinates": [ 5, 312.27, 463.27, 223.25, 13.27 ], "formula_id": "formula_13", "formula_text": "(U ) = (f ϕ , -f u ) T ,(2.3" }, { "formula_coordinates": [ 5, 172.65, 483.53, 367.35, 24.43 ], "formula_id": "formula_14", "formula_text": "λ 1 (U ) = f u (U ) = 2M ϕu(ϕ -u) D 2 (u, ϕ) and r 1 (U ) = (1, 0) T ,(2.4)" }, { "formula_coordinates": [ 5, 103.6, 514.52, 128.69, 11.52 ], "formula_id": "formula_15", "formula_text": "D(u, ϕ) = u 2 + M (ϕ -u) 2 ." }, { "formula_coordinates": [ 5, 80.46, 576.93, 459.54, 24.43 ], "formula_id": "formula_16", "formula_text": "∇λ 1 (U ) • r 1 (U ) = f uu (U ) = 2M ϕ D 3 (u, ϕ) -u 3 + (ϕ -u)(-3u 3 + 3M u(ϕ -u) + M (ϕ -u) 2 ) . (2.5)" }, { "formula_coordinates": [ 5, 72, 623.82, 468, 67.01 ], "formula_id": "formula_17", "formula_text": "0 < u < ϕ < m * u, f uu < 0. (2.6) Similarly, as (u, ϕ) satisfies 0 < u < m * u < ϕ, f uu > 0. (2.7)" }, { "formula_coordinates": [ 5, 229.87, 721.42, 310.13, 26.13 ], "formula_id": "formula_18", "formula_text": "Ω -= {(u, ϕ) | 0 < u < ϕ < m * u}, Ω + = {(u, ϕ) | 0 < u < m * u < ϕ}. 
(2.8) (a) (b) (c) (d)" }, { "formula_coordinates": [ 6, 258.36, 374.44, 281.64, 10.69 ], "formula_id": "formula_19", "formula_text": "f (u, ϕ) = f (u L , ϕ L ).(2.9)" }, { "formula_coordinates": [ 6, 213.62, 405.12, 185.96, 29.56 ], "formula_id": "formula_20", "formula_text": "u 2 u 2 + M (ϕ -u) 2 = u 2 L u 2 L + M (ϕ L -u L ) 2 ." }, { "formula_coordinates": [ 6, 149.03, 452.23, 179.7, 33.97 ], "formula_id": "formula_21", "formula_text": "{(u, ϕ)} satisfy u = u L ϕ L ϕ." }, { "formula_coordinates": [ 6, 260.53, 551.71, 90.95, 10.69 ], "formula_id": "formula_22", "formula_text": "u t + f (u, ϕ R ) x = 0." }, { "formula_coordinates": [ 6, 172.07, 572.12, 137.59, 10.69 ], "formula_id": "formula_23", "formula_text": "f uu (u L , ϕ R )f uu (u R , ϕ R ) > 0." }, { "formula_coordinates": [ 6, 89.56, 585.67, 315.57, 24.23 ], "formula_id": "formula_24", "formula_text": "Ω + (or Ω -). Secondly, if f uu (u L , ϕ R )f uu (u R , ϕ R ) < 0." }, { "formula_coordinates": [ 6, 219.76, 649.22, 320.24, 27.5 ], "formula_id": "formula_25", "formula_text": "f u (u * , ϕ R ) = f (u * , ϕ R ) -f (u R , ϕ R ) u * -u R . (2.13)" }, { "formula_coordinates": [ 7, 261.62, 185.42, 264.92, 34.02 ], "formula_id": "formula_26", "formula_text": "ûθ (z) = N sd n=1 ûθn (z). (3" }, { "formula_coordinates": [ 7, 91.22, 384.6, 448.78, 10.77 ], "formula_id": "formula_27", "formula_text": "L(θ n ) = ω un M SE un + ω fn M SE fn + ω In (M SE f luxn + M SE avgn ), n = 1, 2, . . . , N sd ,(3.3)" }, { "formula_coordinates": [ 7, 163.66, 445.88, 376.34, 154.08 ], "formula_id": "formula_28", "formula_text": "M SE un = 1 N un Nu n i=1 u i n -ûi θn (x i un , t i un ) 2 (3.4) M SE fn = 1 N fn N fn i=1 f (x i fn , t i fn ) 2 (3.5) M SE f luxn = 1 N In N In i=1 f n (x i In , t i In ) • n -f n + (x i In , t i In ) • n 2 (3.6) M SE avgn = 1 N In N In i=1 ûi θn (x i In , t i In ) -ûi θn (x i In , t i In ) 2 (3.7)" }, { "formula_coordinates": [ 7, 212.31, 700.51, 314.24, 27.42 ], "formula_id": "formula_29", "formula_text": "ûavg = ûi θn (x i In , t i In ) ≜ ûi θn + ûi θ n + 2 . 
(3" }, { "formula_coordinates": [ 8, 156.93, 734.53, 383.07, 11.5 ], "formula_id": "formula_30", "formula_text": "loss 1 = ω u 1 M SE u 1 + ω f 1 M SE f 1 + ω I 1 (M SE f lux 1 + M SE avg 1 ) (4.1)" }, { "formula_coordinates": [ 9, 132.86, 119.47, 407.14, 238.16 ], "formula_id": "formula_31", "formula_text": "M SE u 1 = 1 N u 1 Nu 1 i=1 u i -ûi θ 1 (x i u 1 , t i u 1 ) 2 + 1 N u 1 Nu 1 i=1 ϕ i -φi θ 1 (x i u 1 , t i u 1 ) 2 (4.2) M SE f 1 = 1 N f 1 N f 1 i=1 φi θ 1 (x i f 1 , t i f 1 ) t 2 + 1 N f 1 N f 1 i=1 ûi θ 1 (x i f 1 , t i f 1 ) t + f ûi θ 1 (x i f 1 , t i f 1 ), φi θ 1 (x i f 1 , t i f 1 ) x (4.3) M SE f lux 1 = 1 N I 1 N I 1 i=1 f ûi θ 1 (x i I 1 , t i I 1 ), φi θ 1 (x i I 1 , t i I 1 ) -f ûi θ 2 (x i I 1 , t i I 1 ), ϕ R (4.4) M SE avg 1 = 1 N I 1 N I 1 i=1 ûi θ 1 (x i I 1 , t i I 1 ) -ûi θ 1 (x i I 1 , t i I 1 ) 2 + 1 N I 1 N I 1 i=1 φi θ 1 (x i I 1 , t i I 1 ) - φi θ 1 (x i I 1 , t i I 1 ) 2 (4.5)" }, { "formula_coordinates": [ 9, 155.99, 491.36, 384.01, 11.49 ], "formula_id": "formula_32", "formula_text": "loss 2 = ω u 2 M SE u 2 + ω f 2 M SE f 2 + ω I 2 (M SE f lux 2 + M SE avg 2 ) (4.6)" }, { "formula_coordinates": [ 9, 132.93, 536.37, 407.07, 198.01 ], "formula_id": "formula_33", "formula_text": "M SE u 2 = 1 N u 2 N θ 2 i=1 u i θ 2 (x i u 2 , t i u 2 ) -ûi θ 2 (x i u 2 , t i u 2 ) 2 (4.7) M SE f 2 = 1 N f 2 N f 2 i=1 ûi θ 2 (x i f 2 , t i f 2 ) t + f ûi θ 2 (x i f 2 , t i f 2 ), ϕ R x 2 (4.8) M SE f lux 2 = 1 N I 1 N I 1 i=1 f ûi θ 1 (x i I 1 , t i I 1 ), φi θ 1 (x i I 1 , t i I 1 ) -f ûi θ 2 (x i I 1 , t i I 1 ), ϕ R (4.9) M SE avg 2 = 1 N I 1 N I 1 i=1 ûi θ 2 (x i I 1 , t i I 1 ) -ûi θ 1 (x i I 1 , t i I 1 ) 2 + 1 N I 1 N I 1 i=1 ϕ R - φi θ 1 (x i I 1 , t i I 1 ) 2 (4.10)" }, { "formula_coordinates": [ 10, 89.56, 161.45, 450.44, 96.92 ], "formula_id": "formula_34", "formula_text": "f (u, ϕ) =        f1 (u, ϕ), u M > u R and u M > u * su, u M > u R and u M < u * f1 (u, ϕ), u M < u R and u M < u * su, u M < u R and u M > u * (4.11) and f1 (u, ϕ) = su, for shock f (u, ϕ)," }, { "formula_coordinates": [ 10, 220.62, 387.91, 319.39, 49.19 ], "formula_id": "formula_35", "formula_text": "     φt = 0 δ 2 ūt + ū2 ū2 +M δ 1 δ 2 φ-ū 2 x = 0 (4.14)" }, { "formula_coordinates": [ 10, 159.9, 497.83, 380.1, 35.94 ], "formula_id": "formula_36", "formula_text": "M SE f 1 = 1 N f 1 N f 1 i=1 φi 1 t 2 + 1 N f 1 N f 1 i=1 δ 2 ûi 1 t + f ûi 1 , φi 1 x 2 ,(4.15)" }, { "formula_coordinates": [ 10, 238.48, 549.05, 301.52, 32.87 ], "formula_id": "formula_37", "formula_text": "f (u, ϕ) = u 2 u 2 + M δ 1 δ 2 ϕ -u 2 .(4.16)" }, { "formula_coordinates": [ 10, 193.86, 656.47, 346.14, 35.94 ], "formula_id": "formula_38", "formula_text": "M SE f 2 = 1 N f 2 N f 2 i=1 ûi 2 t + f ûi 2 , φR , δ 1 , δ 2 x 2 ,(4.17)" }, { "formula_coordinates": [ 11, 72, 107.25, 468, 111.97 ], "formula_id": "formula_39", "formula_text": "f (u, ϕ, δ 1 , δ 2 ) =        f1 (u, ϕ, δ 1 , δ 2 ), u M > u R and u M > u * (s/δ 2 )u, u M > u R and u M < u * f1 (u, ϕ, δ 1 , δ 2 ), u M < u R and u M < u * (s/δ 2 )u, u M < u R and u M > u * (4.18) and f1 (u, ϕ, δ 1 , δ 2 ) =    s δ 2 u, for shock 1 δ 2 u 2 u 2 +M δ 1 δ 2 ϕ-u 2 , for rarefaction" }, { "formula_coordinates": [ 11, 268.42, 346.01, 271.59, 24.18 ], "formula_id": "formula_40", "formula_text": "ϕ t = 0 ϕũ t + g(ũ) x = 0 (4.19)" }, { "formula_coordinates": [ 11, 103.55, 379.76, 91.38, 15.33 ], "formula_id": "formula_41", "formula_text": "g(ũ) = ũ2 ũ2 +M (1-ũ) 2 ." 
}, { "formula_coordinates": [ 11, 212.58, 558.01, 327.43, 49.19 ], "formula_id": "formula_42", "formula_text": "     φt = 0 δ 1 δ 2 φū t + ū2 ū2 +M 1 δ 2 φ-ū 2 x = 0 (4.20)" }, { "formula_coordinates": [ 12, 240.43, 184.24, 124.97, 24.23 ], "formula_id": "formula_43", "formula_text": "U (x, 0) = U L , if x < 0 U R , else" }, { "formula_coordinates": [ 12, 177.05, 270.94, 349.49, 24.23 ], "formula_id": "formula_44", "formula_text": "U L = u L ϕ L = 0.6 0.7 and U R = u R ϕ R = 0.3 0.6 , (5" }, { "formula_coordinates": [ 12, 176.75, 496.53, 363.25, 24.23 ], "formula_id": "formula_45", "formula_text": "U L = u L ϕ L = 0.45 0.8 and U R = u R ϕ R = 0.3 0.6 (5.3)" }, { "formula_coordinates": [ 13, 163.51, 546.89, 376.49, 26.19 ], "formula_id": "formula_46", "formula_text": "U L = u L ϕ L = 2 • 10 -4 0.1 and U R = u R ϕ R = 0.35 0.5 .(5.5)" }, { "formula_coordinates": [ 13, 170.56, 617.13, 369.44, 26.19 ], "formula_id": "formula_47", "formula_text": "ŨL = ũL ϕ L = 2 • 10 -3 0.1 and ŨR = ũR ϕ R = 0.7 0.5 (5.6)" }, { "formula_coordinates": [ 15, 166.24, 613.87, 360.31, 26.19 ], "formula_id": "formula_48", "formula_text": "U L = u L ϕ L = 0.6 0.7 and U R = u R ϕ R = 4 • 10 -4 0.2 , (5" }, { "formula_coordinates": [ 17, 163.51, 141.64, 376.49, 26.19 ], "formula_id": "formula_49", "formula_text": "U L = u L ϕ L = 0.49 0.7 and U R = u R ϕ R = 4 • 10 -4 0.2 , (5.9)" }, { "formula_coordinates": [ 17, 168.14, 206.06, 371.87, 26.19 ], "formula_id": "formula_50", "formula_text": "ŨL = ũL ϕ L = 0.7 0.7 and ŨR = ũR ϕ R = 2 • 10 -3 0.2 . (5.10)" }, { "formula_coordinates": [ 18, 395.64, 246.69, 78.42, 11.5 ], "formula_id": "formula_51", "formula_text": "U L U R" } ]
2023-09-25
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b6", "b6" ], "table_ref": [], "text": "Figure 1: Results of MiVOS [5] (baseline) in red and READMem-MiVOS (READMem added to MiVOS [5]) in blue on the LV1 [21] dataset. We depict the intersection between both predictions in turquoise. We indicate the ground-truth with yellow contours. The first column displays the input frames (i.e., ground-truth mask in yellow) for each sequence of the LV1 [21] dataset used to initialize the methods. The experimental configuration of these results is consistent with the quantitative experiments in Section 4." }, { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b50", "b5", "b6", "b7", "b20", "b23", "b24", "b25", "b29", "b5", "b23", "b25", "b6", "b7", "b29", "b5", "b5", "b33", "b7", "b25", "b5", "b5" ], "table_ref": [], "text": "Video object segmentation (VOS), i.e., the pixel-level discrimination of a foreground object from the background in videos, is a fundamental and challenging problem in computer vision. This paper focuses on the semi-automatic video object segmentation (sVOS) setting [49], where the goal consists of segmenting an object (or a collection of objects) in a video by only providing a segmentation mask as input.\nMotivation: Current sVOS methods [4,5,6,19,22,23,24] predominantly utilize the space-time memory network design [28], which stores deep representations of previous frames and their estimated masks in an external memory for the segmentation of a query image. Although some attention has recently been given to the memory update strategy [4,22,24], most methods rely on a simple frame aggregation approach, which inserts every t-th frame into an ever-expanding memory. This approach works well for short sequences but fails on longer videos due to the saturation of the GPU memory. Furthermore, this strategy neither considers redundancies nor novel information while adding new embeddings to the memory. A naive approach to better deal with long videos is to increase the sampling interval (s r ) of state-of-the-art methods [5,6,28], as shown in Figure 2, which displays that increasing the sampling interval leads to better performances. Due to a higher sampling interval, the memory stores embeddings of frames that are further apart in time from each other. These embeddings are more likely to reflect appearance variations resulting in a more diverse set of embeddings in the memory, contributing to better predictions. However, a high sampling interval may lead to the omission of potential useful frames [4,21], which is particularly detrimental for short sequences. Moreover, as noted by [4], a higher sampling interval leads to unstable performances.\nTo address the aforementioned points, we propose a simple yet effective extension in the form of a module, named READMem, for updating the memory. Drawing inspiration from [32] in the visual object tracking field, our approach focuses on increasing the diversity of the embeddings stored in the memory, rather than simply aggregating every t-th frame. This selective approach allows us to save only key embeddings in the memory, thereby alleviating the memory saturation constraint faced by previous methods on long videos and even on unconstrained video lengths. , STCN [6], QDMN [24]) with and without the READMem extension on the LV1 [21] dataset when varying the sampling interval s r . 
The configuration employed for these experiments is consistent to the quantitative analysis in Section 4.\nOur contributions are as follows:\n(1) We propose READMem, a novel module suitable to extend existing sVOS methods to handle unconstrained videos (arbitrary rapidity of appearance changes and arbitrary video length) without requiring re-training or fine-tuning the sVOS baseline. Our approach improves the diversity of the embeddings stored in the memory and performs a robust association with query embeddings during the update process.\n(2) When using a small sampling interval (which is desired since this improves the stability of the methods [4] and avoids missing key information [4,21]) the proposed READMem module consistently improves the results of current sVOS baselines on long videos (LV1 [21]), while not hindering the performance on short videos (i.e., 2017 DAVIS validation set [31] (D17)). Overall, we attain competitive results against the state-of-the-art. (3) We provide insight into our approach through an extensive ablation study. ( 4) Finally, we make our code publicly available to promote reproducibility and facilitate the implementation of our method." }, { "figure_ref": [], "heading": "Relate Work", "publication_ref": [ "b4", "b26", "b27", "b37", "b39", "b40", "b50", "b11", "b13", "b14", "b31", "b48", "b50", "b50", "b9", "b38", "b43", "b44", "b45", "b46", "b47", "b29", "b29", "b41", "b34", "b35", "b24", "b6", "b6", "b7", "b30", "b42", "b21", "b25", "b12", "b20", "b5", "b2", "b8", "b7" ], "table_ref": [], "text": "Short-term sVOS Methods: Online fine-tuning approaches [3,25,26,36,38,39] adapt the network parameters on-the-fly based on an initial mask indicating the object of interest, but suffer from slow inference and poor generalization [49].\nIn contrast, propagation-based methods [10,12,13,30,47] utilize strong priors offered by previous (adjacent) frames to propagate the mask, allowing them to better deal with fast appearance changes. However, they are prone to error accumulation and lack long-term context for identifying objects after occlusion [49].\nMatching-based methods [49] use features of the initial frame along with previously processed frames (intermediate frames) as references for feature matching during inference. In [8,37,42,43,44,45], the embeddings of the initial frame and the adjacent frame are matched with the embeddings of the query frame through global and local matching, while Zhou et al. [46] also combine online-fine tuning approaches. However, leading sVOS methods rely mostly on the Space-Time Memory (STM) design introduced by Oh et al. [28]. Using a non-local approach, the method matches the embeddings from the query frame with the embeddings of reference frames. Based on STM [28], Xie et al. [40] and Seong et al. [33] perform a local-to-local matching to alleviate distractors-based mismatches. A memory matching at multiple scales is also conducted by Seong et al. [34]. To enhance pixel discrimination Yong et al. [23] explore a unique approach that uses both, the frequency and spectral domains. To reduce noisy predictions due to the ever-expanding memory, Cheng et al. [5] Legend:\nConcatenation Matrix Product Key Embeddings\nValue Embeddings\n✗ Discard Update I q Query Encoder k q v q Space-Time Memory F Top-k Softmax Scatter indices W v m v q Query Decoder skip-connections External Memory k m 1 k m 2 ... k m N K m v m 1 v m 2 ... 
v m N V m Memory Encoder M q A I q k q,m v q,m READMem REA LSB DME k m 1 ✗ ✗ { k m n } N n=1 k q,m K m W\nUpdate External Memory with k q,m and v q,m k q,m v q,m Figure 3: Overview of READMem-MiVOS (MiVOS [5] using READMem) after initialization. We omit the temporary memory for simplicity.\nadd a top-k filter during non-local matching and propose an efficient extension [6], which decouples the feature extraction of the frame and corresponding mask while improving the matching process. Park et al. [29] deviate from the frame-to-frame propagation scheme and adopt instead a clip-to-clip approach to speed-up the propagation process. However, as space-time memory networks-based methods display incredible performance on short-term benchmarks [31,41], they are hampered in real-world applications by their ever-expanding memory.\nLong-term sVOS Methods: To address memory limitations, Li et al. [20] propose a compact global module to summarize object segmentation information within a fixed memory size. Similarly, Liang et al. [21] apply a weighted average on the extracted features to merge new features with existing ones into the memory and discard obsolete features through a least frequently used (LFU) approach. Liu et al. [24] explore the use of a quality-awaremodule (QAM) [11] to assess the quality of the predicted mask on-the-fly before integrating it in the memory. Li et al. [19] use a spatio-temporal aggregation module to update a fixedsized memory bank. Cheng and Schwing [4] follow the Atkinson-Shiffrin [1] model for their sVOS pipeline (XMem), which comprises: a sensory memory [7] that learns an appearance model for every incoming frame, a working memory based on STCN [6], and a long-term memory that consolidates the working memory embeddings in a compact representation." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b6", "b7", "b29", "b6" ], "table_ref": [], "text": "This section presents our READMem module -an extension for space-time memory-based sVOS pipelines [5,6,28]. Figure 3 illustrates our module embedded in MiVOS [5]." }, { "figure_ref": [], "heading": "Video Object Segmentation Model Structure", "publication_ref": [ "b29", "b10", "b29", "b6", "b25", "b29", "b6", "b29", "b28", "b6", "b7", "b25", "b6", "b42", "b6" ], "table_ref": [], "text": "Since READMem is built upon the popular space-time memory network paradigm [28], we provide a brief description of the corresponding components.\n(1) A query encoder (an extension of ResNet50 [9]) that encodes a query frame I q seen at time step t into a pair of a query key feature k q ∈ R C k ×HW and a query value feature v q ∈ R C v ×HW . With C k and C v we denote the number of feature channels, while H and W indicate the spatial image dimensions using a stride of 16.\n(2) A memory encoder that extracts latent feature representations from a frame and its corresponding segmentation mask into a pair of a memory key k m ∈ R C k ×HW and a memory value v m ∈ R C v ×HW . The key features encode visual semantic information that is robust to appearance changes (needed for soft-attention, refer to (4)), while value features encode boundary and texture cues [28] useful at the decoder stage.\n(3) An external memory that maintains a set of memory keys\nM k = k m n N n=1 and a set of memory values M v = v m n N n=1\n. The variable N denotes the total number of memory slots, while n represents the index of a slot. 
In addition, some methods [5,24,28] use a temporary memory that only contains the latent representation (i.e., memory key and value features) of the previous adjacent frame.\n(4) A Space-Time Memory (STM) component that serves as an attention mechanism to determine relevant memory information. It matches the channels of the query key k q with all channels of every memory key k m n ∈ M k on a temporal and spatial aspect by inferring an affinity matrix F ∈ R NHW ×HW . Concretely, the channel level similarities of the query k q and every memory key in K m are computed through\nF = (K m ) T k q ,(1)\nwhere\nK m ∈ R C k ×NHW denotes the matrix obtained from concatenating every k m n ∈ M k along the column dimension, such that K m = k m n=1 , k m n=2 , . . . , k m n=N .\nTo reduce noise (i.e., low affinities) in the affinity matrix F, a top-k filter is applied along the columns (memory dimension) following [5]. This produces a sparse matrix S ∈ R NHW ×HW by\nS i, j =        F i, j , if F i, j ∈ argmax F j ⊂{ f | f ∈F * , j },|Fj|=k ∑ f ∈F j f 0 , otherwise ,(2)\nwhere i and j denote the row and column indices respectively, and * refers to either the complete row or column of a matrix. Based on the sparse affinity matrix S, the method computes soft-weights W ∈ R NHW ×HW using\nW i, j = exp(S i, j ) ∑ NHW l=1 exp(S l, j ) .(3)\nAll the channels of every memory value feature v m n ∈ M v are weighted with W to compute a pseudo memory feature representation\nv m ∈ R C v ×HW through v m = V m W, where V m ∈ R C k ×NHW is the concatenated form of the memory values v m\nn along the columns. ( 5) Lastly, a decoder inspired by [28] that predicts a segmentation mask M q for the query frame I q using refinement modules [27], skip-connections and bilinear upsampling.\nWe use the original weights for the sVOS baselines in our experiments (i.e., MiVOS [5], STCN [6] and QDMN [24]). The training of the sVOS baselines uses a bootstrapped crossentropy loss [5] and consists of two stages: a pre-training using static image datasets, and a main-training using DAVIS [31] and Youtube-VOS [41]. An intermediate stage can also be performed using the BL30K [5] dataset." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "READMem", "publication_ref": [ "b36", "b36", "b29", "b3", "b25", "b29" ], "table_ref": [ "tab_2", "tab_3", "tab_3" ], "text": "Diversification of Memory Embeddings (DME): As mentioned, contemporary methods insert the embeddings of every t-th frame into the memory, along with the corresponding mask. This leads to an inevitable memory overload when accumulating embeddings of unconstrained sequences. In contrast, READMem proposes an alternative frame insertion scheme, maximizing the diversity of latent representations stored in the memory by only adding frames that enhance information. This allows us to limit the number of memory slots without degrading the performance and avoid GPU memory saturation when processing long videos.\nIn pursuit of enhancing the diversity of the memory key embeddings M k = {k m n } N n=1 , we require a mean to quantify this diversity. If we conceptually consider the embeddings to form a parallelotope, where the key embeddings {k m n } N n=1 act as edges, we can quantify the diversity by calculating the volume of this geometric object. A conventional approach to estimate the volume involves concatenating the keys k m n ∈ R C k HW (we take a flattened representation of the keys k m n ∈ R C k ×HW ) into a matrix X ∈ R C k HW ×N and inferring the determinant of X. 
However, since N ≪ C k × HW , X is a non-square matrix, which prevents the computation of the determinant. To circumvent this issue, we leverage the Gram matrix G ∈ R N×N , computed through G = X T X. As demonstrated by [35] (pages 195-196) the determinant of the Gram matrix (i.e., Gramian [35]) allows us to compute the square volume (vol) of the parallelotope, such that (volP({k m n } N n=1 )) 2 = det(G). Therefore, we can quantify the diversity of our memory with det(G). We use the set of memory keys M k = {k m n } N n=1 to compute the diversity of the memory, as they encode visual semantics that are robust to appearance variation [28]. Hence, by computing the similarity s a,b between the a-th and b-th flatten memory key in M k through an inner product g :\nR C k HW × R C k HW → R with g(k m a , k m b ) := s a,b , we construct G by G(M k ) =    s 1,1 s 1,2 • • • s 1,N . . . . . . . . . . . . s N,1 s N,2 • • • s N,N    .(4)\nThe size of the memory N is bounded by the dimension of the feature space since otherwise det (G) = 0 [2]. However, this is not a relevant limitation since in practice, N is several magnitudes smaller than the dimension of the feature space C k × HW (i.e., N ≪ C k × HW ).\nTo increase the diversity of the embeddings stored in the memory, we want to maximize the absolute Gramian |det (G)| of G(M k ). As the annotated frame I 1 provides accurate and controlled segmentation information [24,28], the corresponding memory key k m n=1 and value v m n=1 are exempt from the following update strategy: (1) For each memory slot n ∈ {2, . . . , N}, we substitute the respective memory key k m n with the query-memory key k q,m (memory key of the current query frame I q and mask M q ) and construct a temporary Gram matrix G t n . Using the temporary matrices G t n , we build a set G t containing the absolute values of the determinants, such that\nG t = |det (G t n )| N n=2 . (2)\nThe memory is updated based on whether the highest value in G t surpasses the absolute value of the current Gramian |det (G)|. If this condition is met, then the corresponding memory slots n (position of the highest value in G t ) is replaced with the query-memory key k q,m and value v q,m embeddings.\nRobust Embeddings Association (REA): Given that we compute an inner-product between two embeddings (enforced by the Gram matrix), we capture the global similarity of two frames in a single score. However, as two frames, I t and I t+∆t are further apart in time, it becomes more likely for the object of interest to experience in-between: motion, ego-motion, or size variations. Consequently, a channel-wise embedding may encode different information (specifically foreground vs. background) for the same image region in frame I t and frame I t+∆t due to the spatial disparity in the object's location. As a result, the resulting similarity between the two frames is inherently low when comparing their embeddings. To address this issue and dampen positional variation without relying on a motion model or fixed-sized search region (which would introduce additional hyper-parameters), our proposal utilizes transition matrices {T n } N n=1 . Wherein, the core idea is to leverage the cross-attention already at hand in the memory-based networks (i.e., W and F) to identify the best channel-wise mapping between two embeddings. 
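Before detailing how this mapping is obtained, the DME update strategy described above can be summarized in code. The sketch below is a hedged illustration, not the reference implementation: `memory_keys` and `memory_values` are assumed to be Python lists of flattened tensors with slot 0 holding the annotated frame, keys are compared directly (the REA projection introduced next is omitted), and `torch.det` stands in for the Gramian (`torch.slogdet` would be the numerically safer choice given the very small determinant values reported later).

```python
import torch

def gramian(keys):
    """|det(G)| with G = X^T X built from flattened memory keys (Eq. (4))."""
    X = torch.stack(keys, dim=1)        # (C_k*HW, N)
    G = X.t() @ X                       # (N, N) Gram matrix of pairwise inner products
    return torch.det(G).abs()

def dme_update(memory_keys, memory_values, query_key, query_value):
    """Replace the non-annotated slot whose substitution maximizes |det(G)|."""
    best_gain = gramian(memory_keys)    # current diversity of the memory
    best_slot = None
    for n in range(1, len(memory_keys)):           # slot 0 (annotated frame) is exempt
        candidate = list(memory_keys)
        candidate[n] = query_key                   # temporary Gram matrix G^t_n
        g = gramian(candidate)
        if g > best_gain:
            best_gain, best_slot = g, n
    if best_slot is not None:                      # diversity increases -> update memory
        memory_keys[best_slot] = query_key
        memory_values[best_slot] = query_value
        return True
    return False
```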
Specifically, through the transition matrix T n ∈ R HW ×HW , we project the n-th memory embedding k m n ∈ R C v ×HW from its original frame of reference (FoR) to the query's FoR (see Figure 4) by\nk m n = k m n T n ,(5)\nwhere k m n ∈ R C v ×HW denotes the pseudo-embedding of the embedding k m n , but expressed in the query's frame of reference. This approach effectively compensates for the target's spatial disparity between two distant frames (refer to Table 2 and3).\nThus, we compute the similarity w.r.t. k q,m not with the memory keys {k m n } N n=1 but with the pseudo memory keys { k m n } N n=1 . Figure 4 depicts the operation that maps key embeddings from the memory's FoR to the query's FoR through the transition matrix T n . We use W, a filtered version of F (see (3)), to only consider strong affinities between the point-wise embeddings (channels) of the query and memory key which are more likely to encode a similar information. This ensures that the corresponding mapping matrix T n is constructed based on relevant channels (refer to Table 3). Although W entails strong similarities, using W n as transition matrix T n potentially leads to the aggregation of multiple memory channels onto a single pseudo memory channel -which is certain to degrade the similarity. Hence, to avoid the summation, we need to map at most one element from the memory FoR to the query FoR, i.e., filtering W n . Generating a bijective mapping between the memory key k m n and the corresponding pseudo query key k m n would constraint the foreground-background ratio of k m n based on k m n . However, it is unlikely that the area taken by the object in the image is unchanged from one frame to the other. To avoid this foreground-background restriction, we should allow point-wise embeddings from the memory key k m n to be re-used for the creation of the pseudo memory key k m n , as long as they are sufficiently similar. Thus, an appropriate function that validates the aforementioned criteria is argmax applied along the columns of W to maximize the re-usability of point-wise embeddings when producing the pseudo memory key k m n on the query FoR. We obtain\nT n ∈ R HW ×HW by dividing W ∈ R NHW ×HW into N-\nsquare matrices, such that W n ∈ R HW ×HW , and by applying argmax along the columns by\nT n,i, j = 1 , if i ∈ argmax w | w ∈ W n, * , j 0 , otherwise .(6)\nLower Similarity Bound of Memory Embeddings (LSB): To ensure the presence of the foreground object in a candidate frame, we set a lower bound on similarity l bs = 0.5. We validate object presence in I q by computing a similarity score such that g( k m 1 , k q,m ) > l bs . Initialization: We integrate every t-th frame (that satisfies the lower similarity bound) into the memory -until the memory is completely filled. Afterward, the method only integrates embeddings of frames that enhance the diversity of the memory." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b6", "b7", "b25", "b5", "b6", "b5", "b5", "b25", "b5", "b5", "b42", "b5", "b5", "b5", "b7", "b5", "b5", "b6", "b7", "b25", "b6", "b25" ], "table_ref": [ "tab_2" ], "text": "In our experiments, we use the Long-time Video [21] (LV1) dataset (long videos) as well as the validation set of the DAVIS 2017 [31] (D17) dataset (short videos). We use the J (intersection over union) and F (contour accuracy) metrics (higher is better) introduced by [31] to quantify the performance of each method. Both metrics are averaged into the J &F metric for ease of ranking. 
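Before turning to the results, the REA projection of Eqs. (5)-(6) and the LSB check can be condensed into a short sketch. This is again an illustrative sketch: `W_n` denotes the n-th HW x HW block of the soft-weight matrix W from Eq. (3), and a normalized (cosine-style) inner product is assumed for g(., .) so that the fixed bound of 0.5 is meaningful; neither convention is spelled out in this exact form in the text.

```python
import torch
import torch.nn.functional as F

def transition_matrix(W_n):
    """Eq. (6): for every query position (column), keep only the strongest memory position."""
    idx = W_n.argmax(dim=0)                              # best memory row per query column
    T_n = torch.zeros_like(W_n)
    T_n[idx, torch.arange(W_n.shape[1])] = 1.0           # one-hot column-wise argmax
    return T_n

def pseudo_key(k_m_n, W_n):
    """Eq. (5): project a memory key into the query's frame of reference."""
    return k_m_n @ transition_matrix(W_n)                # (C_k, HW) x (HW, HW)

def passes_lsb(pseudo_k_m_1, k_qm, lower_bound=0.5):
    """Lower similarity bound between the projected annotated-frame key and the query key."""
    a = F.normalize(pseudo_k_m_1.flatten(), dim=0)
    b = F.normalize(k_qm.flatten(), dim=0)
    return (a @ b).item() > lower_bound
```

In the full method, the Gramian of the previous sketch is then evaluated on these pseudo memory keys rather than on the raw memory keys.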
We perform all experiments on an Nvidia GeForce GTX 1080 Ti.
Quantitative Results: Table 1 presents the quantitative results of recent sVOS methods (MiVOS [5], STCN [6] and QDMN [24]) and their READMem-based extensions along with the state-of-the-art sVOS method XMem [4] on the LV1 [21] and the D17 [31] datasets - we display the qualitative results of MiVOS [5] and READMem-MiVOS on LV1 [21] in Figure 1.
To ensure a fair comparison, we keep the default settings for XMem [4] and use the same sampling interval (i.e., s r = 10) for all other methods (in contrast to previous works [4,21] that use dataset-specific sampling intervals). In our experiments, we follow the evaluation of [24] and limit the number of memory slots of the baselines and the corresponding READMem counterparts to 20. Note that this value differs from the memory slot limit (i.e., 50) used by [4,21]. In contrast to [4,21], we do not adapt the sampling interval to the sequence length when dealing with long videos, as we argue that the sampling interval should not be correlated to the video's length and is not a standard practice when dealing with short videos (D17 [31] and Y-VOS [41]). Instead, we opt for a first-in-first-out (i.e., FIFO) replacement strategy when updating the memory of the mentioned baselines. Our results in Table 1 demonstrate that READMem improves long-term performance while preserving good short-term capabilities. Moreover, by setting s r = 1 we minimize the likelihood of omitting key frames and enhance the stability of the method [4]. As a result, we attain competitive results against the state-of-the-art method (i.e., XMem [4]). We do not employ READMem on XMem [4], as a long-term memory is already present, consolidating the embeddings of the working memory [6] and updated by a least-frequently-used (LFU) approach [4,21].
Ablation Studies: For clarity and to isolate the benefits of each design decision, we present our ablation results solely for READMem-MiVOS in Table 2. We observe that integrating embeddings of frames that diversify the memory (DME) and enforcing a lower bound on similarity (LSB) results in a significant performance improvement for long videos. Moreover, using the robust association of memory embeddings (REA) also leads to a substantial performance increase. Although including the adjacent frame leads to a slight improvement for long videos, it is particularly important when dealing with short sequences as the previous adjacent frame provides recent information (not guaranteed by our module).
Table 1: Quantitative evaluation of sVOS methods [4,5,6,24] with and without READMem on the LV1 [21] and D17 [31] datasets. The symbol † denotes no pre-training on BL30K [5], while ⋆ indicates the use of a flexible sampling interval as in QDMN [24]." }, { "figure_ref": [], "heading": "Configuration", "publication_ref": [ "b6", "b19", "b19" ], "table_ref": [ "tab_4", "tab_3", "tab_4" ], "text": "Table 3 (J&F on LV1 [21]) - Computation of the transition matrix T n: using W n / using F n - Hungarian Method [18]: 76.8 / 75.1; argmax along the columns: 86.0 / 73.4; argmax along the rows: 79.8 / 77.1.
Table 5: Relative performance and Gramian (i.e., |G|) evolution for three sections on the blueboy sequence [21]. The final Gramian and J&F score is reported in Table 4.
In Table 3, we compare different approaches for inferring the transition matrix T n.
Our results indicate, as expected, that using the weight matrix W to generate the transition matrices T n leads to better performance compared to the affinity matrix F. Furthermore, we note that using the argmax function along the columns axis (memory) leads to the best performance in comparison to applying argmax along the rows axis (query) or when using a bijective mapping (Hungarian Method [18]).\nTable 4 tabulates the J &F score and Gramian of READMem-MiVOS on the three sequences of LV1 [21] for three different sampling interval s r . In Figure 5, we display the evolution of the Gramian over the observed blueboy sequence along with the specific attained Gramian and the respective J &F performance of three intermediate sections in Table 5. We note that the Gramian continuously increases over the observed sequences and provides more assistance in the latter stages as indicated in Table 5, where a high Gramian generally correlates to higher performance." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b6", "b7", "b25", "b6" ], "table_ref": [], "text": "We propose READMem, a modular framework for STM-based sVOS methods, to improve the embeddings-stored in the memory on unconstrained videos and address the expanding memory demand. Our evaluation displays that sVOS methods [5,6,24] using our extension consistently improve their performance compared to their baseline on the long video sequences (i.e., LV1 [21]) without compromising their efficiency on shorter sequences (i.e., D17 [31]). Moreover, we achieve competitive results with state-of-the-art and provide a comprehensive overview of each design decision, supported by an extensive ablation study.\nMiVOS [5] vs." }, { "figure_ref": [ "fig_0" ], "heading": "READMem-MiVOS", "publication_ref": [ "b7", "b25", "b5", "b5", "b6", "b7", "b25", "b5", "b5" ], "table_ref": [], "text": "#3075 #3639 #4767 #10407 #12711\nSTCN [6] vs. READMem-STCN QDMN [24] vs. READMem-QDMN XMem [4] Figure S2: Results on the dressage sequence of LV1 [21] with s r = 10. We depict the results of: the baselines in red, the READMem variations in blue, the intersection between both in turquoise, the ground-truth contours in yellow and XMem [4] results in purple.\nMiVOS [5] vs. STCN [6] vs. READMem-STCN QDMN [24] vs. READMem-QDMN XMem [4] Figure S3: Results on the rat sequence of LV1 [21] with s r = 10. We depict the results of: the baselines in red, the READMem variations in blue, the intersection between both in turquoise, the ground-truth contours in yellow and XMem [4] results in purple." }, { "figure_ref": [], "heading": "READMem-MiVOS", "publication_ref": [ "b6" ], "table_ref": [], "text": "A.2 Qualitative Results on LV1 [21] with s r = 1 MiVOS [5] vs." }, { "figure_ref": [ "fig_2" ], "heading": "READMem-MiVOS", "publication_ref": [ "b7", "b25", "b5", "b5", "b6" ], "table_ref": [], "text": "#4941 #6057 #11139 #11895 #18252\nSTCN [6] vs. READMem-STCN QDMN [24] vs. READMem-QDMN XMem [4] Figure S4: Results on the blueboy sequence of LV1 [21] with s r = 1. We depict the results of: the baselines in red, the READMem variations in blue, the intersection between both in turquoise, the ground-truth contours in yellow and XMem [4] results in purple.\nMiVOS [5] vs." }, { "figure_ref": [], "heading": "READMem-MiVOS", "publication_ref": [ "b7", "b25", "b5", "b5", "b6", "b25", "b5", "b5" ], "table_ref": [], "text": "#3075 #3639 #4767 #10407 #12711\nSTCN [6] vs. READMem-STCN QDMN [24] vs. 
READMem-QDMN XMem [4] Figure S5: Results on the dressage sequence of LV1 [21] with s r = 1. We depict the results of: the baselines in red, the READMem variations in blue, the intersection between both in turquoise, the ground-truth contours in yellow and XMem [4] results in purple.\nMiVOS [5] vs. READMem-STCN QDMN [24] vs. READMem-QDMN XMem [4] Figure S6: Results on the rat sequence of LV1 [21] with s r = 1. We depict the results of: the baselines in red, the READMem variations in blue, the intersection between both in turquoise, the ground-truth contours in yellow and XMem [4] results in purple." }, { "figure_ref": [], "heading": "READMem-MiVOS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B Additional Quantitative Evaluation", "publication_ref": [ "b42", "b17", "b18", "b18", "b6", "b7", "b25", "b7", "b25" ], "table_ref": [ "tab_5", "tab_5", "tab_2" ], "text": "We present in Table S1 useful statistics for popular (sVOS) [21,31,41] and Visual Object Tracking (VOT) [16,17] datasets. As our goal is to allow sVOS methods to perform on long video sequence, Table S1 reveals that the LV1 dataset [21] and the recently introduced VOTS2023 dataset [17] are ideal candidates for assessing the effectiveness of our READ-Mem extension.\nHence, in the main paper we focus on the LV1 dataset [21], to allow for a direct comparison with contemporary sVOS methods. We include the D17 dataset [31] in our evaluations to encompass scenarios with shorter sequences. However, to demonstrate the scalability and versatility of our approach, we also report complementary experiments on VOTS2023 in Table S2. We want to clarify that our method is originally designed for managing the memory of sVOS task, and as such is not modifying the underlying architecture of the sVOS baselines [5,6,24], which are not tailored towards handling specific challenges found only in VOT datasets (e.g., small object-to-image ratio, presence of numerous distractors). , STCN [6], QDMN [24]) with and without the READMem extension on the D17 [31] dataset, while varying the sampling interval s r . Regardless of the final performance, we observe a general tendency where increasing the sampling interval (i.e., s r higher than 10) on short video sequences leads to a performance drop." }, { "figure_ref": [ "fig_0" ], "heading": "B.1 Performance on the DAVIS (D17 [31]) Dataset", "publication_ref": [ "b6", "b7", "b25" ], "table_ref": [], "text": "We display the performance of MiVOS [5], STCN [6] and QDMN [24] with and without the READMem extension when varying the sampling interval s r on the D17 [31] dataset, using the same configuration as in Section 4.\nIn Figure 2 of Section 1, we observe that increasing the sampling interval generally improves the performance of all methods on long videos, regardless of the baseline employed. However, this trend does not hold when working with short video sequences, as shown in Figure S7. Here, we notice a degradation in performance for all methods when using larger sampling intervals. Therefore, it is essential to utilize a sampling interval that does not negatively impact the performance on both long and short video sequences. This is where our READMem extension becomes valuable, as it enables the sVOS pipeline to use a small sampling interval (typically s r ∈ [1 -10]) that achieves and maintains high performance for both long and short video sequences." 
}, { "figure_ref": [ "fig_0" ], "heading": "B.2 Performance as a Function of Memory Size", "publication_ref": [ "b6", "b7", "b25" ], "table_ref": [], "text": "We explore the impact of the size of the memory on the performance of MiVOS [5], STCN [6] and QDMN [24] with and without our READMem extension on the LV1 [21] dataset. We follow the same experimental setup as in Section 4 (with s r = 10), except for the varying memory size N, which ranges from 5 to 50.\nFrom Figure S8, we observe that the performance of the baselines improves as the memory size increases. Similarly, although to a lesser extent, the READMem variants also demonstrate improved performance with larger memory sizes. However, the READMem variations consistently outperform their respective baselines, especially when using a smaller memory size. This is desired as a smaller memory requires less GPU resources.\nComparing Figure S8 with Figure 2, we notice that increasing the sampling interval (i.e., s r ) of the baselines leads to a significant boost in performance compared to increasing the memory size (i.e., N). Hence, storing a diverse set of embeddings in the memory is more beneficial than including additional ones." }, { "figure_ref": [], "heading": "B.3 Performance on the VOTS2023 [17] Dataset", "publication_ref": [ "b5", "b18", "b6", "b7", "b25", "b15", "b16", "b17" ], "table_ref": [ "tab_2", "tab_2" ], "text": "In our quantitative evaluation (refer to Table 1 of Section 4), we demonstrate and analyze the effectiveness of our approach on sVOS datasets, encompassing both short (i.e., D17 [31]) and long (i.e., LV1 [21]) sequences, to allow for a direct comparison with contemporary sVOS approaches (i.e., [4,21]). In an effort, to enhance the soundness of our READMem extension, we conduct additional experiments on the VOTS2023 dataset [17]. We tabulate in Table S2, the results of sVOS baselines [5,6,24] with and without READMem on the VOTS2023 tracking benchmark.\nFor the evaluation we use the same settings as described in Section 4 (refer to quantitative results) and the official VOT evaluation toolkit (version 0.6.4 released on the 31 May 2023 https://github.com/votchallenge/toolkit). We observe from Table S2, that the READMem variants consistently outperform their baseline counterpart.\nIn contrast to previous VOT challenges [14,15,16], VOTS2023 introduced new evaluation metrics split into: (i) a primary performance metric: The Tracking Quality (Q) and (ii) secondary metrics: the Accuracy (ACC), Robustness (ROB), Not-Reported Error (NRE), Drift-Rate Error (DRE) and Absence-Detection Quality (ADQ). Please refer to the VOTS2023 paper for more details." }, { "figure_ref": [], "heading": "C Initialization of the Memory", "publication_ref": [ "b6", "b7", "b25", "b5", "b6", "b7", "b25", "b18" ], "table_ref": [ "tab_3", "tab_2" ], "text": "We investigate the performance variation when employing two different initialization for READMem in Table S3: The strategies are as follows: (1) integrates every t-th frame into the memory until full, while (2) fills the memory slots with the embeddings of the annotated frame and includes a new frame to the memory if the conditions on the lower bound on similarity and the Gramian are met (follows a greedy approach). The second strategy yields worse results on the short scenarios and is slightly below the performance of strategy (1) on LV1 [21]. 
We argue that with longer sequences the memory has more opportunities to integrate decisive frame representations in the memory to use as a reference. Hence, initialization plays a crucial role in short videos, but as the method observes longer videos and has access to a larger pool of frames to select from, the importance diminishes.
Figure S8: We compare the performance of sVOS baselines (MiVOS [5], STCN [6], QDMN [24]) with and without the READMem extension on the LV1 [21] dataset while varying the size of the memory (i.e., N). A general tendency is that increasing the memory size leads to better performance.
Table S2: Quantitative evaluation of sVOS methods [4,5,6,24] with and without READMem on the VOTS2023 [17] dataset. We use the same settings as described in Section 4 and the official VOT evaluation toolkit." }, { "figure_ref": [], "heading": "D Discussion and Limitations", "publication_ref": [], "table_ref": [], "text": "We are aware of the limitations imposed by the hand-crafted threshold for the lower similarity bound l sb , although to avoid any fine-tuning, we set the threshold value to 0.5. A more thoughtful approach would incorporate a learnable parameter. This approach could potentially lead to improved performance, albeit at the expense of the plug-and-play nature of our extension. Another point for improvement is to reduce the participation of the background when computing the similarity between two embeddings. A possible enhancement is to integrate either the segmentation mask estimated by the sVOS pipeline or use the memory values to estimate a filter that can be applied to the memory keys before computing a similarity score." }, { "figure_ref": [], "heading": "E Training", "publication_ref": [ "b6", "b7", "b25", "b29", "b6", "b6", "b6", "b7", "b25", "b29", "b42", "b49", "b6" ], "table_ref": [ "tab_3" ], "text": "For our experiments, we utilize the original weights provided by the authors of MiVOS [5], STCN [6], and QDMN [24]. Our primary focus is to showcase the benefits of our extension (i.e., READMem) without modifying the baselines. To provide a comprehensive overview of the baselines, we briefly elaborate on the training methodology. The training procedure follows the regimen presented in STM [28] and refined in the subsequent work, MiVOS [5].
The training is divided into two stages, employing the bootstrapped cross-entropy loss [5] and utilizing the Adam optimizer (refer to the original papers [5,6,24] and their supplementary materials for detailed insights).
Table S3 (columns: J&F on LV1 and D17 for READMem-MiVOS, READMem-STCN and READMem-QDMN; rows: initialization strategy): Performance variation when leveraging two different initialization strategies for READMem-MiVOS. Besides the initialization strategy, the remaining parameters are consistent with Section 4 (we set s r = 10).
The training comprises the following stages: (1) A pre-training stage, in which static image datasets are used as in [28] to simulate videos consisting of three frames. While all three frames originate from the same image, the second and third frames are modified using random affine transformations. (2) A main-training stage, which uses the DAVIS [31] and the Youtube-VOS [41] datasets (which provide real videos). Similar to the pre-training stage, three frames from a video are sampled, gradually increasing the temporal gap from 5 to 25 frames during training. Subsequently, the temporal gap is annealed back to 5 frames, following a curriculum training approach [48].
(Optional) Moreover, after the pre-training stage a synthetic dataset BL30K [5] can be leveraged to enhance the ability of the model to better handle complex occlusion patterns." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [ "b6", "b7", "b25", "b5", "b6", "b7", "b25", "b5", "b6", "b25", "b5", "b5" ], "table_ref": [], "text": "READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation\nIn this supplementary document, we provide additional experiments, visualizations and insights.\nA Additional Qualitative Results on the Long-time Video (LV1 [21]) Dataset\nWe display qualitative results for the READMem variations of MiVOS [5], STCN [6] and QDMN [24] along with their baseline on the LV1 [21] dataset. We use the same settings as described in Section 4 (refer to quantitative results). We also provide the results for XMem [4], which represents the state-of-the-art. Figures S1, S2 and S3 displays the results for the blueboy, dressage and rat sequences in LV1 [21] respectively when using s r = 10, while Figures S4, S5 and S6 display the results for s r = 1. The estimated segmentation mask of the baselines (MiVOS [5], STCN [6], and QDMN [24]) are visualized in red, while the results of the READMem-based variations (READMem with a baseline) are highlighted in blue. The intersection between the prediction of a baseline and its corresponding READMem variation is depicted in turquoise. The ground-truth contours are highlighted in yellow. We depict XMem [4] results in purple.\nA.1 Qualitative Results on LV1 [21] with s r = 10 MiVOS [5] vs. READMem-STCN QDMN [24] vs. READMem-QDMN XMem [4] Figure S1: Results on the blueboy sequence of LV1 [21] with s r = 10. We depict the results of: the baselines in red, the READMem variations in blue, the intersection between both in turquoise, the ground-truth contours in yellow and XMem [4] results in purple." }, { "figure_ref": [], "heading": "READMem-MiVOS", "publication_ref": [], "table_ref": [], "text": "⋆ Fraunhofer IOSB is a member of the Fraunhofer Center for Machine Learning." } ]
We present READMem (Robust Embedding Association for a Diverse Memory), a modular framework for semi-automatic video object segmentation (sVOS) methods designed to handle unconstrained videos. Contemporary sVOS works typically aggregate video frames in an ever-expanding memory, demanding high hardware resources for long-term applications. To mitigate memory requirements and prevent near object duplicates (caused by information of adjacent frames), previous methods introduce a hyper-parameter that controls the frequency of frames eligible to be stored. This parameter has to be adjusted according to concrete video properties (such as rapidity of appearance changes and video length) and does not generalize well. Instead, we integrate the embedding of a new frame into the memory only if it increases the diversity of the memory content. Furthermore, we propose a robust association of the embeddings stored in the memory with the query embeddings during the update process. Our approach avoids the accumulation of redundant data, allowing us in return, to restrict the memory size and prevent extreme memory demands in long videos. We extend popular sVOS baselines with READMem, which previously showed limited performance on long videos. Our approach achieves competitive results on the Long-time Video dataset (LV1) while not hindering performance on short sequences.
READMem: Robust Embedding Association for a Diverse Memory in Unconstrained Video Object Segmentation
[ { "figure_caption": "Figure 2 :2Figure 2: Performance comparison of sVOS baselines (MiVOS[5], STCN[6], QDMN[24]) with and without the READMem extension on the LV1[21] dataset when varying the sampling interval s r . The configuration employed for these experiments is consistent to the quantitative analysis in Section 4.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "F∈ R N HW ×HW W ∈ R N HW ×HW | w ∈ Wn, * ,j }) HW Ck k m n=1 ∈ R C k ×HW Tn=1 ∈ R HW ×HW", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Inference of a transition matrix T n and the corresponding pseudo-memory embeddings k m n for robust embeddings association (REA).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "FigureS7: Performance comparison of sVOS baselines (MiVOS[5], STCN[6], QDMN[24]) with and without the READMem extension on the D17[31] dataset, while varying the sampling interval s r . Regardless of the final performance, we observe a general tendency where increasing the sampling interval (i.e., s r higher than 10) on short video sequences leads to a performance drop.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FigureS8: We compare the performance of sVOS baselines (MiVOS[5], STCN[6], QDMN[24]) with and without the READMem extension on the LV1[21] dataset while varying the size of the memory (i.e., N). A general tendency is that increasing the memory size, leads to better performance.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance comparison on the LV1[21] dataset when computing the transition matrix T n through different methods. As a reminder, the baseline (i.e., MiVOS[5]) achieves a J &F score of 64.3 on LV1[21] using the same configuration.", "figure_data": "blueboydressagerats r J &F|G|J &F|G|J &F|G|189.514.6 × 10 -783.73.36 × 10 -877.74.22 × 10 -10587.06.35 × 10 -783.79.38 × 10 -874.91.39 × 10 -101088.14.31 × 10 -784.018.0 × 10 -886.082.3 × 10 -10", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "0 ×10 6 val on blueboy [21]. 20 40 60 0 1 sr = 1 sr = 5 sr = 10 Gramian Sequence Length in % for different sampling inter-80 1.45 0.63 0.43 100 Figure 5: Evolution of |G|0.0% -26.20% s r J &F |G| 1 94.3 0.61 × 10 -7 5 93.3 1.46 × 10 -7 10 90.4 1.69 × 10 -726.20% -57.64% J &F |G| 72.5 4.80 × 10 -7 59.3 2.34 × 10 -7 66.1 2.44 × 10 -757.64% -83.63% J &F |G| 95.6 6.93 × 10 -7 92.2 3.45 × 10 -7 95.4 3.09 × 10 -7", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Statistics of popular sVOS and VOT datasets. 
For more details refer to the original publications.", "figure_data": "0L92667&15($'0HP0L9265($'0HP67&1D176DPSOLQJ,QWHUYDOsr6DPSOLQJ,QWHUYDOsr6DPSOLQJ,QWHUYDOsr", "figure_id": "tab_5", "figure_label": "S1", "figure_type": "table" }, { "figure_caption": "↑0.04 0.67 ↓0.08 0.35 ↓0.06 0.07 ↑0.01 READMem-MiVOS (ours) 0.43 ↑0.05 0.57 ↑0.02 0.60 ↑0.06 0.67 ↓0.08 0.33 ↓0.08 ↑0.01 0.67 ↓0.02 0.27 ↓0.01 0.09↓0.01 ", "figure_data": "(Higher is better)(Lower is better)MethodQACCROBADQNREDREMiVOS [5] (CVPR 21)0.380.550.540.750.410.06MiVOS [5] (s r = 50) (CVPR 21)0.39 ↑0.010.550.58 0.06STCN [6] (NIPS 21)0.400.550.620.670.290.08STCN [6] (s r = 50) (NIPS 21)0.400.550.61 ↓0.01 0.61 ↓0.060.290.10 ↑0.02READMem-STCN (ours)0.42 ↑0.02 0.56 ↑0.01 0.66 ↑0.04 0.57 ↓0.10 0.25 ↓0.04 0.09 ↑0.01QDMN [24] (ECCV 22)0.440.590.620.690.280.10QDMN [24] (s r = 50) (ECCV 22) 0.42 ↓0.020.590.60 ↓0.02 0.63 ↓0.06 0.30 ↑0.02 0.11 ↑0.01READMem-QDMN (ours)0.45 ↑0.010.590.63", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
Stéphane Vujasinović; Sebastian Bullinger; Stefan Becker; Norbert Scherer-Negenborn; Michael Arens; Rainer Stiefelhagen
[ { "authors": "", "journal": "Long Sequences) D", "ref_id": "b0", "title": "", "year": "" }, { "authors": "", "journal": "J &F J F J &F J F", "ref_id": "b1", "title": "Sampling Interval s r = 10 XMem", "year": "0288" }, { "authors": "C Richard; Richard M Atkinson; Shiffrin", "journal": "Elsevier", "ref_id": "b2", "title": "Human memory: A proposed system and its control processes", "year": "1968" }, { "authors": "Nils Barth", "journal": "Journal of Young Investigators", "ref_id": "b3", "title": "The gramian and k-volume in n-space: some classical results in linear algebra", "year": "1999" }, { "authors": "Kevis-Kokitsi Sergi Caelles; Jordi Maninis; Laura Pont-Tuset; Daniel Leal-Taixé; Luc Cremers; Van Gool", "journal": "", "ref_id": "b4", "title": "One-shot video object segmentation", "year": "2017" }, { "authors": "Kei Ho; Alexander G Cheng; Schwing", "journal": "", "ref_id": "b5", "title": "XMem: Long-term video object segmentation with an atkinson-shiffrin memory model", "year": "2022" }, { "authors": "Kei Ho; Yu-Wing Cheng; Chi-Keung Tai; Tang", "journal": "", "ref_id": "b6", "title": "Modular interactive video object segmentation: Interaction-to-mask, propagation and difference-aware fusion", "year": "2021" }, { "authors": "Kei Ho; Yu-Wing Cheng; Chi-Keung Tai; Tang", "journal": "", "ref_id": "b7", "title": "Rethinking space-time networks with improved memory coverage for efficient video object segmentation", "year": "2021" }, { "authors": "Kyunghyun Cho; Bart Van Merriënboer; Dzmitry Bahdanau; Yoshua Bengio", "journal": "", "ref_id": "b8", "title": "On the properties of neural machine translation: Encoder-decoder approaches", "year": "2014" }, { "authors": "Suhwan Cho; Heansung Lee; Minhyeok Lee; Chaewon Park; Sungjun Jang; Minjung Kim; Sangyoun Lee", "journal": "", "ref_id": "b9", "title": "Tackling background distraction in video object segmentation", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b10", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Ping Hu; Gang Wang; Xiangfei Kong; Jason Kuen; Yap-Peng Tan", "journal": "", "ref_id": "b11", "title": "Motion-guided cascaded refinement network for video object segmentation", "year": "2018" }, { "authors": "Zhaojin Huang; Lichao Huang; Yongchao Gong; Chang Huang; Xinggang Wang", "journal": "", "ref_id": "b12", "title": "Mask Scoring R-CNN", "year": "2019" }, { "authors": "Won-Dong Jang; Chang-Su Kim", "journal": "", "ref_id": "b13", "title": "Online video object segmentation via convolutional trident network", "year": "2017" }, { "authors": "Joakim Johnander; Martin Danelljan; Emil Brissman; Fahad Shahbaz Khan; Michael Felsberg", "journal": "", "ref_id": "b14", "title": "A generative appearance model for end-to-end video object segmentation", "year": "2019" }, { "authors": "Kristan Matej", "journal": "", "ref_id": "b15", "title": "The eighth visual object tracking vot2020 challenge results", "year": "2020" }, { "authors": "Kristan Matej", "journal": "", "ref_id": "b16", "title": "The ninth visual object tracking vot2021 challenge results", "year": "2021" }, { "authors": "Kristan Matej", "journal": "", "ref_id": "b17", "title": "The tenth visual object tracking vot2022 challenge results", "year": "2022" }, { "authors": "Kristan Matej", "journal": "", "ref_id": "b18", "title": "The vots2023 challenge performance measures", "year": "2023" }, { "authors": " Harold W Kuhn", "journal": "Naval research logistics quarterly", "ref_id": 
"b19", "title": "The hungarian method for the assignment problem", "year": "1955" }, { "authors": "Mingxing Li; Li Hu; Zhiwei Xiong; Bang Zhang; Pan Pan; Dong Liu", "journal": "", "ref_id": "b20", "title": "Recurrent dynamic embedding for video object segmentation", "year": "2022" }, { "authors": "Yu Li; Zhuoran Shen; Ying Shan", "journal": "", "ref_id": "b21", "title": "Fast video object segmentation using the global context module", "year": "2020" }, { "authors": "Yongqing Liang; Xin Li; Navid Jafari; Jim Chen", "journal": "", "ref_id": "b22", "title": "Video object segmentation with adaptive feature bank and uncertain-region refinement", "year": "2020" }, { "authors": "Zhihui Lin; Tianyu Yang; Maomao Li; Ziyu Wang; Chun Yuan; Wenhao Jiang; Wei Liu", "journal": "", "ref_id": "b23", "title": "Swem: Towards real-time video object segmentation with sequential weighted expectation-maximization", "year": "2022" }, { "authors": "Yong Liu; Ran Yu; Jiahao Wang; Xinyuan Zhao; Yitong Wang; Yansong Tang; Yujiu Yang", "journal": "", "ref_id": "b24", "title": "Global spectral filter memory network for video object segmentation", "year": "2022" }, { "authors": "Yong Liu; Ran Yu; Fei Yin; Xinyuan Zhao; Wei Zhao; Weihao Xia; Yujiu Yang", "journal": "", "ref_id": "b25", "title": "Learning quality-aware dynamic memory for video object segmentation", "year": "2022" }, { "authors": "K-K Maninis; Sergi Caelles; Yuhua Chen; Jordi Pont-Tuset; Laura Leal-Taixé; Daniel Cremers; Luc Van Gool", "journal": "Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b26", "title": "Video object segmentation without temporal information", "year": "2018" }, { "authors": "Tim Meinhardt; Laura Leal-Taixé", "journal": "NeurIPS", "ref_id": "b27", "title": "Make one-shot video object segmentation efficient again", "year": "2020" }, { "authors": "Seoung Wug Oh; Joon-Young Lee; Kalyan Sunkavalli; Seon Joo Kim", "journal": "", "ref_id": "b28", "title": "Fast video object segmentation by reference-guided mask propagation", "year": "2018" }, { "authors": "Seoung Wug Oh; Joon-Young Lee; Ning Xu; Seon Joo Kim", "journal": "", "ref_id": "b29", "title": "Video object segmentation using space-time memory networks", "year": "2019" }, { "authors": "Kwanyong Park; Sanghyun Woo; Seoung Wug Oh; In So Kweon; Joon-Young Lee", "journal": "", "ref_id": "b30", "title": "Per-clip video object segmentation", "year": "2022" }, { "authors": "Federico Perazzi; Anna Khoreva; Rodrigo Benenson; Bernt Schiele; Alexander Sorkine-Hornung", "journal": "", "ref_id": "b31", "title": "Learning video object segmentation from static images", "year": "2017" }, { "authors": "Jordi Pont-Tuset; Federico Perazzi; Sergi Caelles; Pablo Arbeláez; Alexander Sorkine-Hornung; Luc Van Gool", "journal": "", "ref_id": "b32", "title": "The 2017 davis challenge on video object segmentation", "year": "2017" }, { "authors": "Axel Sauer; Elie Aljalbout; Sami Haddadin", "journal": "", "ref_id": "b33", "title": "Tracking holistic object representations", "year": "2019" }, { "authors": "Hongje Seong; Junhyuk Hyun; Euntai Kim", "journal": "", "ref_id": "b34", "title": "Kernelized memory network for video object segmentation", "year": "2020" }, { "authors": "Hongje Seong; Seoung Wug Oh; Joon-Young Lee; Seongwon Lee; Suhyeon Lee; Euntai Kim", "journal": "", "ref_id": "b35", "title": "Hierarchical memory matching network for video object segmentation", "year": "2021" }, { "authors": "Ėrnest Borisovich; Vinberg ", "journal": "American Mathematical Soc", "ref_id": "b36", 
"title": "A course in algebra", "year": "2003" }, { "authors": "Paul Voigtlaender; Bastian Leibe", "journal": "", "ref_id": "b37", "title": "Online adaptation of convolutional neural networks for video object segmentation", "year": "2017" }, { "authors": "Paul Voigtlaender; Yuning Chai; Florian Schroff; Hartwig Adam; Bastian Leibe; Liang-Chieh Chen", "journal": "", "ref_id": "b38", "title": "Feelvos: Fast end-to-end embedding learning for video object segmentation", "year": "2019" }, { "authors": "Huaxin Xiao; Jiashi Feng; Guosheng Lin; Yu Liu; Maojun Zhang", "journal": "", "ref_id": "b39", "title": "Monet: Deep motion exploitation for video object segmentation", "year": "2018" }, { "authors": "Huaxin Xiao; Bingyi Kang; Yu Liu; Maojun Zhang; Jiashi Feng", "journal": "Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b40", "title": "Online meta adaptation for fast video object segmentation", "year": "2019" }, { "authors": "Haozhe Xie; Hongxun Yao; Shangchen Zhou; Shengping Zhang; Wenxiu Sun", "journal": "", "ref_id": "b41", "title": "Efficient regional memory network for video object segmentation", "year": "2021" }, { "authors": "Ning Xu; Linjie Yang; Yuchen Fan; Dingcheng Yue; Yuchen Liang; Jianchao Yang; Thomas Huang", "journal": "", "ref_id": "b42", "title": "Youtube-vos: A large-scale video object segmentation benchmark", "year": "2018" }, { "authors": "Zongxin Yang; Yi Yang", "journal": "", "ref_id": "b43", "title": "Decoupling features in hierarchical propagation for video object segmentation", "year": "2022" }, { "authors": "Zongxin Yang; Yunchao Wei; Yi Yang", "journal": "", "ref_id": "b44", "title": "Collaborative video object segmentation by foreground-background integration", "year": "2020" }, { "authors": "Zongxin Yang; Yunchao Wei; Yi Yang", "journal": "", "ref_id": "b45", "title": "Associating objects with transformers for video object segmentation", "year": "2021" }, { "authors": "Zongxin Yang; Yunchao Wei; Yi Yang", "journal": "Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b46", "title": "Collaborative video object segmentation by multi-scale foreground-background integration", "year": "2021" }, { "authors": "Mao Yunyao; Wang Ning; Zhou Wengang; Li Houqiang", "journal": "", "ref_id": "b47", "title": "Joint inductive and transductive learning for video object segmentation", "year": "2021" }, { "authors": "Lu Zhang; Zhe Lin; Jianming Zhang; Huchuan Lu; You He", "journal": "", "ref_id": "b48", "title": "Fast video object segmentation via dynamic targeting network", "year": "2019" }, { "authors": "Peng Zhang; Li Hu; Bang Zhang; Pan Pan; Alibaba", "journal": "", "ref_id": "b49", "title": "Spatial consistent memory network for semi-supervised video object segmentation", "year": "2020" }, { "authors": "Tianfei Zhou; Fatih Porikli; David J Crandall; Luc Van Gool; Wenguan Wang", "journal": "Transactions on Pattern Analysis and Machine Intelligence (TPAMI)", "ref_id": "b50", "title": "A survey on deep learning technique for video segmentation", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 24.97, 32.75, 341.84, 104.89 ], "formula_id": "formula_0", "formula_text": "✗ Discard Update I q Query Encoder k q v q Space-Time Memory F Top-k Softmax Scatter indices W v m v q Query Decoder skip-connections External Memory k m 1 k m 2 ... k m N K m v m 1 v m 2 ... v m N V m Memory Encoder M q A I q k q,m v q,m READMem REA LSB DME k m 1 ✗ ✗ { k m n } N n=1 k q,m K m W" }, { "formula_coordinates": [ 5, 25.51, 65.84, 365.95, 31.17 ], "formula_id": "formula_1", "formula_text": "M k = k m n N n=1 and a set of memory values M v = v m n N n=1" }, { "formula_coordinates": [ 5, 179.99, 199.29, 211.47, 12.22 ], "formula_id": "formula_2", "formula_text": "F = (K m ) T k q ,(1)" }, { "formula_coordinates": [ 5, 25.51, 220.84, 365.95, 25.57 ], "formula_id": "formula_3", "formula_text": "K m ∈ R C k ×NHW denotes the matrix obtained from concatenating every k m n ∈ M k along the column dimension, such that K m = k m n=1 , k m n=2 , . . . , k m n=N ." }, { "formula_coordinates": [ 5, 98.26, 270.76, 293.2, 53.33 ], "formula_id": "formula_4", "formula_text": "S i, j =        F i, j , if F i, j ∈ argmax F j ⊂{ f | f ∈F * , j },|Fj|=k ∑ f ∈F j f 0 , otherwise ,(2)" }, { "formula_coordinates": [ 5, 159.68, 378.11, 231.78, 28.03 ], "formula_id": "formula_5", "formula_text": "W i, j = exp(S i, j ) ∑ NHW l=1 exp(S l, j ) .(3)" }, { "formula_coordinates": [ 5, 25.51, 424.97, 365.95, 23.96 ], "formula_id": "formula_6", "formula_text": "v m ∈ R C v ×HW through v m = V m W, where V m ∈ R C k ×NHW is the concatenated form of the memory values v m" }, { "formula_coordinates": [ 6, 17.01, 257.53, 365.95, 69.23 ], "formula_id": "formula_7", "formula_text": "R C k HW × R C k HW → R with g(k m a , k m b ) := s a,b , we construct G by G(M k ) =    s 1,1 s 1,2 • • • s 1,N . . . . . . . . . . . . s N,1 s N,2 • • • s N,N    .(4)" }, { "formula_coordinates": [ 6, 31.95, 456.86, 340.91, 26.26 ], "formula_id": "formula_8", "formula_text": "G t = |det (G t n )| N n=2 . (2)" }, { "formula_coordinates": [ 7, 184.25, 312.98, 207.21, 13.09 ], "formula_id": "formula_9", "formula_text": "k m n = k m n T n ,(5)" }, { "formula_coordinates": [ 7, 179.82, 591.61, 211.64, 12.12 ], "formula_id": "formula_10", "formula_text": "T n ∈ R HW ×HW by dividing W ∈ R NHW ×HW into N-" }, { "formula_coordinates": [ 8, 101.87, 49.29, 281.09, 26.35 ], "formula_id": "formula_11", "formula_text": "T n,i, j = 1 , if i ∈ argmax w | w ∈ W n, * , j 0 , otherwise .(6)" }, { "formula_coordinates": [ 8, 17.01, 292.73, 365.95, 23.96 ], "formula_id": "formula_12", "formula_text": "in Fig- ure 1." }, { "formula_coordinates": [ 16, 114.67, 440.87, 219.31, 4.48 ], "formula_id": "formula_13", "formula_text": "#4941 #6057 #11139 #11895 #18252" } ]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b4", "b29", "b2", "b15", "b17", "b0", "b12", "b30", "b7", "b37", "b24", "b27", "b28", "b1", "b16", "b39", "b36", "b10", "b25", "b35", "b4", "b29", "b8", "b9", "b11", "b18", "b24", "b28", "b38", "b16", "b2", "b5", "b39", "b1", "b13", "b36", "b36", "b36", "b22", "b21", "b20", "b14" ], "table_ref": [], "text": "In recent years, deep neural networks (DNNs) have demonstrated outstanding performance and have proven to be highly expressive in a broad range of tasks, including semantic image segmentation (Chen et al., 2018;Pan et al., 2022). Semantic segmentation aims at segmenting objects in an image by assigning each pixel to a fixed and predefined set of semantic classes, providing comprehensive and precise information about the given scene. However, DNNs are vulnerable to adversarial attacks (Bar et al., 2021) which is very hazardous in safety-related applications like automated driving. Adversarial attacks are small perturbations added to the input image causing the DNN to perform incorrect predictions at test time. The perturbations are not perceptible to humans, making the detection of these examples very challenging, see for example Figure 1. This undesirable property of DNNs is a major security concern in real world applications. Hence, developing efficient strategies against adversarial attacks is of high importance. Such strategies can either increase the robustness of DNNs making it more difficult to generate adversarial examples (defense) or build on approaches to detect adversarial attacks (detection).\nAdversarial attacks have attracted much attention, and numerous attacks as well as detection strategies arXiv:2305.12825v2 [cs.CV] 15 Jan 2024 have been proposed (Khamaiseh et al., 2022). However, adversarial examples have not been analyzed extensively beyond standard image classification models, often using small datasets such as MNIST (LeCun and Cortes, 2010) or CIFAR10 (Krizhevsky, 2009). The vulnerability of modern DNNs to attacks in more complex tasks like semantic segmentation in the context of real datasets from different domains has been rather poorly explored. The attacks which have been tested so far on semantic segmentation networks can be divided roughly into three categories. The first approaches transfer common attacks from image classification to semantic segmentation, i.e., pixel-wise classification (Agnihotri and Keuper, 2023;Gu et al., 2022;Rony et al., 2022). Other works (Cisse et al., 2017;Xie et al., 2017) have presented attacks specifically designed for the task of semantic segmentation, either attacking the input in a way that leads to the prediction of some predefined and image-unrelated segmentation mask or the complete deletion of one segmentation class (Metzen et al., 2017). Such attacks are more difficult to detect than attacks that perturb each pixel independently. The third group of attacks creates rectangular patches smaller than the input size, which leads to an erroneous prediction of the entire image (Nakka and Salzmann, 2020;Nesti et al., 2022). Defense methods aim to be robust against such attacks, i.e., to achieve high prediction accuracy even on perturbed images. For semantic segmentation tasks, defense approaches are often robust against only one type of attack (Arnab et al., 2018;Klingner et al., 2020;Yatsura et al., 2022). 
In contrast, detection methods aim at classifying an input as clean or perturbed based on the output of the segmentation model (Xiao et al., 2018).\nIn this paper, we present an uncertainty-based approach for detecting several kinds of adversarial attacks on semantic image segmentation models. Uncertainty information has already been exploited for the detection of adversarial attacks on classification DNNs, but has not been investigated in the context of segmentation so far. In (Feinman et al., 2017) an approximation to Bayesian inference (Monte-Carlo Dropout) is proposed, which is widely employed to estimate model uncertainty, for the detection of adversarial attacks. The gradient-based approach introduced in (Michel and Ewetz, 2022) generates salient features which are used to train a detector. While both methods require access to the inside of the model, our approach can be applied as a post-processing step using only information of the network output. We construct features per image based on uncertainty information provided by the DNN such as the entropy of the output distribution. In Figure 1 (c), the entropy heatmaps for a clean (left) and a perturbed image (right) are shown, indicating high uncertainties in the attacked regions, motivating the use of uncertainty information to separate clean and perturbed images. On the one hand, these features for clean and perturbed inputs are fed into an one-class support vector machine that performs unsupervised novelty detection (Weerasinghe et al., 2018). On the other hand, we train a logistic regression model with clean and perturbed data for classification. The perturbed data used during training is generated by only one kind of adversarial attack method, while the detector is applied to identify adversarial examples of other methods. Our approach neither modifies the semantic segmentation model nor requires knowledge of the process for generating adversarial examples. We only assume that our post-processing model is kept private while the attacker may have full access to the semantic segmentation model.\nIn our tests, we employ state-of-the-art semantic segmentation networks (Chen et al., 2018;Pan et al., 2022) applied to the Cityscapes (Cordts et al., 2016) as well as Pascal VOC2012 dataset (Everingham et al., 2012) demonstrating our adversarial attack detection performance. To this end, we consider different types of attackers, from pixel-level attacks designed for image classification (Goodfellow et al., 2015;Kurakin et al., 2017) to pixel-wise attacks developed for semantic segmentation (Metzen et al., 2017) and patch-based ones (Nesti et al., 2022). The source code of our method is publicly available at https://github.com/kmaag/ Adversarial-Attack-Detection-Uncertainty. Our contributions are summarized as follows:\n• We introduce a new uncertainty-based approach for the detection of adversarial attacks for the semantic image segmentation task. In a thorough empirical analysis, we demonstrate the capability of uncertainty measures to distinguish between clean and perturbed images. Our approach serves as a light-weight post-processing step, i.e., we do not modify the model or need knowledge of the process for generating adversarial examples.\n• For the first time, we present a detection method that was not designed for a specific adversarial attack, rather has a high detection capability across multiple types. 
We achieve averaged detection accuracy values of up to 100% for different network architectures and datasets.\nIn this section, we discuss the related works on defense and detection methods for the semantic segmentation task. Defense methods aim to achieve high prediction accuracy even on perturbed images, while detection methods classify the model input as clean or attacked image. A dynamic divide-and-conquer strategy (Xu et al., 2021) and multi-task training (Klingner et al., 2020), which extends supervised semantic segmentation by a self-supervised monocular depth estimation using unlabeled videos, are considered as adversarial training approaches enhancing the robustness of the networks. Another kind of defense strategy is input denoising to remove the perturbation from the input without the necessity to re-train the model. In (Bar et al., 2021) image quilting and the non-local means algorithm are presented as input transformation techniques. To denoise the perturbation and restore the original image, a denoise autoencoder is used in (Cho et al., 2020). The demasked smoothing technique, introduced in (Yatsura et al., 2022), reconstructs masked regions of each image based on the available information with an inpainting model defending against patch attacks. Another possibility to increase the robustness of the model is during inference. In (Arnab et al., 2018) is shown how mean-field inference and multi-scale processing naturally form an adversarial defense. The non-local context encoder proposed in (He et al., 2019) models spatial dependencies and encodes global contexts for strengthening feature activations. From all pyramid features multi-scale information is fused to refine the prediction and create segmentation. The presented works up to now are defense methods improving the robustness of the model. To the best of our knowledge so far only one work focuses on detecting adversarial attacks on segmentation models, i.e., the patchwise spatial consistency check which is introduced in (Xiao et al., 2018).\nThe described defense approaches are created for and tested only on a specific type of attack. The problem is that you assume a high model robustness, however, the defense method may perform poorly on new unseen attacks and does not provide a statement about this insecurity. Therefore, we present an uncertaintybased detection approach which shows strong results over several types of adversarial attacks. The presented detection approach (Xiao et al., 2018) is only tested on perturbed images attacked in such a way that a selected image is predicted. The spatial consistency check randomly selects overlapping patches to obtain pixel-wise confidence vectors. In contrast, we use only information of the network output from one inference and not from (computationally expensive) multiple runs of the network. For these reasons, the detection approach introduced in (Xiao et al., 2018) cannot be considered as a suitable baseline.\nPost-processing classification models as well as simple output-based methods are used for false positive detection (Maag et al., 2020;Maag and Rottmann, 2023) and out-of-distribution segmentation (Maag et al., 2022;Hendrycks and Gimpel, 2016), but have not been investigated for adversarial examples." 
}, { "figure_ref": [], "heading": "ADVERSARIAL ATTACKS", "publication_ref": [ "b11", "b18", "b23", "b3", "b26", "b0", "b12", "b30", "b7", "b37", "b24", "b24" ], "table_ref": [], "text": "For the generation of adversarial examples, we distinguish between white and black box attacks. White box attacks are created based on information of the victim model, i.e., the adversarial attacker has access to the full model, including its parameters, and knows the loss function used for training. In contrast, black box attackers have zero knowledge about the victim model. The idea behind these type of attacks is transferability, i.e., an adversarial example generated from another model works well with the victim one. The attacks described in the following belong to the white box setting and were proposed to attack semantic segmentation models.\nAttacks on Pixel-wise Classification The attacks described in this paragraph were originally developed for image classification and were (in a modified version) applied to semantic segmentation. For semantic segmentation, given an image x a neural network with parameters w provides pixel-wise probability distributions f (x; w) i j over a label space C = {y 1 , . . . , y c } per spatial dimension (i, j). The single-step untargeted fast gradient sign method (FGSM, (Goodfellow et al., 2015)) creates adversarial examples by adding perturbations to the pixels of an image x with (one-hot encoded) label y that leads to an increase of the loss L (here cross entropy), that is\nx adv i j = x i j + ε • sign(∇ x L i j ( f (x; w) i j , y i j )) ,(1)\nwhere ε is the magnitude of perturbation. The singlestep targeted attack with target label y ll instead decreases the loss for the target label and is given by\nx adv i j = x i j -ε • sign(∇ x L i j ( f (x; w) i j , y ll i j )) .(2)\nFollowing the convention, the least likely class predicted by the model is chosen as target class. This attack is extended in (Kurakin et al., 2017) in an iterative manner (I-FGSM) to increase the perturbation strength\nx adv i j,t+1 = (3) clip x,ε x adv i j,t + α • sign(∇ x adv t L i j ( f (x adv t ; w) i j , y i j ))\nwith x adv 0 = x, step size α, and a clip function ensuring that x adv t ∈ [x -ε, x +ε]. The targeted case (see eq. ( 2)) can be formulated analogously. Based on these attacks, further methods for pixel-wise perturbations in the classification context have been proposed such as projected gradient descent (Madry et al., 2018;Bryniarski et al., 2022) and DeepFool (Moosavi-Dezfooli et al., 2016). Some of these approaches have been further developed and adapted to semantic segmentation (Agnihotri and Keuper, 2023;Gu et al., 2022;Rony et al., 2022).\nStationary Segmentation Mask Attacks Another type of attacks are so called stationary segmentation mask methods (SSMM) where the pixels of a whole image are iteratively attacked until most of the pixels have been mis-classified into the target class (Cisse et al., 2017;Xie et al., 2017). For each spatial dimension (i, j) ∈ I , the loss function per image x is given by\nL( f (x; w), y) = 1 |I | ∑ (i, j)∈I L i j ( f (x; w) i j , y i j ) .(4)\nIn (Metzen et al., 2017), the universal perturbation is introduced to achieve real-time performance for the attack at test time. To this end, training inputs D train = {x (k) , y (k),target } m k=1 are generated where y (k),target defines a fixed target segmentation. 
The universal noise in iteration t + 1 is computed by\nξ t+1 = clip ε ξ t (5) -α • sign( 1 m m ∑ k=1 ∇ x L( f (x (k) + ξ t ; w), y (k),target )\nwith ξ 0 = 0. The loss of pixels which are predicted as belonging to the desired target class with a confidence above a threshold τ are set to 0. At test time, this noise is added to the input image and does not require multiple calculations of the backward pass.\nThe dynamic nearest neighbor method (DNNM) presented in (Metzen et al., 2017) aims to keep the network's segmentation unchanged but to remove a desired target class. Let o be the object class being deleted and ŷ(x) i j = arg max y∈C f (x; w) y i j the predicted class, where f (x; w) y i j denotes the probability the model assigns for the pixel at position (i, j) to belong to class y, then I o = {(i, j)| ŷ(x) i j = o} and I ō = I \\ I o . The target label is chosen by 2 for all (i, j) ∈ I o and y target i j = ŷ(x) i j for all (i, j) ∈ I ō. Since the loss function described in eq. ( 4) weights all pixels equally though both objectives, i.e., hiding a object class and being unobtrusive are not necessarily equally important, a modified version of the loss function with weighting parameter ω is given by\ny target i j = ŷ(x) i ′ j ′ with arg min (i ′ , j ′ )∈I ō (i ′ -i) 2 + ( j ′ -j)\nL ω ( f (x; w), y) = 1 |I | (ω ∑ (i, j)∈I o L i j ( f (x; w) i j , y target i j ) + (1 -ω) ∑ (i, j)∈I ō L i j ( f (x; w) i j , y target i j\n)) .\n(6)\nNote, the universal perturbation can also be computed for the DNNM." }, { "figure_ref": [], "heading": "Patch-based Attacks", "publication_ref": [ "b27", "b28", "b28" ], "table_ref": [], "text": "The idea behind patch attacks is that perturbing a small region of the image causes prediction errors in a much larger region (Nakka and Salzmann, 2020). In (Nesti et al., 2022), the expectation over transformation (EOT)based patch attack is introduced to create robust adversarial examples, i.e., individual adversarial examples that are at the same time adversarial over a range of transformations. Transformations occurring in the real world are for instance angle and viewpoint changes. These perturbations are modeled within the optimization procedure and an extension of the pixelwise cross entropy loss is additionally presented in (Nesti et al., 2022) to enable crafting strong patches for the semantic segmentation setting." }, { "figure_ref": [ "fig_1" ], "heading": "DETECTION METHOD", "publication_ref": [ "b32", "b31", "b33" ], "table_ref": [], "text": "Our method does not alter the semantic segmentation model, nor does it require knowledge of the adversarial example generation process. While the attacker may have full access to the semantic segmentation model, we only assume that our post-processing model is kept secret or not attacked. Our approach can be applied to any semantic segmentation network serving as a post-processing step using only information of the network output. In Figure 2 an overview of our approach is given. 
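As a compact sketch of this pipeline (the feature aggregation and the classifiers are detailed below; the function and method names here are illustrative assumptions):

```python
def detect(image, seg_model, feature_fn, detector, kappa=0.5):
    """Flag an input as clean or attacked using a single segmentation inference.
    feature_fn aggregates the per-pixel softmax output into one image-level feature
    vector (see eqs. (7)-(10) below); detector is a fitted classifier whose second
    predict_proba column is interpreted as p(x), the probability of being clean."""
    probs = seg_model(image)                     # per-pixel class probabilities
    features = feature_fn(probs)
    p_clean = detector.predict_proba([features])[0, 1]
    return "clean" if p_clean >= kappa else "attacked"
```

At test time this adds only the evaluation of a small classifier on top of a single segmentation forward pass.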
The degree of uncertainty in a semantic segmentation prediction is quantified by pixel-wise dispersion measures like the entropy\nE(x) i j = -∑ y∈C f (x; w) y i j log f (x; w) y i j ,(7)\nthe variation ratio or the probability margin\nV (x) i j = 1 -max y∈C f (x; w) y i j ,(8)\nM(x) i j = V (x) i j + max y∈C \\{ ŷ(x) i j } f (x; w) y i j .(9)\nThe entropy heatmaps for a clean (left) and a perturbed image (right) are shown in Figure 1 (c) indicating that higher uncertainties occur in the attacked regions which motivates the use of uncertainty information to separate clean and perturbed data. To obtain uncertainty features per image from these pixel-wise dispersion measures, we aggregate them over a whole image by calculating the averages D = 1/|I | ∑ (i, j)∈I D(x) i j where D ∈ {E,V, M}. Moreover, we obtain mean class probabilities for each class y ∈ {1, . . . ,C}\nP(y|x) = 1 |I | ∑ (i, j)∈I f (x; w) y i j .(10)\nThe concatenation of this |C| + 3 features forms the feature vectors used in the following. We compute these image-wise features for a set of benign (and adversarially changed) images, which are then used to train classification models providing per image a probability p(x) of being clean (and not perturbed). We classify x as perturbed if p(x) < κ and as clean if p(x) ≥ κ, where κ is a predefined detection threshold. We explore different ways to construct such a classifier. First, we consider two basic outlier detection techniques which only require benign data, i.e., an oneclass support vector machine (OCSVM, (Schölkopf et al., 1999)) and an approach for detecting outliers in a Gaussian distributed dataset learning an ellipse (Rousseeuw and Driessen, 1999). Second, we consider the supervised logistic regression (LASSO, (Tibshirani, 1996)) as classification model trained on the features extracted for clean and perturbed images. Importantly, we do not require knowledge of the adversarial example generation process used by the attacker, instead we use attacked data generated by any (other) adversarial attack (cross-attack). While the OCSVM and the ellipse approach are unsupervised and outlier detection is a difficult task, the supervised cross-attack method has the advantage of having already seen other types of perturbed data. Third, we threshold only on the mean entropy Ē (which requires only to choose the threshold value) proposing a very basic uncertainty-based detector. Note, applying our detection method is light-weight, i.e., the feature computation is inexpensive and classification models are trained in advance so that only one inference run is added after semantic segmentation inference." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "First, we present the experimental setting and then evaluate our adversarial detection approach." }, { "figure_ref": [ "fig_2" ], "heading": "Experimental Setting", "publication_ref": [ "b8", "b4", "b6", "b40", "b29", "b1", "b2", "b13", "b16", "b38", "b18", "b24", "b28" ], "table_ref": [], "text": "Datasets We perform our tests on the Cityscapes (Cordts et al., 2016) dataset for semantic segmentation in street and on the Pascal VOC2012 (Everingham et al., 2012) (shorthand VOC) dataset of visual object classes in realistic scenes. The Cityscapes dataset consists of 2,975 training and 500 validation images of dense urban traffic in 18 and 3 different German towns, respectively. 
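Returning briefly to the detection features of the previous section, the following sketch shows how the |C|+3 features of eqs. (7)-(10) and the three classifier types could be realized; the use of scikit-learn estimators and the hyperparameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.covariance import EllipticEnvelope
from sklearn.linear_model import LogisticRegression

def image_features(probs):
    """probs: (C,H,W) softmax output of one image; returns the |C|+3 feature vector."""
    p_sorted = np.sort(probs, axis=0)                           # ascending along the class axis
    p_max, p_2nd = p_sorted[-1], p_sorted[-2]
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=0)      # eq. (7)
    variation_ratio = 1.0 - p_max                               # eq. (8)
    prob_margin = variation_ratio + p_2nd                       # eq. (9)
    mean_class_probs = probs.mean(axis=(1, 2))                  # eq. (10)
    return np.concatenate([[entropy.mean(), variation_ratio.mean(), prob_margin.mean()],
                           mean_class_probs])

# Detectors: the two unsupervised models are fit on clean features only, the supervised
# "CrossA" logistic regression on clean features plus features of one other attack.
ocsvm   = OneClassSVM(nu=0.05)
ellipse = EllipticEnvelope(contamination=0.05)
crossa  = LogisticRegression(penalty="l1", solver="liblinear")
```

The simple Entropy baseline thresholds directly on the first feature, the mean entropy. Returning to the datasets used in our experiments: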
The VOC dataset contains 1,464 training and 1,449 validation images with annotations for the various objects of categories person, animal, vehicle and indoor.\nSegmentation Networks We consider the state-ofthe-art DeepLabv3+ network (Chen et al., 2018) with Xception65 backbone (Chollet, 2017). Trained on Cityscapes, we achieve a mean intersection over union (mIoU) value of 78.93 on the validation set and trained on VOC, a validation mIoU value of 88.39. Moreover, we use the BiSeNet (Yu et al., 2018) trained on Cityscapes obtaining a validation mIoU of 70.32. We consider also two real-time models for the Cityscapes dataset, the DDRNet (Pan et al., 2022) Adversarial Attacks In many defense methods in semantic segmentation, the adapted FGSM and I-FGSM attack are employed (Arnab et al., 2018;Bar et al., 2021;He et al., 2019;Klingner et al., 2020;Xu et al., 2021). Thus, we study both attacks in our experiments with the parameter setting presented in (Kurakin et al., 2017). The magnitude of perturbation is given by ε = {4, 8, 16}, the step size by α = 1 and the number of iterations is computed as n = min{ε + 4, ⌊1.25ε⌋}. We denote the attack by FGSM # ε and the iterative one by I-FGSM # ε , # ∈ { , ll}, where the superscript discriminates between untargeted and targeted (here ll refers to \"least likely\"). For the re-implementation of SSMM and DNNM (Metzen et al., 2017), we use the parameters ε = 0.1 • 255, α = 0.01 • 255, n = 60 and τ = 0.75. For SSMM, the target image is chosen randomly for both datasets and for DNNM, the class person is to be deleted for the Cityscapes dataset. For the VOC dataset, the DNNM attack makes no sense, since on the input images often only one object or several objects of the same class are contained. For our experiments, we use a model zoo1 where we add the implementations of the adversarial attacks FGSM, I-FGSM, SSMM and DNNM. As we also use the pre-trained networks provided in the repository, we run experiments for the Cityscapes dataset on both models, DeepLabv3+ and HRNet, and for VOC on the DeepLabv3+ network.\nFor the patch attack introduced in (Nesti et al., 2022), we use the provided code with default parameters and consider two different segmentation models, BiSeNet and DDRNet, applied to the one tested real world dataset (Cityscapes). Since we use the cross-attack procedure (logistic regression) as detection model, i.e., we train the classifier on clean and perturbed data attacked by an attack other than the patch, we use the data obtained from the DeepLabv3+ to train the detector and test on the DDRNet. For the HRNet and the BiSeNet we proceed analogously, since in each case the prediction performance (in terms of mIoU) of both networks is similar.\nAs the Cityscapes dataset provides highresolution images (1024 × 2048 pixels) which require a great amount of memory to run a full backward pass for the computation of adversarial samples, we re-scale image size for this dataset to 512 × 1024 when evaluating. In Figure 3, a selection of these attacks is shown for the Cityscapes dataset and the DeepLabv3+ network (or DDRNet for the patch attack).\nEvaluation Metrics Our detection models provide per image a probability p(x) of being clean (and not perturbed). The image is then classified as attacked if the probability exceeds a threshold κ for which we tested 40 different values equally spaced in [0, 1]. The first evaluation metric we use is the averaged detection accuracy (ADA) which is defined as the proportion of images that are classified correctly. 
As this metric depends on a threshold κ, we report the optimal ADA score obtained by ADA * = max κ∈[0,1] ADA(κ). Secondly, we compute the area under the receiver operating characteristic curve (AUROC) to obtain a threshold independent metric. Lastly, we consider the true positive rate while fixing the false positive rate on clean images to 5% (TPR 5% ). " }, { "figure_ref": [ "fig_3", "fig_2", "fig_5", "fig_6", "fig_2", "fig_3" ], "heading": "Numerical Results", "publication_ref": [ "b30", "b36", "b31" ], "table_ref": [ "tab_0" ], "text": "In the following, we study the attack success performance and evaluate our adversarial attack detection performance.\nAttack Performance In order to assess the performance, i.e., the strength, of the attack-generating methods, we consider the attack pixel success rate (APSR) (Rony et al., 2022) defined by\nAPSR = 1 |I | ∑ (i, j)∈I arg max y∈C f (x adv ; w) y i j ̸ = y GT i j ,(11)\nwhere y GT i j denotes the true class of the pixel at location (i, j). Note that this metric is the opposite of the accuracy, as it focuses on falsely (and not correctly) predicted pixels. If we replace x adv in eq. (11) by the input image x, we obtain a measure of how well the semantic segmentation model performs on clean data. The values for this measure for the different networks and both datasets are given in Table 1. As expected, we observe small APSR scores: all values are below 6.84%.\nThe APSR results for various attacks on the Cityscapes dataset are shown in Figure 4. For all variations of the FGSM attack (untargeted vs. targeted, non-iterative vs. iterative) the APSR increases with larger perturbation magnitude. Moreover, targeted attacks lead to larger APSR values than their untargeted counterpart. The I-FGSM outperforms the FGSM due to the iterative procedure of individual perturbation steps. For the SSMM attack, the target is a randomly chosen image from the dataset. The examples shown in Figure 3 (a) and (f) indicate that the correct and target classes of the clean and the perturbed image coincide in several areas, such as the street or the sky, reflecting the nature of street scenes. This may explain the relatively low APSR values around 50%. For the DNNM attack, the APSR scores are comparatively small since most parts of the images are not perturbed but only one class is to be deleted. We observe that the patch attack has more or less impact depending on the model. Comparing the two models, we find that DeepLabv3+ is more robust against adversarial attacks, as the APSR values are mostly smaller than those of the HRNet network. A selection of qualitative results is shown in Appendix A.\nThe results for the VOC dataset given in Table 2 are qualitatively similar to the findings for the Cityscapes dataset. However, the outcome for the (targeted as well as untargeted) non-iterative FGSM attack differs, i.e., the APSR scores are not increasing with higher noise but stay at similar values. This observation is confirmed in the sample images in Appendix A, which show very little variation across the various magnitudes of noise. In summary, for both datasets and the different network architectures, most attacks achieve high APSR values and greatly alter the prediction. Thus, the detection of such attacks is extremely important. Evaluation of our Detection Method The defense approaches described above are created for and tested only on a specific type of attack. 
The sole presented detection approach (Xiao et al., 2018) is only tested on stationary segmentation mask methods and is computationally expensive due to the requirement of multiple runs of the network. For this reason, neither this detection approach nor the defense methods can be considered as suitable baselines.\nIn the following, we denote the single feature, mean entropy based classification by Entropy, the ordinary one-class support vector machine by OCSVM, the outlier detection method proposed in (Rousseeuw and Driessen, 1999) by Ellipse and the logistic regression model by CrossA. For training the regression model, we chose data resulting from the iterative targeted FGSM attack with a magnitude of noise of 2 as perturbed data (assuming that it might be advantageous to have malicious data stemming from an attack method with small perturbation strength, which makes the attack harder to detect). The resulting classifier is then evaluated against all other attacks. Note, the training of the detection models is light-weight and no knowledge of the process for generating adversarial examples is needed. For evaluating our detection models, we use cross-validation with 5 runs.\nThe detection results for the VOC dataset are shown in Figure 5 and for the Cityscapes dataset in Figure 6. We observe comparatively lower detection performance results for smaller perturbation magnitudes for the untargeted FGSM attacks which may be explained by the fact that weaker attacks lead to a change of prediction for a lower number of pixels and are thus more difficult to detect. Targeted attacks are better detected than untargeted attacks (with ADA * values over 80% for all models). This could be due to the procedure of picking the most unlikely class as target which results in larger changes of the uncertainty measures used as features. The detectors perform not that well on the adversarial examples resulting from the untargeted I-FGSM despite the strength of the attack. An inspection of these examples shows that during segmentation only a few classes are predicted (see Figure 3 (c)) with often low uncertainty for large connected components which complicates the distinction between clean and perturbed data. Interesting are the high detection results of up to 96.89% ADA * values (obtained by CrossA for the Cityscapes dataset and the DeepLabv3+ network) for the DNNM attack as the perturbation targets only a few pixels, (APSR values around 15%) and is therefore difficult to detect. For the patch attack, it is noticeable that the detection performance for the DeepLabv3+ network on the Cityscapes dataset is low compared to the HRNet which is explained by the higher disturbance power of this attack on the HRNet, reflected in 36.11 percentage points higher APSR values. In general, the detection capability for attacks on the HRNet is stronger than for the DeepLabv3+, since the HRNet is more easily to attack (see higher APSR values in Figure 4). Generally, our experiments highlight the high potential of investigating uncertainty information for successfully detecting adversarial segmentation attacks. Already the basic method Entropy leads to high accuracies often outperforming OCSVM and Ellipse. However, across different attacks and datasets, CrossA achieves ADA * values of up to 100%. 
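For reference, the detection metrics summarized in Figures 5 and 6 can be computed from the per-image probability p(x) of being clean as in the following sketch; the scikit-learn utilities and variable names are assumptions, while the 40-point threshold grid follows the setting described above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def detection_metrics(p_clean, is_clean):
    """p_clean: per-image probability of being clean; is_clean: 1 for clean, 0 for attacked."""
    p_clean, is_clean = np.asarray(p_clean), np.asarray(is_clean)
    # ADA*: best averaged detection accuracy over 40 equally spaced thresholds kappa
    thresholds = np.linspace(0.0, 1.0, 40)
    ada_star = max(((p_clean >= k) == is_clean).mean() for k in thresholds)
    # AUROC for detecting attacked images (positive class = attacked)
    auroc = roc_auc_score(1 - is_clean, 1.0 - p_clean)
    # TPR at 5% false positive rate on clean images
    fpr, tpr, _ = roc_curve(1 - is_clean, 1.0 - p_clean)
    tpr_5 = tpr[np.searchsorted(fpr, 0.05, side="right") - 1]
    return ada_star, auroc, tpr_5
```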
Thus, our light-weight and uncertainty-based detection approach should be considered as baseline for future methods.\nIn this work, we introduced a new uncertainty-based approach for the detection of adversarial attacks on semantic image segmentation tasks. We observed that uncertainty information as given by the entropy behaves differently on clean and perturbed images and used this property to distinguish between the two cases with very basic classification models. Our approach works in a light-weight and post-processing manner, i.e., we do not modify the model nor need knowledge of the process used by the attacker for generating adversarial examples. We achieve averaged detection accuracy values of up to 100% for different network architectures and datasets. Moreover, it has to be pointed out, that our proposed detection approach is the first that was not designed for a specific adversarial attack, but has a high detection capability across multiple types. Given the high detection accuracy and the simplicity of the proposed approach, we are convinced, that it should serve as simple baseline for more elaborated but computationally more expensive approaches developed in future." }, { "figure_ref": [ "fig_7", "fig_8", "fig_9", "fig_10", "fig_11", "fig_12" ], "heading": "APPENDIX A More Adversarial Examples", "publication_ref": [], "table_ref": [], "text": "In this section, we provide qualitative results for the considered attacks. In Figure 7 and Figure 8 semantic segmentation predictions for an example image from the Cityscapes dataset are shown and in Figure 9 for an example image from the VOC dataset. For Cityscapes, we observe more pixel changes for the FGSM attack for increasing noise. For the untargeted I-FGSM attack, the predictions result in less different classes across datasets and network architectures. In general, for the weaker HRNet in comparison to the DeepLabv3+, the perturbations are more visible. Even the non-iterative FGSM attacks and also the patch attack show great success.\nIn Figure 10, Figure 11, and Figure 12, the corresponding entropy heatmaps for the clean images and the perturbed ones are given. The heatmaps for the targeted and untargeted FGSM attack still look similar to the heatmap for the clean image. These attacks change the prediction though, however, shapes are still recognizable. High uncertainties can also be seen in the areas where the attack was successful. For the iterative untargeted FGSM attack, the heatmaps are dark with only high values on the segment boundaries. In this case, the perturbation can be detected as the average uncertainty per image deviates downward from the clean prediction. The iterative targeted attack shows the highest uncertainty, since the predictions consist of comparatively large numbers of different segments and classes. Similar to the clean image, the SSMM and DNNM attacks indicate higher uncertainties in the background while the certainty in the prediction of the road and the cars is high. The patch attack has different effects on both networks, i.e. for the DeepLabv3+, is uncertainty higher in general and especially at the patch boundaries and for the HRNet, the impact on the prediction is stronger, which is also reflected in the heatmaps. 
" }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENTS", "publication_ref": [], "table_ref": [], "text": "This work is supported by the Ministry of Culture and Science of the German state of North Rhine-Westphalia as part of the KI-Starter research funding program and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -EXC-2092 CASA -390781972." } ]
State-of-the-art deep neural networks have proven to be highly powerful in a broad range of tasks, including semantic image segmentation. However, these networks are vulnerable to adversarial attacks, i.e., non-perceptible perturbations added to the input image causing incorrect predictions, which is hazardous in safety-critical applications like automated driving. Adversarial examples and defense strategies are well studied for the image classification task, while there has been limited research in the context of semantic segmentation. However, first works show that the segmentation outcome can be severely distorted by adversarial attacks. In this work, we introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation. We observe that uncertainty, as for example captured by the entropy of the output distribution, behaves differently on clean and perturbed images and leverage this property to distinguish between the two cases. Our method works in a light-weight and post-processing manner, i.e., we do not modify the model or need knowledge of the process used for generating adversarial examples. In a thorough empirical analysis, we demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
Uncertainty-Based Detection of Adversarial Attacks in Semantic Segmentation
[ { "figure_caption": "Figure 1: Semantic segmentation prediction and entropy heatmap for clean (left) and perturbed image (right) generated by a dynamic target attack for hiding pedestrians.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Schematic illustration of our detection method. The adversarial attacker can have full access to the semantic segmentation model. Information from the network output is extracted to construct the features which serve as input to the detector model classifying between clean and perturbed images.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Input image (a) with corresponding ground truth (e). Semantic segmentation prediction for clean (b) and perturbed image generated by an untargeted (c) and a targeted FGSM attack (d) as well as by SSMM (f), DNNM (g) and patch attack (h).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: APSR results for the Cityscapes dataset and both networks perturbed by different attacks.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) and (f) indicate, that", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Detection performance results for the VOC dataset and the DeepLabv3+ network.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Detection performance results for DeepLabv3+ (left) and the HRNet (right) trained on the Cityscapes dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Input image (a) with corresponding ground truth (b) for the Cityscapes dataset. Semantic segmentation prediction obtained by the DeepLabv3+ network for a clean image (c) and perturbed images generated by various attacks (d)-(r).", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Input image (a) with corresponding ground truth (b) for the Cityscapes dataset. Semantic segmentation prediction obtained by the HRNet network for a clean image (c) and perturbed images generated by various attacks (d)-(r).", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Input image (a) with corresponding ground truth (b) for the VOC dataset. Semantic segmentation prediction obtained by the DeepLabv3+ network for a clean image (c) and perturbed images generated by various attacks (d)-(p).", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Input image (a) with corresponding ground truth (b) for the Cityscapes dataset. Entropy heatmaps obtained by the DeepLabv3+ network for a clean image (c) and perturbed images generated by various attacks (d)-(r).", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Input image (a) with corresponding ground truth (b) for the Cityscapes dataset. 
Entropy heatmaps obtained by the HRNet network for a clean image (c) and perturbed images generated by various attacks (d)-(r).", "figure_data": "", "figure_id": "fig_11", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Input image (a) with corresponding ground truth (b) for the VOC dataset. Entropy heatmaps obtained by the DeepLabv3+ network for a clean image (c) and perturbed images generated by various attacks (d)-(p).", "figure_data": "", "figure_id": "fig_12", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "APSR results for the semantic segmentation predictions on clean data.", "figure_data": "DeepLabv3+ DDRNet HRNet BiSeNetCityscapes6.844.005.485.26VOC2.92---", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "APSR results for the VOC dataset and the DeepLabv3+ network perturbed by different attacks.", "figure_data": "FGSM 4FGSM ll 4I-FGSM 4I-FGSM ll 4SSMM17.8119.6856.7364.3062.99FGSM 8FGSM ll 8I-FGSM 8I-FGSM ll 817.9319.6674.9482.51FGSM 16 FGSM ll 16I-FGSM 16 I-FGSM ll 1616.6717.6580.9891.68", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Kira Maag; Asja Fischer
[ { "authors": "S Agnihotri; M Keuper", "journal": "", "ref_id": "b0", "title": "Cospgd: a unified white-box adversarial attack for pixel-wise prediction tasks", "year": "2023" }, { "authors": "A Arnab; O Miksik; Torr ; P ", "journal": "", "ref_id": "b1", "title": "On the robustness of semantic segmentation models to adversarial attacks", "year": "2018" }, { "authors": "A Bar; J Lohdefink; N Kapoor; S Varghese; F Huger; P Schlicht; T Fingscheidt", "journal": "IEEE Signal Processing Magazine", "ref_id": "b2", "title": "The vulnerability of semantic segmentation networks to adversarial attacks in autonomous driving: Enhancing extensive environment sensing", "year": "2021" }, { "authors": "O Bryniarski; N Hingun; P Pachuca; V Wang; N Carlini", "journal": "", "ref_id": "b3", "title": "Evading adversarial example detection defenses with orthogonal projected gradient descent", "year": "2022" }, { "authors": "L.-C Chen; Y Zhu; G Papandreou; F Schroff; H Adam", "journal": "", "ref_id": "b4", "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "S Cho; T J Jun; B Oh; D Kim", "journal": "", "ref_id": "b5", "title": "Dapas : Denoising autoencoder to prevent adversarial attack in semantic segmentation", "year": "2020" }, { "authors": "F Chollet", "journal": "", "ref_id": "b6", "title": "Xception: Deep learning with depthwise separable convolutions", "year": "2017" }, { "authors": "M Cisse; Y Adi; N Neverova; J Keshet", "journal": "", "ref_id": "b7", "title": "Houdini: Fooling deep structured prediction models", "year": "2017" }, { "authors": "M Cordts; M Omran; S Ramos; T Rehfeld; M Enzweiler; R Benenson; U Franke; S Roth; B Schiele", "journal": "", "ref_id": "b8", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "M Everingham; L Van Gool; C K I Williams; J Winn; A Zisserman", "journal": "", "ref_id": "b9", "title": "The PASCAL Visual Object Classes Challenge 2012", "year": "2012" }, { "authors": "R Feinman; R R Curtin; S Shintre; A B Gardner", "journal": "", "ref_id": "b10", "title": "Detecting adversarial samples from artifacts", "year": "2017" }, { "authors": "I J Goodfellow; J Shlens; C Szegedy", "journal": "", "ref_id": "b11", "title": "Explaining and harnessing adversarial examples", "year": "2015" }, { "authors": "J Gu; H Zhao; V Tresp; Torr ; P ", "journal": "", "ref_id": "b12", "title": "Segpgd: An effective and efficient adversarial attack for evaluating and boosting segmentation robustness", "year": "2022" }, { "authors": "X He; S Yang; G Li; H Li; H Chang; Y Yu", "journal": "", "ref_id": "b13", "title": "Non-local context encoder: Robust biomedical image segmentation against adversarial attacks", "year": "2019" }, { "authors": "D Hendrycks; K Gimpel", "journal": "", "ref_id": "b14", "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "year": "2016" }, { "authors": "S Y Khamaiseh; D Bagagem; A Al-Alaj; M Mancino; H W Alomari", "journal": "IEEE Access", "ref_id": "b15", "title": "Adversarial deep learning: A survey on adversarial attacks and defense mechanisms on image classification", "year": "2022" }, { "authors": "M Klingner; A Bär; T Fingscheidt", "journal": "", "ref_id": "b16", "title": "Improved noise and attack robustness for semantic segmentation by using multi-task training with selfsupervised depth estimation", "year": "2020" }, { "authors": "A Krizhevsky", "journal": "", "ref_id": "b17", "title": 
"Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "A Kurakin; I J Goodfellow; S Bengio", "journal": "", "ref_id": "b18", "title": "Adversarial machine learning at scale", "year": "2017" }, { "authors": "Y Lecun; C Cortes", "journal": "", "ref_id": "b19", "title": "MNIST handwritten digit database", "year": "2010" }, { "authors": "K Maag; R Chan; S Uhlemeyer; K Kowol; H Gottschalk", "journal": "", "ref_id": "b20", "title": "Two video data sets for tracking and retrieval of out of distribution objects", "year": "2022" }, { "authors": "K Maag; M Rottmann", "journal": "SCITEPRESS -Science and Technology Publications", "ref_id": "b21", "title": "False negative reduction in semantic segmentation under domain shift using depth estimation", "year": "2023" }, { "authors": "K Maag; M Rottmann; H Gottschalk", "journal": "", "ref_id": "b22", "title": "Timedynamic estimates of the reliability of deep semantic segmentation networks", "year": "2020" }, { "authors": "A Madry; A Makelov; L Schmidt; D Tsipras; A Vladu", "journal": "ICLR", "ref_id": "b23", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2018" }, { "authors": "J H Metzen; M C Kumar; T Brox; V Fischer", "journal": "", "ref_id": "b24", "title": "Universal adversarial perturbations against semantic image segmentation", "year": "2017" }, { "authors": "A Michel; R Ewetz", "journal": "SoutheastCon", "ref_id": "b25", "title": "Gradient-based adversarial attack detection via deep feature extraction", "year": "2022" }, { "authors": "S.-M Moosavi-Dezfooli; A Fawzi; P Frossard", "journal": "", "ref_id": "b26", "title": "Deepfool: A simple and accurate method to fool deep neural networks", "year": "2016" }, { "authors": "K K Nakka; M Salzmann", "journal": "", "ref_id": "b27", "title": "Indirect local attacks for context-aware semantic segmentation networks", "year": "2020" }, { "authors": "F Nesti; G Rossolini; S Nair; A Biondi; G C Buttazzo", "journal": "", "ref_id": "b28", "title": "Evaluating the robustness of semantic segmentation for autonomous driving against realworld adversarial patch attacks", "year": "2022" }, { "authors": "H Pan; Y Hong; W Sun; Y Jia", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b29", "title": "Deep dualresolution networks for real-time and accurate semantic segmentation of traffic scenes", "year": "2022" }, { "authors": "J Rony; J.-C Pesquet; I B Ayed", "journal": "", "ref_id": "b30", "title": "Proximal splitting adversarial attacks for semantic segmentation", "year": "2022" }, { "authors": "P J Rousseeuw; K V Driessen", "journal": "Technometrics", "ref_id": "b31", "title": "A fast algorithm for the minimum covariance determinant estimator", "year": "1999" }, { "authors": "B Schölkopf; R C Williamson; A Smola; J Shawe-Taylor; J Platt", "journal": "MIT Press", "ref_id": "b32", "title": "Support vector method for novelty detection", "year": "1999" }, { "authors": "R Tibshirani", "journal": "Journal of the Royal Statistical Society: Series B", "ref_id": "b33", "title": "Regression shrinkage and selection via the lasso", "year": "1996" }, { "authors": "J Wang; K Sun; T Cheng; B Jiang; C Deng; Y Zhao; D Liu; Y Mu; M Tan; X Wang; W Liu; Xiao ; B ", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b34", "title": "Deep high-resolution representation learning for visual recognition", "year": "2021" }, { "authors": "P S Weerasinghe; S M Erfani; T Alpcan; C Leckie; M Kuijper", "journal": 
"International Symposium on Mathematical Theory of Networks and Systems", "ref_id": "b35", "title": "Unsupervised adversarial anomaly detection using one-class support vector machines", "year": "2018" }, { "authors": "C Xiao; R Deng; B Li; F Yu; M Liu; D Song", "journal": "", "ref_id": "b36", "title": "Characterizing adversarial examples based on spatial consistency information for semantic segmentation", "year": "2018" }, { "authors": "C Xie; J Wang; Z Zhang; Y Zhou; L Xie; A L Yuille", "journal": "", "ref_id": "b37", "title": "Adversarial examples for semantic segmentation and object detection", "year": "2017" }, { "authors": "X Xu; H Zhao; J Jia", "journal": "", "ref_id": "b38", "title": "Dynamic divideand-conquer adversarial training for robust semantic segmentation", "year": "2021" }, { "authors": "M Yatsura; K Sakmann; N G Hua; M Hein; J H Metzen", "journal": "", "ref_id": "b39", "title": "Certified defences against adversarial patch attacks on semantic segmentation", "year": "2022" }, { "authors": "C Yu; J Wang; C Peng; C Gao; G Yu; N Sang", "journal": "", "ref_id": "b40", "title": "Bisenet: Bilateral segmentation network for real-time semantic segmentation", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 324.94, 591.86, 196.67, 12.92 ], "formula_id": "formula_0", "formula_text": "x adv i j = x i j + ε • sign(∇ x L i j ( f (x; w) i j , y i j )) ,(1)" }, { "formula_coordinates": [ 3, 324.94, 651.64, 196.67, 12.92 ], "formula_id": "formula_1", "formula_text": "x adv i j = x i j -ε • sign(∇ x L i j ( f (x; w) i j , y ll i j )) .(2)" }, { "formula_coordinates": [ 4, 81.66, 112.63, 204.66, 32.55 ], "formula_id": "formula_2", "formula_text": "x adv i j,t+1 = (3) clip x,ε x adv i j,t + α • sign(∇ x adv t L i j ( f (x adv t ; w) i j , y i j ))" }, { "formula_coordinates": [ 4, 88.36, 390.12, 197.96, 25.52 ], "formula_id": "formula_3", "formula_text": "L( f (x; w), y) = 1 |I | ∑ (i, j)∈I L i j ( f (x; w) i j , y i j ) .(4)" }, { "formula_coordinates": [ 4, 85.6, 497.92, 200.72, 40.96 ], "formula_id": "formula_4", "formula_text": "ξ t+1 = clip ε ξ t (5) -α • sign( 1 m m ∑ k=1 ∇ x L( f (x (k) + ξ t ; w), y (k),target )" }, { "formula_coordinates": [ 4, 75, 697.26, 211.32, 28.78 ], "formula_id": "formula_5", "formula_text": "y target i j = ŷ(x) i ′ j ′ with arg min (i ′ , j ′ )∈I ō (i ′ -i) 2 + ( j ′ -j)" }, { "formula_coordinates": [ 4, 310.46, 172.23, 197.81, 53.53 ], "formula_id": "formula_6", "formula_text": "L ω ( f (x; w), y) = 1 |I | (ω ∑ (i, j)∈I o L i j ( f (x; w) i j , y target i j ) + (1 -ω) ∑ (i, j)∈I ō L i j ( f (x; w) i j , y target i j" }, { "formula_coordinates": [ 4, 339.44, 659.57, 182.17, 21.72 ], "formula_id": "formula_7", "formula_text": "E(x) i j = -∑ y∈C f (x; w) y i j log f (x; w) y i j ,(7)" }, { "formula_coordinates": [ 4, 358.36, 705.65, 163.26, 18.17 ], "formula_id": "formula_8", "formula_text": "V (x) i j = 1 -max y∈C f (x; w) y i j ,(8)" }, { "formula_coordinates": [ 5, 100.44, 249.85, 185.88, 19.68 ], "formula_id": "formula_9", "formula_text": "M(x) i j = V (x) i j + max y∈C \\{ ŷ(x) i j } f (x; w) y i j .(9)" }, { "formula_coordinates": [ 5, 123.11, 410.6, 163.21, 25.52 ], "formula_id": "formula_10", "formula_text": "P(y|x) = 1 |I | ∑ (i, j)∈I f (x; w) y i j .(10)" }, { "formula_coordinates": [ 7, 78.68, 449.5, 207.64, 25.52 ], "formula_id": "formula_11", "formula_text": "APSR = 1 |I | ∑ (i, j)∈I arg max y∈C f (x adv ; w) y i j ̸ = y GT i j ,(11)" } ]
10.1145/3308560.3317593
2023-05-31
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b25", "b6", "b33", "b11", "b1", "b13", "b34", "b14" ], "table_ref": [], "text": "Recent research has shown that natural language processing (NLP) models are biased and systematically discriminate between people based on factors like ethnicity, gender, and others (Nangia et al., 2020;Elsafoury et al., 2022). The literature suggests four main sources of bias that have an impact on the fairness of NLP models: Label bias, Representation bias, sample bias, and Overamplification bias (Shah et al., 2020;Hovy and Prabhumoye, 2021). In the NLP literature, these sources of bias are typically categorized as Upstream bias, which includes representation bias, and Downstream bias, which includes Label, Sample and Overampflication bias.\nThe focus of studying bias in the NLP literature has been mainly on upstream bias and how it impacts the fairness of NLP models (Cao et al., 2022;Kaneko et al., 2022;Steed et al., 2022). In this work, we provide a holistic analysis of the different sources of bias and their impact on the fairness of the task of text classification. We first investigate the impact of upstream bias and its removal on the fairness of the task of text classification. Then, we investigate the impact of downstream bias and its removal on the fairness of text classification. We aim to find out the most impactful sources of bias and the most effective bias removal techniques to use to ensure the fairness of the task of text classification. To this end, this work aims to answer the following research questions: (RQ1) What is the impact of upstream bias and its removal on the fairness of text classification? (RQ2) What is the impact of downstream bias on the fairness of text classification? (RQ3) What is the impact of removing downstream bias on the fairness of text classification? (RQ4) What is the most effective downstream debias method?\nTo answer these questions, we first train three language models (LM) on the task of text classification ( §3). Then, we use group fairness metrics (Kusner et al., 2017) to measure the fairness, ( §4). After that, to answer RQ1 and to understand the impact of upstream bias and its removal on the fairness of the task of text classification ( §5), we measure upstream bias, remove it and measure its impact before and after removal on the fairness of the task of text classification. After that, we investigate downstream bias and its impact on the models' fairness ( §6) to answer RQ2. We then use different methods to remove the downstream bias ( §7) and investigate the impact of these debiasing methods ( §7.3) on the models' fairness to answer RQ3. Then, we analyse our results §7.4 to find out the most effective bias removal technique to answer RQ4 and to ensure the fairness of the task of text classification. Finally, to help the NLP community improve the fairness of text classification tasks, we build on our findings and provide practical guidelines ( §8) to follow to ensure the fairness of the downstream task of text classification. 
We also showcase these practical guidelines by applying them to the task of sentiment analysis ( §8.1).\nThe main contributions of this paper can be summarized as follows: (1) To the best of our knowledge, this is the first paper to study the impact of different sources of bias on the fairness of the task of text classification.\n(2) We provide empirical guidelines to have fairer text classification task.\nOur findings suggest that the dataset used in measuring fairness impacts the fairness scores, and using a balanced dataset improves the fairness scores. They also show that unlike the findings of previous research, our results suggest that there is a positive correlation between upstream bias and models' fairness. Our results demonstrate that downstream bias is more impactful than upstream on the fairness of the task of text classification, which is in line with previous research. Our results demonstrate that removing downstream bias improved the models' fairness. Our results also demonstrate that removing overampflication bias in the training dataset is the most effective downstream debiasing method and led to fairer text classification models. Improving the fairness of the downstream task of toxicity classification is very critical to ensure that the decisions made by the models are not based on sensitive attributes like race or gender." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b1", "b13", "b34", "b34", "b21", "b32", "b13", "b28", "b7", "b29" ], "table_ref": [], "text": "The impact of upstream bias on models' fairness in NLP models is not clear. Some researchers did not find evidence that upstream bias impacts models'fairness bias in language models (Cao et al., 2022;Kaneko et al., 2022;Steed et al., 2022). However, there are some limitations with those studies. For example, Steed et al. (2022) used upstream bias metrics that depend on sentences templates, these metrics have been criticized as they may not be semantically bleached (May et al., 2019). This means that the upstream bias scores measured in those studies are not reliable, and hence their findings are inconclusive. Different methods have been proposed for removing upstream bias from language models (Liang et al., 2020a;Schick et al., 2021). Kaneko et al. (2022) investigated the effectiveness of different upstream debiasing methods on the fairness of downstream tasks and found no positive impact. However, they do not investigate the effectiveness of removing downstream bias on the fairness of downstream NLP tasks. On the other hand, Prabhakaran et al. (2019); Fryer et al. (2022); Qian et al. (2022) use counterfactuals to remove downstream bias and improve the fairness of the text classification task. These methods fall short on understanding downstream bias and its impact on the fairness of text classification. As they do not investigate downstream bias and the impact of different debiasing methods to remove it on the fairness of text classification.\nIn this work, we aim to fill the gaps in the literature by investigating different sources of bias and their impact on the models' fairness in the downstream task of text classification. We also aim to overcome the limitations of previous research on investigating the impact of upstream (intrinsic) bias on the models' fairness (extrinsic bias) by using different metrics to measure them. Moreover, we investigate the impact of different debiasing techniques, to remove upstream and downstream biases, on the models' fairness. 
We also provide practical guidelines to ensure the fairness of the downstream task of text classification." }, { "figure_ref": [], "heading": "Text classification", "publication_ref": [ "b12", "b5", "b10", "b31", "b20", "b23", "b26", "b3", "b18", "b15", "b5" ], "table_ref": [], "text": "Jigsaw-Toxicity Dataset: We use the Jigsaw dataset (Jigsaw, 2021). The dataset contains almost 2M comments, labelled as toxic or not, along with labels on the identity of the target of the toxicity, e.g. religion, gender, and race. The identity labels provided in the dataset are both crowdsourced and automatically labelled. We keep only the data items with crowdsourced identity labels and follow the same data pre-processing steps used in (Elsafoury et al., 2021), where the authors train a BERT model for the task of cyberbullying detection. The final dataset after pre-processing contains 400K data items, and we split them into training 40%, validation 30% and test 30% sets.\nWe only use the Jigsaw dataset because, to the best of our knowledge, it is the only available toxicity dataset that contains information on both marginalized and non-marginalized identities, which is important to the way we measure fairness, as explained in Section 4. Other datasets, like Tox-iGen (Hartvigsen et al., 2022), SocialFrame (Sap et al., 2020), HateExplain (Mathew et al., 2021), the Ethos dataset (Mollas et al., 2022), and the MLM data (Ousidhoum et al., 2019) contain information only about marginalized groups, and thus cannot be used in our study.\nModels: For our investigation, we inspected three language models, BERT-base-uncased (Devlin et al., 2019), RoBERTa-base (Liu et al., 2019), and ALBERT-base (Lan et al., 2020) models. We fine-tuned them on the Jigsaw-toxicity training dataset. Following the experimental setting from (Elsafoury et al., 2021), the models were trained for 3 epochs, using a batch size of 32, a learning rate of 2e -5 , and a maximum text length of 61 tokens.\nText classification results indicate that ALBERT-base is the best-performing model, followed by RoBERTa-base and Bert-base with AUC-scores of 0.911, 0.908, and 0.902 respectively. The fine-tuned models are used to measure fairness.\n4 Fairness (extrinsic bias)" }, { "figure_ref": [], "heading": "Fairness evaluation", "publication_ref": [ "b34", "b2", "b0", "b6" ], "table_ref": [], "text": "To evaluate the fairness of the examined models on the task of text classification, we used two sets of extrinsic metrics: (i) Threshold-based (Steed et al., 2022;De-Arteaga et al., 2019), which use the absolute difference (gap) in the false positive rates (F P R) and true positive rates (T P R) between marginalized group g and non-marginalized group ĝ, as shown in Equations 1 and 2, and (ii) Threshold-agnostic metrics (Borkan et al., 2019), which measure the absolute difference in the area under the curve (AUC) scores between marginalized group g and non-marginalized group ĝ, as shown in Equation 3.\nF P R_gap g,ĝ = |F P Rg -F P R ĝ |\n(1)\nT P R_gap g,ĝ = |T P Rg -T P R ĝ | (2) AU C_gap g,ĝ = |AU Cg -AU C ĝ |(3)\nThese scores express the amount of unfairness in the models, with higher scores denoting unfair models and lower scores denoting fairer models. These metrics are measured between two groups, marginalized and non-marginalized, similar to the approach used by (Elsafoury et al., 2022). We limit our study to 3 sensitive attributes, i.e. gender, religion, and race. 
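As a sketch of how the gap metrics in Equations 1-3 can be computed from per-example model outputs (the function signatures, array layout, and use of scikit-learn are assumptions for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def group_rates(y_true, y_pred, y_score):
    """FPR, TPR, and AUC of the toxicity classifier restricted to one identity group."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tpr = np.mean(y_pred[y_true == 1] == 1)
    fpr = np.mean(y_pred[y_true == 0] == 1)
    return fpr, tpr, roc_auc_score(y_true, y_score)

def fairness_gaps(marginalized, non_marginalized):
    """Eqs. (1)-(3): absolute gaps between the two groups, each given as a
    (labels, predictions, scores) triple of the test examples targeted at that group."""
    fpr_g, tpr_g, auc_g = group_rates(*marginalized)
    fpr_n, tpr_n, auc_n = group_rates(*non_marginalized)
    return abs(fpr_g - fpr_n), abs(tpr_g - tpr_n), abs(auc_g - auc_n)
```

When a sensitive attribute covers several marginalized identity groups, the per-group rates are first averaged, as described next.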
In cases where there is more than one identity group in the marginalized group for a sensitive attribute, e.g. Asian and Black vs. White, we then measure the mean of the FPR, TPR, and AUC scores of the two groups and then use that score to represent the marginalized group (ĝ)." }, { "figure_ref": [], "heading": "Fairness dataset", "publication_ref": [ "b27" ], "table_ref": [], "text": "To measure fairness, we use the fine-tuned models to predict the labels of the test set. However, We found imbalanced representations of the different identity groups in our test set. For example, we found differences in the number of sentences that are targeted at the different groups, with different ratios of positive examples. We hypothesize that this imbalance in the dataset might have an impact on the measured fairness scores.\nTo test our hypothesis, we created a balanced toxicity fairness dataset and used it to measure fairness in the task of text classification. To create this balanced toxicity fairness dataset, we used lexical word replacement to create perturbations of existing sentences using regular expressions. That was possible with the Jigsaw dataset because after inspecting the most common nouns and adjectives used in each subset that targets a certain identity, we found that the most common words are words that describe that identity. For example, among the most common nouns to describe the samples that are targeted at Black people are the words \"black\" and \"blacks\". A similar pattern was found for religion and gender identities.\nWe created perturbations for each item in our datasets. So for the identity of Black people, in addition to the subset of sentences that are targeted at Black people, we create perturbations from the sentences that are targeted at white and Asian identities, replacing any references to white or Asian identities with Black identities. Then we do the same with the sentences that are targeted at White and Asian people. This way, we make sure that all the different racial identities in our dataset are represented in the same way. We repeat the same process for the identity groups in the gender and religionsensitive attributes. However, this approach is not suitable for gender perturbations, as pronouns also change between males and females. To this end, perturbations for the male and female identity groups we created using the AugLy tool, which swaps gender information including pronouns (Papakipos and Bitton, 2022). The balanced toxicity fairness dataset contains 55,476 samples and has the same ratio between the positive and the negative samples for each identity group within the same sensitive attribute. When we used the balanced dataset, we found that the fairness scores of the different models, measured by the different extrinsic metrics, improved on the balanced toxicity fairness dataset in comparison to the original fairness dataset (Table 1)." }, { "figure_ref": [], "heading": "Upstream bias", "publication_ref": [ "b33", "b25", "b24", "b21" ], "table_ref": [], "text": "Upstream bias, also known as intrinsic bias, describes the societal stereotypes that language models encoded during pre-training (Shah et al., 2020). We use three metrics to measure upstream bias, CrowS-Pairs (Nangia et al., 2020), StereoSet (Nadeem et al., 2021), and SEAT (May et al., 2019) to measure three types of social bias: gender, religion, and race." 
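As an illustration of the lexical word replacement used above to build the balanced toxicity fairness dataset, the following sketch swaps identity terms with regular expressions; the term lists and patterns shown here are assumptions, not the exact ones used for the Jigsaw data.

```python
import re

# Illustrative identity-term lists; the exact terms used for the Jigsaw data are assumptions.
RACE_TERMS = {"black": ["white", "asian"], "white": ["black", "asian"], "asian": ["black", "white"]}

def swap_identity(sentence, source, target):
    """Replace whole-word references to one identity (and its plural) with another."""
    return re.sub(rf"\b{source}(s?)\b", rf"{target}\1", sentence, flags=re.IGNORECASE)

def racial_perturbations(sentence, source):
    """One perturbed copy of the sentence for every other identity group in the attribute."""
    return {t: swap_identity(sentence, source, t) for t in RACE_TERMS[source]}
```

Religious identities are handled analogously, while gender perturbations, which also require pronoun swaps, were generated with the AugLy tool.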
}, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "What is the impact of upstream bias and its removal on the fairness of text classification?", "publication_ref": [ "b34", "b1", "b13", "b22", "b13", "b9" ], "table_ref": [], "text": "Similar to (Steed et al., 2022), we use Pearson correlation between the extrinsic bias scores measured by the different extrinsic bias metrics on the balanced toxicity-fairness-dataset and the upstream bias scores measured by the different intrinsic bias metrics (fig. 1) (Balanced). We found a consistent positive correlation between the CrowS-Pairs intrinsic bias scores with the extrinsic bias scores measured by all our three metrics (FPR_gap. TPR_gap and AUC_gap) for all our models and all the sensitive attributes. There is also a consistent negative correlation between SEAT scores and all extrinsic bias metrics. On the other hand, there is inconsistent correlation with the StereoSet scores. This finding is different from previous research that suggested that there is no correlation between intrinsic bias and extrinsic bias (Cao et al., 2022;Kaneko et al., 2022). We hypothesize that previous research did not use a balanced fairness dataset, which is why they did not find a consistent positive correlation with extrinsic bias metrics. To test our hypothesis, we calculate the Pearson correlation between intrinsic bias scores and extrinsic bias scores measured on the original unbalanced toxicity fairness dataset. In that case, we found no consistent correlation between intrinsic and extrinsic bias, which supports our hypothesis as shown in fig. 1 (Original). We, then, used SentDebias (Liang et al., 2020b) to remove biased representations from the models by making the representations orthogonal to the bias subspace. We remove gender, racial and religious bias from our models, following the same approach as (Meade et al., 2022). We fine-tuned our three debiased models on the Jigsaw training dataset and measured the fairness of the models. The results, table 3 (+upstream-sentDebias), indicate that removing upstream bias did not change the AUC scores much, but removing gender information increased slightly the AUC scores, especially for Bert and Roberta. This is because the debiased models tend to predict more positive examples, leading to more true positives. For most of the models, the majority of the extrinsic bias metrics show that removing a certain type of bias from the model representation did not improve fairness for the corresponding sensitive attribute. The same findings are made by (Kaneko et al., 2022). This could be because the current measures used to remove upstream bias are superficial (Gonen and Goldberg, 2019). So For the rest of the paper, we focus only on downstream bias." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Downstream bias", "publication_ref": [ "b33", "b33", "b4" ], "table_ref": [], "text": "According to (Shah et al., 2020), there are three sources of downstream bias: label bias, sample bias, and overampflication bias. Since we do not have information on the annotators of the Jigsaw dataset, we focus only on the other two sources of downstream bias. The first is sample bias, which is a result of non-representative observations in the datasets (Shah et al., 2020). For the task of text classification, we interpret sample bias as the overrepresentation of a certain identity group with the positive (toxic) class, as shown in Figure 2 marginalized and non-marginalised groups. 
We found that the highest sample bias scores are in the sensitive attribute of religion (0.077), followed by race (0.53) and finally gender(0.027).\nSample g,ĝ = |( Ng,toxicity=1 Ng ) -( N ĝ,toxicity=1 N ĝ )| (4) Overampf lication g,ĝ = |Ng -N ĝ | (5)\nThe second source of downstream bias is overampflication bias, which happens during the training of the NLP models. As the models rely on small differences between sensitive attributes regarding an objective function and amplify these differences to be more pronounced in the predicted outcomes. For example, marginalized groups co-occur with hateful contexts more often than non-marginalized groups in most hate speech datasets, which leads to NLP models to learn spurious correlations and associate between hate and marginalized groups regardless of the content (Dixon et al., 2018). In this paper, we aim to measure the overampflication bias in the training dataset before it gets amplified, as shown in Figure 3 (Original). To measure the overampflication bias, in the jigsaw training dataset, Equation 5, we measure the differences between the number of examples targeted at marginalized (N g ) vs. non-marginalized groups (N g ). Then the scores are normalized to the range [0, 1] using max normalization. The different sizes mean that certain identity groups appear in more semantic contexts than others. These contexts could be positive or negative. The highest overampflication bias scores were found in the sensitive attributes of religion (1), followed by race (0.97), and finally gender (0.94)." }, { "figure_ref": [], "heading": "What is the impact of the downstream bias on the fairness of text classification?", "publication_ref": [ "b13", "b34" ], "table_ref": [], "text": "To answer this research question, we investigate the impact of the two sources of downstream bias, sample and overampflication, on the fairness of text classification. We follow the work of (Kaneko et al., 2022;Steed et al., 2022) and use the correlation between bias scores and fairness scores to measure that impact.\nSample bias: To measure the impact the sample bias has on the fairness of the Text classification task (extrinsic bias), we use Pearson correlation to correlate between the extrinsic bias scores measured by the different metrics and the sample bias scores in the Jigsaw training dataset. We found that, for Albert-base and Roberta-base, the extrinsic bias scores correlate positively with the sample bias when measured by the TPR_gap , the AUC_gap and by FPR_gap. As for BERT-base we found almost no correlation between sample bias and FPR_gap. These results suggest that the sample bias in the training dataset has a direct effect on the fairness of our models, as evident by the positive correlations with the different extrinsic bias metrics.\nFor the impact of overampflication bias on the models' fairness (extrinsic) bias, we measure the Pearson correlation between the overampflication bias and the extrinsic bias scores using thresholdbased and threshold-agnostic metrics. For Albertbase and Roberta-base, we found a positive correlation between overampflication bias scores and the fairness scores measured by TPR_gap, FPR_gap and AUC_gap. As for Bert-base, we also found no correlation between the overampflicaitn bias scores and the FPRgap. 
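For concreteness, the bias scores of eqs. (4) and (5) and the correlation analysis used here can be sketched as follows; all numeric values in the example are hypothetical placeholders, not our measured scores.

```python
import numpy as np
from scipy.stats import pearsonr

def sample_bias(n_g, n_g_toxic, n_gh, n_gh_toxic):
    """Eq. (4): absolute difference of the toxic-example ratios of the two groups."""
    return abs(n_g_toxic / n_g - n_gh_toxic / n_gh)

def overampflication_bias(n_g, n_gh, n_max):
    """Eq. (5): absolute difference in the number of examples per group, max-normalized."""
    return abs(n_g - n_gh) / n_max

# Impact analysis: correlate the per-attribute bias scores with the corresponding
# fairness gaps of a fine-tuned model (placeholder values for illustration only).
bias_scores = np.array([0.02, 0.05, 0.08])   # e.g., gender, race, religion
tpr_gaps = np.array([0.04, 0.09, 0.11])      # e.g., TPR_gap per sensitive attribute
r, p_value = pearsonr(bias_scores, tpr_gaps)
```

The same computation is repeated per model and per extrinsic bias metric.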
These results suggest that overamplification bias in the training dataset has a direct effect on the fairness of our models.\nTo summarize our findings and to answer our research question, our results indicate that the two sources of downstream bias, sample and overamplification, have an impact on the fairness of the task of text classification. Overamplification bias seems to have a slightly stronger impact than sample bias, as evidenced by the stronger correlation scores shown in Table 2. It is important to mention that these correlations are not significant, but that is because we have only a few data points.\n7 Downstream bias removal" }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Sample bias removal", "publication_ref": [ "b33", "b37", "b19" ], "table_ref": [ "tab_4", "tab_4" ], "text": "According to (Shah et al., 2020), to remove sample bias, we need to realign the sample distribution to minimize the mismatch in the class representation between the different identities. To achieve that, we follow the same methodology used in (Zmigrod et al., 2019) and use data augmentation to add slightly altered sentences that balance the class representation in our training dataset. Since the percentages of the positive examples for the different identity groups are small, ranging from 0.05 to 0.2 as shown in Figure 2 (Original), we had to generate synthetic positive examples using existing positive examples in our dataset but with word substitutions, using the NLPAUG tool that uses contextual word embeddings to generate word substitutions (Ma, 2019). The newly generated training dataset contains balanced class representation, as shown in Figure 2 (Re-stratified). The size of the dataset after adding the synthesized data is 443,046 data items, with the difference in the ratios of positive examples between the identity groups being 0.002 for gender, 0.019 for race and 0.017 for religion.\nWe then fine-tune our models, AlBERT, BERT, and RoBERTa, on the new balanced dataset. Removing sample bias in the training dataset led to a reduction in the performance (AUC scores) of all three models (+downstream-stratified-data), as shown in Table 3. This reduction in the AUC scores is a result of predicting more positive examples. We analysed the fairness scores for all the sensitive attributes, using the different extrinsic bias metrics, in all our models. We found that for the AUC_gap metric, the fairness improved for all models and most of the sensitive attributes, as evident in Albert (race, religion), Bert (gender, race, religion), and Roberta (gender, race, religion) in Table 3. However, the results are inconsistent for the TPR_gap or FPR_gap." }, { "figure_ref": [ "fig_2" ], "heading": "Overamplification bias removal", "publication_ref": [ "b30", "b29", "b30" ], "table_ref": [ "tab_4", "tab_4", "tab_4" ], "text": "To remove overamplification bias, we need to make sure that there is no difference between the different groups regarding the semantic context in our dataset. To achieve that, we follow the work proposed in (Webster et al., 2020b) to have each of our identities, marginalized and non-marginalized, in similar semantic contexts so that the models would not associate certain contexts with certain groups.\nTo create the perturbations, we first fine-tuned a text-to-text model (Raffel et al., 2020) on the PANDA dataset (Qian et al., 2022) to automatically generate perturbations. We used the same values for the hyperparameters as Raffel et al.
(2020), and our text-to-text model achieved a ROUGE-2 score of 0.9, which is the same score reported in the original paper. However, upon inspection of the perturbed text, we found that the perturbed text is not consistently changing identity keywords and that it does not perform well on religious or racial attributes. Upon further inspection, we realised that the perturbed text in the PANDA dataset is inconsistent, and sometimes the perturbations are not correct.\nSo instead, we used the same method described in Section 4 to create the balanced toxicity fairness dataset. The size of the training dataset after perturbations is 382,212, and the ratio between the positive and the negative examples for each identity group within the same sensitive attribute is the same as in Figure 3 (Perturbed). For example, in the gender attribute, the ratio of the positive (toxic) examples in the male and female identity groups is 0.10; in the race attribute, the ratio of the positive examples for the Black, White and Asian groups is 0.2; and for the religion attribute, the ratio of the positive examples for the Muslim, Christian and Jewish groups is 0.10. Finally, we use the new perturbed dataset to fine-tune our models.\nAnother method proposed in the literature to remove bias is by removing biased subspaces in trained models. As an alternative to fine-tuning our models on perturbed text, we used SentDebias (Liang et al., 2020b) to remove the biased subspaces from our models after fine-tuning them on the Jigsaw dataset. To investigate the impact of removing overamplification bias on the fairness of the toxicity detection task, we use the threshold-based and threshold-agnostic metrics to measure the impact on the models' fairness for the task of text classification. Downstream-SentDebias: We start with the impact of removing the biased subspaces from the fine-tuned models. We find that the performance of the models after removing the biased representation (+downstream-sentDebias) is much worse, almost random, with the AUC scores close to 0.5 as shown in Table 3, which is expected since the model lost a lot of information related to the task of text classification along with the biased subspaces after fine-tuning. To simplify the analysis of the results, we investigate the impact of removing a certain type of bias on the fairness of the matching sensitive attribute. The results show that removing the biased subspaces after fine-tuning led to reduced fairness in all our models according to all our extrinsic bias metrics for almost all the sensitive attributes. However, these results come with a big loss in performance.\nData perturbation: As for the impact of fine-tuning the models on perturbed datasets, the results (+ downstream-perturbed-data) in Table 3 show that the performance slightly improved in all our models. Fine-tuning the models on the perturbed data made the models predict more positives, TPs and FPs, without hurting the TNs much, unlike the other inspected debiasing methods. When we investigate the fairness scores after fine-tuning the models on the perturbed dataset, we find that the different extrinsic bias metrics agree, in almost all the models for most of the sensitive attributes, that fairness improved.\nRemoving both sample and overamplification biases: We investigate the impact of removing both biases to remove downstream bias. So, we re-stratify the perturbed data, as explained in Sections 7.1 and 7.2.
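A rough sketch of how such a re-stratified perturbed training set could be assembled is shown below; the identity-swap dictionary, the model name and the nlpaug call are illustrative assumptions rather than the exact pipeline used here.

```python
# (i) swap identity terms so that groups share semantic contexts (perturbation),
# (ii) synthesize extra positive examples via contextual word substitution
#     (re-stratification). The swap map and model name are illustrative only.
import nlpaug.augmenter.word as naw

IDENTITY_SWAPS = {"muslim": "christian", "christian": "muslim",
                  "black": "white", "white": "black",
                  "women": "men", "men": "women"}

def perturb_identity(text: str) -> str:
    # naive token-level swap; a real pipeline must handle casing and inflection
    return " ".join(IDENTITY_SWAPS.get(tok.lower(), tok) for tok in text.split())

aug = naw.ContextualWordEmbsAug(model_path="bert-base-uncased", action="substitute")

def synthesize_positive(text: str) -> str:
    out = aug.augment(text)  # newer nlpaug versions return a list
    return out[0] if isinstance(out, list) else out
```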
The new stratified-perturbed dataset contains 841,814 sentences, where the ratio of the positive examples for all identity groups ranges from 0.47 to 0.49. We then fine-tune our models on the re-stratified-perturbed dataset to measure fairness (+downstream-stratified-perturbed-data).\nWhen we fine-tuned the models on the re-stratified perturbed data, we found that the performance is slightly worse for all three models, as shown in Table 3 (+ downstream-stratified-perturbed-data). Like most of the other debiasing techniques, fine-tuning on the re-stratified perturbed data caused the model to predict more positives, but especially more FPs in Albert and Roberta. When we investigated the fairness scores, we found that the AUC_gap consistently improved across all models and for almost all the sensitive attributes: Albert (race, religion), BERT (gender, race, religion), and Roberta (gender, race, religion). The results for FPR_gap and TPR_gap are not as consistent, but still improved for most of the sensitive attributes and models." }, { "figure_ref": [], "heading": "What is the impact of downstream bias removal on fairness of text classification?", "publication_ref": [], "table_ref": [], "text": "To answer this research question, we summarize our findings on the impact of the different debiasing approaches on the models' fairness. We accumulate the debiasing techniques that improved the bias according to all our extrinsic bias metrics for each sensitive attribute in all our models." }, { "figure_ref": [], "heading": "Albert-base", "publication_ref": [], "table_ref": [], "text": "Debias approach: Albert-base (gender race religion) | Bert-base (gender race religion) | Roberta-base (gender race religion)\nDownstream-SentDebias: ✗ ✗ ✓ | ✓ ✓ ✗ | ✗ ✓ ✓\nDownstream-perturbed-data: ✓ ✓ ✓ | ✓ ✗ ✓ | ✓ ✗ ✓\nDownstream-stratified-data: ✗ ✗ ✗ | ✓ ✗ ✗ | ✗ ✗ ✓\nDownstream-stratified-perturbed-data: ✗ ✗ ✓ | ✓ ✗ ✓ | ✓ ✗ ✓\nTable 4: Summary of the most effective downstream debiasing method.\nThe results in Table 4 show that the most effective debiasing method, i.e., the one that improved the fairness according to all the extrinsic bias metrics in most of the models and sensitive attributes, is removing overamplification bias. These results are in line with our earlier findings that overamplification bias is the most impactful on the fairness of the downstream task. The results also show that using perturbed data is more effective than training on re-stratified perturbed data. Removing the biased subspaces after fine-tuning (+downstream-SentDebias) is effective in some cases, like Albert (religion), Bert (gender, race), and Roberta (religion). However, using this technique leads to bad performance. So, we do not recommend using this debiasing technique, as it is important to find the right trade-off between performance and fairness.\nRemoving sample bias by fine-tuning the models on re-stratified data improved fairness in some cases, like Bert (gender) and Roberta (religion), but it is not as effective as removing overamplification bias. We speculate that this is the case because removing sample bias led to a balanced ratio between the positive and negative classes (≈ 0.5) for all identity groups. This resulted in the model predicting more FPs and fewer TNs, which we can see in the lower AUC scores for the different models when training on re-stratified data.
On the other hand, removing overamplification bias by training the model on perturbed data ensured balanced positive class representation between the different identity groups, but the ratio between the positive and negative classes stayed low (≈ 0.1 to 0.2). This made the model predict more positives, especially TPs, without hurting the number of TNs." }, { "figure_ref": [], "heading": "What is the most effective downstream debias method?", "publication_ref": [ "b14", "b28" ], "table_ref": [ "tab_5" ], "text": "The answer to the previous research question suggests that the best technique to improve the fairness of the task of text classification is to remove overamplification bias. So in this section, to confirm that this is the most effective downstream debiasing technique to follow and to answer this research question, we use counterfactual fairness methods (Kusner et al., 2017). We use the Perturbation sensitivity score (SenseScore) proposed in (Prabhakaran et al., 2019) to closely inspect the models' prediction probabilities for the different identity groups within each sensitive attribute. Then, we report the absolute value of the SenseScore for easier comparison between the different debiasing methods.\n$\mathrm{SenseScore} = \left| \mathrm{Mean}_{x \in X}\big(f(x) - f(\tilde{x})\big) \right|$ (6)\nwhere $\tilde{x}$ is the perturbed counterpart of sentence $x$. The SenseScore is an indicator of how the model treats different groups of people, since the sentence is the same with only the identity group being different. The bigger the score, the less fair the model is, since it means the model treats the different groups differently. On the contrary, the smaller the score, the more fair the model is. This analysis is possible because our balanced toxicity fairness dataset contains perturbed examples. We carried out this analysis for the downstream debiasing techniques that we found the most effective and impactful on the model's fairness without hurting the models' performance, which are: re-stratification (+downstream-stratified-data), perturbation (+downstream-perturbed-data), and re-stratification and perturbation (+downstream-stratified-perturbed-data).\nWe inspect the difference in the prediction probabilities for the different sensitive attributes. For the gender sensitive attribute, we study the sentences that were targeted at the Male group and that were perturbed to change the identity to the Female group. We also inspected the sentences that were targeted at the Female group and were perturbed to change the identity to the Male group, as shown in Table 5; the table shows an example pair, the original sentence \"% of men lie to law-enforcement officers when question into domestic violence cases it is not without marks on the body or conclusive proof that the men are charged you sir you are one of the men that is the problem we can tell !\" and its counterfactual \"% of women lie to law-enforcement officers when question into domestic violence cases it is not without marks on the body or conclusive proof that the women are charged. you mademoiselle you are one of the women that is the problem we can tell !\". Then, we measured the SenseScore between the same sentences with the Male and the Female identities swapped. For the race sensitive attribute, we inspected the sentences that were targeted at the Black group and that were perturbed to change the identity to the White group. We also inspected the sentences that were targeted at the White group and were perturbed to change the identity to the Black group. For the religion sensitive attribute, we inspected the sentences that were targeted at the Christian group and that were perturbed to change the identity to the Muslim group.
We also inspected the sentences that were targeted at Muslims and were perturbed to change the identity to Christians. The prediction sensitivity scores (SenseScore) in Table 6 show that removing overamplification bias is the most effective debiasing method. Fine-tuning the different models on a perturbed balanced dataset (+ downstream-perturbed-data) improved the fairness (lower SenseScore) for almost all the sensitive attributes, as evident in Albert (gender, race, religion), BERT (gender, race), and Roberta (gender, race, religion). The next most effective debiasing method is removing both sample and overamplification bias, since fine-tuning the different models on a re-stratified-perturbed balanced dataset (+ downstream-stratified-perturbed-data) improved the fairness for all the models but only for the race and the religion sensitive attributes, as evident in Albert (race, religion), BERT (race, religion), and Roberta (race, religion). On the other hand, removing only sample bias by fine-tuning the models on re-stratified data (+ downstream-stratified-data) was the least effective on the models' fairness, as it improved only BERT (race) and led to worse fairness than the original models, as evident in Albert (gender, religion), BERT (gender, religion), and Roberta (gender, race).\nTo answer our research question, the results in this section show that the most effective method to remove downstream bias is to train the models on datasets with balanced semantic representation and balanced ratios of the positive examples between the different identity groups." }, { "figure_ref": [], "heading": "Practical fairness guidelines", "publication_ref": [], "table_ref": [], "text": "We build on our findings and recommend a list of guidelines to follow to ensure the fairness of the downstream task of text classification. To showcase these guidelines, we apply them to ensure the fairness of the downstream task of sentiment analysis. Table 7 lists the keywords used to filter out the gendered sentences in the IMDB dataset: Female: she, her, hers, mum, mom, mother, daughter, sister, niece, aunt, grandmother, lady, woman, girl, ma'am, female, wife, ms, miss, mrs, ms., mrs.; Male: he, him, his, dad, father, son, brother, nephew, uncle, grandfather, gentleman, man, boy, sir, male, husband, mr, mr.; Neutral: they, them, theirs, parent, child, sibling, person, spouse." }, { "figure_ref": [], "heading": "Sentiment Analysis", "publication_ref": [], "table_ref": [], "text": "To train a sentiment analysis model that is fair, we first need a dataset that contains information about sensitive attributes. Since there is no such dataset, we filtered the IMDB sentiment dataset (?) to make sure that our dataset contains gender information, similar to the work done in (Webster et al., 2020a). We use the keywords in Table 7 to filter the data items and make sure that gendered information is present in our IMDB training dataset. The IMDB dataset after the keyword filter contains 50K data items. We chose only the data items that are labelled \"Male\" or \"Female\" and we call the filtered dataset IMDB-gendered. The dataset contains 9,790 sentences, with 72% targeted at males and 27% targeted at females. The ratio of the positive examples in the Male subset is 0.55 and 0.52 in the Female subset. Then, we pre-processed and split the IMDB-gendered dataset as explained in Section 3.
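For reference, the keyword filter described above can be sketched as follows; the word sets are abbreviated from Table 7, and the handling of sentences that mention both genders is an assumption, since only items labelled Male or Female are kept.

```python
# Sketch of the Table 7 keyword filter used to build IMDB-gendered.
from typing import Optional

FEMALE = {"she", "her", "hers", "mother", "daughter", "sister", "woman",
          "girl", "female", "wife", "mrs", "ms", "miss"}
MALE = {"he", "him", "his", "father", "son", "brother", "man",
        "boy", "male", "husband", "mr"}

def gender_label(text: str) -> Optional[str]:
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    has_f, has_m = bool(tokens & FEMALE), bool(tokens & MALE)
    if has_f and not has_m:
        return "Female"
    if has_m and not has_f:
        return "Male"
    return None  # no (or mixed) gendered keywords -> excluded (assumed choice)
```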
We trained three models, Albert-base, Bert-base, and Roberta-base on the IMDB-gendered dataset. The AUC scores of the models are 0.899, 0.912, and 0.914, respectively." }, { "figure_ref": [], "heading": "Sentiment analysis fairness dataset:", "publication_ref": [], "table_ref": [], "text": "We used the SST-sentiment-fairness dataset (gero et al., 2023) which contains 462 data items with the target of the sentiment labelled by 3 annotators with an inter-annotation agreement of 0.65. 41% of the dataset is targeted at females (ratio of positive examples = 0.61) and 58% is targeted at males (ratio of positive examples= 0.5). We used the finetuned models on IMDB-genderd dataset, to predict the labels of this fairness dataset and to measure their fairness. The performance of the trained models on the SST-sentiment-fairness dataset is good with AUC scores of 0.865, 0.860, and 0.878 for AlBERT-base, Bert-base, and RoBERTa-base." }, { "figure_ref": [], "heading": "Recommendations", "publication_ref": [], "table_ref": [], "text": "In this section, we provide recommendations to follow to ensure the fairness of the downstream task of text classification, and we showcase them on the downstream task of sentiment analysis.\n1. know the data: The first recommendation is to know the data and understand the biases in our dataset. We recommend measuring the selection bias and overampflication bias in the training dataset. When we apply this to the IMDB-gendered dataset, using the methods proposed in section 6 to measure selection and overampflication biases, we find that the selection bias is 0.03 and the overampflication bias is 0.309." }, { "figure_ref": [], "heading": "Remove overampflication bias:", "publication_ref": [], "table_ref": [], "text": "We recommend removing the overampflication bias since it is the most impactful debiasing method on the models' fairness, as we showed in Section 7.4. We recommend removing overampflication bias as described in Section 4.2. We applied this technique to the IMDB-gendered dataset, and fine-tuned our models on the perturbed-IMDB-gendered dataset. The AUC scores of the models on the perturbed-IMDB-gendered dataset are 0.869, 0.860, and 0.877 for Albert, Bert, and Roberta respectively.\n3. Balance the fairness data: We recommend using a balanced fairness dataset to make sure that the measured fairness scores are reliable, as discussed in Section 4.2. We use the same method used in the section 4.2 to create a perturbed SST-fairness dataset. 50% of the dataset is targeted at women (ratio of positive examples = 0.54) and 50% is targeted at men (ratio of positive examples = 0.54). The data after perturbation contain 924 data items. 4. Measure counterfactual fairness: We recommend using counterfactual fairness metrics. For our case study, we measure the SenseScore of our fine-tuned models, on the perturbed SST-sentimentfairness dataset. We found that the Sensescore between the same sentences with the Male and the Female identities swapped for ALBERT-base is 0.005, BERT-base is 0.0005, and Roberta-base is 0.009." }, { "figure_ref": [], "heading": "Decide:", "publication_ref": [], "table_ref": [], "text": "The results indicate that after removing overampflication bias, we find that the model that discriminates the least between the male and the female groups in our sentiment analysis task is BERT. So, based on that, we decided that Bert is the most fair model to use. 
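For completeness, the SenseScore underlying this decision can be computed in a few lines; `predict_positive_proba` stands for any wrapper around a fine-tuned classifier and is an assumed interface, not part of any released code.

```python
# Perturbation sensitivity score (Eq. 6): absolute mean difference between
# predictions on original sentences and their identity-swapped counterparts.
import numpy as np

def sense_score(predict_positive_proba, originals, counterfactuals):
    f_orig = np.array([predict_positive_proba(x) for x in originals])
    f_pert = np.array([predict_positive_proba(x) for x in counterfactuals])
    return float(abs(np.mean(f_orig - f_pert)))
```

A lower value means the predictions change less when only the identity term is swapped, i.e., the model is fairer under this metric.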
This decision comes with a trade-off, as the performance score on the perturbed-gendered-IMDB dataset of Roberta (+ downstream-perturbed-data) is 0.877, which slightly outperforms BERT (+ downstream-perturbed-data) with an AUC score of 0.860." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "It is important to point out that our work is limited to the examined models and datasets. Our work studies bias and fairness from a Western perspective regarding the language (English) and the culture, in which the identities that belong to the marginalized and the non-marginalized groups may differ. We recognize that the provided recommendations for a fairer text classification task rely on creating perturbations for the training and the fairness datasets, which might be challenging for some datasets." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we provide a holistic investigation of the impact of the different sources of bias on the fairness of the task of text classification. Then, we investigated the impact of removing them on improving the models' fairness. We found that downstream sources of bias are more impactful on the models' fairness and that removing them, especially overamplification bias, makes the models fairer. Finally, we provide practical guidelines to ensure the fairness of the task of text classification." } ]
In this paper, we provide a holistic analysis of the different sources of bias, namely upstream, sample and overamplification biases, in NLP models. We investigate how they impact the fairness of the task of text classification. We also investigate the impact of removing these biases using different debiasing techniques on the fairness of text classification. We found that overamplification bias is the most impactful bias on the fairness of text classification, and that removing overamplification bias by fine-tuning the LMs on a dataset with balanced representations of the different identity groups leads to fairer text classification models. Finally, we build on our findings and introduce practical guidelines on how to have a fairer text classification model.
On Bias and Fairness in NLP: How to have fairer text classification?
[ { "figure_caption": "Figure 1 :1Figure 1: Pearson correlation between upstream bias and extrinsic bias.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The ratios of positive examples.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The number of examples.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "It is important to men-", "figure_data": "Albert ModelFairnessSource of biasFPR_gap TPR_gap AUC_gapSample0.9840.6330.911Overampflication 0.9880.6130.921Bert-ModelFairnessSource of biasFPR_gap TPR_gap AUC_gapSample-0.0370.4180.150Overampflication -0.010.3950.175Roberta-ModelFairnessSource of biasFPR_gap TPR_gap AUC_gapSample0.8090.7850.992Overampflication 0.7940.7700.995", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The Pearson correlation coefficient between downstream bias and extrinsic bias scores.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Text classification performance and fairness scores for all models before and after different debiasing methods.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Example of a sentence where the original target was a Male (top) and its counterfactual (bottom).", "figure_data": "SenseScoreModelGenderRaceReligionAlbert-base6.9e -050.0320.006+ downstream-perturbed-data↓ 4.2e -05 ↓ 0.002 ↓ 0.001+ downstream-stratified-data↑ 0.0420.032↑ 0.009+ downstream-stratified-perturbed-data ↑ 0.013↓ 0.003 ↓ 0.0007BERT-base0.0010.030.001+ downstream-perturbed-data↓ 0.0007↓ 0.003 0.001+ downstream-stratified-data↑ 0.025↓ 0.022 ↑ 0.004+ downstream-stratified-perturbed-data ↑ 0.002↓ 0.002 ↓ 0.0008Roberta-base0.0010.0240.003+ downstream-perturbed-data↓ 0.0008↓ 0.006 ↓ 0.001+ downstream-stratified-data↑ 0.038↑ 0.036 0.003+ downstream-stratified-perturbed-data ↑ 0.003↓ 0.002 ↓ 0.0003", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "SenseScores of the difference models before and after the different debiasing methods.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Fatma Elsafoury; Stamos Katsigiannis; Naeem Ramzan
[ { "authors": "Daniel Borkan; Lucas Dixon; Jeffrey Sorensen; Nithum Thain; Lucy Vasserman", "journal": "", "ref_id": "b0", "title": "Nuanced metrics for measuring unintended bias with real data for text classification", "year": "2019" }, { "authors": "Yang Cao; Yada Pruksachatkun; Kai-Wei Chang; Rahul Gupta; Varun Kumar; Jwala Dhamala; Aram Galstyan", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations", "year": "2022" }, { "authors": "Maria De-Arteaga; Alexey Romanov; Hanna Wallach; Jennifer Chayes; Christian Borgs; Alexandra Chouldechova; Sahin Geyik; Krishnaram Kenthapadi; Adam Tauman; Kalai ", "journal": "Association for Computing Machinery", "ref_id": "b2", "title": "Bias in bios: A case study of semantic representation bias in a high-stakes setting", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Lucas Dixon; John Li; Jeffrey Sorensen; Nithum Thain; Lucy Vasserman", "journal": "Association for Computing Machinery", "ref_id": "b4", "title": "Measuring and mitigating unintended bias in text classification", "year": "2018" }, { "authors": "Fatma Elsafoury; Stamos Katsigiannis; Steven R Wilson; Naeem Ramzan", "journal": "Association for Computing Machinery", "ref_id": "b5", "title": "Does bert pay attention to cyberbullying", "year": "2021" }, { "authors": "Fatma Elsafoury; Steve R Wilson; Stamos Katsigiannis; Naeem Ramzan", "journal": "International Committee on Computational Linguistics", "ref_id": "b6", "title": "SOS: Systematic offensive stereotyping bias in word embeddings", "year": "2022" }, { "authors": "Zee Fryer; Vera Axelrod; Ben Packer; Alex Beutel; Jilin Chen; Kellie Webster", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Flexible text generation for counterfactual fairness probing", "year": "2022" }, { "authors": "Katy Gero; Nathan Butters; Anna Bethke; Fatma Elsafoury", "journal": "", "ref_id": "b8", "title": "A dataset to measure fairness in the sentiment analysis task", "year": "2023" }, { "authors": "Hila Gonen; Yoav Goldberg", "journal": "", "ref_id": "b9", "title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them", "year": "2019" }, { "authors": "Thomas Hartvigsen; Saadia Gabriel; Hamid Palangi; Maarten Sap; Dipankar Ray; Ece Kamar", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection", "year": "2022" }, { "authors": "Dirk Hovy; Shrimai Prabhumoye", "journal": "Language and Linguistics Compass", "ref_id": "b11", "title": "Five sources of bias in natural language processing", "year": "2021" }, { "authors": " Jigsaw", "journal": "", "ref_id": "b12", "title": "Detecting toxic behaviour in wikipedia talk pages", "year": "2021-04-07" }, { "authors": "Masahiro Kaneko; Danushka Bollegala; Naoaki Okazaki", "journal": "International Committee on Computational Linguistics", "ref_id": "b13", "title": "Debiasing isn't enough! 
-on the effectiveness of debiasing MLMs and their social biases in downstream tasks", "year": "2022" }, { "authors": "Matt J Kusner; Joshua Loftus; Chris Russell; Ricardo Silva", "journal": "Advances in neural information processing systems", "ref_id": "b14", "title": "Counterfactual fairness", "year": "2017" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b15", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "year": "2020-04-26" }, { "authors": "Paul Pu Liang; Irene Mengze Li; Emily Zheng; Chong Yao; Ruslan Lim; Louis-Philippe Salakhutdinov; Morency", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Towards debiasing sentence representations", "year": "2020" }, { "authors": "Paul Pu Liang; Irene Mengze Li; Emily Zheng; Chong Yao; Ruslan Lim; Louis-Philippe Salakhutdinov; Morency", "journal": "", "ref_id": "b17", "title": "Towards debiasing sentence representations", "year": "2020" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b18", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Edward Ma", "journal": "", "ref_id": "b19", "title": "Nlp augmentation", "year": "2019" }, { "authors": "Binny Mathew; Punyajoy Saha; Seid Muhie Yimam; Chris Biemann; Pawan Goyal; Animesh Mukherjee", "journal": "", "ref_id": "b20", "title": "Hatexplain: A benchmark dataset for explainable hate speech detection", "year": "2021" }, { "authors": "Chandler May; Alex Wang; Shikha Bordia; R Samuel; Rachel Bowman; Rudinger", "journal": "NSF, or the U.S. Government. 
Publisher Copyright", "ref_id": "b21", "title": "On measuring social biases in sentence encoders", "year": "2019" }, { "authors": "Nicholas Meade; Elinor Poole-Dayan; Siva Reddy", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "An empirical survey of the effectiveness of debiasing techniques for pre-trained language models", "year": "2022" }, { "authors": "Ioannis Mollas; Zoe Chrysopoulou; Stamatis Karlos; Grigorios Tsoumakas", "journal": "Complex & Intelligent Systems", "ref_id": "b23", "title": "ETHOS: a multilabel hate speech detection dataset", "year": "2022" }, { "authors": "Moin Nadeem; Anna Bethke; Siva Reddy", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "StereoSet: Measuring stereotypical bias in pretrained language models", "year": "2021" }, { "authors": "Nikita Nangia; Clara Vania; Rasika Bhalerao; Samuel R Bowman", "journal": "", "ref_id": "b25", "title": "CrowS-pairs: A challenge dataset for measuring social biases in masked language models", "year": "2020" }, { "authors": "Nedjma Ousidhoum; Zizheng Lin; Hongming Zhang; Yangqiu Song; Dit-Yan Yeung", "journal": "", "ref_id": "b26", "title": "Multilingual and multi-aspect hate speech analysis", "year": "2019" }, { "authors": "Zoe Papakipos; Joanna Bitton", "journal": "", "ref_id": "b27", "title": "Augly: Data augmentations for robustness", "year": "2022" }, { "authors": "Ben Vinodkumar Prabhakaran; Margaret Hutchinson; Mitchell", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Perturbation sensitivity analysis to detect unintended model biases", "year": "2019" }, { "authors": "Rebecca Qian; Candace Ross; Jude Fernandes; Eric Michael Smith; Douwe Kiela; Adina Williams", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Perturbation augmentation for fairer NLP", "year": "2022" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b30", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Maarten Sap; Saadia Gabriel; Lianhui Qin; Dan Jurafsky; Noah A Smith; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Social bias frames: Reasoning about social and power implications of language", "year": "2020" }, { "authors": "Timo Schick; Sahana Udupa; Hinrich Schütze", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b32", "title": "Self-diagnosis and self-debiasing: A proposal for reducing corpus-based bias in nlp", "year": "2021" }, { "authors": "Deven Santosh Shah; H Andrew Schwartz; Dirk Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Predictive biases in natural language processing models: A conceptual framework and overview", "year": "2020" }, { "authors": "Ryan Steed; Swetasudha Panda; Ari Kobren; Michael Wick", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Upstream Mitigation Is Not All You Need: Testing the Bias Transfer Hypothesis in Pre-Trained Language Models", "year": "2022" }, { "authors": "Kellie Webster; Xuezhi Wang; Ian Tenney; Alex Beutel; Emily Pitler; Ellie Pavlick; Jilin Chen; Ed Chi; Slav Petrov", "journal": "", "ref_id": "b35", "title": "Measuring and reducing gendered correlations in pre-trained 
models", "year": "2020" }, { "authors": "Kellie Webster; Xuezhi Wang; Ian Tenney; Alex Beutel; Emily Pitler; Ellie Pavlick; Jilin Chen; Ed H Chi; Slav Petrov", "journal": "", "ref_id": "b36", "title": "Measuring and reducing gendered correlations in pre-trained models", "year": "2020" }, { "authors": "Ran Zmigrod; Sabrina J Mielke; Hanna Wallach; Ryan Cotterell", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 114.75, 446.01, 130.49, 8.35 ], "formula_id": "formula_0", "formula_text": "F P R_gap g,ĝ = |F P Rg -F P R ĝ |" }, { "formula_coordinates": [ 3, 115.33, 468.29, 174.41, 26.26 ], "formula_id": "formula_1", "formula_text": "T P R_gap g,ĝ = |T P Rg -T P R ĝ | (2) AU C_gap g,ĝ = |AU Cg -AU C ĝ |(3)" }, { "formula_coordinates": [ 5, 81.2, 287.3, 208.54, 47.45 ], "formula_id": "formula_2", "formula_text": "Sample g,ĝ = |( Ng,toxicity=1 Ng ) -( N ĝ,toxicity=1 N ĝ )| (4) Overampf lication g,ĝ = |Ng -N ĝ | (5)" }, { "formula_coordinates": [ 8, 100.18, 88.29, 377.42, 33.96 ], "formula_id": "formula_3", "formula_text": "✗ ✗ ✓ ✓ ✓ ✗ ✗ ✓ ✓ Downstream-perturbed-data ✓ ✓ ✓ ✓ ✗ ✓ ✓ ✗ ✓ Downstream-stratified-data ✗ ✗ ✗ ✓ ✗ ✗ ✗ ✗ ✓ Downstream-stratified-perturbed-data ✗ ✗ ✓ ✓ ✗ ✓ ✓ ✗ ✓" }, { "formula_coordinates": [ 8, 335.25, 288.35, 189.76, 8.06 ], "formula_id": "formula_4", "formula_text": "SenseScore = |M eanx∈X (f (x) -f (x))| (6)" } ]
2023-05-22
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b4", "b28", "b3", "b17", "b0", "b5", "b30", "b29", "b14" ], "table_ref": [], "text": "The success of deep learning is seen in many computer vision tasks, including object detection. Many deep learning-based approaches [5,29,4,17,23,20,18,1,39] have been proposed and have shown impressive performance in localizing and classifying objects of interest in 2D images. However, it is important for these deep learning-based approaches to be trained on balanced and representative datasets. Unfortunately, most real-world datasets follow a long-tailed distribution, where the head classes have a significantly larger number of instances than the tail classes. Training on such imbalanced datasets often leads to bias towards head classes and significant performance degeneration of the tail classes due to the extremely scarce samples. Figure 1. LST [10] is more susceptible to catastrophic forgetting due to its incremental learning scheme with numerous data splits. We alleviate the problem by building smooth-tail data that flattens long-tailed datasets and always maintains data from all categories.\nTo circumvent the long-tailed distribution problem of the object detection task, many attempts exploit data re-sampling and loss re-weighting approaches. Data re-sampling methods [6,31] re-balance the distribution of the instance numbers of each category. Loss re-weighting methods [28, 30,15] adopt different re-weighting strategies to adjust the loss of different categories based on each category's statistics. As shown in Figure 2, Hu et al. [10] proposes LST, which is a \"divide & conquer\" strategy that leverages class-incremental few-shot learning to solve the long-tailed distribution problem. The model is first trained with abundant labeled data of the head classes. The categories in the long-tailed training data are then sorted and divided according to the number of samples to get the corresponding subsets for incremental learning and merging of each part in N phases.\nDespite the innovative adoption of class-incremental few-shot learning on the long-tailed distribution problem, we find that [10] catastrophically forgets the knowledge of the head classes and cannot sufficiently learn the tail classes in their incremental learning process. We postulate that this is attributed to three reasons: 1) Categories with high appearance similarity get divided into different parts due to the hard divisions. This leads to lower discriminability since these categories can only be trained together on the exemplar replay subsets. 2) There is an apparent discrepancy between the decision boundaries of the current model, trained simultaneously on the exemplar replay subsets of the head and tail classes, and those of the previous model, trained solely on the head class subset. This discrepancy impedes the maintenance of the knowledge of the head classes and the learning of the tail classes. 3) The method divides the long-tailed dataset into numerous smaller balanced parts. However, this leads to more knowledge transfer steps and thus expedites catastrophic forgetting.
In contrast to [10] that starts the training on only the head classes, we start the learning process from pretraining the model on the whole long-tailed dataset to better preserve the discriminative capability between the head and tail classes. In the subsequent steps, we keep the classagnostic modules fixed and only update the class-specific modules of the pre-trained model trained on the whole longtailed data. This circumvents the lack of training data in the tail end of the long-tailed data by preserving knowledge from the pre-trained model and limiting the network parameters that need to be updated.\nTo avoid severe catastrophic forgetting, we first divide all categories of long-tailed dataset into two parts: head classes with more than M images each category, and tail classes with less than M images each category. We then propose to build smooth-tail data: 1) a head class dominant data that contain a roughly balanced subset of the head classes minored with a roughly balanced subset of tail classes, and 2) a tail class dominant data in similar vein. We leverage the pre-trained model to select representative exemplars for the head class dominant and tail class dominant data. Subsequently, we fine-tune the pre-trained model on the head class dominant data to learn a head class expert model. Finally, we learn a unified model on the tail class dominant data while preserving knowledge of the head classes with the head class expert model. Knowledge distillation at feature level with a head class focused mask is adopt to facilitate the learning of tail classes from the head class expert model. In addition, knowledge distillation at classification head is also adopted, where object query features from the head class expert model are shared to the unified model to align the predictions between them.\nOur contributions can be summarized as follows:\n1. We propose to build smooth-tail data, i.e., a head class dominant data and a tail class dominant data, to alleviate the extreme class imbalance of long-tail data and prevent catastrophic forgetting in our step-wise learning framework. 2. We design a novel step-wise learning framework that unifies fine-tuning and knowledge transfer for the longtailed object detection task. 3. Our framework is frustratingly simple but effective.\nWe achieve state-of-the-art performances on long-tailed datasets LVIS v0.5 and LVIS v1.0 in both the overall accuracy, and especially the impressive accuracy of the rare categories." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b4", "b28", "b3", "b23", "b18", "b5", "b30", "b14", "b29", "b31", "b10", "b35", "b2", "b21", "b1", "b25" ], "table_ref": [], "text": "General Object Detection. A large number of approaches have been proposed for object detection task, which can be briefly summarized into two different types based on their frameworks. Two-stage object detection methods such as R-CNN [5] apply a deep neural network to extract features from proposals generated by selective search [29]. Fast R-CNN [4] utilizes a differentiable RoI Pooling to improve the speed and performance. Faster R-CNN [24] [19]. However, the distribution of categories in the real-world scenarios is often long-tailed and most of these object detection models fail to maintain their performance. An extreme imbalance leads to low accuracy on tail classes.\nLong-tailed Object Detection. Many existing works have been proposed to alleviate the challenge of long-tailed object detection. 
These works can be categorized into three categories. Data re-sampling is the most intuitive among all methods. Gupta et al. [6] proposes repeat factor sampling (RFS) to create a roughly balanced distribution by over-sampling data of tail classes based on the frequency of each category at image-level. Wang et al. [31] proposes a calibration framework to alleviate classification head bias with a bi-level class balanced sampling approach at instance-level. Loss re-weighting is another common approach. EQLv2 [28] adopts a gradient-guided mechanism to re-weight the loss contribution of each category. EFL [15] introduces a category-relevant modulating factor into focal loss to overcome the imbalance problem for one-stage object detectors. Wang et al. [30] proposes seesaw loss to re-balance gradients of positive and negative samples for each category, with two complementary factors. Wang et al. [32] proposes to understand the long-tailed distribution in a statistic-free perspective and present a adaptive class suppression loss.\nIn addition to the above two common categories of methods, many works also approach the problem from different perspectives. AHRL [14] addresses long-tailed object detection from a metric learning perspective, which splits the whole feature space into hierarchical structure and eliminates the problem in a coarse-to-fine manner. Hu et al.\n[10] which mainly focuses on instance segmentation task proposes to alleviate long-tailed distribution problem in a classincremental few-shot learning way.\nFew-Shot Object Detection and Knowledge Transfer.\nApproaches of few-shot object detection can be categorized into meta-learning based [34,11,36,38] and fine-tuning based methods [33,35,27]. There are two key differences between few-shot object detection and long-tailed object detection. On one hand, few-shot object detection merely focuses on the performance on few-shot categories, which is different from long-tailed object detection that aims at detecting all categories accurately. On the other hand, the datasets of few-shot object detection are comprised of base data which contains abundant training samples per category and novel data which contains a few training samples per category, which are quite different from long-tailed datasets. Exemplar replay and knowledge distillation are two commonly used techniques to transfer knowledge across different models and remain performance of previous model. In exemplar replay based methods, the models strengthen memories learned in the past through replaying the past information periodically. They [22,37,2] usually keep a small number of exemplars per category to achieve this purpose. Knowledge distillation first proposed by Hinton et al. [8], where the knowledge of predicted distribution from the teacher model is distilled into the student model. Apart from the final prediction, other types of knowledge, like intermediate representations [26], can also be used to guide the learning of the student model.\nOur proposed step-wise learning framework unifies finetuning and knowledge transfer techniques for the first time to alleviate the long-tailed distribution problem for object detection task, which can remain powerful on the head classes and better adapt to the tail classes." 
}, { "figure_ref": [], "heading": "Our Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Dataset Pre-processing", "publication_ref": [], "table_ref": [], "text": "As shown in Figure 3, given a long-tailed dataset D l with C l categories, we divide the entire set of categories into: the head classes C head with each category containing ≥ M images, and the tail classes C tail with each category containing < M images. Furthermore, C head ∪ C tail = C l and C head ∩ C tail = ∅. We then form D head which is dominant with a roughly balanced subset of the head classes C head and minored with a roughly balanced subset of the tail classes C tail . Similarly, we form D tail which is dominant with a roughly balanced subset of the tail classes C tail and minored with a balanced subset of the head classes C head .\nSmooth-tail Data. We propose a confidence-guided exemplar replay scheme for the selection of representative and diverse exemplars in D head and D tail . The number of exemplars is set to be significantly smaller than the original dataset. We propose to use the model pre-trained with the whole long-tailed data (c.f . next subsection) for the selection of the exemplars to ensure that the model trained on the few samples can also minimize the loss on the original dataset. Specifically, we save all instances and corresponding classification scores {I j , S j } predicted by the pre-trained model for each category. We then sort the instances by the value of corresponding classification scores in a descending order. Finally, we select the top-scoring instances as representative exemplars for replay. Notably, only the annotations belonging to the selected instances are considered valid in the training process. Furthermore, the images in original dataset are diverse in color, texture and size of region. The diversity of the exemplars ensures the same robustness and discrimination of the model as trained on original dataset, thus instances with classification scores greater than threshold 0.5 and are not in the same image are given the priority to be chosen as exemplars." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Step-wise Learning", "publication_ref": [ "b12", "b17" ], "table_ref": [], "text": "We use the state-of-the-art Deformable DETR [39] as our backbone object detector. Given a long-tailed dataset D l with C l categories, we pre-train a model on all categories using the same loss functions as Deformable DETR. This pre-trained model serves to: 1) provide output classification confidences as instance selection cues for building the smooth-tail data; 2) learn discriminative representation and provide separation capability of all categories for subsequent fine-tuning on D head and knowledge transfer on D tail .\nAs shown in Figure 4, we learn a head class expert model with fine-tuning, and adopt knowledge transfer from the head class expert model and the final model to unify the capability of detecting head and tail classes. As the learning proceeds, the model gradually approaches an optimal performance of all categories.\nFine-tuning on D head . We propose to only update the class-specific projection layer Φ p and classification head Φ cls with D head while keeping the class-agnostic modules frozen. This is to impose a strong constraint on the previous representation and thus the discrimination representation does not shift severely in subsequent process. The model is fine-tuned with the standard Deformable DETR loss [39]. 
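A minimal PyTorch sketch of this selective fine-tuning is given below; the module-name prefixes used to identify the projection layer and classification head are placeholders rather than the exact attribute names of the Deformable DETR codebase.

```python
# Freeze the class-agnostic modules; update only the class-specific projection
# layer and classification head (name prefixes are assumptions).
import torch

def freeze_class_agnostic(model: torch.nn.Module) -> None:
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(("input_proj", "class_embed"))

def build_finetune_optimizer(model: torch.nn.Module, lr: float = 2e-5):
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr)
```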
Note that D_head is dominant with a roughly balanced subset of C_head to alleviate class imbalance in the head classes, and minored with a roughly balanced subset of C_tail to make sure the decision boundary in the feature space has a smaller gap compared to that of the final unified model in the subsequent step.\nLet the detection targets in D_head be denoted as $y = \{y_i\}_{i=1}^{N} = \{(c_i, b_i)\}_{i=1}^{N}$, where $c_i$ and $b_i$ are the object category and bounding box. Assume the N predictions for the target categories made by the model are $\hat{y} = \{\hat{y}_i\}_{i=1}^{N} = \{(\hat{p}(c_i), \hat{b}_i)\}_{i=1}^{N}$, where $\hat{p}(c_i)$ is the predicted probability of category $c_i$ and $\hat{b}_i$ is the predicted bounding box. Following Deformable DETR, we compute the same match cost between the prediction $\hat{y}_{\sigma(i)}$ and the ground truth $y_i$ using the Hungarian algorithm [13], where $\sigma(i)$ is the index computed by the optimal bipartite matching. The Hungarian loss for all matched pairs is thus defined as:\n$\mathcal{L}_{hg}(y, \hat{y}) = \sum_{i=1}^{N} \big[ \mathcal{L}_{cls}(c_i, \hat{p}_{\sigma(i)}(c_i)) + \mathbb{1}_{\{c_i \neq \varnothing\}} \mathcal{L}_{box}(b_i, \hat{b}_{\sigma(i)}) \big]$, (1)\nwhere $\mathcal{L}_{cls}$ is the sigmoid focal loss [18] and $\mathcal{L}_{box}$ is a linear combination of the $\ell_1$ loss and the generalized IoU loss [25] with the same weight hyperparameters as Deformable DETR.\nKnowledge Transfer on D_tail. As shown in Figure 5, we keep the model fine-tuned on D_head fixed as the head class expert model. We also keep a unified model initialized with the parameters from the head class expert model, which we train on D_tail while preserving the knowledge from D_head. Similar to the fine-tuning step, we also update only the class-specific projection layer $\Phi_p$ and classification head $\Phi_{cls}$ of the unified model while keeping the class-agnostic modules frozen. However, naive constant updates of the projection layer and classification head on the tail classes can aggravate catastrophic forgetting of the head classes. We thus propose the use of exemplar replay and knowledge distillation to mitigate the catastrophic forgetting of the head classes.\nAs mentioned earlier, we keep a small but balanced set of replay exemplars of the head classes in D_tail. The head class expert model is employed as an extra supervision signal to prevent the projection layer output features of the unified model from deviating too much from the output features of the head class expert model. On the other hand, we do not want the head class expert model to limit the learning process of the unified model on the tail classes. To this end, we introduce a head class focused binary mask $mask^{head}$ based on the ground-truth bounding boxes of the head classes to prevent negative influence on the tail class learning. Specifically, we set the value of a pixel on the feature map within the ground truth bounding boxes of head classes to 1, and the value of a pixel outside the ground truth bounding boxes to 0. The distillation loss on the features with the mask is written as:\n$\mathcal{L}_{fm\_dis} = \frac{1}{2 N_{head}} \sum_{i=1}^{w} \sum_{j=1}^{h} \sum_{k=1}^{c} mask^{head}_{ij} \big( f^{unify}_{ijk} - f^{head}_{ijk} \big)^2$, (2)\nwhere $N_{head} = \sum_{i=1}^{w} \sum_{j=1}^{h} mask^{head}_{ij}$, $f^{head}$ and $f^{unify}$ denote the features of the head class expert model and the unified model, respectively, and $w$, $h$ and $c$ are the width, height and channels of the features.\nDeformable DETR is built upon the transformer encoder-decoder architecture combined with a set-based Hungarian loss that forces unique predictions for each object via bipartite matching. Object queries extract features from the feature maps.
Deformable DETR learns different spatial specialization for each object query, which indicates that different object queries focus on different position areas and box sizes. Since there is a mismatch in the object query features input into the classification heads of the head class expert model and the unified model, the predicted classification outputs of the two models are inevitably mismatched. To prevent this mismatch during knowledge distillation on the classification head, we first share the object query features $q^{head}$ from the decoder output of the head class expert model to align the classification probability with the unified model. The classification outputs of the head class expert model and the unified model are compared in the distillation loss function given by:\n$\mathcal{L}_{cls\_dis} = \mathcal{L}_{kl\_div}\big( \log(\hat{p}^{unify}_{shared}(c_i)), \; \hat{p}^{head}(c_i) \big)$, (3)\nwhere we follow [8] in the definition of the KL-divergence loss $\mathcal{L}_{kl\_div}$ between the category probabilities of the head class expert model and the unified model. $\hat{p}^{unify}_{shared}(c_i)$ denotes the probability of category $c_i$ predicted by the unified model with the shared object queries, and $\hat{p}^{head}(c_i)$ denotes the probability of category $c_i$ predicted by the head class expert model.\nA Hungarian loss $\mathcal{L}_{hg}$ is also applied to the ground truth set y and the predictions ŷ of the tail class dominant subset D_tail. The overall loss $\mathcal{L}_{total}$ is given by:\n$\mathcal{L}_{total} = \mathcal{L}_{hg}(y, \hat{y}) + \lambda_{fm} \mathcal{L}_{fm\_dis} + \lambda_{cls} \mathcal{L}_{cls\_dis}$, (4)\nwhere $\lambda_{fm}$ and $\lambda_{cls}$ are hyperparameters to balance the loss terms." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b5" ], "table_ref": [], "text": "Datasets. To evaluate the performance of our proposed method, we conduct extensive experiments on the challenging LVIS v0.5 and LVIS v1.0 datasets. LVIS [6]. In the model pre-training step (step 0 of our framework), we train our model for 50 epochs with an initial learning rate of $2 \times 10^{-4}$, and the learning rate is decayed at the 40th epoch by a factor of 0.1. In the model fine-tuning step (step 1 of our framework), the model is initialized from the pre-trained model. The parameters of the projection layer and classification head are updated while keeping the parameters of the other modules frozen. We fine-tune the model for 1 epoch with a learning rate of $2 \times 10^{-5}$. In the knowledge transfer step (step 2 of our framework), the model is initialized from the fine-tuned model. The parameters of the projection layer and classification head are updated while keeping the other modules frozen. We train the model for 2 epochs with an initial learning rate of $2 \times 10^{-4}$, and the learning rate is decayed at the 1st epoch by a factor of 0.1. $\lambda_{fm}$ and $\lambda_{cls}$ are set to 0.1 and 1, respectively. The hyperparameter M is set to 30." }, { "figure_ref": [], "heading": "Comparisons with the State-of-the-art Methods", "publication_ref": [ "b14", "b14" ], "table_ref": [ "tab_1", "tab_2", "tab_3" ], "text": "To validate the effectiveness of our approach, we compare with state-of-the-art methods for long-tailed object detection on the benchmark datasets LVIS v0.5 and LVIS v1.0. Our baseline is Deformable DETR [39] trained on the long-tailed dataset D_l with the same loss functions as [39]. As shown in Table 1, our method achieves the best performance compared to all other existing methods. Specifically, our proposed method achieves 30.3% AP on LVIS v0.5 with a ResNet-50 backbone.
It improves the baseline by 3.3% AP, and even achieves 9.4% AP improvement on the rare categories. Our proposed method also outperforms the state-of-the-art AHRL [14] by 2.9% AP. With ResNet-101 as backbone, our approach still performs well on the baseline (+3.7% AP). Furthermore, our method outperforms the baseline by 3.6% AP with ResNet-50 backbone and 3.2% AP with ResNet-101 backbone on LVIS v1.0. The above results demonstrate that our method which unifies fine-tuning and knowledge transfer can effectively solve the severe class imbalance problem.\nTo eliminate the doubt that whether the gain is brought by different baselines, we present a more detailed comparison with the state-of-the-art methods on both the baselines and the final models. The results are present in Table 2. On LVIS v0.5, our method suppresses AHRL [14] by 2.9% AP with a slight advantage on baseline (AHRL's baseline: 26.7% AP vs Our baseline: 27.0% AP). On LVIS v1.0, while the performance of the baseline of EFL [15] is better than our baseline (EFL's baseline: 25.7% AP vs Our baseline: 25.1% AP), our method still outperforms EFL [15] by 1.2% AP and outperforms our baseline by 3.6% AP. Consequently, we can conclude that the improvements brought by our method benefit from our novel design instead of the different baseline. Effectiveness of Each Component. There are two steps in our proposed step-wise learning framework, i.e., finetuning on the head class dominant data and knowledge transfer on the tail class dominant data. We perform ablation study to demonstrate the effectiveness of each of them. As shown in Table 3, both the fine-tuning step and knowledge transfer step on the matched smooth-tail data play significant roles in step-wise learning framework." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "For fine-tuning the model on the head class dominant data, it improves the performance of our baseline from 27.0% AP to 29.7% AP, while the performance improvement on rare categories is still limited (19.4% AP). We then examine the effectiveness of knowledge transfer. In this setting, we directly leverage the baseline as the extra supervision in knowledge transfer step instead of using the fine-tuned head class expert model as the extra supervision. Our method outperforms the baseline by 2.4% AP with significant improvement of the performance on the rare and common categories. However, the performance of the frequent categories experiences a slight drop.\nFine-tuning and knowledge transfer work collaboratively to achieve an improvement from 27.0% AP to 30.3% AP. Particularly, it achieves 24.9% AP for the rare categories, which outperforms the baseline by 9.4% AP and outperforms the fine-tuned head class expert by 5.5% AP. This indicates our proposed step-wise learning framework can sufficiently eliminate the class imbalance problem. However, our method experiences a further drop in the performance of the frequent categories after fine-tuning and knowledge transfer compared to using them separately (FT: 31.6% vs KT:31.3% vs FT&KT: 30.9% AP). We postulate that the drop in performance on the frequent categories might be due to insufficient representation of the frequent categories in our tail class dominant replay data during knowledge transfer. Similarly, the selection of a roughly balanced head classes for the head class dominant replay data might also result in under representation of the frequent categories. 
Consequently, catastrophic forgetting has a more detrimental effect on the frequent categories. Effectiveness of Each Component of Knowledge Transfer. We also demonstrate the effectiveness of each component of knowledge transfer. The results in Row 1 and Row 2 of Table 4 show that both knowledge distillation on features and knowledge distillation on classification output predictions play significant roles in knowledge transfer. It is worth noting that the performance decreases drastically when we do not share the object query features (from 30.3% AP to 24.4% AP), which can be attributed to the mismatch between the classification outputs of the head class expert model and the unified model." }, { "figure_ref": [], "heading": "Analysis of Divisions.", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "The type of divisions on the longtailed data plays an important role in our approach. We conduct extensive experiments to study the influence of 6 and7, respectively. We find that increasing N ex of C head helps maintain the performance of head classes. However, we also observe that increasing N ex of C head impedes the learning of tail classes and hurts the performance of tail classes. In addition, increasing N ex of C tail to large values does not significantly help the learning of the tail classes and slightly shows adverse affects on the performance of the head classes. By validation, we therefore store 200 instances per category of C head and 30 " }, { "figure_ref": [], "heading": "Analysis of", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "Step-wise RFS. Class imbalance still exists in the exemplar replay data for the head and tail classes due to the severe imbalance between categories of the longtailed dataset, and thus hinders the learning of categories having fewer data. To narrow the imbalance in the exemplar replay data, we propose to adopt the repeat factor sampling (RFS) to over-sample the data from categories having fewer data. In our proposed step-wise learning framework, RFS is used in different ways in different steps and thus we terms it as step-wise RFS. In the fine-tuning step, for the head class dominant replay data, we over-sample the categories having fewer data among the dominant head classes. In the knowledge transfer step, we also over-sample the categories having few data among the dominant tail classes for the tail class dominant replay data. As shown in Table 8, the comparisons between our method using and without using step-wise RFS indicate that applying step-wise RFS does help alleviate the imbalance inside the subsets." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a simple yet effective method that leverages incremental learning on the long-tailed distribution problem for the object detection task. We identify that a pre-trained model on the whole long-tailed dataset can achieve high discriminability in all categories for subsequent training steps. We propose to build the smooth-tail distributed data for calibrating the class imbalance in longtailed datasets, and maintaining representative and diverse head and tail class exemplar replay data. We propose a novel step-wise learning framework that first fine-tune the pre-trained model on the head class dominant replay data to get the head class expert model. Subsequently, knowledge is transferred from the head class expert model to a unified model trained on the tail class dominant replay data. 
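The step-wise RFS described above follows the standard repeat-factor-sampling recipe; the sketch below, in plain Python with an assumed (image, category-set) data format and the usual frequency threshold t, shows how per-image repeat factors could be derived inside one replay subset.

```python
import math
from collections import defaultdict

def repeat_factors(dataset, threshold=1e-3):
    """Sketch of repeat factor sampling (RFS) applied inside one replay subset.

    `dataset` is assumed to be a list of (image_id, set_of_category_ids).
    """
    num_images = len(dataset)
    freq = defaultdict(int)
    for _, cats in dataset:
        for c in cats:
            freq[c] += 1  # number of images containing category c
    # category-level repeat factor: r(c) = max(1, sqrt(t / f(c)))
    cat_repeat = {c: max(1.0, math.sqrt(threshold / (n / num_images)))
                  for c, n in freq.items()}
    # image-level repeat factor: the max over the categories the image contains,
    # so images with rarer categories are sampled more often
    return [max(cat_repeat[c] for c in cats) if cats else 1.0
            for _, cats in dataset]
```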
Our method brings large improvements, with a notable boost on the tail classes, across different backbones and various long-tailed datasets. Furthermore, our method achieves state-of-the-art performance on the challenging LVIS benchmarks for the object detection task." } ]
Real-world data tends to follow a long-tailed distribution, where the class imbalance results in dominance of the head classes during training. In this paper, we propose a frustratingly simple but effective step-wise learning framework to gradually enhance the capability of the model in detecting all categories of long-tailed datasets. Specifically, we build smooth-tail data where the long-tailed distribution of categories decays smoothly to correct the bias towards head classes. We pre-train a model on the whole long-tailed data to preserve discriminability between all categories. We then fine-tune the class-agnostic modules of the pre-trained model on the head class dominant replay data to get a head class expert model with improved decision boundaries from all categories. Finally, we train a unified model on the tail class dominant replay data while transferring knowledge from the head class expert model to ensure accurate detection of all categories. Extensive experiments on the long-tailed datasets LVIS v0.5 and LVIS v1.0 demonstrate the superior performance of our method, which improves the AP with a ResNet-50 backbone from 27.0% to 30.3%, and especially for the rare categories from 15.5% to 24.9% AP. Our best model using a ResNet-101 backbone achieves 30.7% AP, which surpasses all existing detectors using the same backbone.
Boosting Long-tailed Object Detection via Step-wise Learning on Smooth-tail Data
[ { "figure_caption": "Figure 2 .2Figure 2. The incremental learning training strategy of [10] on numerous smaller and balanced data splits inevitably expedites catastrophic forgetting.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. D head contains a roughly balanced subset of C head and a small roughly balanced subset of C tail . D tail contains a roughly balanced subset of C tail and a small balanced subset of C head .", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Overview of our step-wise learning framework. We first pre-train on the whole long-tailed training data D l , and then the class-specific modules are fine-tuned on D head . Finally, we train the model on D tail while concurrently preserves knowledge from D head .", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Overview of our proposed knowledge transfer. The framework consists of the fixed head class expert model (top branch) obtained from fine-tuning on D head for knowledge transfer to the unified model (bottom branch) during training on D tail .", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Comparisons with the state-of-the-art methods on LVIS v0.5 and LVIS v1.0 datasets. ResNet-50 and ResNet-101 are adopted as the backbones, respectively. † indicates results copied from[15].", "figure_data": "is a large", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparisons with the state-of-the-art methods and corresponding baselines. to 0.95. Additionally, the boxes AP for frequent (AP f ), common (AP c ), and rare (AP r ) categories are also reported, respectively.", "figure_data": "Implementation Details. We implement our method onDeformable DETR [39]. The ImageNet [3] pre-trainedResNet-50 and ResNet-101 [7] are adopted as the back-bone. The training is carried out on 8 RTX 3090 GPUs witha batch size of 2 per GPU. We train our model using the", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study of each component in our step-wise learning framework on the smooth-tail data. FT, KT indicate the fine-tuning and knowledge transfer, respectively.", "figure_data": "FTKTAP bAP rAP cAP f27.015.526.931.629.719.431.431.629.423.229.831.330.324.931.530.9", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study of each component in our knowledge transfer. 
SOQ indicates the shared object queries.", "figure_data": "SOQ L fm_dis L cls_disAP bAP rAP cAP f29.424.930.629.829.725.030.830.324.424.326.322.030.324.931.530.9", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study of different type of divisions.", "figure_data": "DivisionAP bAP rAP cAP f[1, 10) ∪ [10, -)30.124.731.130.8[1, 30) ∪ [30, -) (Ours)30.324.931.530.9[1, 50) ∪ [50, -)30.224.131.631.0[1, 100) ∪ [100, -)30.123.931.331.2[1, 10) ∪ [10, 100) ∪ [100, -)29.823.731.030.7[1, 10) ∪ [10, 30) ∪ [30, 100) ∪ [100, -)29.324.830.429.8N ex of C headN ex of C tailAP bAP rAP cAP f1003030.125.031.330.62003030.324.931.530.93003030.024.631.031.02001030.224.731.330.92003030.324.931.530.92005030.225.031.330.820010030.225.131.530.8", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study of exemplar memory size of D head .", "figure_data": "N ex of C headAP bAP rAP cAP f1029.624.430.830.23030.024.831.230.65030.324.931.530.910030.123.231.431.3", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablation study of exemplar memory size of D tail .Analysis of Exemplar Memory Size. We form D head which is dominant with a roughly balanced subset of the head classes C head and minored with a roughly balanced subset of the tail classes C tail . Similarly, we form D tail which is dominant with a roughly balanced subset of the tail classes C tail and minored with a balanced subset of the head classes C head . We denote N ex as the number of instances per category. For D head and D tail , we vary N ex of the head classes C head and the tail classes C tail and report the results in Tables", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Ablation study of step-wise RFS.instances per category of C tail in D head . Similarly, in D tail , we store 50 instances per category of C head and introduce all instances of C tail . This can eliminate the class imbalance between C head and C tail inside the exemplar sets and achieve a trade-off of the performance of all categories.", "figure_data": "MethodAP bAP rAP cAP fOurs w/o step-wise RFS29.619.031.631.2Ours30.324.931.530.9", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
Na Dong; Yongqiang Zhang; Mingli Ding; Gim Hee Lee
[ { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b0", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Manuel J Francisco M Castro; Nicolás Marín-Jiménez; Cordelia Guil; Karteek Schmid; Alahari", "journal": "", "ref_id": "b1", "title": "End-toend incremental learning", "year": "2018" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b2", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Ross Girshick", "journal": "", "ref_id": "b3", "title": "Fast r-cnn", "year": "2015" }, { "authors": "Ross Girshick; Jeff Donahue; Trevor Darrell; Jitendra Malik", "journal": "", "ref_id": "b4", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "year": "2014" }, { "authors": "Agrim Gupta; Piotr Dollar; Ross Girshick", "journal": "", "ref_id": "b5", "title": "Lvis: A dataset for large vocabulary instance segmentation", "year": "2019" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b6", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b7", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Ting-I Hsieh; Esther Robb; Hwann-Tzong Chen; Jia-Bin Huang", "journal": "", "ref_id": "b8", "title": "Droploss for long-tail instance segmentation", "year": "2021" }, { "authors": "Xinting Hu; Yi Jiang; Kaihua Tang; Jingyuan Chen; Chunyan Miao; Hanwang Zhang", "journal": "", "ref_id": "b9", "title": "Learning to segment the tail", "year": "2020" }, { "authors": "Bingyi Kang; Zhuang Liu; Xin Wang; Fisher Yu; Jiashi Feng; Trevor Darrell", "journal": "", "ref_id": "b10", "title": "Few-shot object detection via feature reweighting", "year": "2019" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "year": "" }, { "authors": " Harold W Kuhn", "journal": "Naval research logistics quarterly", "ref_id": "b12", "title": "The hungarian method for the assignment problem", "year": "1955" }, { "authors": "Banghuai Li", "journal": "", "ref_id": "b13", "title": "Adaptive hierarchical representation learning for long-tailed object detection", "year": "2022" }, { "authors": "Bo Li; Yongqiang Yao; Jingru Tan; Gang Zhang; Fengwei Yu; Jianwei Lu; Ye Luo", "journal": "", "ref_id": "b14", "title": "Equalized focal loss for dense long-tailed object detection", "year": "2022" }, { "authors": "Yu Li; Tao Wang; Bingyi Kang; Sheng Tang; Chunfeng Wang; Jintao Li; Jiashi Feng", "journal": "", "ref_id": "b15", "title": "Overcoming classifier imbalance for long-tail object detection with balanced group softmax", "year": "2020" }, { "authors": "Tsung Yi Lin; Piotr Dollar; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b16", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b17", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", 
"ref_id": "b18", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Wei Liu; Dragomir Anguelov; Dumitru Erhan; Christian Szegedy; Scott Reed; Cheng ; Yang Fu; Alexander C Berg", "journal": "", "ref_id": "b19", "title": "Ssd: Single shot multibox detector", "year": "2016" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b20", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": " Sylvestre-Alvise; Alexander Rebuffi; Georg Kolesnikov; Christoph H Sperl; Lampert", "journal": "", "ref_id": "b21", "title": "icarl: Incremental classifier and representation learning", "year": "2017" }, { "authors": "Joseph Redmon; Santosh Divvala; Ross Girshick; Ali Farhadi", "journal": "", "ref_id": "b22", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Sun Girshick; Jian", "journal": "", "ref_id": "b23", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Hamid Rezatofighi; Nathan Tsoi; Junyoung Gwak; Amir Sadeghian; Ian Reid; Silvio Savarese", "journal": "", "ref_id": "b24", "title": "Generalized intersection over union: A metric and a loss for bounding box regression", "year": "2019" }, { "authors": "Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio", "journal": "", "ref_id": "b25", "title": "Fitnets: Hints for thin deep nets", "year": "2014" }, { "authors": "Bo Sun; Banghuai Li; Shengcai Cai; Ye Yuan; Chi Zhang", "journal": "", "ref_id": "b26", "title": "Fsce: Few-shot object detection via contrastive proposal encoding", "year": "2021" }, { "authors": "Jingru Tan; Xin Lu; Gang Zhang; Changqing Yin; Quanquan Li", "journal": "", "ref_id": "b27", "title": "Equalization loss v2: A new gradient balance approach for long-tailed object detection", "year": "2021" }, { "authors": " Jasper Rr Uijlings; E A Koen; Theo Van De Sande; Arnold Wm Gevers; Smeulders", "journal": "International journal of computer vision", "ref_id": "b28", "title": "Selective search for object recognition", "year": "2013" }, { "authors": "Jiaqi Wang; Wenwei Zhang; Yuhang Zang; Yuhang Cao; Jiangmiao Pang; Tao Gong; Kai Chen; Ziwei Liu; Chen Change Loy; Dahua Lin", "journal": "", "ref_id": "b29", "title": "Seesaw loss for long-tailed instance segmentation", "year": "2021" }, { "authors": "Tao Wang; Yu Li; Bingyi Kang; Junnan Li; Junhao Liew; Sheng Tang; Steven Hoi; Jiashi Feng", "journal": "Springer", "ref_id": "b30", "title": "The devil is in classification: A simple framework for longtail instance segmentation", "year": "2020" }, { "authors": "Tong Wang; Yousong Zhu; Chaoyang Zhao; Wei Zeng; Jinqiao Wang; Ming Tang", "journal": "", "ref_id": "b31", "title": "Adaptive class suppression loss for long-tail object detection", "year": "2021" }, { "authors": "Xin Wang; Thomas E Huang; Trevor Darrell; Joseph E Gonzalez; Fisher Yu", "journal": "", "ref_id": "b32", "title": "Frustratingly simple few-shot object detection", "year": "2020" }, { "authors": "Yu-Xiong Wang; Deva Ramanan; Martial Hebert", "journal": "", "ref_id": "b33", "title": "Meta-learning to detect rare objects", "year": "2019" }, { "authors": "Jiaxi Wu; Songtao Liu; Di Huang; Yunhong Wang", "journal": "Springer", "ref_id": "b34", "title": "Multi-scale positive sample refinement for few-shot object detection", "year": "2020" }, { "authors": "Xiongwei Wu; Doyen Sahoo; Steven Hoi", "journal": "", 
"ref_id": "b35", "title": "Metarcnn: Meta learning for few-shot object detection", "year": "2020" }, { "authors": "Yue Wu; Yinpeng Chen; Lijuan Wang; Yuancheng Ye; Zicheng Liu; Yandong Guo; Yun Fu", "journal": "", "ref_id": "b36", "title": "Large scale incremental learning", "year": "2019" }, { "authors": "Gongjie Zhang; Zhipeng Luo; Kaiwen Cui; Shijian Lu", "journal": "", "ref_id": "b37", "title": "Meta-detr: Few-shot object detection via unified image-level meta-learning", "year": "2021" }, { "authors": "Xizhou Zhu; Weijie Su; Lewei Lu; Bin Li; Xiaogang Wang; Jifeng Dai", "journal": "", "ref_id": "b38", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 50.11, 678.71, 95.18, 12.32 ], "formula_id": "formula_0", "formula_text": "{y i } N i=1 = {(c i , b i )} N i=1" }, { "formula_coordinates": [ 4, 218.47, 235.37, 153.42, 479.57 ], "formula_id": "formula_1", "formula_text": "ŷ = {ŷ i } N i=1 = {(p(c i ), bi )} N i=1" }, { "formula_coordinates": [ 4, 313.84, 330.41, 246.72, 39.87 ], "formula_id": "formula_2", "formula_text": "L hg (y, ŷ) = N i=1 [L cls (c i , pσ(i) (c i )) + 1 {ci =∅} L box (b i , bσ(i) )],(1)" }, { "formula_coordinates": [ 5, 55.69, 376.43, 230.67, 27.03 ], "formula_id": "formula_3", "formula_text": "L fm_dis = 1 2N head w i=1 h j=1 c k=1 mask head ij f unify ijk -f head ijk 2 ,(2)" }, { "formula_coordinates": [ 5, 81.21, 682.74, 205.15, 13.76 ], "formula_id": "formula_4", "formula_text": "L cls_dis = L kl_div (log(p unify shared (c i )), phead (c i )),(3)" }, { "formula_coordinates": [ 5, 337.26, 454.96, 207.85, 9.65 ], "formula_id": "formula_5", "formula_text": "L total = L hg (y, ŷ) + λ fm L fm_dis + λ cls L cls_dis .(4)" } ]
10.18653/v1/2022.acl-long.393
2023-05-22
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b8", "b5", "b27", "b14", "b3", "b0", "b15", "b11", "b31", "b25", "b26" ], "table_ref": [], "text": "End-to-end automatic speech recognition (ASR) systems (Chorowski et al., 2015;Chan et al., 2016;Rao et al., 2017;Gulati et al., 2020;Boulianne, 2022;Zhang et al., 2022) have reached remarkable performance in recent years. However, accurately transcribing contextually relevant phrases remains a challenge for current ASR systems (Alon et al., 2019;Han et al., 2022). These phrases often take the form of named entities such as personal names, place names, and organization names, which are frequently encountered in speech assistant applications or conference speeches. Due to their infrequent occurrence in the training data, ASR models encounter difficulties in transcribing these entities during inference. As a result, these entities\n安 徽 铜 铃 自 来 他 output:\nan zi lai ta hui ling tong pinyin:\n他 来 自 安 徽 铜 陵 gold:\nHe comes from An Hui Tong Ling english: are typically transcribed with homophonic or nearhomophonic errors, which may reduce the comprehensibility of the transcription. Improving the accuracy of transcribing named entities is of paramount importance as they carry significant semantic information. It not only helps in obtaining more coherent transcriptions but also has significant implications for downstream natural language processing (NLP) tasks such as information retrieval and spoken language understanding (Ganesan et al., 2021;Wu et al., 2022).\nIn recent years, various approaches have been proposed to enhance the accuracy of ASR for specific phrases and entities. One of the most straightforward approaches is to provide the ASR model with a named entity (NE) dictionary, and assume that entities in the dictionary are more likely to appear in the output. Models are encouraged to prioritize these entities during decoding, thus reducing the error rate when transcribing specific phrases and entities. These approaches are collectively known as contextual ASR. The most classic among them is CLAS (Pundak et al., 2018), which utilizes an LSTM as a dictionary encoder to en-code the dictionary and employs an attention mechanism to implicitly guide the model to attend to the corresponding phrase in the dictionary during auto-regressive decoding. Recently, in contrast to implicitly guiding the model to attend to entities, Zhang and Zhou (2022) explicitly guide the model to attend to specific entities at each decoding step by adding a bias loss. Experiments have shown that explicitly guiding the model to attend to relevant entities can yield better results.\nAlthough contextual ASR approaches have greatly reduced the error rate of transcribing entities by introducing a contextual entity dictionary, they still output incorrect tokens when transcribing entities. As depicted in Figure 1, the ASR system incorrectly transcribed the location entity \"铜陵\" (Tong Ling) as \"铜铃\" (copper bell). This error occurred due to the fact that \"铜陵\" and \"同龄\" are homophones in Mandarin and the phrase \"铜铃\" is a common term that appears far more frequently than \"铜陵\" in the training data. As a result, the model is more likely to transcribe \"铜陵\" as \"同 龄\".\nActually, the issue with previous contextual ASR approaches lies in their reliance on token-level mechanisms. Token-level approaches predict individual tokens separately from the vocabulary, which poses a challenge in distinguishing between homophones and near-homophones. 
This issue becomes more pronounced when transcribing entities, as they are often infrequent in training data, making it difficult for token-level ASR models to transcribe them accurately. As a result, token-level ASR models may rely on more commonly occurring, phonetically similar tokens to transcribe the entity, resulting in transcriptions that are phonetically similar but semantically different from the correct ones.\nIn this paper, we propose a new contextual ASR approach called CopyNE. Unlike previous approaches, CopyNE employs a span-level copying mechanism that directly copies the entire entity from the contextual entity dictionary, thus ensuring the integrity of the token span as a complete entity. Specifically, we introduce a copy loss during training, which guides the model to select the appropriate entity from the contextual entity dictionary for copying. During inference, our CopyNE model has the flexibility to either predict a token from the vocabulary or copy an entity from the contextual entity dictionary at each decoding step. By copy-ing multiple tokens simultaneously, we can avoid errors caused by homophones or near-homophones.\nOur CopyNE model shows significant improvements in overall text transcription under entity-rich scenarios on two widely used Mandarin datasets, Aishell and ST-cmds. It achieves relative CER reductions of 23.1% and 27.8% on Aishell and ST-cmds, respectively. When it comes to transcribing entities, our approach demonstrates even more remarkable improvements in the NE-CER metric, with relative reductions of 55.4% and 53.9% on Aishell and ST-cmds, respectively. Even when using the powerful pre-trained ASR model Whisper (Radford et al., 2022), our approach still achieves considerable relative CER reductions. We will release our code, configuration files, and models at https://github.com/.\n2 Related Works" }, { "figure_ref": [], "heading": "Contextual ASR", "publication_ref": [ "b25", "b0", "b19", "b33", "b16" ], "table_ref": [], "text": "Contextual ASR endeavors to integrate dictionaries into the transcription process, aiming to enable the model to take into account the entities or phrases present in the dictionary and enhance the accuracy of transcribing them in ASR. Pundak et al. (2018) firstly proposed the approach, which encoded the dictionary with LSTM, and let the model implicitly attend to the entities and phrases in the dictionary during decoding. Alon et al. (2019) further enhance the encoding ability of the dictionary encoder by focusing more on entities and use phonetically similar phrases as negative examples. Zhang and Zhou (2022) recently introduced a additional bias loss during training which explicitly encourages the model to attend to the entities and phrases in the dictionary, and get better performance. Due to its simplicity and effectiveness, this approach has been adopted in other ASR systems, such as Rnn-Transducer models (Jain et al., 2020;Yang et al., 2023), and Continuous Integrate-and-Fire (CIF) models (Han et al., 2021) 1 .\nHowever, all of these mentioned approaches are token-level and still struggle with homophonic and near-homophonic token issues. In this paper, we propose the CopyNE model, which further advances contextual ASR by enabling span-level copying from NE dictionary, effectively mitigating homophonic and near-homophonic token issues." }, { "figure_ref": [], "heading": "The Copy Mechanism", "publication_ref": [ "b13", "b32", "b21" ], "table_ref": [], "text": "The copy mechanism is a prevalent technique in NLP. 
For instance, it is commonly used in text summarization to copy key phrases from the input text (Gu et al., 2016;Xu et al., 2020) and in grammatical error correction to copy correct text from the input to the target (Zhao et al., 2019b). In addition to copying from the input text, Lan et al. (2023) recently introduced the copy mechanism in text generation tasks to select text segments from other documents to generate target text. In this paper, we propose CopyNE, which introduces the copy mechanism in contextual ASR. Our model can copy entire entities from a given entity dictionary, preserving the correctness of generated text as a whole entity." }, { "figure_ref": [], "heading": "The CTC-Transformer Model", "publication_ref": [ "b17", "b20", "b23", "b24", "b12", "b30", "b20" ], "table_ref": [], "text": "In this work, we build our proposed approach on the end-to-end CTC-Transformer model, since it is the most widely used and achieves competitive performance in the ASR field (Hori et al., 2017;Kim et al., 2017;Miao et al., 2020;Omachi et al., 2021;Gong et al., 2022). However, it is worth noticing that our idea can be applied to other ASR approaches.\nThe CTC-Transformer is built upon the seq-toseq Transformer (Vaswani et al., 2017), with a connectionist temporal classification (CTC) layer added after the audio encoder. As shown in Figure 2, it takes a sequence of acoustic frames x = (x 1 , ..., x T ) as input and generates the corresponding transcription text y = (y 1 , ..., y U ) as output. The model consists of two main components: an encoder and a decoder. First, the encoder encodes the acoustic frames x into hidden states h = (h 1 , ..., h T ). Then, the decoder predicts the target sequence y in an auto-regressive manner. At each decoding step u, the decoder predicts the next target token y u+1 based on the encoder's output h and the previously predicted tokens y ≤u = (y 1 , ..., y u ). This process is expressed as follows:\nh = TransformerEncoder(x)\n(1)\nd u = TransformerDecoder(y ≤u , h) (2) P (y u+1 |y ≤u ) = softmax(W d u + b) (3)\nHere, d u represents the hidden state at step u, and P (y u+1 |y ≤u ) is the posterior distribution of predicting token y u+1 . W ∈ R |V|×d and b ∈ R |V| are learned parameters of the model, where V is the token vocabulary, and |V| is the size of the vocabulary.\nThe loss of Transformer, L trans (y), comes from minimizing the log probability of the target sequence y.\nL trans (y) = - U -1 u=0 log P (y u+1 |y ≤u ) (4)\nIn addition to the original loss in the Transformer, the CTC loss is also applied. CTC aligns each acoustic frame with a token from left to right. For a given target sequence y, there may be multiple valid alignments of acoustic frames to y. The CTC loss is derived from maximizing the sum of these valid alignments. The CTC loss has been proved that it can enhance the representational capacity of the audio encoder and improve the stability of the model in noisy environments (Kim et al., 2017). 
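To make the hybrid objective concrete, the following is a minimal PyTorch-style sketch of Eqs. (1)-(4) together with the CTC term; module names, batching, and padding handling are illustrative assumptions rather than the actual implementation.

```python
import torch
import torch.nn.functional as F

def transformer_and_ctc_losses(encoder, decoder, out_proj, ctc_proj,
                               frames, frame_lens, tokens, token_lens):
    """Sketch of the two objectives of a CTC-Transformer (shapes assumed)."""
    h = encoder(frames)                                    # Eq. (1), (B, T, d)
    d = decoder(tokens[:, :-1], h)                         # Eq. (2), teacher forcing
    log_p = F.log_softmax(out_proj(d), dim=-1)             # Eq. (3), (B, U-1, V)
    # Eq. (4): negative log-likelihood of the shifted target sequence
    l_trans = F.nll_loss(log_p.transpose(1, 2), tokens[:, 1:], reduction="mean")
    # CTC loss over the encoder states, marginalising all valid alignments
    ctc_log_p = F.log_softmax(ctc_proj(h), dim=-1).transpose(0, 1)  # (T, B, V)
    l_ctc = F.ctc_loss(ctc_log_p, tokens, frame_lens, token_lens)
    return l_trans, l_ctc
```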
Finally, the overall loss is the a weighted sum of the L trans (y) and L ctc (y), as follows:\nL(y) = λL trans (y) + (1 -λ)L ctc (y) (5)\nwhere λ is a hyper-parameter that determines the relative weight of each loss term.\nDuring inference, the model selects the most probable transcription using beam search as follows:\nŷ = arg max y ( u log P (y u+1 |y ≤u ))(6)\nHere, there are many ways to use scores during decoding, such as combining CTC scores and Transformer scores as in training, or using CTC-prefix beam search followed by re-scoring with Transformer to select the optimal result. However, to facilitate comparison with most previous works, we use the simplest decoding strategy, as shown in Equation 6." }, { "figure_ref": [], "heading": "Our CopyNE Model", "publication_ref": [], "table_ref": [], "text": "This section describes our proposed CopyNE model. The basic idea is that the model incorporates a contextual NE dictionary as external knowledge and can choose to directly copy named entities from the dictionary. During training, a copy loss is designed to encourage the model to copy entities from the dictionary when the following n-gram appears in the dictionary. During inference, at each generation step, the model can either predict a single token or directly copy a entity (token span) from a given dictionary." }, { "figure_ref": [ "fig_1" ], "heading": "The Model Framework", "publication_ref": [ "b25", "b19", "b25" ], "table_ref": [], "text": "Figure 3 illustrates the framework of our CopyNE model, which shares the same audio encoder as the basic CTC-Transformer model, but with a distinct decoder. In the decoder, we introduce an extra NE encoder that takes the NE dictionary as input and encodes into entity representations. Then, we employ a dot-product attention module to derive copy probabilities based on the obtained entity representations, which are then aggregated to form the overall dictionary (dict) representation. The decoder can utilize copy probabilities to select entities for copying, and it can also leverage the dictionary representation to aid in predicting the next token.\nNE Representation. We denote a NE dictionary as E = (e 0 , e 1 , ..., e N ). We use e 0 = ∅ as a pseudo entity to handle the case when the text to be transcribed has no relation to any entity and the model should not copy any entity at current step.\nFor each entity e i , we apply a multi-layer LSTM to the char sequence and use the last hidden state of the NE encoder as the entity representation z.\nz i = LSTM(e i )(7)\nThis is a popular practice in previous contextual ASR works (Pundak et al., 2018;Jain et al., 2020;Zhang and Zhou, 2022).\nCopy Probability. Once the contextual entity representations are obtained, the copy probability is computed by a dot-product attention mechanism. It is used to determine which entities to copy. First, we compute the attention score a e u for entity e as follows:\na e u = (W q d u ) (W k z) √ d (8)\nwhere W q , W k ∈ R da×d are two learned parameters. d a denotes the dimension of the attention. After that we obtain the attention probability P c (e|y ≤u ) for entity e by softmax.\nP c (e|y ≤u ) = exp(a e u ) e ∈E exp(a e u )(9)\nHere, P c (e|y ≤u ) not only represents the attention probability of e but also naturally serves as the copy probability for the entity. And the copy probabilities are trained to guide the model select the correct entities to copy from the dictionary. 
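The NE encoder and the dot-product copy attention of Eqs. (7)-(9) might be realized along the following lines; this is a hypothetical PyTorch sketch, and the embedding size, hidden size, and padding handling are assumptions.

```python
import torch
import torch.nn as nn

class CopyAttention(nn.Module):
    """Sketch of Eqs. (7)-(9): encode each dictionary entry with an LSTM and
    score it against the current decoder state with dot-product attention."""
    def __init__(self, d_model, d_attn, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.ne_encoder = nn.LSTM(d_model, d_model, num_layers=3, batch_first=True)
        self.w_q = nn.Linear(d_model, d_attn, bias=False)
        self.w_k = nn.Linear(d_model, d_attn, bias=False)

    def encode_entities(self, entity_token_ids):
        # entity_token_ids: (N, L) padded char ids of the N dictionary entries
        _, (h_n, _) = self.ne_encoder(self.embed(entity_token_ids))
        return h_n[-1]                              # z: (N, d_model), Eq. (7)

    def forward(self, d_u, z):
        # d_u: (B, d_model) decoder state at step u; z: (N, d_model)
        scores = self.w_q(d_u) @ self.w_k(z).t()    # Eq. (8), (B, N)
        scores = scores / (self.w_k.out_features ** 0.5)
        return scores.softmax(dim=-1)               # Eq. (9): copy probabilities
```

In this sketch the pseudo entity e_0 = ∅ is simply one more row of z, so the softmax in Eq. (9) always reserves probability mass for the no-copy case.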
During inference, we use the copy probabilities to select the entities for copying.\nDict Representation. With copy (attention) probabilities, we can obtain the dict representation Z u at decoding step u. It is used to help the generation of subsequent tokens. Specifically, Z u in computed by weighted summing the entity representations with the copy (attention) probabilities:\nZ u = e∈E P c (e|y ≤u )z(10)\nDict-enhanced Prediction. Finally, we get the overall dict representation and copy probabilities. Following Pundak et al. (2018), the dictionary representation is applied to help the prediction of the next token. So Equation 3 is extended as follows:\nP (y u+1 |y ≤u , E) = softmax(W [d u , Z u ] + b) (11)" }, { "figure_ref": [ "fig_1" ], "heading": "Training", "publication_ref": [ "b0" ], "table_ref": [], "text": "During training, to guide the model in selecting correct entities from the NE dictionary for copying, we introduce an additional copy loss L copy . First, based on the ground truth transcription y and the NE dictionary, we construct a copy target σ for each decoding step u, telling the model whether to copy an entity from the dictionary or not, and which one to copy. Then we compute the copy loss L copy according to the copy target σ and the copy probability P c (σ|y ≤u ).\nThe Computation of Copy Loss. Provided that we have an NE dictionary E b , we construct a copy target, denoted as σ u+1 , for decoding step u, which informs the model whether to copy an entity from the dictionary and which one to copy. In order to build the copy target, we perform maximum matching on the transcription text y from left to right based on the dictionary E b . If the text span y i,j = (y i , ..., y j ) matches the k-th entity e k in E b , then we set the copy target σ i = e k , and σ i+1∼j = ∅. This indicates that the model can copy the k-th entity from the dictionary at decoding step i -1, but cannot copy any entity from decoding step i to j -1. When it comes to a span of length 1, i.e., i = j, during the left-to-right maximum matching process, we also set σ i to ∅2 . For example, in the instance shown in Figure 3, the span \"安徽 (an hui)\" matches the second entity in the dictionary, and the span \"铜陵 (tong ling)\" matches the first entity in the dictionary. This means that at decoding steps 0 and 2, the model can choose to copy the second and first entities from the dictionary, respectively. Therefore, σ 1 = e 2 and σ 3 = e 1 , while σ 2 = ∅ and σ 4 = ∅.\nAfter generating all the copy targets σ, we can compute the copy loss as follows:\nL copy (σ) = - U -1 u=0 log P c (σ u+1 |y ≤u )(12)\nwhere P c (σ u+1 |y ≤u ) is the copy probability computed in Equation 9, meaning the probability of copying entity σ u+1 at decoding step u.\nFinally, the loss in our CopyNE model is formed as follows:\nL = λL trans (y) + (1 -λ)L ctc (y) + L copy (σ)(13)\nDictionary Construction. To construct the copy target and compute the copy loss, we should first build a contextual NE dictionary for training. Provided that the entities have been labeled in the dataset, the most straightforward way is to use all entities in the training set as the NE dictionary E T . However, typically the training set contains tens of thousands of different entities. Computing the contextual representation for all entities in E T during each batch iteration not only consumes a significant amount of computational resources and GPU memory. 
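The left-to-right maximum matching used to build the copy targets σ can be sketched as follows; this is plain Python, and the representation of entities as token tuples with index 0 standing for the pseudo entity ∅ is our own assumption. Positions inside a matched span keep the no-copy label, so the loss in Eq. (12) only rewards copying at the step where the entity starts.

```python
def build_copy_targets(tokens, dictionary):
    """Greedy left-to-right maximum matching of the reference `tokens`
    against `dictionary`, a dict mapping entity token tuples to indices
    (index 0 is reserved for the pseudo entity, i.e. no copy)."""
    max_len = max((len(e) for e in dictionary), default=1)
    targets = [0] * len(tokens)            # 0 = no copy at this position
    i = 0
    while i < len(tokens):
        matched = False
        # try the longest spans first; spans of length 1 are never copied
        for j in range(min(len(tokens), i + max_len), i + 1, -1):
            span = tuple(tokens[i:j])
            if span in dictionary:
                targets[i] = dictionary[span]   # copy this entity at the step
                i = j                           # that generates position i
                matched = True
                break
        if not matched:
            i += 1
    return targets
```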
Moreover, when the dictionary includes numerous irrelevant entities, it is difficult for the model to learn the correct distribution of the copy probability, which may lead to unstable training. Therefore, we build a smaller NE dictionary E b for each data batch. However, please noted that during inference, to align with real-world scenarios where we do not have prior knowledge of the entities present in the transcription, we utilize all entities from the test set as the dictionary. Firstly, to construct E b , we extract all entities in the instances of this batch and add them to the dictionary. Then, following Alon et al. (2019) and Zhang and Zhou (2022), for instances that do not contain any entities, we randomly select one or two substrings of length 2 or 3 from the transcription and add them to the dictionary as pseudo-entities.\nSuppose the dictionary already contains m entities, either real entities and pseudo n-gram entities. In order to enable the model to select the correct entities for copying during inference from a wide range of entities, we incorporate additional negative examples when constructing the dictionary. This facilitates the training of the model to better discriminate between correct and incorrect entities, enhancing its ability to make accurate copying decisions. Specifically, we sample β • m entities as negative examples from E T . We utilize the parameter β to control the number of negative examples. Thus, we get the final dictionary for this batch which contains a total of (β + 1) • m entities." }, { "figure_ref": [ "fig_1" ], "heading": "Inference", "publication_ref": [ "b4", "b6", "b4", "b6", "b16", "b16", "b25", "b26" ], "table_ref": [], "text": "During inference, our CopyNE model produces the transcription in an auto-regressive manner. However, unlike previous token-level approaches that predict a token from the token vocabulary at each step, our model has the flexibility to predict either a token from the vocabulary or an entity from the NE dictionary. In the latter case, the model predicts multiple tokens at one step. If the text to be transcribed contains an entity and the entity exists in the dictionary, CopyNE model can directly copy all the tokens of that entity. This approach can potentially avoid errors caused by homophonic or near-homophonic tokens that occur when predicting multiple tokens separately. As shown in Figure 3, the text contains two entities \"安徽\" and \"铜 陵\", and the entities exist in the dictionary. Therefore, our CopyNE model can avoid homophonic and near-homophonic errors by copying all tokens of two entities from the dictionary, rather than generating them token by token.\nSpecifically, at decoding step u, our prediction is based on both the model's probability for a token v, i.e., P (v|ŷ≤ u, E), and the copy probability for an entity e, i.e., P c (e|ŷ ≤u ). The former represents the probability of predicting a token v from the token vocabulary, while the latter is normalized on all entities, originally indicating the attention probability over entity e, which can be naturally interpreted as the probability of copying entity e from the dictionary. 
To consider both probabilities on the same scale, we take use of the copy probability of ∅, i.e., P c (∅|ŷ ≤u ), and re-normalize the probabilities as follows:\nQ(i|ŷ ≤u ) = P c (∅|ŷ ≤u )P (i|ŷ ≤u , E), i ∈ V P c (i|ŷ ≤u ), i ∈ E (14)\nHere, to ensure the sum of the probabilities of all elements is 1, we use α u,∅ as a prior probability, representing the probability of the text to be transcribed has no relation with the entities in the dictionary and the text should be generated from the token vocabulary. If the element is from the token vocabulary V, we obtain the probability by multiplying the prior probability and the model's probability for the token. Otherwise, we use the copy probability directly.\nHowever, in our experiments, we observed that the model occasionally selects irrelevant entities for copying. To enhance the quality of copying, we introduce a confidence threshold γ during decoding to filter out low-confidence copies. Specifically, we set P c (i|ŷ ≤u ) = 0, i ∈ E, and P c (∅|ŷ ≤u ) = 1 when max{P c (i|ŷ ≤u )|i ∈ E} < γ. This means that if the model's maximum copy probability over the entities is less than γ, it is prevented from copying entities from the dictionary and instead generates tokens from the original token vocabulary. In section 5.1, we discuss the influence of the γ in detail.\nFinally, we use beam search to select the best element at each decoding step to form the final prediction3 .\nŷ = arg max y ( u log Q(i|y ≤u ))(15)\n5 Experiments Datasets and Evaluation. Experiments are conducted on Aishell (Bu et al., 2017;Chen et al., 2022) and ST-cmds4 , which are two widely used Mandarin datasets. Aishell was first released by Bu et al. (2017), they invited 400 speakers to record about 150 hours of speech. Chen et al. ( 2022) further annotated entities in each transcription text. ST-cmds was built based on commonly used online chatting speech which contains about 110 hours of speech. In our experiments with Aishell, we directly use the entities released by Chen et al. (2022) to build the contextual entity dictionary. Since entities in ST-cmds were not labeled, we use HanLP5 to get three types of entities: person, location, and organization. Furthermore, to compare the performance of different methods in entity-rich scenarios, following Han et al. (2021), we extract instances containing entities from the development and test sets of Aishell and ST-cmds, forming Aishell-NE Dev, Aishell-NE Test, ST-cmds-NE Dev, and ST-cmds-NE Test. Detailed statistics of datasets can be viewed in §B.\nCharacter Error Rate (CER) is typically used as an evaluation metric for overall system performance in Mandarin ASR. In this paper, in addition to CER, we use another important metric called NE-CER (Han et al., 2021), which evaluates the model's accuracy in transcribing entities. The predicted hypothesis and reference are first aligned using the minimum edit distance algorithm, and then NE-CER is calculated by computing the CER between the entity text in the reference and the corresponding text in the hypothesis.\nExperimental Settings. The parameter setting in our work is the same as that in most previous ASR works, and the detailed descriptions can be found in §C. To ensure a fair comparison with prior works, we carefully reproduced the CLAS (Pundak et al., 2018) and CBA (Zhang and Zhou, 2022) using the same structure as our model, including the same audio encoder, decoder, and contextual encoder, among others. 
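Equation (14) and the confidence threshold γ can be folded into a single scoring function for beam search; the fragment below is a hypothetical PyTorch sketch, with tensor layouts and the small epsilon for numerical stability as assumptions.

```python
import torch

def merge_scores(token_log_probs, copy_probs, gamma=0.9):
    """Sketch of Eq. (14) with the confidence threshold.

    token_log_probs: (V,) log P(v | y_<=u, E) over the token vocabulary.
    copy_probs: (N+1,) copy distribution, index 0 holding P_c(null | y_<=u).
    """
    p_null, p_entities = copy_probs[0], copy_probs[1:]
    if p_entities.numel() == 0 or p_entities.max() < gamma:
        # low-confidence copies are suppressed: fall back to token prediction
        p_null = torch.ones_like(p_null)
        p_entities = torch.zeros_like(p_entities)
    token_scores = torch.log(p_null) + token_log_probs     # Q(i), i in V
    entity_scores = torch.log(p_entities + 1e-12)           # Q(i), i in E
    return torch.cat([token_scores, entity_scores])         # scored jointly in beam search
```

A hypothesis that picks an entity then advances the decoder past all of that entity's tokens before the next step, which is what lets a single decision cover a whole named entity.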
Moreover, to verify the effectiveness of our approach on pre-trained large models, we conducted experiments on Whisper (Radford et al., 2022). Specifically, we use the Whisper model as our transformer encoder and decoder." }, { "figure_ref": [ "fig_2" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "Analysis about γ. We first investigate the influence of the threshold of copying confidence γ. Figure 4 displays the CER variation of our CopyNE on the Aishell and Aishell-NE dev sets for different γ values. Our results reveal that the model's effectiveness in using the copy mechanism is impaired when the confidence is low. And the CER initially decreases as γ increases. As the threshold grows, the model is constrained to perform copying actions when it has a higher confidence, which to some extent improves the reliability of the copying process. However, when the threshold becomes larger (i.e., above 0.9), the model has limited chances to choose to copy entities, leading to a rise in the error rate. This is due to the difficulty in triggering the copying mechanism, which makes it challenging for our approach to be effective. These findings demonstrate that γ has a significant impact, and a appropriate γ value can enhance the robustness of our model. Therefore, we set γ to 0.9 in all subsequent experiments and discussions. Overall Results. To evaluate the overall performance of our approach, we focused on the CER of the models on the four datasets. Fortunately, as shown in Table 1, although our approach is designed to be more biased towards entities, the overall performance of the model did not decrease, and even showed significant improvement. In typ- When it comes to the comparison between our CopyNE model and prior works that used NE dictionary too, our model also performs better. As shown in the comparison results between CLAS and CTC-Transformer, introducing contextual entity dictionary and implicitly attending to the entities can really lead to some improvements. However, as seen in the comparison with CBA, explicitly attending to the entities in the dictionary through bias loss can result in greater improvements. Finally, our CopyNE model outperforms CBA's token-level bias loss and decoding with our span-level copy loss and decoding. These results demonstrate that our CopyNE can better utilize contextual entity dictionary to achieve superior performance.\nResults about Transcribing Entities. Our main goal in this paper is to improve the accuracy of transcribing entities. Therefore, in the following analysis, we will focus on the performance of different methods in transcribing entities. From the results in Table 2, we can see that our approach achieved significant improvements in entity transcription compared to the baseline and previous approaches. Even when compared to CBA, our CopyNE achieved a relative NE-CER reduction of 55.4% on the Aishell-NE Test and 53.9% on the ST-cmds-NE Test. This indicates the superiority of the span-level copying mechanism employed in our CopyNE model. Unlike previous methods that focus on individual tokens within the entity, our CopyNE directly considers the entire entity, and when a higher degree of attention is paid to a specific entity in the dictionary, it simply copies all the tokens of that entity as the result. 
This approach can effectively reduce the error rate of In terms of Whisper-based scenarios, our approach also brings considerable improvements, achieving a relative reduction of 33.9% on the Aishell-NE Test and 40.5% on the ST-cmds-NE Test." }, { "figure_ref": [], "heading": "Qualitative Analysis", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "CopyNE demonstrates significant improvements in both CER and NE-NER metrics. To gain further insight into CopyNE's performance, we conduct a qualitative analysis of its generations. Table 3 shows examples of transcriptions from different ASR models, including CLAS, CBA, and CopyNE. CLAS and CBA models mistakenly recognized \"铜陵\" as \"同龄\" due to their identical pronunciations, highlighting the issue of homophones in token-level decoding and entity transcribing. Even in the second example, where CBA successfully identified the correct entity \"杨丙卿\" from the dictionary and produced a transcription that is close to gold, it still made a mistake by transcribing \"炳\" instead of \"丙\" due to homophones. In contrast, CopyNE utilized a copy mechanism and performed span-level decoding, which leverages the contextual knowledge of entities to mitigates the homophone and near-homophone issues. For example, in the transcription \"以及拥有陈露的女单 项目\", CLAS and CBA wrongly transcribed \"程 度\" and \"成路\" instead of \"陈露\", while CopyNE correctly copied the entity from the provided dictionary. These results demonstrate that CopyNE can effectively transcribe entities and mitigate the homophone and near-homophone issues." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose CopyNE to improve the accuracy of ASR in transcribing contextual entities. CopyNE can copy all the tokens of an entity from the contextual entity dictionary at once, ensuring the integrity of the token span as a complete entity, and reducing homophonic and near-homophonic errors that occur when predicting multiple tokens separately. Our approach demonstrated significant improvements in both CER and NE-CER on the Aishell and ST-cmds datasets. Overall, CopyNE represents a significant advancement in contextual ASR and offers a promising direction for future research in this field." }, { "figure_ref": [], "heading": "Appendices A Other Approaches", "publication_ref": [ "b7", "b28", "b10", "b22", "b2", "b18", "b9", "b1", "b29" ], "table_ref": [], "text": "Since the the ultimate output of ASR is text, it is natural to utilize information from the text modality to help the transcribing of entities. Apart from the contextual ASR, there are two other approaches that leverage text information to assist ASR: LM-fusion and joint pre-training (Joint-PT).\nLM-fusion. LM-fusion involves utilizing a pretrained language model to help ASR models. LMfusion (Chorowski and Jaitly, 2016;Sriram et al., 2017;Zhao et al., 2019a) combines scores from the ASR model and language model during beam search to generate text that conforms better to the target language style. Implementing LM-fusion requires an additional language model to be trained, which can be time-consuming and requires a significant amount of data from target domain to create a specialized language model for a specific domain. Compared to contextual ASR, which only needs a few key phrases from the target domain, LM-fusion requires more time and effort to train a specialized language model. Joint-PT. 
Recently, self-supervised pre-training of large-scale models has achieved significant improvements in single-modal tasks (Devlin et al., 2019;Lewis et al., 2020;Baevski et al., 2020;Hsu et al., 2021). Researchers have started to explore joint pre-training of speech and text models, hoping to leverage the information from both modalities (Chung et al., 2021;Ao et al., 2022;Tang et al., 2022;Zhang et al., 2022). However, all of these models require large amounts of unlabeled data for both modalities, as well as ASR data for alignment. In addition, compared to contextual ASR, Joint-PT models have very large parameters that require significant computational resources for training and deployment." }, { "figure_ref": [], "heading": "B Datasets", "publication_ref": [], "table_ref": [], "text": "Table 4 shows the detailed statistics of the datasets used in our experiments. \"Sent\" means the number of instances. \"NE\" is the number of named entities in the dataset and also the size of the contextual entity dictionary used during inference. " }, { "figure_ref": [], "heading": "C Parameter settings", "publication_ref": [], "table_ref": [], "text": "We use 80-dimensional log-mel acoustic features with 25ms frame window and 10ms frame shift. The log-mel features are first fed into a 2D convolutional layer for downsampling and mapped to 256 dimensions before being inputted into the Audio Encoder. Both the Audio Encoder and Decoder consist of 6 Transformer layers with 4 attention heads each. The Contextual Encoder is composed of three LSTM layers, with the input being a randomly initialized 256-dimensional embedding vector and the hidden size being 512. And the dotproduct attention is computed at 512 dimensions. The relative weight λ in 13 is set to 0.7. The sample rate β used in training is set to 2." } ]
Recent years have seen remarkable progress in automatic speech recognition (ASR). However, traditional token-level ASR models have struggled with accurately transcribing entities due to the problem of homophonic and nearhomophonic tokens. This paper introduces a novel approach called CopyNE, which uses a span-level copying mechanism to improve ASR in transcribing entities. CopyNE can copy all tokens of an entity at once, effectively avoiding errors caused by homophonic or nearhomophonic tokens that occur when predicting multiple tokens separately. Experiments on Aishell and ST-cmds datasets demonstrate that CopyNE achieves significant reductions in character error rate (CER) and named entity CER (NE-CER), especially in entity-rich scenarios. Furthermore, even when compared to the strong Whisper baseline, CopyNE still achieves notable reductions in CER and NE-CER. Qualitative comparisons with previous approaches demonstrate that CopyNE can better handle entities, effectively improving the accuracy of ASR.
CopyNE: Better Contextual ASR by Copying Named Entities
[ { "figure_caption": "Figure 1 :1Figure 1: An Example with Homophonic Errors. Pinyin is the Mandarin pronunciation of each token. The red text indicates the wrongly predicted token.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 2: The CTC-Transformer Model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Variation of CER with γ on the Aishell and Aishell-NE Dev sets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "The overall CER (%) on the all four datasets.", "figure_data": "ModelAishellAishell-NEST-cmdsST-cmds-NEDev Test Dev TestDevTestDevTestJoint CTC-Transformer6.12 6.70 7.36 9.00 10.63 10.56 13.67 13.63CLAS (Pundak et al., 2018)6.04 6.72 7.06 8.73 10.10 10.09 12.64 12.85CBA (Zhang and Zhou, 2022) 6.11 6.56 6.73 8.00 10.73 10.72 12.69 12.43Our CopyNE5.59 6.35 5.36 6.92 9.769.899.909.84Whisper (Radford et al., 2022) 5.10 5.55 6.19 7.33 8.047.94 10.54 10.59Whisper + Our CopyNE5.02 5.54 5.23 6.52 7.457.368.378.33ModelAishell-NE ST-cmds-NEDev Test Dev TestJoint CTC-Transformer11.64 14.03 21.63 21.41CLAS (Pundak et al., 2018)11.24 13.12 19.70 20.10CBA (Zhang and Zhou, 2022) 7.78 9.44 15.72 15.92Our CopyNE3.00 4.21 7.60 7.34Whisper (Radford et al., 2022) 10.38 13.28 18.54 18.93Whisper + Our CopyNE6.93 8.79 11.37 11.27", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The NE-CER (%) on Aishell-NE and ST-cmds-NE.", "figure_data": "transcribing entities compared to previous token-level approaches. With the copy mechanism, ourCopyNE model has reached a comparable level ofentity transcription capability to that of transcribinggeneral text.", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Examples of different models. Red text in the transcription indicates errors, while text enclosed in square brackets in the CopyNE results represents entities that were directly copied from the dictionary.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
Shilin Zhou; Zhenghua Li; Yu Hong; Min Zhang; Zhefeng Wang; Baoxing Huai
[ { "authors": "Uri Alon; Golan Pundak; Tara N Sainath", "journal": "IEEE", "ref_id": "b0", "title": "Contextual speech recognition with difficult negative training examples", "year": "2019" }, { "authors": "Junyi Ao; Rui Wang; Long Zhou; Chengyi Wang; Shuo Ren; Yu Wu; Shujie Liu; Tom Ko; Qing Li; Yu Zhang; Zhihua Wei; Yao Qian; Jinyu Li; Furu Wei", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "SpeechT5: Unified-modal encoderdecoder pre-training for spoken language processing", "year": "2022" }, { "authors": "Alexei Baevski; Yuhao Zhou; Abdelrahman Mohamed; Michael Auli", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations", "year": "2020" }, { "authors": "Gilles Boulianne", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Phoneme transcription of endangered languages: an evaluation of recent ASR architectures in the single speaker scenario", "year": "2022" }, { "authors": "Hui Bu; Jiayu Du; Xingyu Na; Bengu Wu; Hao Zheng", "journal": "IEEE", "ref_id": "b4", "title": "Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline", "year": "2017" }, { "authors": "William Chan; Navdeep Jaitly; Quoc Le; Oriol Vinyals", "journal": "", "ref_id": "b5", "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", "year": "2016" }, { "authors": "Boli Chen; Guangwei Xu; Xiaobin Wang; Pengjun Xie; Meishan Zhang; Fei Huang", "journal": "IEEE", "ref_id": "b6", "title": "Aishellner: Named entity recognition from chinese speech", "year": "2022" }, { "authors": "Jan Chorowski; Navdeep Jaitly", "journal": "", "ref_id": "b7", "title": "Towards better decoding and language model integration in sequence to sequence models", "year": "2016" }, { "authors": "Jan K Chorowski; Dzmitry Bahdanau; Dmitriy Serdyuk; Kyunghyun Cho; Yoshua Bengio", "journal": "Advances in neural information processing systems", "ref_id": "b8", "title": "Attention-based models for speech recognition", "year": "2015" }, { "authors": "Yu-An Chung; Chenguang Zhu; Michael Zeng", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "SPLAT: Speech-language joint pre-training for spoken language understanding", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Karthik Ganesan; Pakhi Bamdev; B Jaivarsan; Amresh Venugopal; Abhinav Tushar", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "N-best ASR transformer: Enhancing SLU performance using multiple ASR hypotheses", "year": "2021" }, { "authors": "Zhuo Gong; Daisuke Saito; Sheng Li; Hisashi Kawai; Nobuaki Minematsu", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Can we train a language model inside an end-to-end ASR model? 
-investigating effective implicit language modeling", "year": "2022" }, { "authors": "Jiatao Gu; Zhengdong Lu; Hang Li; O K Victor; Li", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Incorporating copying mechanism in sequence-to-sequence learning", "year": "2016" }, { "authors": "Anmol Gulati; James Qin; Chung-Cheng Chiu; Niki Parmar; Yu Zhang; Jiahui Yu; Wei Han; Shibo Wang; Zhengdong Zhang; Yonghui Wu", "journal": "", "ref_id": "b14", "title": "Conformer: Convolution-augmented transformer for speech recognition", "year": "2020" }, { "authors": "Minglun Han; Linhao Dong; Zhenlin Liang; Meng Cai; Shiyu Zhou; Zejun Ma; Bo Xu", "journal": "IEEE", "ref_id": "b15", "title": "Improving end-to-end contextual speech recognition with fine-grained contextual knowledge selection", "year": "2022" }, { "authors": "Minglun Han; Linhao Dong; Shiyu Zhou; Bo Xu", "journal": "IEEE", "ref_id": "b16", "title": "Cif-based collaborative decoding for end-toend contextual speech recognition", "year": "2021" }, { "authors": "Takaaki Hori; Shinji Watanabe; John Hershey", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Joint CTC/attention decoding for end-to-end speech recognition", "year": "2017" }, { "authors": "Wei-Ning Hsu; Benjamin Bolte; Hubert Yao-Hung; Kushal Tsai; Ruslan Lakhotia; Abdelrahman Salakhutdinov; Mohamed", "journal": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "ref_id": "b18", "title": "Hubert: Self-supervised speech representation learning by masked prediction of hidden units", "year": "2021" }, { "authors": "Mahaveer Jain; Gil Keren; Jay Mahadeokar; Geoffrey Zweig; Florian Metze; Yatharth Saraf", "journal": "", "ref_id": "b19", "title": "Contextual rnn-t for open domain asr", "year": "2020" }, { "authors": "Suyoun Kim; Takaaki Hori; Shinji Watanabe", "journal": "IEEE", "ref_id": "b20", "title": "Joint ctc-attention based end-to-end speech recognition using multi-task learning", "year": "2017" }, { "authors": "Tian Lan; Deng Cai; Yan Wang; Heyan Huang; Xian-Ling Mao", "journal": "", "ref_id": "b21", "title": "Copy is all you need", "year": "2023" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Haoran Miao; Gaofeng Cheng; Changfeng Gao; Pengyuan Zhang; Yonghong Yan", "journal": "", "ref_id": "b23", "title": "Transformer-based online ctc/attention end-to-end speech recognition architecture", "year": "2020" }, { "authors": "Motoi Omachi; Yuya Fujita; Shinji Watanabe; Matthew Wiesner", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "End-to-end ASR to jointly predict transcriptions and linguistic annotations", "year": "2021" }, { "authors": "Golan Pundak; Tara N Sainath; Rohit Prabhavalkar; Anjuli Kannan; Ding Zhao", "journal": "IEEE", "ref_id": "b25", "title": "Deep context: end-to-end contextual speech recognition", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Tao Xu; Greg Brockman; Christine Mcleavey; Ilya Sutskever", "journal": "", "ref_id": "b26", "title": "Robust speech recognition via large-scale weak supervision", "year": "2022" }, { "authors": "Kanishka Rao; Haşim Sak; Rohit Prabhavalkar", "journal": "IEEE", "ref_id": "b27", 
"title": "Exploring architectures, data and units for streaming end-to-end speech recognition with rnntransducer", "year": "2017" }, { "authors": "Anuroop Sriram; Heewoo Jun; Sanjeev Satheesh; Adam Coates", "journal": "", "ref_id": "b28", "title": "Cold fusion: Training seq2seq models together with language models", "year": "2017" }, { "authors": "Yun Tang; Hongyu Gong; Ning Dong; Changhan Wang; Wei-Ning Hsu; Jiatao Gu; Alexei Baevski; Xian Li; Abdelrahman Mohamed; Michael Auli; Juan Pino", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Unified speech-text pre-training for speech translation and recognition", "year": "2022" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Attention is all you need", "year": "2017" }, { "authors": "Tongtong Wu; Guitao Wang; Jinming Zhao; Zhaoran Liu; Guilin Qi; Yuan-Fang Li; Gholamreza Haffari", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Towards relation extraction from speech", "year": "2022" }, { "authors": "Song Xu; Haoran Li; Peng Yuan; Youzheng Wu; Xiaodong He; Bowen Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Self-attention guided copy mechanism for abstractive summarization", "year": "2020" }, { "authors": "Zhanheng Yang; Sining Sun; Xiong Wang; Yike Zhang; Long Ma; Lei Xie", "journal": "", "ref_id": "b33", "title": "Two stage contextual word filtering for context bias in unified streaming and non-streaming transducer", "year": "2023" }, { "authors": "Zhengyi Zhang; Pan Zhou", "journal": "", "ref_id": "b34", "title": "End-to-end contextual asr based on posterior distribution adaptation for hybrid ctc/attention system", "year": "2022" }, { "authors": "Ziqiang Zhang; Long Zhou; Junyi Ao; Shujie Liu; Lirong Dai; Jinyu Li; Furu Wei", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "SpeechUT: Bridging speech and text with hiddenunit for encoder-decoder based speech-text pretraining", "year": "2022" }, { "authors": "Ding Zhao; Tara N Sainath; David Rybach; Pat Rondon; Deepti Bhatia; Bo Li; Ruoming Pang", "journal": "", "ref_id": "b36", "title": "a. Shallow-fusion end-to-end contextual biasing", "year": "2019" }, { "authors": "Wei Zhao; Liang Wang; Kewei Shen; Ruoyu Jia; Jingming Liu", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data", "year": "2019" } ]
[ { "formula_coordinates": [ 1, 329.58, 260.66, 173.29, 10.86 ], "formula_id": "formula_0", "formula_text": "安 徽 铜 铃 自 来 他 output:" }, { "formula_coordinates": [ 1, 337.03, 230.37, 165.85, 11.08 ], "formula_id": "formula_1", "formula_text": "他 来 自 安 徽 铜 陵 gold:" }, { "formula_coordinates": [ 3, 112.45, 687.4, 135.11, 9.6 ], "formula_id": "formula_2", "formula_text": "h = TransformerEncoder(x)" }, { "formula_coordinates": [ 3, 97.83, 725.47, 191.31, 48.73 ], "formula_id": "formula_3", "formula_text": "d u = TransformerDecoder(y ≤u , h) (2) P (y u+1 |y ≤u ) = softmax(W d u + b) (3)" }, { "formula_coordinates": [ 3, 333.81, 204.98, 190.6, 33.58 ], "formula_id": "formula_4", "formula_text": "L trans (y) = - U -1 u=0 log P (y u+1 |y ≤u ) (4)" }, { "formula_coordinates": [ 3, 324.67, 422.26, 199.74, 10.63 ], "formula_id": "formula_5", "formula_text": "L(y) = λL trans (y) + (1 -λ)L ctc (y) (5)" }, { "formula_coordinates": [ 3, 336.79, 524.03, 187.62, 21.69 ], "formula_id": "formula_6", "formula_text": "ŷ = arg max y ( u log P (y u+1 |y ≤u ))(6)" }, { "formula_coordinates": [ 4, 144.16, 632.15, 144.98, 10.67 ], "formula_id": "formula_7", "formula_text": "z i = LSTM(e i )(7)" }, { "formula_coordinates": [ 4, 363.74, 247.07, 160.67, 26.89 ], "formula_id": "formula_8", "formula_text": "a e u = (W q d u ) (W k z) √ d (8)" }, { "formula_coordinates": [ 4, 349.47, 349.38, 174.94, 30.03 ], "formula_id": "formula_9", "formula_text": "P c (e|y ≤u ) = exp(a e u ) e ∈E exp(a e u )(9)" }, { "formula_coordinates": [ 4, 366.26, 587.64, 158.15, 22.29 ], "formula_id": "formula_10", "formula_text": "Z u = e∈E P c (e|y ≤u )z(10)" }, { "formula_coordinates": [ 4, 313.06, 696.64, 211.35, 23.39 ], "formula_id": "formula_11", "formula_text": "P (y u+1 |y ≤u , E) = softmax(W [d u , Z u ] + b) (11)" }, { "formula_coordinates": [ 5, 89.24, 551.8, 199.89, 33.58 ], "formula_id": "formula_12", "formula_text": "L copy (σ) = - U -1 u=0 log P c (σ u+1 |y ≤u )(12)" }, { "formula_coordinates": [ 5, 76.31, 676.38, 212.83, 23.36 ], "formula_id": "formula_13", "formula_text": "L = λL trans (y) + (1 -λ)L ctc (y) + L copy (σ)(13)" }, { "formula_coordinates": [ 6, 75.68, 408.76, 213.45, 39.44 ], "formula_id": "formula_14", "formula_text": "Q(i|ŷ ≤u ) = P c (∅|ŷ ≤u )P (i|ŷ ≤u , E), i ∈ V P c (i|ŷ ≤u ), i ∈ E (14)" }, { "formula_coordinates": [ 6, 345.64, 111.65, 178.77, 21.69 ], "formula_id": "formula_15", "formula_text": "ŷ = arg max y ( u log Q(i|y ≤u ))(15)" } ]
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b16", "b43", "b24" ], "table_ref": [], "text": "3D scene registration is a fundamental task for constructing large-scale 3D scenes, with numerous important applications such as virtual reality and panoramic indoor and outdoor maps. Research efforts have been made on registering traditional explicit 3D scene representations including point clouds [34,37,53] and meshes [7,41], which have achieved good results. On the other hand, Neural Radiance Fields (NeRF) [5] provide a novel implicit scene representation that generates images of 3D scenes by volume rendering. The rapid development of NeRF in recent years has revealed its high potential and has made it the next typical 3D scene representation. This demands a method for NeRF registration that this paper focuses on.\nTo register NeRFs, we may directly operate on the continuous neural field, or convert NeRF into existing discrete scene representations. While directly dealing with continuous fields is natural and expected to be accurate, the implicit nature of neural continuous fields introduces complexity to the problem, where frequent and irregular queries to NeRF are expected. Conversion to explicit scene representations is easier and more convenient during registration, but the conversion itself is not obvious and may introduce additional inaccuracy to registration. On the other hand, the problem of traditional 2D image registration has been extensively studied, for which there exists a mature pipeline: key point detection, key point description, descriptor matching and registration. In essence, a 2D image is a discrete representation of RGB color fields, and the goal of image registration is to align two color fields. Inspired by 2D image registration, in this paper, we propose to convert NeRF into 3D images rather than other scene representations, and perform registration on 3D images. In order to make our registration photometric-invariant, we target at registering the geometry of the scene only, which correspond to density fields in NeRF. Thus we only make use of neural density fields, but discard radiance fields to avoid disturbance from illumination. Hence our registration framework only needs to query NeRF density, and we only need a one-time query on 3D image texels, i.e., grid nodes. Therefore, the conversion to 3D density images is efficient and takes advantage of explicitness. In addition, 3D images can be easily downsampled into multi-scale, which makes our registration framework scale-invariant.\nSimilar to the image registration pipeline, good key point descriptors are critical in our framework. Designing 3D descriptors are more challenging than 2D due to the increasing complexity of corner appearances in 3D. In view of the success of neural descriptors on 2D image features [17] and other 3D representations [44,25], we choose to use a universal neural network that generates rotationinvariant descriptors from 3D density image patches extracted from any scene. The network is expected to generate descriptors good enough for matching and registration without the need for fine-tuning, thus allows efficient descriptor generation. Despite the universality and convenience of this network, its training does not require much data effort. Specifically, we detect corners in various training scenes and sample their local neighborhoods in several different orientations, which synthesizes a large amount of training data. 
Then we propose a contrastive learning strategy that effectively trains our universal network.\nOur contributions mainly consist of two parts: 1) we propose, to the extent of our knowledge, the first 3D density image based NeRF registration framework; 2) we propose a universal neural 3D corner descriptor, coupled with a strategy to train this network with contrastive learning." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "NeRF", "publication_ref": [ "b1", "b21", "b17", "b45", "b44" ], "table_ref": [], "text": "Neural radiance field (NeRF) [5] is a revolutionary method with high potential for novel view generation and 3D reconstruction, which utilizes an implicit radiance field to model a certain given scene. More works are done on this original NeRF representation idea. Instant neural graphics primitives (instant-ngp) [32] provides state-of-the-art optimization in the training process. The training time for one single NeRF drops dramatically from around 10 hours to less than 10 seconds by applying multi-resolution hash encoding. There are also other works depending on NeRF exploring different perspectives. NeRF-RPN [22], makes a great contribution to object detection in the radiance field. Nerf2nerf [18] focuses on high-quality object registration in different NeRF scenes. Works are also done for optimizing the training process on a large scale by Mega-NeRF [46] and Block-NeRF [45]. Although the mentioned works have touched on the topic of NeRF registration, none of them has contributed in registering two overlapping NeRF scenes." }, { "figure_ref": [], "heading": "NeRF and 3D Registration", "publication_ref": [ "b7", "b23", "b4", "b17" ], "table_ref": [], "text": "To make future NeRF research applicable to complex and large-scale starting with indoor scenes, registering overlapping NeRFs captured at different times and possibly resolution is a necessary step. To date, however, there is sparse research effort to address this fundamental 3D computer vision problem for NeRFs. Bundle-adjusting neural radiance fields (BARF) and its variants [28,8,24] contribute to the registration of camera poses by learning the 3D scene representation of the original NeRF. Zero NeRF [35] leverages the NeRF representation to register two sets of images with almost no correspondence. Nerf2nerf [18] studied the pairwise object registration in different NeRF scenes, which helps in semantic analysis in the NeRF space. However, none of the mentioned works aimed at large-scale 3D NeRF registration. That is, there is still no solution to merging two overlapping scenes directly using NeRF representation. On the other hand, 3D scene registration has been extensively researched in computer vision, mostly relying on explicit and discrete representation, unlike NeRFs. In particular, point cloud registration has been extensively studied. Most point cloud registration methods depend heavily on explicit features to localize the points. Some commonly used features are the shapes [51] and point feature histogram (PFH) [40]. Another notable 3D approach consists of registering meshes. Typically, due to the expensive 3D computation, 3D meshes are either downsampled [7] or reduced in dimension [41] to avoid the expensive computation. We are inspired by the ideas of rigid registration of 3D point clouds to avoid heavy computation. 
But these methods are limited to their input, which differs from the continuous implicit 3D representation of NeRF. Thus, none of them can be directly utilized for 3D NeRF registration." }, { "figure_ref": [], "heading": "Feature Descriptors", "publication_ref": [ "b9", "b0", "b8", "b22", "b30", "b5", "b38", "b26", "b11", "b6", "b47", "b16", "b29", "b49", "b18", "b41" ], "table_ref": [], "text": "Hand-crafted non-learning-based descriptors, represented by SIFT and its variants [33,10,1,29,23,31,9] and an array of well-known descriptors (e.g., HOG [11], SURF [4], MOPS [6], ORB [39] etc), have been extended to 3D SIFT [43] and employed in matching 2D and 3D imageries in a wide range of applications, such as 2D/3D medical registration [27,2], nonrigid mesh registration [12], point cloud registration [37], RGB-D registration [48] to name a few. SIFT is still widely adopted due to its high robustness and invariance. Learning-based descriptors, such as [14, 54, 52, 53, 13] have been proposed. While excellent results have been reported in matching the above discrete domains, none of them are designed to match 3D continuous density volumes which are different from discrete point cloud (clustered), mesh (irregular), and RGB-D (only quasi-dense) data. This paper regards NeRFs as 3D images, and adopts a data-driven approach to learning 3D neural descriptors at detected corners in NeRF density fields. Similar to SIFT [33,43], by construction, our neural descriptors are photometric, scale, and rotational invariant, which will be detailed in the next section.\nWith the development of machine learning, network-based descriptors started to outperform traditional hand-crafted ones. Unsupervised convolutional neural networks perform far better than the classic SIFT algorithm in descriptor-matching tasks [17]. Based on the capability of CNN, many improved network architectures were proposed [55, 30,50,20,19]. These deep-learning-based descriptors are extensions of the hand-crafted descriptors in some way [42] because most of them depend on classical algorithms. Due to the power of deep neural networks, the performance is highly improved. We further implement a 3D contrastive learning descriptor in the NeRF field in order to be compatible with the implicit spatial density representation." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Given two neural radiance and density fields\nF 1 : (x 1 , d) → (c, σ) and F 2 : (x 2 , d) → (c, σ), where x 1 ∈ V 1 , x 2 ∈ V 2 such that V 1 , V 2 are two overlapping volumes (i.e., V 1 ∩ V 2 = ∅)\n, we aim to solve for an optimal rigid transformation making the two fields align with each other. This rigid transformation can be represented as x 1 = lRx 2 + t, where l ∈ R + is a scale factor, R is a 3 × 3 rotation matrix with 3 degrees of freedom, and t is a 3D translation vector.\nWith density grids, i.e. 3D density images, extracted from the two NeRFs to be registered, our registration pipeline generalizes 2D image registration to 3D, where we perform matching using neural feature descriptors. First, we discretize the two NeRFs to density grids with filtering strategies that eliminate noise from sampling, and then downsample them to get multi-scale density grids. Second, we operate a 3D version of Harris corner detector on the grids to compute Harris response in multi-scale, followed by non-maximum suppression to determine corner point locations. Next, we extract density grids from multi-scale neighborhoods of the corners. 
These neighborhood density grids are fed into a pre-trained neural descriptor network to generate corner descriptors, which are then matched to obtain correspondences of those corners. Finally, we use RANSAC to compute the rigid transformation between the two corner point sets, and regard this transformation as our solution.
We will frequently use grid notation to represent the related computations. In this section, all arithmetic operations on grids are interpreted as element-wise operations on each grid cell value. Also, although we describe our registration as rigid, it is not strictly rigid because we additionally consider a scale factor l. This is needed since the NeRFs to be registered can be in different scales." }, { "figure_ref": [], "heading": "Neural Density Field Discretization", "publication_ref": [], "table_ref": [], "text": "Our registration pipeline requires corners as key points for matching. To detect corners, analogous to 2D Harris corner detectors on images, we seek to adapt 3D Harris detectors to NeRFs. NeRF, more than its name suggests, represents a radiance field as well as a density field. While the radiance field depends on viewing directions and does not separate color and illumination, the density field represents scene geometry in NeRF and is only related to query positions. Therefore, to robustly find corners, we operate on the density field alone.
We discretize the continuous neural field using a grid covering the whole scene. Extracting grids from the continuous 3D field is essentially sampling signals in 3D space. In the meantime, NeRFs obtained from indoor scenes are often too noisy for the purpose of corner detection, due to relatively insufficient training images. Thus, we need an appropriate sampling method so that our samples represent the continuous density signal well.
Directly sampling densities at each grid node location is expected to extract a lot of noise from the continuous field. To filter out noise, we may first sample with a high-resolution grid, and then downsample with smoothing operations such as average pooling or a Gaussian filter. However, these types of filters are known to smooth edges and corners, making our downstream corner detection task more difficult. Therefore, we choose to denoise our sampled density grids with techniques that preserve corners as large variations in densities, such as anisotropic diffusion [36]. We provide further discussion in section 4.4 about the advantages of this type of technique." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Corner Detection", "publication_ref": [], "table_ref": [], "text": "We first extract high-resolution density grids G_1^0, G_2^0 from the NeRF pair, and regard each of them as a 3D image. As Harris detectors are not scale-invariant, as shown in Figure 1, we downsample G_1^0, G_2^0 with blurring filters to filter out high-frequency geometry information. This generates two sets of multi-scale grids {G_1^0, G_1^1, . . . , G_1^d} and {G_2^0, G_2^1, . . . , G_2^d}, where the number of scales d is manually determined according to the scene. We typically use d = 3 or 4. Discretizing the original continuous neural density field into multi-scale grids not only facilitates density queries, which makes later steps of registration convenient, but also takes large-scale, low-frequency geometry information into consideration.
For each grid G, we then use a 3D operator to compute density gradients I_x, I_y, I_z in three directions.
For example, a 3D Sobel operator, defined as (1, 0, -1) ⊗ (1, 2, 1) ⊗ (1, 2, 1) where the symbol ⊗ denotes the outer product, can be applied here. Then we form a Harris matrix M for each grid cell, which constructs a Harris matrix grid:
M = \sum_{W} \begin{pmatrix} I_x I_x & I_x I_y & I_x I_z \\ I_x I_y & I_y I_y & I_y I_z \\ I_x I_z & I_y I_z & I_z I_z \end{pmatrix} \quad (1)
where W denotes a local 3D window around the grid cell. With M, the 3D Harris response grid H can be computed by
H = det M - k (Tr M)^2 \quad (2)
where k is a manually chosen hyper-parameter. We typically use k = 0.06.
Once the global response grid H is obtained for each scene and scale, we perform non-maximum suppression (NMS) on H to find its local maxima as corner point locations in the two scenes. Before NMS, we may filter out low response values because local maxima with very low response are highly likely to be noise. Grid indices of corners detected at every scale are then converted into coordinates to construct two sets X_1, X_2, one for each scene, for future use. This step is visualized in Figure 1." }, { "figure_ref": [], "heading": "Rotation-invariant Neural Descriptor", "publication_ref": [], "table_ref": [], "text": "The next step in our pipeline is to create descriptors for all corners. As the two NeRFs to be registered may be trained under different scales, poses and lighting, our corner descriptors are supposed to be invariant to these factors to support accurate matching. Since our corner detection is multi-scale and only operates on density, our descriptors are expected to be scale and illumination invariant already. Thus we focus on developing a strategy for rotation-invariant descriptors." }, { "figure_ref": [ "fig_1" ], "heading": "Universal Descriptor Neural Network", "publication_ref": [], "table_ref": [], "text": "3D corners as features are more difficult to describe than 2D ones, because there exist many more appearance variations of 3D corners than of 2D corners. Despite the complexity of 3D corners, we want a descriptor design with simplicity comparable to 2D versions. This leads us to use neural descriptors, in the form of descriptor-generating neural networks (Figure 2), which can encode rotation-invariant corner representations within their weights. Given different rotation-dependent representations of the same corner as input, the network should output the same result, in order to effectively distinguish corner appearance and orientation.
Previously, we have obtained corner positions in both scenes as indices in multi-scale grids. For each corner, we extract a local neighborhood region of the grid as the input corner representation to the network. For a grid G and the 3D indices (i, j, k) of a corner, we extract a cube-shaped subgrid
N_{i,j,k}(s) = G_{[i-s, i+s] × [j-s, j+s] × [k-s, k+s]} \quad (3)
where s ∈ N^+ indicates the size of the neighborhood. The intervals [i-s, i+s], [j-s, j+s], [k-s, k+s] of grid indices are closed on both sides, so the edge length of N_{i,j,k}(s) is 2s + 1 units. Note that this representation is scale-invariant, because the resolution of N_{i,j,k}(s) is completely determined by the resolution of G and is independent of s. Thus we retain scale-invariance after neighborhood extraction and do not rely on the network to be invariant to scale.
The neighborhood grids N of all corners are then fed into a pre-trained neural network f to generate their descriptors δ = f(N). This network is universal, in the sense that it can be applied to neighborhood grids sampled from any scene.
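As a concrete illustration of the corner detection and neighborhood extraction just described, the following is a minimal sketch, not the authors' implementation: it computes the Harris response of Eqs. (1)-(2) with SciPy, finds corners by non-maximum suppression, and cuts out the cube of Eq. (3). The window size, the response threshold, and the use of windowed averages instead of sums are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter, maximum_filter

def harris_response_3d(grid, k=0.06, window=5):
    # density gradients I_x, I_y, I_z from 3D Sobel operators
    Ix, Iy, Iz = sobel(grid, axis=0), sobel(grid, axis=1), sobel(grid, axis=2)
    # windowed averages of the products that form the Harris matrix M in Eq. (1)
    # (averages are proportional to the windowed sums, which is enough for ranking)
    pairs = {"xx": Ix * Ix, "yy": Iy * Iy, "zz": Iz * Iz,
             "xy": Ix * Iy, "xz": Ix * Iz, "yz": Iy * Iz}
    m = {key: uniform_filter(val, size=window) for key, val in pairs.items()}
    det = (m["xx"] * (m["yy"] * m["zz"] - m["yz"] ** 2)
           - m["xy"] * (m["xy"] * m["zz"] - m["yz"] * m["xz"])
           + m["xz"] * (m["xy"] * m["yz"] - m["yy"] * m["xz"]))
    trace = m["xx"] + m["yy"] + m["zz"]
    return det - k * trace ** 2                      # Eq. (2)

def detect_corners(grid, nms_size=5, min_response=1e-3):
    H = harris_response_3d(grid)
    peaks = (H == maximum_filter(H, size=nms_size)) & (H > min_response)
    return np.argwhere(peaks)                        # corner indices (i, j, k)

def extract_neighborhood(grid, ijk, s=3):
    # Eq. (3): cube of edge length 2s + 1 around a corner (interior corners only)
    i, j, k = ijk
    return grid[i - s:i + s + 1, j - s:j + s + 1, k - s:k + s + 1]
```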
This is a natural design since corner appearances are pure local geometric features that are independent of the global scene. A network trained on sufficiently diversified corner data is supposed to classify corners in any scene. In addition, our method is flexible in the choice of network architecture and output representation δ, which means it can be adapted to multiple descriptor matching criteria. In our experiments, a simple shallow fully-connected network is effective enough for our registration purposes. Please refer to Section 4 for details." }, { "figure_ref": [ "fig_2" ], "heading": "Contrastive Learning on Descriptor Networks", "publication_ref": [ "b46" ], "table_ref": [], "text": "In this section we describe our method for training the descriptor network, which is a pre-processing step independent of the registration pipeline. Figure 3 shows the overall strategy. Our network is expected to generate similar results for inputs representing the same corner, so a contrastive learning strategy is suitable here, where the network is penalized by a contrastive loss that measures the difference between outputs generated from an input pair (N_1, N_2) representing the same corner. However, in addition to matching, we also expect the model to distinguish inputs from corners with different appearances. For this purpose, instead of using training pairs, we choose to use triplets of the form (N_1, N_2, N_1') to generate (δ_1, δ_2, δ_1'), where N_1' is from a different corner. The contrastive loss penalizes not only the difference between δ_1, δ_2 but also the similarity between δ_1, δ_1'. This strategy of learning with triplets has been shown to be successful in previous works on 2D image descriptor matching [47] and object detection [16]. We adapt the margin ranking loss first proposed in [49]:
L = max{0, ε + ||δ_1 - δ_2||_2 - ||δ_1 - δ_1'||_2} \quad (4)
where ε is a small positive value, and we assume the δ are vectors compared with the Euclidean distance. A larger ε results in a larger penalization of the similarity between δ_1, δ_1'. However, note that the loss function is flexible as long as it penalizes the aforementioned difference and similarity.
Although there has been no dataset for training corner descriptor networks that take neighborhood grids as input, generating such data can be convenient and effective. We first obtain NeRF models of several scenes, and apply multi-scale corner detection on these scenes as described in Section 3.2. Then for each corner point, we sample its neighborhood density grid from its original NeRF in various orientations. Since our grids are in 3D space, where there are many more orientations for meaningful sampling, we can generate a large number of grids for each detected corner. Each indoor scene we use contains a large number of corners derived from a variety of typical man-made objects, so a small number of scenes is sufficient to generate a sufficiently large and diversified corner dataset for training our network. When training, we randomly sample neighborhood grid triplets from the dataset to compute the contrastive loss and update the weights." }, { "figure_ref": [], "heading": "Register Key Points Using RANSAC", "publication_ref": [], "table_ref": [], "text": "After computing descriptors for all corners in both scenes, we match them between the two scenes to get corner point correspondences, and our registration task is converted to traditional point-set registration."
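Before moving on to matching, a small PyTorch sketch of the margin ranking loss in Eq. (4) is given here for concreteness; the batch shape and the random toy descriptors are illustrative, and in the actual pipeline the three descriptors would come from the descriptor network applied to a triplet (N_1, N_2, N_1').

```python
import torch

def triplet_margin_loss(d1, d2, d1_neg, eps=0.1):
    """Eq. (4): pull descriptors of the same corner (d1, d2) together and push the
    descriptor of a different corner (d1_neg) away by at least the margin eps."""
    pos = torch.norm(d1 - d2, dim=-1)        # ||delta_1 - delta_2||_2
    neg = torch.norm(d1 - d1_neg, dim=-1)    # ||delta_1 - delta_1'||_2
    return torch.clamp(eps + pos - neg, min=0.0).mean()

# toy batch of 8 triplets of 32-dimensional descriptors
d1, d2, d1_neg = torch.randn(8, 32), torch.randn(8, 32), torch.randn(8, 32)
print(triplet_margin_loss(d1, d2, d1_neg).item())
```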
}, { "figure_ref": [], "heading": "Matching Descriptors", "publication_ref": [], "table_ref": [], "text": "Descriptors of the two scenes are matched based on similarity scores p, representing the probability for a descriptor pair to be correct. This requires a mathematical definition, which is flexible according to the form of δ. For example, we can use normalized inverse Euclidean distance or angular distance for vector outputs. Descriptor pairs with p smaller than a pre-defined threshold are unlikely to be correct pairs and should be filtered out. The rest of potential matches can be viewed as a bipartite graph, where the vertices consists of descriptors δ, and each edge is associated with a similarity score p. Then we apply the Maximum-weight matching algorithm [15] on this graph to determine descriptor correspondences, which are regarded as correspondences between point location sets X 1 , X 2 for registration." }, { "figure_ref": [], "heading": "Rigid Registration with RANSAC", "publication_ref": [ "b20" ], "table_ref": [], "text": "By removing unmatched points from X 1 , X 2 , we finally construct X1 , X2 with | X1 | = | X2 | for rigid point set registration. For every correct point pair x 1 ∈ X1 , x 2 ∈ X2 , we solve for a scale factor l * > 0, a 3 × 3 rotation matrix R * and a 3D translation vector t * in the rigid transformation that registers x 2 to x 1 with the least registration error of Euclidean distance\nl * , R * , t * = arg min l,R,t (x1,x2) x1∈ X1,x2∈ X2 n||x 1 -(lRx 2 + t)|| 2(5)\nwhere n = 1 if pairs are correct and 0 if otherwise. If all pairs in X1 , X2 are perfectly correct, then at least 3 point correspondences are required to determine the transformation. Given such 3 correspondences, there exist algorithms [3,21] that gives a closed-form solution for l * , R * and t * .\nDue to the existence of incorrect correspondences as outliers, we use RANSAC to ignore them. For the transformation l, R, t proposed by each RANSAC iteration, we consider a point pair x 1 , x 2 to be an inlier if its Euclidean distance error\ne = ||x 1 -(lRx 2 + t)|| 2(6)\nis smaller than a pre-defined threshold. Among all transformations with their numbers of inliers larger than a manually set number m, we select the one with the least average Euclidean distance error as our final result. m is used to guarantee robust registration results, which is selected according to the performance of the previously used descriptor network. In practice, it is not advised to pick a very small m even if the network performs very well. This is because smaller number of points have higher chance to be symmetric. There may be several transformations between symmetric point correspondences where only one is correct, but the algorithm may not output the correct one." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Training Descriptor Networks", "publication_ref": [ "b37", "b1", "b25" ], "table_ref": [], "text": "In our experiments, we extract 7 × 7 × 7 neighborhood density grids around corners, feed them into a shallow 3D CNN, and normalize the last layer as the descriptor of the input neighborhood. See Figure 4 for our network structure.\nOur network is trained on scenes from the Hypersim [38] dataset, which is a photorealistic synthetic indoor scene dataset where each scene corresponds with hundreds of rendered images from different viewpoints. 
In addition to fully rendered images, for each viewpoint, Hypersim also provides images rendered from diffuse illumination only, which we used for training NeRFs with Instant-NGP [32]. We select 22 scenes providing about 1200 corners for training our network and 1 scene providing 58 corners for validation.
Figure 5: Visualization of the NeRFs, density grids with Harris corners as white dots and response heatmaps around them, and registered results. The first, second, and third rows correspond to Hypersim scenes ai_001_001, ai_001_008, and ai_002_005 respectively. Each row visualizes two parts of the scene to be registered, as well as the registered volume in the last column, rendered as two meshes with different colors.
We implement Section 3.3.2 to generate neighborhood grids from these scenes. For each scene, we select corners from the detected corners that are not on the scene boundary. For each detected corner, its 7 × 7 × 7 neighborhood grid is rotated along the x, y, z axes by θ_x, θ_y, θ_z, where θ_x, θ_y, θ_z take values from evenly-spaced angles in (-π, π]. Each angle is spaced by π/6, so for each corner, we generate (2π / (π/6))^3 = 3456 neighborhood density grids. These grids, derived from different corners in different scenes, are then assembled as our training data. To form a training triplet, we randomly select 2 grids from one corner and 1 grid from another corner. In every training iteration, 10,000 triplets are formed and fed into the network to compute the contrastive loss defined in Equation 4, where we use ε = 0.1. The Adam [26] optimizer with learning rate 2 × 10^-6 is used. We train our network on an NVIDIA GeForce GTX 1080 Ti GPU for 80,000 iterations, which takes about 7 hours.
The training and validation loss are shown in Figure 4. Due to the randomness of our triplet formation, the two loss curves exhibit some oscillations. In addition to the loss, we also compute an error rate on validation data for a more direct evaluation of the effectiveness of corner classification. In each validation step, we randomly select 1 test neighborhood grid from each corner. For each test grid, another 58 grids from these 58 corners are randomly selected as proposal grids to compute similarity scores with the test grid. The one with the largest similarity score, for which we use inverse angular distance, is matched to the test grid. This match is correct if its two grids come from the same corner. Then, the error rate is computed by n_wrong/100, where n_wrong is the number of incorrect matches. Please refer to Figure 4 for error rates as training progresses. In the end, error rates remain around 0.2. According to RANSAC, given our network performance, roughly 7 corner pairs from the overlapping region are required for a robust registration result with 99% confidence. This is not a very strict requirement and is often easily satisfied. " }, { "figure_ref": [], "heading": "Registration Results", "publication_ref": [], "table_ref": [], "text": "We use the network described and trained above to measure the performance of our method. The Hypersim dataset is used for training and testing as well as validation to ensure network coherency. We note, on the other hand, that the Hypersim dataset has its inherent deficiencies. Specifically, camera poses are not abundant enough for a single scene, which leads to information loss in the opposite direction. In many cases, only the front view of the room is visible, and such scenes are thus not suitable for registration.
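Returning briefly to the training-data generation just described: the paper re-samples each corner's neighborhood from the NeRF in many orientations. A hedged sketch of how the rotated sample locations could be produced is shown below; the NeRF density query itself is not shown, the rotation order is an assumption, and only the π/6 angle spacing and the 7 × 7 × 7 neighborhood size are taken from the text.

```python
import numpy as np

def rotation_matrix(ax, ay, az):
    """Rotation about x, then y, then z (angles in radians)."""
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def rotated_sample_coords(center, s=3, spacing=1.0, step=np.pi / 6):
    """For one corner, yield (angles, coords): coords is a (2s+1)^3 x 3 array of
    world-space sample locations for the rotated neighborhood cube. The density of
    the trained NeRF would then be queried at these locations (query not shown)."""
    offs = np.arange(-s, s + 1) * spacing
    cube = np.stack(np.meshgrid(offs, offs, offs, indexing="ij"), -1).reshape(-1, 3)
    angles = np.arange(-np.pi + step, np.pi + 1e-9, step)   # angles in (-pi, pi]
    for ax in angles:
        for ay in angles:
            for az in angles:
                R = rotation_matrix(ax, ay, az)
                yield (ax, ay, az), center + cube @ R.T

# toy usage: sample locations of one rotated 7 x 7 x 7 neighborhood
angles, coords = next(rotated_sample_coords(np.array([1.0, 2.0, 0.5])))
print(coords.shape)   # (343, 3)
```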
To avoid such corruption of data, we picked 15 scenes of relatively high quality that were not used in training, which are then cropped into overlapping scene portions from different orientations for registration.
For each scene, we manually split its NeRF into 2 parts with partial overlap, and sample density grids at different resolutions. We translate and rotate one of the density grids and try to register it back, checking whether the two volumes register and measuring the registration error. We run RANSAC for 50,000 iterations, where a transformation is only considered if the number of inlier pairs is larger than 6. A point pair is considered an inlier if its registered distance error is less than 3 (we regard a grid cell unit length to be 1). Qualitative visualizations of the registration on 3 scenes are shown in Figure 5.
On the other hand, we measure the registration results quantitatively on the 15 Hypersim scenes by the average squared distance error. This error is defined as
e_avg = (1 / |I|) Σ_{(x_1, x_2) ∈ I} ||x_1 - (l R x_2 + t)||_2^2 \quad (7)
where I is the set of inlier corner point pairs determined by RANSAC. These errors are summarized in Figure 6. Typically, the magnitude of the distance errors is comparable to the grid cell length, which indicates very good registration results for large indoor scenes." }, { "figure_ref": [], "heading": "Comparison with 3D-SIFT", "publication_ref": [], "table_ref": [], "text": "While we use neural networks to learn 3D corner descriptors, it is natural to ask about the performance of traditional, non-learning-based or hand-crafted descriptors. In this section, we compare our neural descriptor with a typical traditional descriptor, 3D SIFT [43]. 3D SIFT computes circular histograms of gradient orientations in subgrids of the local neighborhood grid. The histograms are concatenated together, normalized, and rotated to align with the dominant gradient direction, so the descriptor is rotation-invariant.
In our experiments, we extract 9 × 9 × 9 neighborhoods around each detected corner. This neighborhood is evenly divided into 27 subgrids, and a histogram is computed for each of them. Then we replace our neural descriptors with these histograms in our pipeline and perform registration on the test scenes used in Section 4.2. Other settings are identical to our previous experiments on the network-based descriptor. The results are shown in Figure 6.
As shown in Figure 6, network-based registration has a larger number of successful registration attempts than 3D-SIFT. In addition, once a descriptor network is loaded, it can generate corner descriptors very efficiently, while 3D-SIFT takes about half a minute to compute the descriptor. This shows that our descriptor network performs better than 3D-SIFT, so we believe it is a better descriptor choice for our 3D-image based NeRF registration." }, { "figure_ref": [ "fig_6" ], "heading": "Neural Density Field Sampling Strategies", "publication_ref": [], "table_ref": [], "text": "Here we compare 3 different density field sampling strategies mentioned in Section 3.1, namely direct sampling, average pooling from higher resolution, and anisotropic diffusion. For average pooling, we use pool size 2 × 2 × 2 and apply it once. For anisotropic diffusion, we use 5 iterations with timestep ∆t = 0.01.
The diffusion coefficient c we use is
c(||∇I||) = e^{-(||∇I|| / K)^2} \quad (8)
where ∇I denotes the gradient of the grid, and the sensitivity constant K = 5. We select a typical scene from Hypersim and sample it with each strategy. For all 3 strategies, we visualize the resulting density grid, the normalized Harris response, and the frequency domain of the density in Figure 7. For the frequency domain, we visualize the amplitudes of frequencies obtained by the 3D Discrete Fourier Transform. For better illustration, we also visualize 2D frequency domains by slicing the 3D domains at zero frequency on one axis.
The density grid directly sampled from the NeRF appears noisy, which also causes its Harris response to be noisy. Correspondingly, we can see high amplitudes for higher frequencies in its frequency domain. By contrast, the other grids are less noisy, and the high amplitudes in their frequency domains concentrate around the lower frequencies. In terms of denoising, as seen in the frequency domains of average pooling and anisotropic diffusion, their effectiveness is similar. However, the density grid obtained from average pooling looks blurrier than that from anisotropic diffusion. Average pooling also causes a higher normalized Harris response around corners, showing its smoothing effect on corners. To conclude, anisotropic diffusion does the best at denoising while preserving corners, so it is applied in our sampling step." }, { "figure_ref": [], "heading": "Limitation", "publication_ref": [], "table_ref": [], "text": "As a method that generalizes from key-point-based 2D image registration, our framework has several limitations. First, the accuracy of registration is limited by the resolution of the sampled density grids, which means that optimizing the registration at the scale of a grid cell unit length is not possible. This may require a method that directly operates on the continuous field. In addition, our method relies on a sufficient number of correct matches between corner points. If the overlapping part of the two NeRFs does not contain enough corners as key points, our framework is expected to struggle." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose a NeRF registration framework which operates on a 3D image representation of the NeRF density field. This framework generalizes the traditional 2D image registration pipeline to 3D. We propose to use a universal descriptor network to generate descriptors of 3D corner features without fine-tuning, as well as a contrastive learning strategy and a data generation method for training the network. By performing experiments on the Hypersim dataset, we demonstrate that our framework can register two indoor scenes with sufficient accuracy. We also show that the performance of our shallow fully-connected descriptor network is adequate for our registration purpose. As the first effort on direct NeRF registration, we hope that our framework can benefit the construction of NeRFs of large-scale indoor scenes, and inspire future work on NeRF registration." } ]
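As a concrete reference for the anisotropic diffusion of Eq. (8) above (5 iterations, Δt = 0.01, K = 5), a minimal Perona-Malik-style update on a 3D density grid might look like the following; the nearest-neighbor discretization and the periodic boundary handling are standard simplifications, not necessarily what the authors used.

```python
import numpy as np

def anisotropic_diffusion_3d(grid, n_iter=5, dt=0.01, K=5.0):
    """Edge- and corner-preserving smoothing of a 3D density grid using the
    exponential conductance of Eq. (8). Boundaries wrap around (np.roll) for brevity."""
    g = grid.astype(float).copy()
    for _ in range(n_iter):
        flux = np.zeros_like(g)
        for axis in range(3):
            for shift in (1, -1):
                diff = np.roll(g, shift, axis=axis) - g    # neighbor difference
                c = np.exp(-(np.abs(diff) / K) ** 2)       # conductance, Eq. (8)
                flux += c * diff
        g += dt * flux
    return g

smoothed = anisotropic_diffusion_3d(np.random.rand(32, 32, 32))
print(smoothed.shape)
```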
No significant work has been done to directly merge two partially overlapping scenes using NeRF representations. Given pre-trained NeRF models of a 3D scene with partial overlap, this paper aligns them with a rigid transform, by generalizing the traditional registration pipeline, that is, key point detection and point set registration, to operate on 3D density fields. To describe corner points as key points in 3D, we propose to use universal pre-trained descriptor-generating neural networks that can be trained and tested on different scenes. We perform experiments to demonstrate that the descriptor networks can be conveniently trained using a contrastive learning strategy. We demonstrate that our method, as a global approach, can effectively register NeRF models, thus making possible future large-scale NeRF construction by registering smaller, overlapping NeRFs captured individually.
Registering Neural Radiance Fields as 3D Density Images
[ { "figure_caption": "Figure 1 :1Figure 1: Multi-scale Corner Detection on a 3D Density Image.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Neural Corner Descriptor Generation. Neighborhood grid is extracted around every corner in every density image.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Corner point neighborhoods in various orientations are extracted from training scenes to form triplets, which serves as training data of our descriptor network. The network is trained by a contrastive loss.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Descriptor network structure (left), training and evaluation loss (center), and error rate (right). Loss is plotted in a log scale. Due to randomness of our triplet formation, loss and error rate have oscillations.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Visualization of density grid, Harris response and frequency domains. Row 1: Direct sampling. Row 2: Average Pooling. Row 3: Anisotropic Diffusion. The 'jet' color map is used where red denotes higher value and blue denotes smaller value. All grids and images have values normalized within 0 and 1. For the frequency domains, zero-frequency locate at the center, with higher positive and negative frequencies around.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" } ]
Han Jiang; Ruoxuan Li; Yu-Wing Tai; Chi-Keung Tang
[ { "authors": "Alaa E ; Abdel-Hakim Aly; A Farag", "journal": "Ieee", "ref_id": "b0", "title": "Csift: A sift descriptor with color invariant characteristics", "year": "2006" }, { "authors": "Stéphane Allaire; John J Kim; Stephen L Breen; David A Jaffray; Vladimir Pekar", "journal": "IEEE", "ref_id": "b1", "title": "Full orientation invariance and improved feature selectivity of 3d sift with application to medical image analysis", "year": "2008" }, { "authors": "K S Arun; T S Huang; S D Blostein", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b2", "title": "Least-squares fitting of two 3-d point sets", "year": "1987" }, { "authors": "Herbert Bay; Tinne Tuytelaars; Luc Van Gool", "journal": "Lecture notes in computer science", "ref_id": "b3", "title": "Surf: Speeded up robust features", "year": "2006" }, { "authors": "Matthew Tancik; Jonathan T Barron; Ravi Ramamoorthi; Ren Ng; Ben Mildenhall; P Pratul; Srinivasan", "journal": "", "ref_id": "b4", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Matthew Brown; Richard Szeliski; Simon Winder", "journal": "IEEE", "ref_id": "b5", "title": "Multi-image matching using multi-scale oriented patches", "year": "2005" }, { "authors": "Xuesong Chen; Liangjun Zhang; Ruofeng Tong; Jinxiang Dong", "journal": "", "ref_id": "b6", "title": "Multi-resolution-based mesh registration", "year": "2004" }, { "authors": "Yue Chen; Xingyu Chen; Xuan Wang; Qi Zhang; Yu Guo; Ying Shan; Fei Wang", "journal": "", "ref_id": "b7", "title": "Local-to-global registration for bundle-adjusting neural radiance fields", "year": "2022" }, { "authors": "Warren Cheung; Ghassan Hamarneh", "journal": "IEEE", "ref_id": "b8", "title": "N-sift: N-dimensional scale invariant feature transform for matching medical images", "year": "2007" }, { "authors": "Liang-Chi Chiu; Tian-Sheuan Chang; Jiun-Yen Chen; Nelson Yen-Chung Chang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b9", "title": "Fast sift design for real-time visual feature extraction", "year": "2013" }, { "authors": "Navneet Dalal; Bill Triggs", "journal": "Ieee", "ref_id": "b10", "title": "Histograms of oriented gradients for human detection", "year": "2005" }, { "authors": "Tal Darom; Yosi Keller", "journal": "IEEE Transactions on Image Processing", "ref_id": "b11", "title": "Scale-invariant features for 3-d mesh models", "year": "2012" }, { "authors": "Haowen Deng; Tolga Birdal; Slobodan Ilic", "journal": "", "ref_id": "b12", "title": "Ppf-foldnet: Unsupervised learning of rotation invariant 3d local descriptors", "year": "2018" }, { "authors": "Tomasz Daniel Detone; Andrew Malisiewicz; Rabinovich", "journal": "", "ref_id": "b13", "title": "Superpoint: Self-supervised interest point detection and description", "year": "2018" }, { "authors": "Ran Duan; Seth Pettie", "journal": "Journal of the ACM (JACM)", "ref_id": "b14", "title": "Linear-time approximation for maximum weight matching", "year": "2014" }, { "authors": "Qi Fan; Wei Zhuo; Chi-Keung Tang; Yu-Wing Tai", "journal": "", "ref_id": "b15", "title": "Few-shot object detection with attention-rpn and multi-relation detector", "year": "2020" }, { "authors": "Philipp Fischer; Alexey Dosovitskiy; Thomas Brox", "journal": "", "ref_id": "b16", "title": "Descriptor matching with convolutional neural networks: a comparison to sift", "year": "2014" }, { "authors": "Lily Goli; Daniel Rebain; Sara Sabour; Animesh Garg; Andrea Tagliasacchi", "journal": "", 
"ref_id": "b17", "title": "nerf2nerf: Pairwise registration of neural radiance fields", "year": "2022" }, { "authors": "Xufeng Han; Thomas Leung; Yangqing Jia; Rahul Sukthankar; Alexander C Berg", "journal": "", "ref_id": "b18", "title": "Matchnet: Unifying feature and metric learning for patch-based matching", "year": "2015" }, { "authors": "Kun He; Yan Lu; Stan Sclaroff", "journal": "", "ref_id": "b19", "title": "Local descriptors optimized for average precision", "year": "2018" }, { "authors": "K P Berthold; Horn", "journal": "J. Opt. Soc. Am. A", "ref_id": "b20", "title": "Closed-form solution of absolute orientation using unit quaternions", "year": "1987-04" }, { "authors": "Benran Hu; Junkai Huang; Yichen Liu; Yu-Wing Tai; Chi-Keung Tang", "journal": "", "ref_id": "b21", "title": "Nerf-rpn: A general framework for object detection in nerfs", "year": "2022" }, { "authors": "Yan Ke; Rahul Sukthankar", "journal": "IEEE", "ref_id": "b22", "title": "Pca-sift: A more distinctive representation for local image descriptors", "year": "2004" }, { "authors": "Hyunjin Kim; Minkyeong Song; Daekyeong Lee; Pyojin Kim", "journal": "IEEE", "ref_id": "b23", "title": "Visual-inertial odometry priors for bundle-adjusting neural radiance fields", "year": "2022" }, { "authors": "Seonggyeom Kim; Dong-Kyu Chae", "journal": "Association for Computing Machinery", "ref_id": "b24", "title": "Exmeshcnn: An explainable convolutional neural network architecture for 3d shape analysis", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b25", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Prateek Kumar; Steven Henikoff; Pauline C Ng", "journal": "Nature protocols", "ref_id": "b26", "title": "Predicting the effects of coding non-synonymous variants on protein function using the sift algorithm", "year": "2009" }, { "authors": "Chen-Hsuan Lin; Wei-Chiu Ma; Antonio Torralba; Simon Lucey", "journal": "", "ref_id": "b27", "title": "Barf: Bundle-adjusting neural radiance fields", "year": "2021" }, { "authors": "Ce Liu; Jenny Yuen; Antonio Torralba; Josef Sivic; William T Freeman", "journal": "Springer", "ref_id": "b28", "title": "Sift flow: Dense correspondence across different scenes", "year": "2008" }, { "authors": "Anastasiia Mishchuk; Dmytro Mishkin; Filip Radenovic; Jiri Matas", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Working hard to know your neighbor's margins: Local descriptor learning loss", "year": "2017" }, { "authors": "Jean-Michel Morel; Guoshen Yu", "journal": "SIAM journal on imaging sciences", "ref_id": "b30", "title": "Asift: A new framework for fully affine invariant image comparison", "year": "2009" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Trans. 
Graph", "ref_id": "b31", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2008-02" }, { "authors": "C Pauline; Steven Ng; Henikoff", "journal": "Nucleic acids research", "ref_id": "b32", "title": "Sift: Predicting amino acid changes that affect protein function", "year": "2003" }, { "authors": "Jaesik Park; Qian-Yi Zhou; Vladlen Koltun", "journal": "", "ref_id": "b33", "title": "Colored point cloud registration revisited", "year": "2017" }, { "authors": "Casey Peat; Oliver Batchelor; Richard Green; James Atlas", "journal": "", "ref_id": "b34", "title": "Zero nerf: Registration with zero overlap", "year": "2022" }, { "authors": "P Perona; J Malik", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b35", "title": "Scale-space and edge detection using anisotropic diffusion", "year": "1990" }, { "authors": "Yingying Ran; Xiaobin Xu", "journal": "Optik", "ref_id": "b36", "title": "Point cloud registration method based on sift and geometry feature", "year": "2020" }, { "authors": "Mike Roberts; Jason Ramapuram; Anurag Ranjan; Atulit Kumar; Miguel Angel Bautista; Nathan Paczan; Russ Webb; Joshua M Susskind", "journal": "", "ref_id": "b37", "title": "Hypersim: A photorealistic synthetic dataset for holistic indoor scene understanding", "year": "2021" }, { "authors": "Ethan Rublee; Vincent Rabaud; Kurt Konolige; Gary Bradski", "journal": "Ieee", "ref_id": "b38", "title": "Orb: An efficient alternative to sift or surf", "year": "2011" }, { "authors": "Bogdan Radu; Nico Rusu; Michael Blodow; Beetz", "journal": "IEEE", "ref_id": "b39", "title": "Fast point feature histograms (fpfh) for 3d registration", "year": "2009" }, { "authors": "Arman Savran; Bulent Sankur", "journal": "", "ref_id": "b40", "title": "Non-rigid registration of 3d surfaces by deformable 2d triangular meshes", "year": "2008" }, { "authors": "L Johannes; Hans Schonberger; Torsten Hardmeier; Marc Sattler; Pollefeys", "journal": "", "ref_id": "b41", "title": "Comparative evaluation of hand-crafted and learned local features", "year": "2017" }, { "authors": "Paul Scovanner; Saad Ali; Mubarak Shah", "journal": "", "ref_id": "b42", "title": "A 3-dimensional sift descriptor and its application to action recognition", "year": "2007" }, { "authors": "Anthony Simeonov; Yilun Du; Andrea Tagliasacchi; Joshua B Tenenbaum; Alberto Rodriguez; Pulkit Agrawal; Vincent Sitzmann", "journal": "", "ref_id": "b43", "title": "Neural descriptor fields: Se(3)-equivariant object representations for manipulation", "year": "2021" }, { "authors": "Matthew Tancik; Vincent Casser; Xinchen Yan; Sabeek Pradhan; Ben Mildenhall; P Pratul; Jonathan T Srinivasan; Henrik Barron; Kretzschmar", "journal": "", "ref_id": "b44", "title": "Block-nerf: Scalable large scene neural view synthesis", "year": "2022" }, { "authors": "Haithem Turki; Deva Ramanan; Mahadev Satyanarayanan", "journal": "", "ref_id": "b45", "title": "Mega-nerf: Scalable construction of large-scale nerfs for virtual fly-throughs", "year": "2021" }, { "authors": "Daniel Ponsa; Vassileios Balntas; Edgar Riba; Krystian Mikolajczyk", "journal": "BMVA Press", "ref_id": "b46", "title": "Learning local feature descriptors with triplets and shallow convolutional neural networks", "year": "2016-09" }, { "authors": "Jun Wan; Qiuqi Ruan; Wei Li; Gaoyun An; Ruizhen Zhao", "journal": "Journal of Electronic Imaging", "ref_id": "b47", "title": "3d smosift: three-dimensional sparse motion scale invariant feature transform for activity 
recognition from rgb-d videos", "year": "2014" }, { "authors": "Jiang Wang; Yang Song; Thomas Leung; Chuck Rosenberg; Jinbin Wang; James Philbin; Bo Chen; Ying Wu", "journal": "", "ref_id": "b48", "title": "Learning fine-grained image similarity with deep ranking", "year": "2014" }, { "authors": "Xing Wei; Yue Zhang; Yihong Gong; Nanning Zheng", "journal": "", "ref_id": "b49", "title": "Kernelized subspace pooling for deep local descriptors", "year": "2018" }, { "authors": "Walter Wohlkinger; Markus Vincze", "journal": "IEEE", "ref_id": "b50", "title": "Ensemble of shape functions for 3d object classification", "year": "2011" }, { "authors": "Jianchao Yang; Kai Yu; Thomas Huang", "journal": "IEEE", "ref_id": "b51", "title": "Supervised translation-invariant sparse coding", "year": "2010" }, { "authors": "Jian Zi; Gim Yew; Lee Hee", "journal": "", "ref_id": "b52", "title": "3dfeat-net: Weakly supervised local 3d features for point cloud registration", "year": "2018" }, { "authors": "Andy Zeng; Shuran Song; Matthias Nießner; Matthew Fisher; Jianxiong Xiao; Thomas Funkhouser", "journal": "", "ref_id": "b53", "title": "3dmatch: Learning local geometric descriptors from rgb-d reconstructions", "year": "2017" }, { "authors": "Xu Zhang; Felix X Yu; Sanjiv Kumar; Shih-Fu Chang", "journal": "", "ref_id": "b54", "title": "Learning spread-out local feature descriptors", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 107.64, 403, 397.6, 20.59 ], "formula_id": "formula_0", "formula_text": "F 1 : (x 1 , d) → (c, σ) and F 2 : (x 2 , d) → (c, σ), where x 1 ∈ V 1 , x 2 ∈ V 2 such that V 1 , V 2 are two overlapping volumes (i.e., V 1 ∩ V 2 = ∅)" }, { "formula_coordinates": [ 4, 186.16, 573.98, 150.38, 12.2 ], "formula_id": "formula_1", "formula_text": "{G 0 1 , G 1 1 , . . . , G d 1 }, {G 0 2 , G 1 2 , . . . , G d 2 }" }, { "formula_coordinates": [ 4, 240.25, 693.09, 263.75, 31.47 ], "formula_id": "formula_2", "formula_text": "M = W I x I x I x I y I x I z I x I y I y I y I y I z I x I z I y I z I z I z (1)" }, { "formula_coordinates": [ 5, 255.4, 269.09, 248.6, 11.03 ], "formula_id": "formula_3", "formula_text": "H = det M -k(Tr M ) 2 (2)" }, { "formula_coordinates": [ 5, 219.86, 600.46, 284.14, 9.96 ], "formula_id": "formula_4", "formula_text": "N i,j,k (s) = G [i-s,i+s]×[j-s,j+s]×[k-s,k+s](3)" }, { "formula_coordinates": [ 5, 108, 620.44, 396, 19.65 ], "formula_id": "formula_5", "formula_text": "[i -s, i + s], [j -s, j + s], [k - s, k + s]" }, { "formula_coordinates": [ 6, 217.3, 428.63, 286.7, 10.62 ], "formula_id": "formula_6", "formula_text": "L = max{0, + ||δ 1 -δ 2 || 2 -||δ 1 -δ 1 || 2 } (4)" }, { "formula_coordinates": [ 7, 192.05, 379.24, 311.95, 31.81 ], "formula_id": "formula_7", "formula_text": "l * , R * , t * = arg min l,R,t (x1,x2) x1∈ X1,x2∈ X2 n||x 1 -(lRx 2 + t)|| 2(5)" }, { "formula_coordinates": [ 7, 255.07, 499.11, 248.93, 9.68 ], "formula_id": "formula_8", "formula_text": "e = ||x 1 -(lRx 2 + t)|| 2(6)" }, { "formula_coordinates": [ 9, 223.98, 421.92, 276.14, 27.27 ], "formula_id": "formula_9", "formula_text": "e avg = 1 |I| (x1,x2)∈I ||x 1 -(lRx 2 + t)|| 2 2 (7" }, { "formula_coordinates": [ 9, 500.13, 428.98, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 10, 259.94, 521.96, 244.06, 13.16 ], "formula_id": "formula_11", "formula_text": "c(||∇I||) = e -( ||∇I|| K ) 2(8)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b18", "b9", "b18", "b37", "b13", "b12", "b10", "b14", "b1", "b10", "b6", "b30", "b37", "b37" ], "table_ref": [], "text": "The field of autonomous driving has seen a surge of interest in LiDAR 3D object detection due to its ability to overcome the limitations of image-based methods and improve overall system reliability. There has been an ever-growing list of novel point cloud feature extractors [44,19,38,10,5,6,31] and new detection paradigms [19,31,40,25] in this area. In the meantime, data-centric works in this field fall far behind model-centric ones, though data and model are widely recognized as two fundamental components in perception tasks. The quantity and quality of point cloud data play key roles in achieving a performant detector and data augmentation has always been an integral part of this. However, existing works of LiDAR data augmentation either focus more on data under special weather condition [14,13] or fall short at verifying their effectiveness [24,11,43,15] on large-scale real-world datasets [2,32]. In this work, we systematically study the synthesis-based approach for LiDAR data augmentation, which indicates the produce of placing a set of object point clouds into scene point clouds [38,11]. Compared with scan-based(e.g. flip, scale, rotate) and object-based LiDAR data augmentation [7], synthesis-based ones generate diverse LiDAR scans and offer fine-grain controllability over the synthesized scenes like over-sampling objects from rare classes. However, simply applying the vanilla synthesis-based LiDAR data augmentation(so-called GT-Aug) does not lead to satisfactory results on modern large-scale datasets like nuScenes and Waymo [33]. To explain the above phenomenon, we take the bicycle class in nuScenes as an example and plot its PR-Curve for a CenterPoint [40] detector trained with and without GT-Aug. Fig. 1 shows that after applying GT-Aug, the detector is able to recall more objects at the cost of generating more false positives. The PR-curve clearly reveals a downside of vanilla GT-Aug -introducing non-existing LiDAR scans pattern into the original dataset.\nReal-Aug, which prioritizes on the realisticness of newly synthesized scenes, is proposed in this paper to overcome the limitation of vanilla synthesis-based LiDAR augmentation methods. It consists a reality-conforming scene composition module for handling intricate technical details throughout scene composition and a real-synthesis mixing up training strategy which gradually aligns the distribution from synthetic data to the real one. The effectiveness of Real-Aug are validated across multiple LiDAR object detection datasets for different detectors. We achieve a 4.7% 3D mAP improvement on KITTI 3D object detection benchmarks (2.1%, 4.0%, 8.1% for car, pedestrian and cyclist respectively) for a baseline SECOND [38] detector. Notably, we achieve a 74.4% NDS and a 70.2% mAP on the test set of nuScenes 3D object detection benchmark 1 . 
There is a significant 6.1% NDS and a 8.7% mAP improvement over our baseline CenterPoint [40] detector solely through data augmentation.\nOur contributions can be summarized as follows:\n(1) We reveal the realisticness issue of vanilla synthesisbased LiDAR data augmentation.\n(2) We present a well-designed Real-Aug scheme, which features a reality-conforming scene composition module and a real-synthesis mixing up strategy.\n(3) We highlight the effectiveness of Real-Aug by applying it cross a wide range of detectors for multiple datasets and achieve state-of-the-art results.\n(4) We validate technical advantages of Real-Aug, including its robustness to hyper parameter choices and the improvement of data utilization." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Non-synthesis Data Augmentation Methods", "publication_ref": [ "b13", "b12", "b33" ], "table_ref": [], "text": "Data augmentaion methods are widely applied to artificially diversify the dataset and help promote the detectors' capacity. Commonly-used strategies include random flip, rotation, scale, and translation at both scene-and instancelevel. Some physically valid simulation methods were deployed to deal with the detection challenges under foggy or snowy weather [14,13]. PointPainting augmented point clouds with image semantics. It appended the predicted class score from image semantic segmentation network to each point [36]. Inspired by PointPainting, PointAugment- " }, { "figure_ref": [], "heading": "Synthesis-based Data Augmentation Methods", "publication_ref": [ "b14", "b18", "b37", "b36", "b16", "b0", "b19", "b10", "b14", "b10" ], "table_ref": [], "text": "Mix3D devised a \"mixing\" technique to create new scenes by combining two augmented ones while ensuring sufficient overlap [27]. A sensor-centric approach was applied for maintaining the data structure of synthesized scenes consistent with the lidar sensor capacities [15]. One of the most popular synthesis-based data augmentation methods, Ground-Truth augmentation (GT-Aug), was presented by Yan et al. [38] in 2018 and applied in multiple Li-DAR detection tasks [19,40,45,39,6,17,18,1]. On top of GT-Aug, many techniques were proposed for diversifying the ground-truth database. Part-aware and shape-aware gt sampling divided objects into partitions and stochastically applied augmentation methods to each local region [8,43]. Pattern-aware gt sampling downsampled the points of objects to create a new one with farther distance [16]. Point-Mixup utilized an interpolation method to compose new objects [4]. PointCutMix replaced part of the sample with shape-preserved subsets from another one [20]. Fang et al. proposed a rendering-based method for inserting visual objects simulated by CAD into the real background [11].\nPlacing instances at semantically plausible positions was proved to be essential to guarantee the improved performance for 2D object detectors [9,35,42]. In LiDAR-based 3D object detection, collision problem is commonly seen as a physical placement issue in GT-Aug. Yan et al. performed a collision test after ground-truth sampling and removed any sampled objects that collided with others [38]. Competition strategy, which remains the points closer to the sensor, was employed to generate a more physical synthesized scene [15]. LiDAR-Aug leveraged a \"ValidMap\" to generate poses for achieving more reasonable obstacle placements [11]. 
It divided point clouds into pillars and filtered out valid pillars according to the height distribution. Although some researchers have noticed the placement issue in GT-Aug, the systematical studies about unrealistic LiDAR scan patterns in synthesized scenes is still woefully insufficient. Particularly, the deviations of data distribution from synthetic data to the real one are rarely discussed. As a result, existing synthesis-based LiDAR augmentations only achieved limited success, especially in large-scale datasets like nuScenes and Waymo." }, { "figure_ref": [], "heading": "Realistic Scene Synthesis", "publication_ref": [], "table_ref": [], "text": "Our method mainly consists of a reality-conforming scene composition module and a real-synthesis mixing up training strategy. We introduce a reality-conforming score in Sec. 3.1 to measure the realisticness of synthesized scans. The details of scene composition is described in Sec. 3.2. We elaborate the training strategy of how to blend synthe-sized and real LiDAR scans to achieve the optimal performance in Sec. 3.3." }, { "figure_ref": [], "heading": "Reality-Conforming Score", "publication_ref": [], "table_ref": [], "text": "Finding a proper metric to measure the realisticness is at the core of our method. The most direct approach is measuring how well a model trained on the vanilla train set perform on the augmented val set. If the newly synthesized scenes conform to the same data distribution of the train set, the vanilla model could recognize it without any performance degradation. Therefore, we define a realityconforming score Re directly based on metric of the perception task at hand, which could seen as a generalized version of detection agreement score mentioned in LiDAR-Sim [24]. Specifically, for LiDAR object detection, we define the the reality-conforming score Re(mAP) as the ratio between mAP tested on val set with and without groundtruth augmentation.\nRe(mAP) = mAP aug mAP noaug(1)\nThe reality-conforming score could also be defined over other metrics, like mIoU for semantic segmentation and PQ for panoptic segmentation." }, { "figure_ref": [], "heading": "Reality-Conforming Scene Composition", "publication_ref": [ "b38", "b22", "b37", "b1" ], "table_ref": [], "text": "We perform our scene synthesis solely via the objectscene composition approach, which could be formulated as sequentially place LiDAR points of one or more objects into a existing scene. We refer the set of all placeable objects as the object bank and all scenes as the scene bank accordingly.\nThe composition approach is widely adopt in both 2D and 3D object detection [41,34,23,38,40] but achieves far less success in large scale LiDAR object detection benchmark like nuScenes [2] and Waymo [32]. We find the realisticness is the key to the success of this kind of methods and elaborate the technical details affecting the realisticness of synthesized LiDAR scans in the following sections. Related ablation studies could be found in Tab. 5" }, { "figure_ref": [], "heading": "Placeable Location Detection Module", "publication_ref": [ "b25", "b25", "b21" ], "table_ref": [], "text": "It is obvious that not everywhere in a LiDAR scan is a suitable spot for placing an object. Accurate modeling where each kind of object could appear requires a ton of extra knowledge, we simplify this task by assuming most objects of interest are on the drivable surface. This simplification is not perfect in every case(e.g. 
a pedestrian could appear on the sidewalk), but it is a good approximation, as objects appearing in the drivable area affect the behavior of the ego vehicle most.\nWe adopt a light-weight coordinate MLP [26] as our placeability estimator. The input of the network is the coordinates and reflectivity of LiDAR points (x, y, z, r). We use a Fourier [26] encoder of order L = 10 to map the points into 64-dim embeddings. We use a binary cross-entropy loss for training the estimator. The supervision could come either from a model-based ground estimation method like PatchWork [22,21] or from manually labelled LiDAR semantic segmentation.\nAn alternative would be directly using a ground estimator like PatchWork, but we choose the coordinate-MLP approach for its denoising nature and low latency (< 1 ms) on modern hardware." }, { "figure_ref": [], "heading": "Design Choices of Scene Composition", "publication_ref": [], "table_ref": [], "text": "Object Position. Different from the vanilla GT-Aug, which always places the object at the same position from which it was taken, there are three factors in Real-Aug to consider when choosing a physically reasonable position for a sampled object: the distance and observing angle from the ego vehicle, as well as the predicted placeability.\nAssuming the XOY location and heading of an object in its original scene are denoted as (x, y, θ) and its location and heading in the synthesized scene as (x', y', θ'), the distance constraint can be specified as\nx'² + y'² = x² + y² ± ∆,    (2)\nand the observing angle constraint as\nθ + arctan(y/x) = θ' + arctan(y'/x').    (3)\nHere ∆ is the error tolerance threshold, as finding an exact match for the distance constraint is almost impossible for a limited number of points in a single scan. By default, ∆ is set to half the object length, L/2.\nThe distance constraint ensures a realistic point density and the observing angle constraint guarantees a realistic scan pattern. Finally, for the placeability constraint, we simply reject all locations (x', y') with predicted placeability < 0.5 to avoid placing objects into unfavorable locations. Object Heading. Taking a closer look at Eq. 2 and Eq. 3, we can find that the heading angle θ' of our object is a free variable, which opens up the possibility of choosing a natural heading angle for the object instead of placing it into the scene with some weird random heading. We select the heading angle θ' of our object to conform to the heading distribution of objects from the same category in the scene {θ_c} by measuring the cosine similarity between headings,\nθ' = arg max_{θ_c} cos(θ' - θ_c).    (4)\nIf there is no object of the same category in the scene, we simply choose the most frequent heading for our object. Object Height. Using the original height of an object could make it fly over or sink under the new scene's ground plane. We mitigate this by setting its bottom height in the new scene to the mean height of all ground points {z_g} enclosed by the bounding box,\nz' - H/2 = avg(z_g).    (5)\nCollision Avoidance. In order to avoid collisions, we use the same strategy as GT-Aug [38] and remove a placed object if it overlaps with existing ones." }, { "figure_ref": [], "heading": "Real-Synthesis Mixing Up Training Strategy", "publication_ref": [], "table_ref": [], "text": "In this section, we elaborate on a real-synthesis mixing up training strategy for gradually adapting the detector from the synthetic data distribution to the real one. We introduce the real scene-category relation and scene-crowdedness relation alignments in Sec. 
3.3.1 and Sec. 3.3.2 to fulfill the full potential of Real-Aug." }, { "figure_ref": [], "heading": "Align to Realistic Scene-Category Relation", "publication_ref": [ "b2", "b1", "b11" ], "table_ref": [], "text": "Existing large-scale autonomous driving datasets [3,32,2,12] make great efforts at ensuring the diversity of video clips and generally contain scenarios from a wide range of weather, lighting and road conditions. However, the rich diversity of scenarios poses challenges for scene synthesis.\nFor instance, putting a bike rider on a closed cross-state highway or synthesizing a man holding an umbrella into a sunny afternoon country road both make the synthesized scenes highly unrealistic. So maintaining a reasonable scene-category relation is of great importance in our work. We summarize the scene-category relation for three 20s video clips from nuScenes in Tab. 1. It reveals a strong connection between the category distribution of objects and their surrounding environments. Previous approaches like GT-Aug fail to respect the scene-category relation in driving scenarios, leading to detectors which hallucinate non-existing false positives as shown in Fig. 1.\nIn order to both enjoy the enhanced feature learning via scene augmentation and respect the original scene-category relation, we propose a mixing up training strategy. The plain strategy is to place a preset number of objects from each category into the scene for each LiDAR scan. Using c to denote the object category, the number of inserted objects can be represented as N^c_plain. The plain strategy totally ignores the scene-category relation but generates diverse scans, which is good for feature learning. Another strategy is to strictly respect the scene-category relation by inserting objects only from the categories already existing in the scan. We use N^c_exist to denote the number of objects inserted by this strategy, where N^c_exist = 0 for categories that do not exist in this scan. We use a hyper-parameter α ∈ [0, 1] to balance the above two strategies and obtain our final strategy N^c,\nN^c = N^c_plain × α + N^c_exist × (1 - α).    (6)\nWe align the data distribution to the real scene-category relation by gradually annealing α from 1 to 0 towards the end of training. While our composition-based augmentation greatly facilitates feature learning for LiDAR object detectors, it inevitably distorts the crowdedness of the real scenes, as we only insert objects into scenes.\nTab. 2 demonstrates the increase of foreground voxels on the feature map of a CenterPoint detector when applying GT-Aug for LiDAR augmentation. Among the 10 categories defined in the nuScenes detection task, the fg/bg ratios of bicycle, motorcycle and construction vehicle rank top three, increasing by factors of 19.2, 18.0 and 14.8. The significant increase of foreground voxels encourages detectors to make more predictions than there really are. To deal with this, we introduce another hyper-parameter β and also use a gradual annealing strategy to decrease its value for aligning with the real scene-crowdedness relation,\nN^c = (N^c_plain × α + N^c_exist × (1 - α)) × β.    (7)" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "To verify the effectiveness and generality of Real-Aug, we conduct extensive experiments on nuScenes and KITTI. Brief descriptions of the datasets are summarized in Sec. 4.1. The implementation details are shown in Sec. 4.2. We elaborate the evaluation results on the test sets of nuScenes and KITTI in Sec. 4.3. 
Exhaustive ablations are performed and discussed in Sec. 4.4." }, { "figure_ref": [], "heading": "Datasets and Metrics", "publication_ref": [ "b1", "b11", "b18" ], "table_ref": [], "text": "nuScenes Dataset [2]. The nuScenes dataset is a large-scale dataset designed to accelerate research on multiple tasks in autonomous driving scenarios. It comprises over 1,000 scenes, which are divided into 700 scenes for training, 150 scenes for validation and 150 scenes for testing. The full dataset consists of 390k 360-degree LiDAR sweeps, which are collected by a Velodyne HDL-32E with a 20 Hz capture frequency. The nuScenes detection task requires detecting 10 object classes with 3D bounding boxes, attributes, and velocities. For evaluation, the official detection metrics, including the nuScenes Detection Score (NDS) and mean Average Precision (mAP), are used.\nKITTI Dataset [12]. The KITTI dataset is a widely used benchmark dataset for 3D object detection. It contains 7481 frames for training and 7518 frames for testing. Following Refs. [19,38], the training frames are further divided into a train set with 3712 frames and a val set with 3769 frames. The KITTI detection task requires detecting 3 object classes with 3 difficulty levels (Easy, Moderate, Hard). Detectors are evaluated by the 3D Average Precision AP 3D, which is calculated with 40 recall positions (R40)." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b37", "b37", "b18", "b9", "b21" ], "table_ref": [], "text": "Our implementation of LiDAR-based 3D object detection is based on the open-sourced OpenPCDet [28] and the published code of CenterPoint [40]. For nuScenes, we choose the CenterPoint-Voxel, CenterPoint-Pillar and SECOND-Multihead frameworks for analysis. For KITTI, we choose the SECOND and PointPillar frameworks for analysis. Detectors are trained with a batch size of 32 on 8 A100 GPUs. We utilize the Adam optimizer with a one-cycle learning rate policy. We use the same data augmentation methods (except GT-Aug) and network designs as prior works [40,45,38,19]. The total training epochs for nuScenes and KITTI are set as 20 and 80 respectively. We adopt weighted Non-Maximum Suppression (NMS) [10] during inference. The placeability estimator described in Sec. 3.2.1 is supervised by the ground labels generated from PatchWork [22,21]." }, { "figure_ref": [], "heading": "Evaluation on nuScenes and KITTI test set", "publication_ref": [ "b37" ], "table_ref": [], "text": "nuScenes. As shown in Tab. 3, the CenterPoint detector trained with Real-Aug outperforms other state-of-the-art LiDAR-only methods on the nuScenes test set. Compared to the work reported by Yin et al. [40], Real-Aug promotes the NDS and mAP of CenterPoint from 0.673, 0.603 to 0.734, 0.690. Notably, our method brings significant mAP improvements of 21.6%, 18.9% and 14.9% over the baseline for bicycle, motorcycle and construction vehicle. Combined with the SparseFishNet3D backbone described in Sec. 4.4.6, we achieve 0.744 NDS and 0.702 mAP on the nuScenes test set." }, { "figure_ref": [], "heading": "KITTI.", "publication_ref": [ "b8" ], "table_ref": [], "text": "The evaluation metric of KITTI changes from AP 3D R11 to AP 3D R40. For an unbiased comparison, we take the submitted results achieved with the reimplementation in OpenPCDet [29] as a reference. The results shown in Tab. 4 confirm the effectiveness of Real-Aug on the KITTI dataset. There is an average boost of 4.7% AP 3D for all classes with different difficulties (2.1%, 4.0%, 8.1% for car, pedestrian and cyclist respectively)." 
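One detail of the training recipe above that is easy to get wrong is how the per-scan insertion counts of the real-synthesis mixing up strategy evolve over the 20 (nuScenes) / 80 (KITTI) epochs. The sketch below is one plausible way to turn Eqs. (6)-(7) into concrete counts, using the α start percentages, β division steps and β division factors reported in Tab. 11; the linear annealing form, the rounding, and the reading of N^c_exist as "the preset count, but only for categories already present in the scan" are our assumptions rather than details taken from the text.

```python
import numpy as np

def mixup_insertion_counts(n_plain, exist_mask, progress,
                           alpha_start=0.75,
                           beta_div_steps=(0.75, 0.85),
                           beta_div_factor=2.0):
    """Per-category insertion counts N^c for one LiDAR scan (cf. Eqs. (6)-(7)).

    n_plain    : (C,) preset insertion count per category (N^c_plain)
    exist_mask : (C,) bool, True where the category already exists in this scan
    progress   : scalar training progress in [0, 1]
    """
    # alpha: kept at 1 until `alpha_start`, then annealed linearly to 0 at the end
    # of training (the paper only says "gradually annealing alpha from 1 to 0").
    if progress <= alpha_start:
        alpha = 1.0
    else:
        alpha = max(0.0, 1.0 - (progress - alpha_start) / (1.0 - alpha_start))

    # beta: starts at 1 and is divided by `beta_div_factor` at each milestone,
    # mirroring the "beta div steps / beta div factor" columns of Tab. 11.
    beta = 1.0
    for milestone in beta_div_steps:
        if progress >= milestone:
            beta /= beta_div_factor

    n_plain = np.asarray(n_plain, dtype=float)
    n_exist = n_plain * np.asarray(exist_mask, dtype=float)   # assumed N^c_exist
    n_c = (n_plain * alpha + n_exist * (1.0 - alpha)) * beta
    return np.rint(n_c).astype(int)

# Example: 10 nuScenes categories, 2 preset insertions per class, a scan that
# already contains only cars and pedestrians, evaluated at 80% of training.
exist = np.zeros(10, dtype=bool); exist[[0, 8]] = True
print(mixup_insertion_counts(np.full(10, 2), exist, progress=0.80))
```

As training progress approaches 1, α → 0 and β shrinks, so the counts collapse towards the unaugmented, real scene distribution, which is the intended end state of the mixing up strategy.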
}, { "figure_ref": [], "heading": "Ablations And Analysis", "publication_ref": [], "table_ref": [], "text": "Real-Aug is investigated with extensive ablation experiments on val set of nuScenes. In Sec. 4.4.1, we discuss the realisticness of synthesized scenes. The advantages of Real-Aug, including its effectiveness, robustness and its role in promoting the data utilization, are analyzed from Sec. 4.4.2 to Sec. 4.4.5. The optimized backbone, which is called SparseFishNet3D, is described in Sec. 4.4.6 for achieving better detection performance. GT-Aug and Real-Aug. Simultaneously, we ablate the contribution of each component in Real-Aug. The detector, which possess a framework of CenterPoint-Voxel and a voxel size of [0.075,0.075,0.2], is trained on the vanilla nuScenes train set. The inference results on the augmented nuScenes val set are compared and in Tab. 5. In contrast to GT-Aug, the Re(mAP) of Real-Aug increases from 0.744 to 0.933, which means our proposed realityconforming scene composition approach and real-synthesis mixing up training strategy can effectively shrinkage the gap between the synthesized scenes and the real one. Each component, including the physically reasonable object position, heading, height and real scene-category relation, matters for realizing realistic scene synthesis for LiDAR augmentation in 3D object detection. The alignment of real scene-crowdedness relation finally regress to the raw point clouds without any synthesis-based augmentations." }, { "figure_ref": [], "heading": "Reality-Conforming", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Effectiveness of Real-Aug on nuScenes val set", "publication_ref": [ "b16" ], "table_ref": [], "text": "The effectiveness of Real-Aug, which contains a realityconforming scene composition module and a real-synthesis mixing up training strategy, is validated on nuScenes val set. Our proposed reality-conforming scene composition module boosts NDS from 0.666 to 0.678 and mAP from 0.595 to 0.611. Combining the real-synthesis mixing up training strategy, the performance of CenterPoint-Voxel can be further optimized and finally reaches 0.694 NDS and 0.641 mAP.\nThe information of objects in nuScenes trainval set are summarized in Appendix A.1 to help analyze the phenonmena shown in Tab. 6. In nuScenes dataset, car and pedestrian are two categories with most abundant data. Car exists in 97.73% frames and pedestrian exists in 79.19% frames. As a result, most detectors perform well on them. Although truck exists in 70.26% frames, which ranks the 3rd place, the corresponding performance of detector is still worse than expectation. Trucks' unsatisfactory detection accuracy should be attributed to the rare low points density inside their bounding boxes. In each voxel with the size of [0.075,0.075,0.2], the average points number of truck is 0.044, which is mucher lower than that of barrier (0.550) and traffic cone (0.534). The above points-density-related issues are more serious in construction vehicle and trailer (with only 0.018 and 0.017 points per voxel). As a result, it is hard for detectors to distinguish them from background points. The abnormal mAP decline of motorcycle and bicycle when introducing GT-Aug also attract our attention, which may own to their complex morphology. As shown in Tab. 
6, the effectivness of GT-Aug severely suffer from the dramatical mAP degradation of motorcycle and bicycle.\nThe proposed Real-Aug minimizes the misleading from non-existing LiDAR scan patterns introduced by GT-Aug. It excels at dealing with the complex-morphology and lowpoints-density issues, which is beneficial for unleashing the full power of detectors. Replacing GT-Aug with Real-Aug achieves a boost of 6.3%, 4.6%, 11.7%, 17.0% AP for construction vehicle, trailer, motorcycle and bicycle respectively. Robust to different detectors. The generality of Real-Aug is validated in multiple detectors with various voxel sizes. Evaluation results on nuScenes val set are shown in Tab. 7. In center-based models (including CenterPoint-Voxel and CenterPoint-Pillar), Real-Aug bring significant improvements (approximate 3% NDS and 5% mAP) over the baseline. The optimized performance is also valid when decreasing the voxel resolution. We test Real-Aug on SECOND-Multihead [45], which is a typical anchor-based detector, for further exploring its versatility. The proposed realistic scene sysnthesis method for LiDAR augmentation also yields extra performance gain in anchor-based frameworks. It enhances SECOND-Multihead with a considerable increase of 3.1% NDS and 4.7% mAP. We compare the performance of detectors trained with different proportions of data and list them in Tab. 10. Real-Aug promotes the utilization of data to a great extent (with an approximate increase of 4 times). The detector trained with 25% data and Real-Aug performs comparably to the one trained with 100% data and GT-Aug. For enhancing the performance of detector, We use the self-calibrated convolution block (SC-Conv block) in 2D backbone and add an IoU prediction branch in the multi-task head as AFDetv2 [17]. The optimized detector achieves 1.6% NDS and 2.0% mAP increasement. We also apply a SparseUNet3D on the 2x downsample 3D feature map for stronger feature representation with larger receptive fields. Based on the above optimization methods, we achieve 0.710 NDS and 0.661 mAP on nuScenes val set (without any test time augmentations)." }, { "figure_ref": [], "heading": "Robustness of Real-Aug", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "A novel data-oriented approach, Real-Aug, is proposed for LiDAR-based 3D Object Detection. It consists of a reality-conforming scene composition module and a realsynthesis mixing up training strategy. We conduct extensive experiments to verify the effectiveness of Real-Aug and achieve a state-of-the-art 0.744 NDS and 0.702 mAP on nuScenes test set." }, { "figure_ref": [], "heading": "A. Object Information", "publication_ref": [], "table_ref": [], "text": "In order to help analyze the difficulty of detection task in nuScenes and KITTI dataset, we summarize the basic information of objects with different categories. The results are shown in Tab.13 and Tab.14.\nFor a specific category, l, w, h denotes the average length, width and height of objects. Dpts denotes the average LiDAR points number inside each voxel of the object's 3D bounding box. It can be calculated by Eq. 
8:\nDpts = Npts Nvoxel = Npts ( l/v D ) × ( w/v W ) × ( h/v H )(8)\nwhere According to the performance of CenterPoint-Voxel trained with GT-Aug, which is shown in the 2nd line of Tab. 6, we divide all the ten categories defined in nuScenes detection task into four groups: (1) car, pedestrian (with AP above 0.8); (2) bus, barrier, traffic cone (with AP ranging from 0.6 to 0.8); (3) truck, motorcycle, bicycle (with AP ranging from 0.4 to 0.6); (4) trailer, construction vehicle (with AP lower than 0.4).\nCar and pedestrian are two categories with most abundant data in nuScenes dataset. Car exists in 97.73% frames and pedestrian exists in 79.19% frames. Their R obj also ranks top two throughout the whole object band. As a result, detectors perform well on car and pedestrian. Bus, barrier, traffic cone are three categories with clear and simple structural characteristics. The morphology consistency in different scenarios reduce the difficulty for detector to distinguish them from other objects and background points. Truck, motorcycle and bicycle, whose data is not as rich as pedestrian and meanwhile with more complex and volatile morphology than bus, barrier and traffic cone, are more difficult to be detected. For trailer and construction vehicle, the lowest points density in each voxel introduces much confusion for detectors to recognize them from background points.\nReal-Aug, which contains a a reality-conforming scene composition module to handle the details of the composition and a real-synthesis mixing up training strategy to gradually adapt the data distribution from synthetic data to real one, can greatly alleviate negative effects of existing synthesis-based augmentation methods. Detectors trained with Real-Aug present remarkable performance optimization, especially on motorcycle, bicycle, trailer and construction vehicle. According to the performance of SECOND, which is shown in Tab. 8, we divide the three categories defined in KITTI detection task into two groups: (1) car; (2) pedestrian and cyclist. Comparing to car, pedestrian and cyclist are lack in data quantity and meanwhile possess high morphology complexity. As a result, the mAP 3D of car are much higher than that of the other two categories. The effectiveness of Real-Aug is also validated in KITTI dataset. According to Tab. 8, the mAP 3D of moderate objects that is inferenced by SECOND can be further increased from 66.4% to 68.0% when replacing GT-Aug with Real-Aug." }, { "figure_ref": [], "heading": "A.2. KITTI", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B. Test Time Augmentation", "publication_ref": [ "b37" ], "table_ref": [], "text": "Throughout the inference process of CenterPoint-Voxel on nuScenes dataset, we use two test time augmentation (TTA), including double flip and point-cloud rotation along the yaw axis, to improve the detector's final detection performance. The yaw angles are set the same as that in [40], which are [0 • , ±6.25 • , ±12.5 ate car, pedestrian and cyclist." }, { "figure_ref": [], "heading": "C. Visualization", "publication_ref": [], "table_ref": [], "text": "The augmented point clouds generated by GT-Aug and Real-Aug are visualized in Fig. 4. GT-Aug introduces many non-existing LiDAR scans patterns into the point clouds. The inserted objects, which locate at physically unreasonable place and move towards inappropriate direction, will hinder detectors from learning effective features. 
In this paper, a reality-conforming scene composition module is proposed to deal with the above-mentioned problems. It handles the details of the synthesis operation and maintains the authenticity of the composite scene as much as possible. The real-synthesis mixing up training strategy can further alleviate the negative influence introduced by synthesis-based LiDAR augmentation. The boxes predicted by the models trained with GT-Aug and Real-Aug are visualized in Fig. 5. The predicted boxes with scores lower than 0.1 are filtered out. It is clear that replacing GT-Aug with Real-Aug can effectively reduce false positives." }, { "figure_ref": [], "heading": "D. Limitations and Future Work", "publication_ref": [], "table_ref": [], "text": "In addition to applying Real-Aug to the nuScenes and KITTI datasets, we will also extend our method to the Waymo dataset in the future." }, { "figure_ref": [], "heading": "Raw Point Cloud", "publication_ref": [], "table_ref": [], "text": "In the KITTI dataset, objects outside the front view are not annotated. Throughout the inference process of SECOND, y-flip test time augmentation is applied. The evaluation results of SECOND on the KITTI val set are shown in Tab. 16. There is an increase of 0.1%, 3.3%, 0.4% AP 3D for moderate car, pedestrian and cyclist." } ]
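For readers who want to prototype the placeability estimator of Sec. 3.2.1, a minimal PyTorch sketch consistent with its description (a coordinate MLP over (x, y, z, r) with a Fourier encoding of order L = 10, a 64-dim embedding and a binary cross-entropy objective) is given below. The depth, hidden widths and frequency schedule are placeholder choices of ours, and the PatchWork-style ground labels are mocked with random data.

```python
import torch
import torch.nn as nn

class PlaceabilityMLP(nn.Module):
    """Per-point placeability estimator in the spirit of Sec. 3.2.1.

    Only the inputs (x, y, z, r), the Fourier order L = 10, the 64-dim embedding
    and the BCE objective come from the text; depth, widths and the frequency
    schedule below are placeholder assumptions.
    """

    def __init__(self, num_freqs: int = 10, embed_dim: int = 64):
        super().__init__()
        self.register_buffer("freqs", (2.0 ** torch.arange(num_freqs)) * torch.pi)
        in_dim = 4 * 2 * num_freqs                      # (x, y, z, r) -> sin/cos pairs
        self.net = nn.Sequential(
            nn.Linear(in_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:    # points: (N, 4)
        phases = points[:, :, None] * self.freqs                 # (N, 4, L)
        feats = torch.cat([torch.sin(phases), torch.cos(phases)], dim=-1).flatten(1)
        return self.net(feats).squeeze(-1)                       # placeability logits, (N,)

# Training-step sketch: binary drivable-ground labels, e.g. produced by PatchWork.
model = PlaceabilityMLP()
points = torch.randn(2048, 4)                     # dummy (x, y, z, reflectivity)
labels = torch.randint(0, 2, (2048,)).float()     # dummy ground mask
loss = nn.functional.binary_cross_entropy_with_logits(model(points), labels)
loss.backward()
print(float(loss))
```

Because the network conditions only on per-point coordinates and reflectivity, it stays well under the <1 ms latency budget quoted in Sec. 3.2.1 on modern GPUs, though the exact figure obviously depends on the chosen widths.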
Data and model are the undoubtable two supporting pillars for LiDAR object detection.However, data-centric works have fallen far behind compared with the evergrowing list of fancy new models. In this work, we systematically study the synthesis-based LiDAR data augmentation approach (so-called GT-Aug) which offers maxium controllability over generated data samples. We pinpoint the main shortcoming of existing works is introducing unrealistic LiDAR scan patterns during GT-Aug. In light of this finding, we propose Real-Aug, a synthesis-based augmentation method which prioritizes on generating realistic LiDAR scans. Our method consists a reality-conforming scene composition module which handles the details of the composition and a real-synthesis mixing up training strategy which gradually adapts the data distribution from synthetic data to the real one. To verify the effectiveness of our methods, we conduct extensive ablation studies and validate the proposed Real-Aug on a wide combination of detectors and datasets. We achieve a state-of-the-art 0.744 NDS and 0.702 mAP on nuScenes test set. The code shall be released soon.
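Returning to the composition module summarized in the abstract above: the pose-selection rules of Sec. 3.2.2 (Eqs. (2)-(5)) can be condensed into a single routine. The sketch below encodes one plausible reading of those constraints; the tolerance ∆ = L/2 follows the text, while the uniformly random choice among valid candidates, the interpretation of Eq. (4) as snapping to the most cosine-similar existing heading, the `ground_z_lookup` helper, and the omission of the final GT-Aug-style collision check are assumptions made for brevity.

```python
import numpy as np

def place_object(obj_xy_yaw, obj_lwh, placeable_xy, ground_z_lookup, scene_yaws=()):
    """Pick a reality-conforming pose for a sampled object (Sec. 3.2.2, Eqs. (2)-(5)).

    obj_xy_yaw     : (x, y, yaw) of the object in its source scan
    obj_lwh        : (length, width, height) of its 3D box
    placeable_xy   : (N, 2) candidate locations with predicted placeability >= 0.5
    ground_z_lookup: callable (x, y) -> mean ground height around that location
    scene_yaws     : headings of same-category objects already in the target scan
    """
    x, y, yaw = obj_xy_yaw
    length, _, height = obj_lwh
    r_src = np.hypot(x, y)

    # Eq. (2): keep the range to the ego sensor within a tolerance of L / 2.
    r_cand = np.hypot(placeable_xy[:, 0], placeable_xy[:, 1])
    valid = placeable_xy[np.abs(r_cand - r_src) <= length / 2.0]
    if len(valid) == 0:
        return None                                   # no realistic spot for this object
    new_x, new_y = valid[np.random.randint(len(valid))]

    # Eq. (3): preserve the observing angle, i.e. the heading relative to the ego ray.
    new_yaw = yaw + (np.arctan2(new_y, new_x) - np.arctan2(y, x))

    # Eq. (4): optionally snap to the most cosine-similar heading of the same category.
    if len(scene_yaws) > 0:
        scene_yaws = np.asarray(scene_yaws, dtype=float)
        new_yaw = float(scene_yaws[np.argmax(np.cos(new_yaw - scene_yaws))])

    # Eq. (5): rest the box bottom on the local ground height.
    new_z = ground_z_lookup(new_x, new_y) + height / 2.0
    return (float(new_x), float(new_y), float(new_z)), float(new_yaw)

# Example with a flat ground plane at z = 0 and no same-category objects in the scene.
candidates = np.random.uniform(-30.0, 30.0, size=(500, 2))
print(place_object((10.0, 5.0, 0.3), (4.6, 1.9, 1.7), candidates, lambda px, py: 0.0))
```

Keeping both the range and the observing angle is what preserves the source object's point density and scan pattern once its points are transformed into the new pose.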
Real-Aug: Realistic Scene Synthesis for LiDAR Augmentation in 3D Object Detection
[ { "figure_caption": "1 arXivFigure 1 .11Figure 1. Pr-curve of the bicycle at different scene synthesis steps.", "figure_data": "", "figure_id": "fig_0", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Overview of the proposed realistic scene synthesis for LiDAR augmentation (Real-Aug) in 3D object detection. (Points reflected from placeable area is painted with red. GT boxes in raw point cloud, possible locations for inserted sample and final location of inserted sample are presented by the bounding boxes with blue, green and red lines respectively.)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4.4. 5 Figure 3 .53Figure 3. The framework of SparseFishNet3D. Based on the baseline backbone (SECOND), we apply a SparseUNet3D on the 2x downsample 3D feature map for stronger feature representation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "53", "figure_type": "figure" }, { "figure_caption": "Rank 1st among all LiDAR-only object detection methods by the time of submission ing decorated point clouds with corresponding point-wise CNN features extracted from 2D image detectors [37].", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The category distributions of objects at scene-0184, scene-0234, scene-0399.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparing the ratios of foreground voxels to background voxels with and without GT-Aug. The resolution used here for voxelization is [0.075m,0.075m,0.2m] and number of voxels are counted on a feature map of a total stride of 8.", "figure_data": ") Ratio", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "]. The total training epochs for nuScenes and KITTI are set as Methods NDS mAP Car Truck C.V. Bus Trailer Barrier Mot. Byc. Ped T.C. 
Comparison with state-of-the-art methods on test sets of nuScenes detection benchmarks.( †: test-time augmentation.)", "figure_data": "CBGS [45]0.633 0.528 0.811 0.485 0.105 0.549 0.429 0.657 0.515 0.223 0.801 0.709PillarNet-34 † [30]0.714 0.660 0.876 0.575 0.279 0.636 0.631 0.772 0.701 0.423 0.873 0.833LidarMultiNet [39]0.716 0.670 0.869 0.574 0.315 0.647 0.610 0.735 0.753 0.476 0.872 0.851Transfusion L † [1]0.702 0.655 0.862 0.567 0.282 0.663 0.588 0.782 0.683 0.442 0.861 0.820LargeKernel3D L [6]0.705 0.653 0.859 0.553 0.268 0.662 0.602 0.743 0.725 0.466 0.856 0.800LargeKernel3D L † [6]0.728 0.688 0.873 0.591 0.302 0.685 0.656 0.750 0.778 0.535 0.883 0.824AFDetV2 [17]0.685 0.624 0.863 0.542 0.267 0.625 0.589 0.710 0.638 0.343 0.858 0.801MDRNet L [18]0.705 0.652 0.865 0.545 0.257 0.638 0.589 0.748 0.731 0.452 0.866 0.829MDRNet L † [18]0.720 0.672 0.873 0.577 0.283 0.665 0.622 0.752 0.744 0.485 0.876 0.843CenterPoint [40]0.655 0.580 0.846 0.510 0.175 0.602 0.532 0.709 0.537 0.287 0.834 0.767CenterPoint+Real-Aug0.709 0.658 0.852 0.546 0.313 0.652 0.600 0.770 0.726 0.464 0.857 0.800CenterPoint † [40]0.673 0.603 0.852 0.535 0.200 0.636 0.560 0.711 0.595 0.307 0.846 0.784CenterPoint+Real-Aug †0.734 0.690 0.858 0.582 0.349 0.673 0.639 0.787 0.784 0.523 0.881 0.811SparseFishNet3D+Real-Aug † 0.744 0.702 0.868 0.593 0.355 0.701 0.656 0.776 0.783 0.551 0.890 0.845MethodCarPedestrianCyclistmAPSECOND [29]85.3% 76.6% 71.8% 43.0% 35.9% 33.6% 71.1% 55.6% 49.8% 58.1%SECOND+Real-Aug 86.8% 78.4% 74.7% 47.2% 40.3% 37.2% 81.4% 63.2% 56.2% 62.8%", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "mAP3D difference of well-trained SECOND model (using GT-Aug or Real-Aug) on KITTI test sets.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Score Re(mAP) of synthesized scenes generated by different methods. Real Composition 0.678 0.611 0.851 0.593 0.206 0.722 0.425 0.696 0.611 0.458 0.849 0.700 3 + MixUp Training 0.694 0.641 0.850 0.605 0.247 0.713 0.435 0.689 0.704 0.597 0.849 0.717", "figure_data": "Methods Position Heading Height Category Re(mAP)w/o GT-Aug1.000GT-Aug0.744Real-Aug0.933Real-Aug0.870Real-Aug0.880Real-Aug0.906Real-Aug0.795The reality-conforming score Re(mAP) defined inSec. 3.1 is deployed for comparing realisticness between", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Effectiveness of reality-conforming scene composition and real-synthesis mixing up training strategy. (Evaluation dataset: nuScenes val set, model: CenterPoint-Voxel, voxel size: [0.075,0.075,0.2])", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Model Robustness analysis. The results are evaluated on nuScenes val set", "figure_data": "Methodvoxel sizeNDS mAPSECOND GT-Aug[0.1,0.1,0.2]0.620 0.505SECOND Real-Aug[0.1,0.1,0.2]0.651 0.552CenterPP GT-Aug[0.1,0.1,8.0]0.608 0.503CenterPP Real-Aug[0.1,0.1,8.0]0.639 0.558CenterVoxel GT-Aug [0.075,0.075,0.2] 0.666 0.595CenterVoxel Real-Aug [0.075,0.075,0.2] 0.694 0.641CenterVoxel GT-Aug [0.15,0.15,0.2] 0.637 0.558CenterVoxel Real-Aug [0.15,0.15,0.2] 0.658 0.596", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Model Robustness analysis. The results are evaluated on KITTI val set.Robust to different datasets. We test the adaptability of Real-Aug on KITTI dataset, in which objects and their distributions are highly divergent from that in nuScenes. 
Thanks to the extensive expansion of training samples' diversity, detectors trained on KITTI can greatly benefit from GT-Aug and achieve better performance. Even so, Real-Aug yields extra performance gain. For the baseline of SECOND, GT-Aug increases mAP 3D of moderate car, pedestrian and cyclist from 77.8%, 44.2%, 56.5% to 81.4%, 52.4%, 65.3%. Replacing GT-Aug with Real-Aug, mAP 3D can be further optimized to 81.7%, 54.0%, 68.2%. Similar experimental phenomena are obtained when transforming detector from SECOND to PointPillar. The mAP 3D of all categories with various difficulties increases from 53.5% to 62.9% if GT-Aug is used. Real-Aug further enhances the performance of PointPillar to get a mAP 3D of 64.8%.", "figure_data": "MethodCarPedestrian CyclistSECOND w/o GT-Aug 77.8%44.2%56.5%SECONDGT-Aug81.4%52.4%65.3%SECONDReal-Aug81.7%54.0%68.2%PointPillar w/o GT-Aug 75.4%42.1%42.9%PointPillarGT-Aug77.9%47.6%63.2%PointPillarReal-Aug78.9%51.5%64.1%", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Ablations on backbone optimization. The results are evaluated on nuScenes val set", "figure_data": "α start pct β div steps β div factor NDS mAP---0.678 0.6110.75--0.688 0.6290.75[0.75,0.85]20.695 0.6430.75[0.75,0.85]40.694 0.6410.75[0.75,0.85]80.693 0.6400.80[0.80,0.90]20.693 0.6390.80[0.80,0.90]40.695 0.6400.80[0.80,0.90]80.695 0.642Table 11. Choice of different real-synthesis mixing-up train-ing strategy. (Evaluation dataset: nuScenes val set, model:CenterPoint-Voxel, voxel size: [0.075,0.075,0.2].)reach 0.695 and 0.643.4.4.6 Backbone OptimizationBackboneReal-Aug NDS mAPSECOND(Baseline)0.694 0.641+ SC-Conv0.699 0.645+ IoU Pred0.704 0.655+ SparseFishNet3D0.710 0.661", "figure_id": "tab_9", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Npts , Nvoxel denotes the average number of points, voxels inside object's 3D bounding box. v D , v W , v H are the voxel size defined in detectors. In Tab. 13, v D , v W , v H are set as 0.075, 0.075, 0.2. Npts is calculated according to the densified point cloud with 10 LiDAR sweeps. In Tab. v D , v W , v H are set as 0.05, 0.05, 0.1. R f rame denotes the percentage of frames that contain the corresponding objects. The sum of R f rame for all classes is not equal to 1.0 because one frame may contain objects with different categories. R obj denotes the percentage of objects throughout the whole object bank.", "figure_data": "A.1. nuSceneslwhDpts R f rame R objCar4.634 1.954 1.734 0.065 97.73% 42.43%Truck 6.992 2.517 2.870 0.044 70.26% 8.29%C.V. 6.454 2.857 3.216 0.018 23.33% 1.41%Bus 11.090 2.933 3.464 0.029 31.90% 1.60%Trailer 12.283 2.904 3.875 0.017 24.67% 2.40%Barrier 0.503 2.524 0.983 0.550 31.71% 13.65%Mot. 2.102 0.769 1.472 0.172 21.60% 1.16%Byc. 1.706 0.601 1.294 0.116 20.85% 1.07%Ped. 0.728 0.668 1.770 0.127 79.19% 20.11%T.C. 0.415 0.408 1.069 0.534 39.63% 7.88%", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Information of objects in nuScenes trainval set.", "figure_data": "", "figure_id": "tab_11", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Information of objects in KITTI trainval set.", "figure_data": "", "figure_id": "tab_13", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "• , ±25 • ]. The inference re-Effects of TTA (y-flip). 
(Evaluation dataset: KITTI val set, model:SECOND, voxel size: [0.05,0.05,0.1]).", "figure_data": "Modely-flipCarPedestrian CyclistSECOND81.7%54.0%68.2%SECOND81.8%57.3%68.6%", "figure_id": "tab_14", "figure_label": "16", "figure_type": "table" } ]
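Several of the table captions above (Tabs. 13-14) report the per-voxel point density Dpts defined in Eq. (8) of Appendix A. The helper below reproduces that bookkeeping; whether the per-axis ratios are rounded to whole voxels is not stated, so plain ratios are used, and the point count in the example is an assumed round number chosen only to land near the car density of 0.065 reported for nuScenes.

```python
import numpy as np

def points_per_voxel(n_pts, box_lwh, voxel_size=(0.075, 0.075, 0.2)):
    """Average LiDAR points per voxel inside a 3D box, as in Eq. (8) of Appendix A.

    n_pts      : number of LiDAR points inside the box (N_pts)
    box_lwh    : (l, w, h) of the box in metres
    voxel_size : (v_D, v_W, v_H); 0.075/0.075/0.2 m for nuScenes, 0.05/0.05/0.1 m for KITTI
    """
    l, w, h = box_lwh
    v_d, v_w, v_h = voxel_size
    n_voxel = (l / v_d) * (w / v_w) * (h / v_h)   # Eq. (8); any rounding is unspecified
    return n_pts / n_voxel

# An average nuScenes car box (4.634 x 1.954 x 1.734 m) with an assumed ~900
# accumulated points (10 sweeps) lands close to the reported density of 0.065.
print(round(points_per_voxel(900, (4.634, 1.954, 1.734)), 3))
```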
Jinglin Zhan; Tiejun Liu; Rengang Li; Jingwei Zhang; Zhaoxiang Zhang; Yuntao Chen
[ { "authors": "Xuyang Bai; Zeyu Hu; Xinge Zhu; Qingqiu Huang; Yilun Chen; Hongbo Fu; Chiew-Lan Tai", "journal": "", "ref_id": "b0", "title": "Transfusion: Robust lidar-camera fusion for 3d object detection with transformers", "year": "2022" }, { "authors": "Holger Caesar; Varun Bankiti; Alex H Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu", "journal": "", "ref_id": "b1", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "Ming-Fang Chang; John Lambert; Patsorn Sangkloy; Jagjeet Singh; Sławomir Ba ¸k; Andrew Hartnett; De Wang; Peter Carr; Simon Lucey; Deva Ramanan; James Hays", "journal": "", "ref_id": "b2", "title": "Argoverse: 3d tracking and forecasting with rich maps", "year": "2019" }, { "authors": "Yunlu Chen; Vincent Tao Hu; Efstratios Gavves; Thomas Mensink; Pascal Mettes; Pengwan Yang; G M Cees; Snoek", "journal": "", "ref_id": "b3", "title": "Pointmixup: Augmentation for point clouds", "year": "2020" }, { "authors": "Yukang Chen; Yanwei Li; Xiangyu Zhang; Jian Sun; Jiaya Jia", "journal": "", "ref_id": "b4", "title": "Focal sparse convolutional networks for 3d object detection", "year": "2022" }, { "authors": "Yukang Chen; Jianhui Liu; Xiaojuan Qi; Xiangyu Zhang; Jian Sun; Jiaya Jia", "journal": "", "ref_id": "b5", "title": "Scaling up kernels in 3d cnns", "year": "2022" }, { "authors": "Shuyang Cheng; Zhaoqi Leng; Ekin Dogus Cubuk; Barret Zoph; Chunyan Bai; Jiquan Ngiam; Yang Song; Benjamin Caine; Vijay Vasudevan; Congcong Li; Quoc V Le; Jonathon Shlens; Dragomir Anguelov", "journal": "", "ref_id": "b6", "title": "Improving 3d object detection through progressive population based augmentation", "year": "2020" }, { "authors": "Jaeseok Choi; Yeji Song; Nojun Kwak", "journal": "", "ref_id": "b7", "title": "Part-aware data augmentation for 3d object detection in point cloud", "year": "2021" }, { "authors": "Nikita Dvornik; Julien Mairal; Cordelia Schmid", "journal": "", "ref_id": "b8", "title": "Modeling visual context is key to augmenting object detection datasets", "year": "2018" }, { "authors": "Lue Fan; Xuan Xiong; Feng Wang; Naiyan Wang; Zhaoxiang Zhang", "journal": "", "ref_id": "b9", "title": "Rangedet: In defense of range view for lidar-based 3d object detection", "year": "2021" }, { "authors": "Jin Fang; Xinxin Zuo; Dingfu Zhou; Shengze Jin; Sen Wang; Liangjun Zhang", "journal": "", "ref_id": "b10", "title": "Lidar-aug: A general rendering-based augmentation framework for 3d object detection", "year": "2021" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "", "ref_id": "b11", "title": "Are we ready for autonomous driving? 
the kitti vision benchmark suite", "year": "2012" }, { "authors": "Martin Hahner; Christos Sakaridis; Mario Bijelic; Felix Heide; Fisher Yu; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b12", "title": "Lidar snowfall simulation for robust 3d object detection", "year": "2022" }, { "authors": "Martin Hahner; Christos Sakaridis; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b13", "title": "Fog simulation on real lidar point clouds for 3d object detection in adverse weather", "year": "2021" }, { "authors": "Frederik Hasecke; Martin Alsfasser; Anton Kummert", "journal": "IEEE Intelligent Vehicles Symposium", "ref_id": "b14", "title": "What can be seen is what you get: Structure aware point cloud augmentation", "year": "2022" }, { "authors": "S K Jordan; Steven L Hu; Waslander", "journal": "", "ref_id": "b15", "title": "Pattern-aware data augmentation for lidar 3d object detection", "year": "2021" }, { "authors": "Yihan Hu; Zhuangzhuang Ding; Runzhou Ge; Wenxin Shao; Li Huang; Kun Li; Qiang Liu", "journal": "", "ref_id": "b16", "title": "Afdetv2: Rethinking the necessity of the second stage for object detection from point clouds", "year": "2022" }, { "authors": "Dihe Huang; Ying Chen; Yikang Ding; Jinli Liao; Jianlin Liu; Kai Wu; Qiang Nie; Yong Liu; Chengjie Wang; Zhiheng Li", "journal": "", "ref_id": "b17", "title": "Rethinking dimensionality reduction in grid-based 3d object detection", "year": "2022" }, { "authors": "Alex H Lang; Sourabh Vora; Holger Caesar; Lubing Zhou; Jiong Yang", "journal": "", "ref_id": "b18", "title": "Pointpillars: Fast encoders for object detection from point clouds", "year": "2019" }, { "authors": "Dogyoon Lee; Jaeha Lee; Junhyeop Lee; Hyeongmin Lee; Minhyeok Lee; Sungmin Woo; Sangyoun Lee", "journal": "", "ref_id": "b19", "title": "Regularization strategy for point cloud via rigidly mixed sample", "year": "2021" }, { "authors": "Seungjae Lee; Hyungtae Lim; Hyun Myung", "journal": "", "ref_id": "b20", "title": "Patch-work++: Fast and robust ground segmentation solving partial under-segmentation using 3d point cloud", "year": "2022" }, { "authors": "Hyungtae Lim; Oh Minho; Hyun Myung", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b21", "title": "Patchwork: Concentric zone-based region-wise ground segmentation with ground likelihood estimation using a 3d lidar sensor", "year": "2021" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b22", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Sivabalan Manivasagam; Shenlong Wang; Kelvin Wong; Wenyuan Zeng; Mikita Sazanovich; Shuhan Tan; Bin Yang; Wei-Chiu Ma; Raquel Urtasun", "journal": "", "ref_id": "b23", "title": "Lidarsim: Realistic lidar simulation by leveraging the real world", "year": "2020" }, { "authors": "Jiageng Mao; Yujing Xue; Minzhe Niu; Haoyue Bai; Jiashi Feng; Xiaodan Liang; Hang Xu; Chunjing Xu", "journal": "", "ref_id": "b24", "title": "Voxel transformer for 3d object detection", "year": "2021" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b25", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Alexey Nekrasov; Jonas Schult; Or Litany; Bastian Leibe; Francis Engelmann", "journal": "IEEE International Conference on 3D Vision", "ref_id": "b26", 
"title": "Mix3d: Out-of-context data augmentation for 3d scenes", "year": "2021" }, { "authors": "Guangsheng Shi; Ruifeng Li; Chao Ma", "journal": "", "ref_id": "b27", "title": "Pillarnet: Realtime and high-performance pillar-based 3d object detection", "year": "2022" }, { "authors": "Shaoshuai Shi; Li Jiang; Jiajun Deng; Zhe Wang; Chaoxu Guo; Jianping Shi; Xiaogang Wang; Hongsheng Li", "journal": "International Journal of Computer Vision", "ref_id": "b28", "title": "Pv-rcnn++: Point-voxel feature set abstraction with local vector representation for 3d object detection", "year": "2023" }, { "authors": "Pei Sun; Henrik Kretzschmar; Xerxes Dotiwalla; Aurélien Chouard; Vijaysai Patnaik; Paul Tsui; James Guo; Yin Zhou; Yuning Chai; Benjamin Caine; Vijay Vasudevan; Wei Han; Jiquan Ngiam; Hang Zhao; Aleksei Timofeev; Scott Ettinger; Maxim Krivokon; Amy Gao; Aditya Joshi; Sheng Zhao; Shuyang Cheng; Yu Zhang; Jonathon Shlens; Zhifeng Chen; Dragomir Anguelov", "journal": "", "ref_id": "b29", "title": "Scalability in perception for autonomous driving: Waymo open dataset", "year": "2020" }, { "authors": "Abyssaledge Tianweiy", "journal": "", "ref_id": "b30", "title": "Discussion about database sampler effectiveness in the official implement of centerpoint", "year": "2022" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "", "ref_id": "b31", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "Shashank Tripathi; Siddhartha Chandra; Amit Agrawal; Ambrish Tyagi; James M Rehg; Visesh Chari", "journal": "", "ref_id": "b32", "title": "Learning to generate synthetic data via compositing", "year": "2019" }, { "authors": "Sourabh Vora; Alex H Lang; Bassam Helou; Oscar Beijbom", "journal": "", "ref_id": "b33", "title": "Pointpainting: Sequential fusion for 3d object detection", "year": "2020" }, { "authors": "Chunwei Wang; Chao Ma; Ming Zhu; Xiaokang Yang", "journal": "", "ref_id": "b34", "title": "Pointaugmenting: Cross-modal augmentation for 3d object detection", "year": "2021" }, { "authors": "Yan Yan; Yuxing Mao; Bo Li", "journal": "Sensors", "ref_id": "b35", "title": "Second: Sparsely embedded convolutional detection", "year": "2005" }, { "authors": "Dongqiangzi Ye; Zixiang Zhou; Weijia Chen; Yufei Xie; Yu Wang; Panqu Wang; Hassan Foroosh", "journal": "", "ref_id": "b36", "title": "Lidarmultinet: Towards a unified multi-task network for lidar perception", "year": "2022" }, { "authors": "Xingyi Tianwei Yin; Philipp Zhou; Krähenbühl", "journal": "", "ref_id": "b37", "title": "Centerbased 3d object detection and tracking", "year": "2021" }, { "authors": "Sangdoo Yun; Dongyoon Han; Seong Joon Oh; Sanghyuk Chun; Junsuk Choe; Youngjoon Yoo", "journal": "", "ref_id": "b38", "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "year": "2019" }, { "authors": "Lingzhi Zhang; Tarmily Wen; Jie Min; Jiancong Wang; David Han; Jianbo Shi", "journal": "", "ref_id": "b39", "title": "Learning object placement by inpainting for compositional data augmentation", "year": "2020" }, { "authors": "Weiliang Wu Zheng; Li Tang; Chi-Wing Jiang; Fu", "journal": "", "ref_id": "b40", "title": "Sessd: Self-ensembling single-stage object detector from point cloud", "year": "2021" }, { "authors": "Yin Zhou; Oncel Tuzel", "journal": "", "ref_id": "b41", "title": "Voxelnet: End-to-end learning for point cloud based 3d object detection", "year": "2018" 
}, { "authors": "Benjin Zhu; Zhengkai Jiang; Xiangxin Zhou; Zeming Li; Gang Yu", "journal": "", "ref_id": "b42", "title": "Class-balanced grouping and sampling for point cloud 3d object detection", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 121.13, 305.45, 165.23, 23.21 ], "formula_id": "formula_0", "formula_text": "Re(mAP) = mAP aug mAP noaug(1)" }, { "formula_coordinates": [ 3, 375.76, 387.44, 169.35, 11.03 ], "formula_id": "formula_1", "formula_text": "x 2 + y 2 = x 2 + y 2 ± ∆(2)" }, { "formula_coordinates": [ 3, 361.59, 425.56, 179.65, 22.31 ], "formula_id": "formula_2", "formula_text": "θ + arctan y x = θ + arctan y x . (3" }, { "formula_coordinates": [ 3, 541.24, 432.62, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 3, 364.78, 665.27, 180.33, 19.61 ], "formula_id": "formula_4", "formula_text": "θ = arg max θ c cos(θ -θ c )(4)" }, { "formula_coordinates": [ 4, 126.27, 481.06, 160.09, 9.65 ], "formula_id": "formula_5", "formula_text": "z -H/2 = avg(z g )(5)" }, { "formula_coordinates": [ 5, 50.11, 336.1, 236.25, 37.38 ], "formula_id": "formula_6", "formula_text": "N c = N c plain × α + N c exist × (1 -α)(6) 3.3" }, { "formula_coordinates": [ 5, 341.59, 146.78, 203.52, 12.85 ], "formula_id": "formula_7", "formula_text": "N c = (N c plain × α + N c exist × (1 -α)) × β(7)" }, { "formula_coordinates": [ 12, 67.19, 202.44, 219.17, 26.1 ], "formula_id": "formula_8", "formula_text": "Dpts = Npts Nvoxel = Npts ( l/v D ) × ( w/v W ) × ( h/v H )(8)" } ]
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b16", "b33", "b36", "b58", "b61", "b80", "b81", "b82", "b24", "b10", "b22", "b62", "b65", "b78", "b74" ], "table_ref": [], "text": "Extensive evidences have revealed the vulnerability of deep neural networks (DNNs) towards adversarial attacks [5,17,28,32,33,36,42,58,61,[80][81][82] in digital and physical worlds. Different from digital world attacks which make pixelwise perturbations, physical world adversarial attacks are especially dangerous, which fail DNNs by craft-In this paper, we take the first attempt to evaluate visual naturalness of physical world attacks in autonomous driving [25], a field of attack with increasing attention [11,23,62,65,78]. Since the factors and methods studied in our work are common in physical world attacks and not limited to autonomous driving, our methods and findings also have the potential to be applied to other scenarios. The overview of our work is summarized in Fig. 1. To benchmark attack naturalness, we contribute Physical Attack Naturalness (PAN) dataset, the first dataset to study this problem. Specifically, PAN contains 2,688 images in autonomous driving, with 5 widely used attacks, 2 benign patterns (i.e., no attacks) for comparison, 5 types of environmental variations and 2 types of diversity enhancement (semantic and model diversity). Data was collected from 126 participants, containing their subjective ratings as an indicator of naturalness, and their gaze signal for all images as an indicator of the selective attention area of human when they make naturalness ratings [74].\nPAN provides a plethora of insights for the first time. First, we find contextual features have significant effect on naturalness, including semantic variations (using natural image to constrain attack) and environmental variations (illumination, pitch/yaw angles, etc). Properly selecting environmental and semantic factors can improve naturalness up to 34.73% and 8.09%, respectively. Second, we find contextual features have disparate impact on naturalness of different attacks, some attacks might look more natural under certain variations, which can lead to biased subjective evaluation even under identical settings. Third, we find naturalness is related to behavioral feature (i.e., human gaze). Specifically, we find attacks are considered less natural if human gaze are more centralized and focus more on vehicle (with statistical significance at p < .001). This correlation suggests modelling and guiding human gaze can be a feasible direction to improve attack naturalness.\nFinally, since manually collecting naturalness ratings requires human participation and can be laborious as well as costly, based on PAN dataset, we propose Dual Prior Alignment (DPA), an objective naturalness assessment algorithm that gives a cheap and fast naturalness estimate of physical world attacks. DPA aims to improve attack result by embedding human knowledge into the model. Specifically, to align with human reasoning process, rating prior alignment mimics the uncertainty and hidden desiderata when human rates naturalness. To align with human attention, attentive prior alignment corrects spurious correlations in models by aligning model attention with human gaze. 
Extensive experiments on the PAN dataset and the DPA method show that training DPA on PAN outperforms the best method trained on other datasets by 64.03%; based on the PAN dataset, DPA improves by 3.42% in standard assessment and 11.02% in generalization compared with the best baseline. We also make early attempts to improve naturalness with DPA.\nOur contributions can be summarized as follows:\n• We take the first step to evaluate the naturalness of physical world attacks, taking autonomous driving as a first attempt. Our methods and findings have the potential to be applied to other scenarios.\n• We contribute the PAN dataset, the first dataset that supports studying the naturalness of physical world attacks via human ratings and human gaze. PAN encourages subsequent research on enhancing and assessing the naturalness of physical world attacks.\n• Based on PAN, we unveil insights into how contextual and behavioral features affect attack naturalness.\n• To automatically assess image naturalness, we propose the DPA method, which embeds human behavior into model reasoning, resulting in better results and generalization." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b18", "b35", "b59", "b70", "b4", "b16", "b58", "b29", "b25", "b79", "b41", "b12", "b10", "b22", "b62", "b65", "b78", "b21", "b66", "b8", "b22", "b54", "b62", "b10", "b69", "b7", "b25", "b71", "b2", "b50", "b63", "b40", "b44", "b57", "b68", "b84", "b0", "b11", "b43", "b49", "b37", "b39", "b43", "b72" ], "table_ref": [], "text": "Adversarial Attacks and Naturalness. Adversarial attacks are elaborately designed attacks to fool DNNs. A plethora of studies have been proposed to study adversarial attacks, defenses, and benchmarks [19,35,59,70]. Based on the attack domain, adversarial attacks can be categorized as digital world attacks and physical world attacks. Digital world attacks [5,17,42,58] add oftentimes imperceptible pixelwise adversarial perturbations to images, and their naturalness is well characterized. Laidlaw et al. [29] find LPIPS [73] correlates well with naturalness. E-LPIPS [26] improves robustness against adversarial attacks by adding transformations to the input image. To generate natural adversarial attacks, approaches have been proposed based on color space [79], LPIPS [7] or frequency analysis [41].\nHowever, digital world attacks fail in the physical world, where diverse environmental variations exist. This motivates physical world attacks, which create adversarial artifacts robust to real world uncertainty. The volume and scenarios of physical world attacks are growing rapidly. Widely known studies include adversarial glasses [53], 3D adversarial objects [2], road sign classification [13,34,56], vehicle camouflage [11,23,62,65,78] and adversarial t-shirts [22,60,66]. While physical world attacks are practical and robust, their visual appearance is usually unnatural. Mainstream works in naturalness enhancement hide attack patterns in a suitable image that fits well with the attack scenario [9,23,34,54,62] or hide attacks with natural styles [11].\nImage Quality Assessment and Gaze. The aim of Image Quality Assessment (IQA) is to automatically evaluate the visual quality of an image. Based on the availability of reference information, IQA can be categorized as full-reference (FR) IQA, reduced-reference (RR) IQA and no-reference (NR) IQA [69]. Specifically, FR-IQA [8,26,48,71,73] compares the distorted image with a reference image; RR-IQA [3,50,51,63] extracts partial information from the reference image; NR-IQA [4,40,44,57,67,68,77,84] directly evaluates visual naturalness without a reference.
In our work, we consider physical world attacks as a novel type of distortion, and assess their naturalness in the pipeline of NR-IQA. We do not use FR-IQA since, with noise in the environment, the exact reference required by FR-IQA is hard to obtain.\nAs an indicator of human attention, gaze has been studied for gaining better IQA accuracy. There have been works that collect gaze fixations for existing IQA datasets [1,12,38,43,49], yet their datasets contain at most 160 images with gaze. In contrast, PAN provides all 2,688 images with accompanying gaze. To leverage the collected gaze, one line of works uses the collected gaze as a weighting metric [37,39,43,76], while another line of works uses human gaze as an additional quality indicator [72,75]. In our work, we embed human gaze directly into the IQA reasoning process." }, { "figure_ref": [], "heading": "Physical Attack Naturalness (PAN) Dataset", "publication_ref": [ "b12", "b21", "b66" ], "table_ref": [], "text": "We define the task of evaluating the naturalness of physical world adversarial attacks as a particular instance of no-reference image quality assessment (NR-IQA). As illustrated in Table 1, PAN differs from existing IQA databases in three aspects: the type of distortion, the image source and the property of the image being assessed. As for distortion type, IQA databases mainly consider artificial (e.g., Gaussian noise, JPEG compression) or authentic (e.g., motion blur) distortions, while physical world attacks are maliciously generated patterns, unexplored in these two distortion types.\nThus, we contribute the physical attack naturalness (PAN) dataset, the first dataset to understand the naturalness of physical world attacks in autonomous driving. While other attack scenarios exist (e.g., road signs [13,34,56], T-shirts [22,60,66], etc.), the factors we study are commonly used in physical world attacks, making it possible for our methods and findings to be extended to other attack scenarios.\nSee more details of IQA in the related work section." }, { "figure_ref": [], "heading": "Construction Process", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Image Generation", "publication_ref": [ "b9", "b24", "b62", "b10", "b22", "b62", "b78", "b65", "b10", "b22", "b62", "b62", "b62", "b78", "b39" ], "table_ref": [], "text": "Evaluated baselines. We generate all test images using CARLA [10], an open source 3D virtual simulator based on Unreal Engine 4, which is widely used for autonomous driving [25] as well as physical world adversarial attacks [62]. As a first-step study, we use CARLA to first disentangle the impact of each variate on naturalness by controlling views, urban layouts, illuminations, etc. in the simulator. We discuss how PAN can be applied in the real world in Section 5.4. We evaluate naturalness on 7 distinct baselines, including 2 clean baselines: (1) clean, where no perturbations exist; (2) painting, where the vehicle has benign car paintings, a common motivation for many physical world attacks [11,23,62]. We select 5 widely compared physical world attacks on autonomous driving with diverse naturalness enhancement methods and naturalness evaluation protocols, including CAMOU [78], MeshAdv [65], AdvCam [11], UPC [23] and DAS [62]. We carefully reproduce the attack results of these baselines [62], with detailed results in supplementary materials.\nVariations. We simulate images in PAN with possible real world variations.
For environmental variations, following prior arts [62,78], we consider 2 backgrounds, 2 illumination levels, 8 pitch angles, 4 yaw angles and 3 distances for each baseline, resulting in 7 (baselines)×2×2×8×4×3 = 2,688 images, with details of each enumerated factor in Fig. 2 and more details in supplementary materials. Examples of applying each variation to PAN are given in Fig. 3.\nData properties. For all images in PAN, we release their subjective naturalness ratings as Mean Opinion Scores (MOS), calculated by averaging all human ratings [16], with the rating distribution of each image given correspondingly. We also release the gaze saliency map S, calculated by applying a Gaussian mask to all raw human fixations, following [39]. Exemplar images, corresponding human gaze and MOS scores are illustrated in Fig. 4." }, { "figure_ref": [], "heading": "Human Assessment", "publication_ref": [ "b45", "b20", "b20" ], "table_ref": [], "text": "Participants and apparatus. We recruit 126 participants (57 female, 69 male, age=22.2±3.3) from campus, all with normal (corrected) eyesight. Each participant is compensated $15. Images are displayed on a 16-inch screen with a resolution of 2560*1600 and an approximate viewing distance of about 70cm. A Tobii Eye Tracker 5 (mounted in front of the screen) is adopted for eye gaze tracking. It records eye gaze points at about 60 GP/sec. A gaze calibration process is done before the experiment.\nExperiment process. We adopt a single stimulus continuous procedure [45] and ask participants to evaluate the naturalness of each image. For each image, participants first view it for 2.5 seconds, with the eye tracker activated. The viewing time is determined by our pilot study to ensure eye gaze coverage and prevent fatigue. Next, participants rate the image on a 5-point Absolute Category Rating (ACR) scale [21]. Each participant is asked to evaluate 320 images, which are divided into 8 sessions. A warmup session is given at the beginning, with a 20-second rest between sessions. Participants take no more than 35 minutes to finish all experiments. We follow the quality control process of [21], enabling each image to contain ratings and gaze of at least 10 subjects. PAN does not contain Personally Identifiable Information (PII); additional ethical concerns are discussed in supplementary materials. Due to the space limit, we defer more details of image generation, human assessment and quality control of the PAN dataset to supplementary materials." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Insights", "publication_ref": [ "b14" ], "table_ref": [], "text": "We first provide an overview of the PAN dataset, categorized by evaluated baselines. As shown in Fig. 5a, even for the most natural attack (MeshAdv), the MOS score is still much lower than the clean baseline (2.77 vs 3.93). This suggests that, at least in the autonomous driving scenario and using the CARLA simulation environment, while AdvCam and DAS claim their attacks to be more natural than others, they still remain far less natural than clean images. But what affects naturalness? How can we improve naturalness? Based on PAN, we find naturalness is disparately affected by contextual features, and is related to behavioral factors. We also offer pragmatic advice on improving naturalness below. We defer the tradeoff between attack capability and naturalness, as well as the impact of environmental factors and diversity factors, to supplementary materials.
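As a concrete reference for the released gaze saliency maps S described under Data properties above, the sketch below shows one plausible way to turn raw fixation points into such a map via Gaussian smoothing. It is a minimal illustration, not the exact procedure of [39]: the kernel width, normalization and image size are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_saliency_map(fixations, height, width, sigma=25.0):
    """Build a gaze saliency map from raw (x, y) fixation points.

    `sigma` (the Gaussian width in pixels) is an illustrative choice,
    not the value used for PAN.
    """
    fixation_map = np.zeros((height, width), dtype=np.float32)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            fixation_map[yi, xi] += 1.0               # accumulate raw fixations
    saliency = gaussian_filter(fixation_map, sigma=sigma)  # Gaussian mask
    peak = saliency.max()
    return saliency / peak if peak > 0 else saliency        # normalize to [0, 1]
```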
Results and analysis below are reported using proper statistical tests [64] (e.g., one-way ANOVA) and post-hoc pairwise comparisons (e.g., Tukey's Honest Significant Difference (HSD) test) to analyze our PAN dataset. We report significant findings at p < .05.\nInsight: Naturalness is affected by contextual features, including semantic diversity and environmental variations; naturalness can be improved by selecting proper contextual features.\nFor diversity, as shown in Fig. 5b, we find that the semantic factor, i.e., the natural image used to constrain attacks (the method used by UPC, AdvCam and DAS), has a significant effect on naturalness (p < .001, one-way ANOVA). Based on this observation, we find that replacing the natural image used by UPC (bird), AdvCam (pikachu) and DAS (smile) with the most natural image (cat) improves their naturalness by 8.09%, 6.01% and 5.04%, respectively. We hypothesize that the semantic relation between the vehicle and the natural image affects naturalness, which was verified by an additional user study (p < .001, Mann-Whitney test), with details deferred to supplementary materials.\nBesides, almost all environmental factors (i.e., illumination, pitch/yaw angle, distance), except background, have a significant effect on naturalness (p < .001 for all factors except p = .588 for background, one-way ANOVA). Post-hoc analysis shows that nearly all levels are significantly different (p < .001, post-hoc Tukey HSD test, variance uniformity satisfied with p = .17). Specifically, higher naturalness can be achieved at a farther distance, pitch angle 0°, yaw angle 90° and higher luminance. By simply changing the environment condition, we find an improvement of up to 34.73% in naturalness. This reminds defenders that physical world attacks can be stealthier on certain occasions.\nInsight: Contextual features have a disparate impact on the naturalness of different attacks, which can lead to biased evaluation even under identical settings.\nAdditionally, we find contextual features do not affect attacks equally (i.e., some attacks look more natural under certain contextual features). This may lead to biased naturalness evaluation, even under the same contextual features. For example, while UPC is overall more natural than AdvCam in the PAN dataset (p < .001, independent samples t-test), under certain conditions (e.g., yaw angle 135°, 180°, or a distance of 10m), AdvCam can be more natural than UPC (p < .001, independent samples t-test). This bias is not reported in any previous work on physical world attacks.\nWhile this inconsistency can be explained by the interaction between the perceptual characteristics of attacks and contextual features, the bias nonetheless poses a threat to subjective naturalness evaluation. To solve this problem, we suggest that subsequent research report attack naturalness under multiple contextual features. In supplementary materials, we also suggest a naturalness evaluation setting which is consistent with the results in PAN and requires a minimal number of tests.\nInsight: Naturalness is correlated with a behavioral feature (i.e., human gaze). Manipulation of human gaze can be a feasible direction to improve naturalness.\nBesides contextual features, we find the behavioral feature (i.e., human gaze) correlates with naturalness: attacks are considered less natural if gaze is more centralized (p < .05, one-way ANOVA), or focuses more on the vehicle (p < .001, one-way ANOVA).
Specifically, centralize measures how much human attention concentrates, calculated as the standard deviation of the gaze saliency map, while focused measures how much humans pay attention to the vehicle, calculated as the sum of the dot product between the gaze saliency map and the vehicle area.\nThis correlation suggests a feasible direction to improve naturalness: optimizing attack patterns that guide human gaze to be less centralized, or to focus less on the vehicle. This is possible via the prior work of Gatys et al. [15], which tries to guide human gaze by optimized visual patterns. Additionally, we note that our finding shares a similar motivation with DAS, which aims to improve naturalness by evading human attention. However, we do not find that DAS triggers distinctive gaze behavior compared with other attacks (p = 0.967 on average, post-hoc Tukey HSD test)." }, { "figure_ref": [], "heading": "Assess Naturalness by Dual Prior Alignment", "publication_ref": [], "table_ref": [], "text": "While the procedure in PAN offers a feasible way to assess naturalness, collecting human ratings can be costly and laborious. In this section, we propose Dual Prior Alignment (DPA), a quality assessment algorithm to automatically evaluate the naturalness of physical world attacks." }, { "figure_ref": [ "fig_4" ], "heading": "Motivation", "publication_ref": [ "b40", "b57", "b68", "b84" ], "table_ref": [], "text": "The goal of IQA is to design algorithms whose objective naturalness predictions correlate well with human subjective ratings [40,57,67,68,84]. Thus, it is reasonable to assume that better modelling and imitating of human behavior leads to better IQA results. With the rich human behaviors offered in the PAN dataset, we propose the Dual Prior Alignment (DPA) network, which aligns human behavior with model decisions. As shown in Fig. 6, the DPA network consists of two modules, i.e., a rating prior alignment module and an attentive prior alignment module, which enable DPA to align with the human reasoning process and the human attention process.\nRegarding the human reasoning process, participants reveal that their decisions contain uncertainty and are based on vague cognitive criteria. To reflect uncertainty in human ratings, we remodel IQA as a classification problem instead of regression, and align the model output with the distribution of human ratings. To capture the hidden criteria of humans, we use prototype vectors that learn the hidden knowledge of each level during training. Regarding human attention, as shown in Fig. 7, we find existing methods cheat by exploiting spurious correlations between naturalness ratings and irrelevant areas. Intuitively, such biased models have weak generalization capability on unseen test images. To mitigate such bias, we design an IQA-specific visual grounding criterion that aligns model attention with human gaze." }, { "figure_ref": [], "heading": "Rating Prior Alignment", "publication_ref": [], "table_ref": [], "text": "To understand how humans reason about naturalness, we give an interview to participants after the experiment. Participant 21 (P21) and P47 noted that their ratings contain uncertainty when they find two rating levels both appropriate. P6, P40 and P47 also noted that they developed a vague criterion, or beliefs, for each rating level. Based on such criteria, they select the rating that fits best with the current image.\nBased on participants' feedback, we explicitly mimic the human decision process.
To represent the hidden judgement rules of humans, we initiate a prototype vector z_ℓ for each rating level ℓ ∈ {1, 2, 3, 4, 5}, with values updated during training. For image x and a backbone DNN f_θ parameterized by θ, we assume the representation f_θ(x) captures the relevant information of x for naturalness assessment. To represent the decision uncertainty of humans and avoid overfitting to a continuous value, we model NR-IQA as a classification problem instead of regression. Specifically, the likelihood p_ℓ that image x belongs to each level ℓ is calculated by the cosine similarity between the image representation f_θ(x) and the prototype of each level z_ℓ, followed by a softmax function:\np_\ell(x, z) = \frac{\exp\left( f_\theta(x) \cdot z_\ell / (\| f_\theta(x) \| \cdot \| z_\ell \|) \right)}{\sum_{j=1}^{L} \exp\left( f_\theta(x) \cdot z_j / (\| f_\theta(x) \| \cdot \| z_j \|) \right)}, \quad (1)\nwhere L is the number of rating levels, set to 5 in our experiment. With the likelihood p calculated, we propose the rating prior alignment (RPA) loss L_R to address human rating uncertainty by aligning p with the human rating distribution r:\nL_R = \mathrm{KL}\left( p(x, z) \,\|\, r \right) = \sum_{\ell=1}^{L} p_\ell(x, z) \log \frac{p_\ell(x, z)}{r_\ell}, \quad (2)\nwhere KL is the Kullback-Leibler divergence." }, { "figure_ref": [], "heading": "Attentive Prior Alignment", "publication_ref": [ "b5", "b83" ], "table_ref": [], "text": "While training models to fit the subjective MOS score y can yield low error, as shown in Fig. 7, models can cheat by exploiting spurious correlations between background minutiae and predictions. To mitigate this bias, we leverage the gaze signal as guidance to correct the attention of the IQA model such that the model aligns its intrinsic attention with human gaze.\nTo capture model attention, visual attention techniques [6,52,83] explain and visualize the attention of DNNs by back-propagating to the neurons of the last convolutional layer:\nA(x, \hat{y}) = \frac{1}{Z} \, \mathrm{ReLU}\left( \sum_{i,j,k} \frac{\partial \hat{y}}{\partial A^{k}_{ij}} A^{k}_{ij} \right), \quad (3)\nwhere A is the attention map, Z is a normalizing constant, and A^{k}_{ij} denotes the value at position (i, j) of feature map k.\nHowever, Eqn. 3 is biased to emphasize higher ratings:\n\frac{\partial \hat{y}}{\partial A^{k}_{ij}} = \sum_{\ell=1}^{L} \frac{\partial \hat{y}}{\partial p_\ell} \cdot \frac{\partial p_\ell}{\partial A^{k}_{ij}} = \sum_{\ell=1}^{L} s_\ell \cdot \frac{\partial p_\ell}{\partial A^{k}_{ij}}. \quad (4)\nAs a result, naively applying Grad-CAM biases the backward gradient ∂p_ℓ/∂A^k_{ij} by s_ℓ, the score of the current rating level. To correct this bias, we modify the backpropagation step of Grad-CAM as a weighted average of the gradients backpropagated from the rating likelihoods ∂p_ℓ/∂A^k_{ij}, using p_ℓ:\nA(x, p) = \frac{1}{Z} \, \mathrm{ReLU}\left( \sum_{i,j,k} \sum_{\ell=1}^{L} p_\ell \cdot \frac{\partial p_\ell}{\partial A^{k}_{ij}} A^{k}_{ij} \right). \quad (5)\nFinally, we propose the attentive prior alignment loss L_A to align the model attention A with the human gaze S:\nL_A = \| A(x, p) - S \|_F^2. \quad (6)" }, { "figure_ref": [], "heading": "Overall Training", "publication_ref": [], "table_ref": [], "text": "We have discussed how to align the human rating prior by L_R and the human attentive prior by L_A. To get the final IQA result, the predicted MOS score ŷ is calculated as the expectation over the scores of each level s_ℓ, i.e., \hat{y} = \sum_{\ell=1}^{L} p_\ell(x, z) \, s_\ell. We also add a standard mean squared error loss L_S = \frac{1}{N} \sum_{n=1}^{N} \| \hat{y}_n - y_n \|_2^2 between ŷ_n and the ground truth subjective MOS rating y_n, where n is the image index in a minibatch of size N.
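Before the terms are combined, a minimal PyTorch-style sketch of the rating prior alignment loss (Eq. 1-2), the predicted MOS and the MSE term is given below. It is only an illustration under assumed tensor shapes and names; the attentive prior alignment loss L_A is omitted here since it requires the modified Grad-CAM attention of Eq. 5.

```python
import torch
import torch.nn.functional as F

def dpa_rating_losses(features, prototypes, rating_dist, level_scores, mos):
    """Rating prior alignment (Eq. 1-2) plus the MSE term on the predicted MOS.

    features:     (B, D) backbone representations f_theta(x)
    prototypes:   (L, D) learnable prototype z_l for each rating level
    rating_dist:  (B, L) empirical human rating distribution r
    level_scores: (L,)   score s_l of each level, e.g. torch.arange(1., 6.)
    mos:          (B,)   ground-truth MOS ratings y
    """
    # Eq. (1): softmax over cosine similarities between f_theta(x) and each z_l
    sim = F.normalize(features, dim=-1) @ F.normalize(prototypes, dim=-1).t()
    p = F.softmax(sim, dim=-1)

    # Eq. (2): KL(p || r), averaged over the minibatch
    eps = 1e-8
    loss_R = (p * ((p + eps).log() - (rating_dist + eps).log())).sum(-1).mean()

    # Predicted MOS as the expectation over level scores, and the MSE loss L_S
    y_hat = (p * level_scores).sum(-1)
    loss_S = F.mse_loss(y_hat, mos)
    return loss_R, loss_S, y_hat
```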
Overall, DPA learns to assess image naturalness by jointly optimizing L_R, L_A and L_S:\n\min_{\theta, z} \; L_S + \lambda L_R + \gamma L_A, \quad (7)\nwhere λ and γ are hyperparameters controlling the strength of L_R and L_A, respectively, θ denotes the parameters of the backbone network, and z = {z_ℓ} is the set of prototypes for the rating levels. The overall training algorithm of DPA can be found in supplementary materials." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we use experiments to verify: (1) do we need the PAN dataset? (2) can DPA better assess naturalness? (3) can DPA generalize to real world scenarios?" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Baselines", "publication_ref": [ "b25", "b17", "b19", "b44", "b40", "b57", "b68" ], "table_ref": [], "text": "We conduct experiments on our proposed PAN dataset. To evaluate the effectiveness of image quality assessment, we compare with 13 state-of-the-art methods, including four widely used FR-IQA methods: PSNR, SSIM, LPIPS [73] and E-LPIPS [26]; one IQA method for GANs: GIQA [18]; and eight NR-IQA methods, including vanilla ResNet50 [20], BRISQUE [44], WaDIQaM [4], RankIQA [40], DBCNN [77], HyperIQA [57], Paq2Piq [68] and MANIQA [67]." }, { "figure_ref": [], "heading": "Implementation Details and Evaluation Metrics", "publication_ref": [ "b26" ], "table_ref": [], "text": "Two evaluation metrics are selected to compare the performance of different IQA algorithms: the Spearman rank order correlation coefficient (SROCC) and Pearson's linear correlation coefficient (PLCC). We also measure attention alignment by the cosine similarity S_C between model attention and human gaze. Results are averaged over all baselines (distortions). For implementation, we use a ResNet50 backbone for our DPA method, with hyperparameters λ and γ empirically set to 8.0 and 3.0, respectively. For a fair comparison, we train all methods for 20 epochs using an Adam optimizer [27] with a learning rate of 3×10^-5. See additional experiment settings in supplementary materials." }, { "figure_ref": [], "heading": "Do We Need PAN Dataset?", "publication_ref": [ "b29" ], "table_ref": [], "text": "In this section, we answer the following question: can existing IQA databases solve the problem of naturalness assessment, so that PAN is not needed? Specifically, we test the results of existing methods on PAN with their released models, compared with training our DPA directly on PAN. For methods without a released model, we train them on the TID2013 dataset [46] using their default conditions. From the results in Table 2, we can draw several conclusions as follows:\n(1) Collecting our PAN dataset is vital for assessing the naturalness of physical world attacks. Our DPA+PAN achieves 0.2928 (+64.03%) higher SROCC and 0.3759 (+94.73%) higher PLCC than SSIM, the best existing method. This clearly shows that existing methods and datasets are insufficient to evaluate the naturalness of physical world attacks.\n(2) Since existing methods are ineffective, we do not recommend using SSIM and LPIPS as naturalness indicators in the physical world, as opposed to the digital world [7,29]. However, if our DPA is not applicable, SSIM provides the best estimate (0.0583 higher in SROCC and 0.0274 higher in PLCC than the second best baseline, E-LPIPS)." }, { "figure_ref": [], "heading": "Can DPA Better Assess Naturalness?", "publication_ref": [ "b13", "b23" ], "table_ref": [], "text": "Next, based on the PAN dataset, we ask: with human priors incorporated, can DPA better assess the naturalness of physical world attacks? For the non-learning methods PSNR and SSIM, we evaluate them on the whole PAN dataset, so their results are identical to those in Table 2.
From the results listed in Table 3, we can draw several conclusions as follows:\n(1) Aligning the behavior of DNNs with humans improves naturalness assessment. Trained on PAN, our DPA outperforms the best baseline by 0.0248 (+3.42%) in SROCC and 0.0163 (+2.15%) in PLCC.\n(2) Using S_C as a measure of alignment between model and human attention, under the attentive prior alignment loss, DPA gains 81.86% higher alignment compared with the best baseline, which provides significantly better alignment between model attention and human gaze. We also illustrate model attentions and the corresponding human gaze in Fig. 7: while almost all baselines rely on spurious areas for prediction, DPA bases its decision on the correct areas.\n(3) The ineffectiveness of FR-IQA and GIQA methods could be explained by adversarial features [14,24]: adversarial attacks are effective because they are not just noise, but meaningful features from other domains for DNNs. While FR-IQA and GIQA methods keep backbone parameters unchanged, the extracted features might be polluted by adversarial features and are thus unable to give reliable results." }, { "figure_ref": [], "heading": "Can DPA Generalize?", "publication_ref": [], "table_ref": [], "text": "Finally, we ask the question: can DPA generalize to unseen real world images? To verify this, we manually collected 504 real world images, called PAN-phys, with 8 pitch angles, 3 yaw angles and 3 backgrounds. See details of this dataset in supplementary materials. Next, we collect human ratings and gaze signals using the same approach as for the PAN dataset. Finally, we fix the parameters of all methods and evaluate their results on PAN-phys. From the results listed in Table 4, we can draw several conclusions as follows:\n(1) Through aligning model behaviors with humans, DPA also achieves stronger generalization capability when evaluating images drawn from an unseen real world scenario, outperforming the best baseline by 0.0332 (+8.40%) in SROCC and 0.0236 (+5.34%) in PLCC.\n(2) Our DPA is able to align its attention with human attention even on unseen images, achieving 12.73% higher S_C than the best performing baseline. As shown in Fig. 8, the attention area of DPA keeps aligned with human gaze during generalization, while most baselines yield predictions based on spurious correlations.\n(3) The domain gap between the real world and the simulation environment harms naturalness assessment accuracy, calling for an urgent need to further improve naturalness assessment methods via domain generalization." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct ablation studies to verify the effect of different loss terms, namely the rating prior alignment loss L_R and the attentive prior alignment loss L_A. We argue that L_R and L_A jointly improve alignment with human ratings, while L_A also improves alignment with human gaze. As shown in Table 5, L_A and L_R contribute to SROCC and PLCC individually, while combining them shows further improvement. For the attention alignment S_C, while L_A significantly improves alignment with gaze, we surprisingly find that aligning the human rating prior by L_R also partly enhances S_C. Additionally, the effect of L_R on S_C is enhanced in the presence of L_A (+0.0314 w/o L_A, +0.1071 w/ L_A). We hypothesize that aligning human behavior from one aspect might have a synergy effect on another aspect. We leave a detailed study of this phenomenon for future work."
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we study how to evaluate the naturalness of physical world adversarial attacks. Specifically, we contribute PAN, the first dataset to benchmark and evaluate the naturalness of physical world attacks. Besides, we propose DPA, an automatic naturalness assessment algorithm which offers higher alignment with human ratings and better generalization. Our work fertilizes the community by (1) contributing PAN, which enables research on evaluating the naturalness of physical world attacks via human ratings and high-quality, large-scale gaze signals; (2) encouraging new research on natural physical world attacks via the analysis of contextual and behavioral features; and (3) encouraging new research to design better IQA algorithms for physical world attacks." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement. This work was supported by the National Key Research and Development Plan of China (2021ZD0110601), the National Natural Science Foundation of China (62022009, 62132010 and 62206009), and the State Key Laboratory of Software Development Environment." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Our code and dataset can be found at https://github.com/zhangsn-19" } ]
Physical world adversarial attack is a highly practical and threatening attack, which fools real world deep learning systems by generating conspicuous and maliciously crafted real world artifacts. In physical world attacks, evaluating naturalness is highly emphasized since humans can easily detect and remove unnatural attacks. However, current studies evaluate naturalness in a case-by-case fashion, which suffers from errors, bias and inconsistencies. In this paper, we take the first step to benchmark and assess the visual naturalness of physical world attacks, taking the autonomous driving scenario as the first attempt. First, to benchmark attack naturalness, we contribute the first Physical Attack Naturalness (PAN) dataset with human ratings and gaze. PAN verifies several insights for the first time: naturalness is (disparately) affected by contextual features (i.e., environmental and semantic variations) and correlates with a behavioral feature (i.e., the gaze signal). Second, to automatically assess attack naturalness in a way that aligns with human ratings, we further introduce the Dual Prior Alignment (DPA) network, which aims to embed human knowledge into the model reasoning process. Specifically, DPA imitates human reasoning in naturalness assessment by rating prior alignment and mimics human gaze behavior by attentive prior alignment. We hope our work fosters research to improve and automatically assess the naturalness of physical world attacks.
Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks
[ { "figure_caption": "Figure 2 .2Figure 2. Distribution of data variations. (a) number of images contained by each factor in PAN dataset. (b) distribution of diversity factors, including semantic and model diversity.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Environmental and diversity variations in PAN dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Illustration of data contained in PAN dataset. We provide raw image, corresponding human gaze and MOS score.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization of factors that affect naturalness. Violin plot indicates MOS score distribution across all images.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Framework of DPA. Rating prior alignment mimics the uncertainty and hidden desiderata in human naturalness rating process. Attentive prior alignment corrects spurious correlations in models by aligning model attention with human gaze.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .Figure 8 .78Figure 7. Grad-CAM visualization of DPA and baselines.LPIPS E-LPIPS ResNet50 WaDIQaM RankIQA DBCNN HyperIQA Paq2Piq MANIQA DPA(ours) Gaze", "figure_data": "", "figure_id": "fig_5", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "compare image naturalness based on reference image and distorted Comparisons between existing IQA datasets and PAN dataset. PAN differs from existing IQA database from type of distortion, image source and the property of assessed images.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Validating necessity of PAN dataset. All baselines are trained without using PAN, with DPA trained on PAN.", "figure_data": "CategoryMethodSROCC (↑) PLCC (↑)S C (↑)PSNR0.35600.3685-FR-IQASSIM LPIPS0.4573 0.10560.3968 0.1395-0.0583E-LPIPS0.39900.36940.0727OthersGIQA(KNN) GIQA(GMM)0.1382 0.15370.1133 0.1392--BRISQUE0.10290.0494-ResNet500.11490.16820.1692WaDIQaM-0.0704-0.10780.1821NR-IQARankIQA DBCNN0.1809 0.14090.1992 0.11670.0095 0.0876HyperIQA0.16390.12850.2188Paq2Piq0.03200.05040.2791MANIQA0.27410.27170.0810NR-IQADPA+PAN (Ours)0.75010.77270.7178", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Validating the effectiveness of DPA using PAN dataset. DPA outperform other baselines by aligning with human rating prior and human attention prior. result of existing methods on PAN with its released models, compared with training our DPA directly on PAN. For methods without released model, we train them on TID2013 dataset [46] using their default conditions. 
From results in Table.2, we can draw several conclusions as follows:", "figure_data": "CategoryMethodSROCC (↑) PLCC (↑)S C (↑)PSNR0.35600.3685-FR-IQASSIM LPIPS0.4573 0.09940.3968 0.1114-0.0089E-LPIPS0.40820.40640.0136OthersGIQA(KNN) GIQA(GMM)0.1428 0.08380.1132 -0.0366--BRISQUE0.47530.3777-ResNet500.69160.74530.2066WaDIQaM0.69980.68410.2130NR-IQARankIQA DBCNN0.7227 0.68000.7564 0.66210.1134 0.3947HyperIQA0.72530.72650.1955Paq2Piq0.60440.60890.2003MANIQA0.71290.73310.0861NR-IQADPA (Ours)0.75010.77270.7178", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Generalization results of DPA and other baselines on real world image dataset, PAN-phys.", "figure_data": "CategoryMethodSROCC (↑) PLCC (↑)S C (↑)PSNR0.31630.3009-FR-IQASSIM LPIPS0.3594 -0.26590.3558 -0.3540-0.0163E-LPIPS-0.3778-0.35890.1658OthersGIQA(KNN) GIQA(GMM)0.0075 0.07470.0275 0.0809--BRISQUE0.02610.0245-ResNet500.28740.32820.1935WaDIQaM-0.1362-0.13750.0329NR-IQARankIQA DBCNN-0.1313 0.3907-0.1368 0.41440.2942 0.3028HyperIQA0.39510.44160.3645Paq2Piq0.37520.39050.2244MANIQA0.36730.38390.2502NR-IQADPA (Ours)0.42830.46520.4109ResNet500.69160.74530.2066L A0.71210.75860.6107L R0.71540.76730.2380L R + L A0.75010.77270.7178", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study for different loss terms when evaluating human ratings. All terms in DPA achieved their desired goal.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Simin Li; Shuning Zhang; Gujun Chen; Dong Wang; Pu Feng; Jiakai Wang; Aishan Liu; Xin Yi; Xianglong Liu
[ { "authors": "H Alers; J A Redi; H Liu; I Heynderickx", "journal": "Journal of Electronic Imaging", "ref_id": "b0", "title": "Studying the effect of optimizing image quality in salient regions at the expense of background content", "year": "2013" }, { "authors": "A Athalye; L Engstrom; A Ilyas; K Kwok", "journal": "PMLR", "ref_id": "b1", "title": "Synthesizing robust adversarial examples", "year": "2018" }, { "authors": "C G Bampis; P Gupta; R Soundararajan; A C Bovik", "journal": "IEEE signal processing letters", "ref_id": "b2", "title": "Speed-qa: Spatial efficient entropic differencing for image and video quality", "year": "2017" }, { "authors": "S Bosse; D Maniry; K.-R Müller; T Wiegand; W Samek", "journal": "IEEE Transactions on image processing", "ref_id": "b3", "title": "Deep neural networks for no-reference and full-reference image quality assessment", "year": "2017" }, { "authors": "N Carlini; D Wagner", "journal": "Ieee", "ref_id": "b4", "title": "Towards evaluating the robustness of neural networks", "year": "2017" }, { "authors": "A Chattopadhay; A Sarkar; P Howlader; V N Balasubramanian", "journal": "IEEE", "ref_id": "b5", "title": "Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks", "year": "2018" }, { "authors": "V Cherepanova; M Goldblum; H Foley; S Duan; J Dickerson; G Taylor; T Goldstein", "journal": "", "ref_id": "b6", "title": "Lowkey: Leveraging adversarial attacks to protect social media users from facial recognition", "year": "2021" }, { "authors": "K Ding; K Ma; S Wang; E P Simoncelli", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b7", "title": "Image quality assessment: Unifying structure and texture similarity", "year": "2020" }, { "authors": "B G Doan; M Xue; S Ma; E Abbasnejad; D C Ranasinghe", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b8", "title": "Tnt attacks! universal naturalistic adversarial patches against deep neural network systems", "year": "2022" }, { "authors": "A Dosovitskiy; G Ros; F Codevilla; A Lopez; V Koltun", "journal": "PMLR", "ref_id": "b9", "title": "Carla: An open urban driving simulator", "year": "2017" }, { "authors": "R Duan; X Ma; Y Wang; J Bailey; A K Qin; Y Yang", "journal": "", "ref_id": "b10", "title": "Adversarial camouflage: Hiding physical-world attacks with natural styles", "year": "2020" }, { "authors": "U Engelke; A Maeder; H.-J Zepernick", "journal": "IEEE", "ref_id": "b11", "title": "Visual attention modelling for subjective image quality databases", "year": "2009" }, { "authors": "K Eykholt; I Evtimov; E Fernandes; B Li; A Rahmati; C Xiao; A Prakash; T Kohno; D Song", "journal": "", "ref_id": "b12", "title": "Robust physical-world attacks on deep learning visual classification", "year": "2018" }, { "authors": "L Fowl; M Goldblum; P -Y. 
Chiang; J Geiping; W Czaja; T Goldstein", "journal": "", "ref_id": "b13", "title": "Adversarial examples make strong poisons", "year": "2021" }, { "authors": "L A Gatys; M Kümmerer; T S Wallis; M Bethge", "journal": "", "ref_id": "b14", "title": "Guiding human gaze with convolutional neural networks", "year": "2017" }, { "authors": "D Ghadiyaram; A C Bovik", "journal": "IEEE Transactions on Image Processing", "ref_id": "b15", "title": "Massive online crowdsourced study of subjective and objective picture quality", "year": "2015" }, { "authors": "I J Goodfellow; J Shlens; C Szegedy", "journal": "", "ref_id": "b16", "title": "Explaining and harnessing adversarial examples", "year": "2014" }, { "authors": "S Gu; J Bao; D Chen; F Wen", "journal": "Springer", "ref_id": "b17", "title": "Giqa: Generated image quality assessment", "year": "2020" }, { "authors": "J Guo; W Bao; J Wang; Y Ma; X Gao; G Xiao; A Liu; J Dong; X Liu; W Wu", "journal": "Pattern Recognition", "ref_id": "b18", "title": "A comprehensive evaluation framework for deep model robustness", "year": "2023" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b19", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "V Hosu; H Lin; T Sziranyi; D Saupe", "journal": "IEEE Transactions on Image Processing", "ref_id": "b20", "title": "Koniq-10k: An ecologically valid database for deep learning of blind image quality assessment", "year": "2020" }, { "authors": "Z Hu; S Huang; X Zhu; F Sun; B Zhang; X Hu", "journal": "", "ref_id": "b21", "title": "Adversarial texture for fooling person detectors in the physical world", "year": "2022" }, { "authors": "L Huang; C Gao; Y Zhou; C Xie; A L Yuille; C Zou; N Liu", "journal": "", "ref_id": "b22", "title": "Universal physical camouflage attacks on object detectors", "year": "2020" }, { "authors": "A Ilyas; S Santurkar; D Tsipras; L Engstrom; B Tran; A Madry", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Adversarial examples are not bugs, they are features", "year": "2019" }, { "authors": "J Janai; F Güney; A Behl; A Geiger", "journal": "Foundations and Trends® in Computer Graphics and Vision", "ref_id": "b24", "title": "Computer vision for autonomous vehicles: Problems, datasets and state of the art", "year": "2020" }, { "authors": "M Kettunen; E Härkönen; J Lehtinen", "journal": "", "ref_id": "b25", "title": "E-lpips: robust perceptual image similarity via random transformation ensembles", "year": "2019" }, { "authors": "D P Kingma; J Ba; Adam", "journal": "", "ref_id": "b26", "title": "A method for stochastic optimization", "year": "2014" }, { "authors": "A Kurakin; I J Goodfellow; S Bengio", "journal": "", "ref_id": "b27", "title": "Adversarial examples in the physical world", "year": "" }, { "authors": "Hall Chapman", "journal": "CRC", "ref_id": "b28", "title": "", "year": "2018" }, { "authors": "C Laidlaw; S Singla; S Feizi", "journal": "", "ref_id": "b29", "title": "Perceptual adversarial robustness: Defense against unseen threat models", "year": "2020" }, { "authors": "E C Larson; D M Chandler", "journal": "Journal of electronic imaging", "ref_id": "b30", "title": "Most apparent distortion: full-reference image quality assessment and the role of strategy", "year": "2010" }, { "authors": "H Lin; V Hosu; D Saupe", "journal": "IEEE", "ref_id": "b31", "title": "Kadid-10k: A large-scale artificially distorted iqa database", "year": "2019" }, { "authors": "A Liu; J Guo; J Wang; S Liang; R Tao; W 
Zhou; C Liu; X Liu; D Tao", "journal": "", "ref_id": "b32", "title": "X-adv: Physical adversarial object attacks against x-ray prohibited item detection", "year": "2023" }, { "authors": "A Liu; T Huang; X Liu; Y Xu; Y Ma; X Chen; S Maybank; D Tao", "journal": "", "ref_id": "b33", "title": "Spatiotemporal attacks for embodied agents", "year": "2020" }, { "authors": "A Liu; X Liu; J Fan; Y Ma; A Zhang; H Xie; D Tao", "journal": "", "ref_id": "b34", "title": "Perceptualsensitive gan for generating adversarial patches", "year": "2019" }, { "authors": "A Liu; X Liu; H Yu; C Zhang; Q Liu; D Tao", "journal": "IEEE TIP", "ref_id": "b35", "title": "Training robust deep neural networks via adversarial noise propagation", "year": "2021" }, { "authors": "A Liu; J Wang; X Liu; C Cao; H Zhang; Yu", "journal": "", "ref_id": "b36", "title": "Bias-based universal adversarial patch attack for automatic check-out", "year": "2020" }, { "authors": "H Liu; U Engelke; J Wang; P Le Callet; I Heynderickx", "journal": "IEEE Signal Processing Letters", "ref_id": "b37", "title": "How does image content affect the added value of visual attention in objective image quality assessment", "year": "2013" }, { "authors": "H Liu; I Heynderickx", "journal": "IEEE", "ref_id": "b38", "title": "Studying the added value of visual attention in objective image quality metrics based on eye movement data", "year": "2009" }, { "authors": "H Liu; I Heynderickx", "journal": "IEEE transactions on Circuits and Systems for Video Technology", "ref_id": "b39", "title": "Visual attention in objective image quality assessment: Based on eye-tracking data", "year": "2011" }, { "authors": "X Liu; J Van De Weijer; A D Bagdanov", "journal": "", "ref_id": "b40", "title": "Rankiqa: Learning from rankings for no-reference image quality assessment", "year": "2017" }, { "authors": "C Luo; Q Lin; W Xie; B Wu; J Xie; L Shen", "journal": "", "ref_id": "b41", "title": "Frequency-driven imperceptible adversarial attack on semantic similarity", "year": "2022" }, { "authors": "A Madry; A Makelov; L Schmidt; D Tsipras; A Vladu", "journal": "", "ref_id": "b42", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2017" }, { "authors": "X Min; G Zhai; Z Gao; K Gu", "journal": "IEEE", "ref_id": "b43", "title": "Visual attention data for image quality assessment databases", "year": "2014" }, { "authors": "A Mittal; A K Moorthy; A C Bovik", "journal": "IEEE Transactions on image processing", "ref_id": "b44", "title": "No-reference image quality assessment in the spatial domain", "year": "2012" }, { "authors": "M H Pinson; S Wolf", "journal": "IEEE Transactions on broadcasting", "ref_id": "b45", "title": "A new standardized method for objectively measuring video quality", "year": "2004" }, { "authors": "N Ponomarenko; L Jin; O Ieremeiev; V Lukin; K Egiazarian; J Astola; B Vozel; K Chehdi; M Carli; F Battisti", "journal": "Signal processing: Image communication", "ref_id": "b46", "title": "Image database tid2013: Peculiarities, results and perspectives", "year": "2015" }, { "authors": "N Ponomarenko; V Lukin; A Zelensky; K Egiazarian; M Carli; F Battisti", "journal": "Advances of Modern Radioelectronics", "ref_id": "b47", "title": "Tid2008-a database for evaluation of full-reference visual quality assessment metrics", "year": "2009" }, { "authors": "E Prashnani; H Cai; Y Mostofi; P Sen", "journal": "", "ref_id": "b48", "title": "Pieapp: Perceptual image-error assessment through pairwise preference", "year": "2018" }, { "authors": "J Redi; 
H Liu; R Zunino; I Heynderickx", "journal": "SPIE", "ref_id": "b49", "title": "Interactions of visual attention and quality perception", "year": "2011" }, { "authors": "J A Redi; P Gastaldo; I Heynderickx; R Zunino", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b50", "title": "Color distribution information for the reduced-reference assessment of perceived image quality", "year": "2010" }, { "authors": "A Rehman; Z Wang", "journal": "IEEE transactions on image processing", "ref_id": "b51", "title": "Reduced-reference image quality assessment by structural similarity estimation", "year": "2012" }, { "authors": "R R Selvaraju; M Cogswell; A Das; R Vedantam; D Parikh; D Batra", "journal": "", "ref_id": "b52", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017" }, { "authors": "M Sharif; S Bhagavatula; L Bauer; M K Reiter", "journal": "", "ref_id": "b53", "title": "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition", "year": "2016" }, { "authors": "M Sharif; S Bhagavatula; L Bauer; M K Reiter", "journal": "ACM Transactions on Privacy and Security (TOPS)", "ref_id": "b54", "title": "A general framework for adversarial examples with objectives", "year": "2019" }, { "authors": "H R Sheikh; M F Sabir; A C Bovik", "journal": "IEEE Trans. Image Processing", "ref_id": "b55", "title": "A statistical evaluation of recent full reference image quality assessment algorithms", "year": "2006" }, { "authors": "D Song; K Eykholt; I Evtimov; E Fernandes; B Li; A Rahmati; F Tramer; A Prakash; T Kohno", "journal": "", "ref_id": "b56", "title": "Physical adversarial examples for object detectors", "year": "2018" }, { "authors": "S Su; Q Yan; Y Zhu; C Zhang; X Ge; J Sun; Y Zhang", "journal": "", "ref_id": "b57", "title": "Blindly assess image quality in the wild guided by a self-adaptive hyper network", "year": "2020" }, { "authors": "C Szegedy; W Zaremba; I Sutskever; J Bruna; D Erhan; I Goodfellow; R Fergus", "journal": "", "ref_id": "b58", "title": "Intriguing properties of neural networks", "year": "2013" }, { "authors": "S Tang; R Gong; Y Wang; A Liu; J Wang; X Chen; F Yu; X Liu; D Song; A Yuille; P H Torr; D Tao", "journal": "", "ref_id": "b59", "title": "Robustart: Benchmarking robustness on architecture design and training techniques", "year": "2021" }, { "authors": "S Thys; W Van Ranst; T Goedemé", "journal": "", "ref_id": "b60", "title": "Fooling automated surveillance cameras: adversarial patches to attack person detection", "year": "2019" }, { "authors": "J Wang; A Liu; Z Yin; S Liu; S Tang; X Liu", "journal": "", "ref_id": "b61", "title": "Dual attention suppression attack: Generate adversarial camouflage in physical world", "year": "2021" }, { "authors": "J Wang; A Liu; Z Yin; S Liu; S Tang; X Liu", "journal": "", "ref_id": "b62", "title": "Dual attention suppression attack: Generate adversarial camouflage in physical world", "year": "2021" }, { "authors": "Z Wang; G Wu; H R Sheikh; E P Simoncelli; E.-H Yang; A C Bovik", "journal": "IEEE transactions on image processing", "ref_id": "b63", "title": "Quality-aware images", "year": "2006" }, { "authors": "J O Wobbrock", "journal": "", "ref_id": "b64", "title": "Practical statistics for human-computer interaction: An independent study combining statistics theory and tool know-how", "year": "2011" }, { "authors": "C Xiao; D Yang; B Li; J Deng; M Liu", "journal": "", "ref_id": "b65", "title": "Meshadv: Adversarial meshes 
for visual recognition", "year": "2019" }, { "authors": "K Xu; G Zhang; S Liu; Q Fan; M Sun; H Chen; P.-Y Chen; Y Wang; X Lin", "journal": "Springer", "ref_id": "b66", "title": "Adversarial t-shirt! evading person detectors in a physical world", "year": "2020" }, { "authors": "S Yang; T Wu; S Shi; S Lao; Y Gong; M Cao; J Wang; Y Yang", "journal": "", "ref_id": "b67", "title": "Maniqa: Multi-dimension attention network for no-reference image quality assessment", "year": "2022" }, { "authors": "Z Ying; H Niu; P Gupta; D Mahajan; D Ghadiyaram; A Bovik", "journal": "", "ref_id": "b68", "title": "From patches to pictures (paq-2-piq): Mapping the perceptual space of picture quality", "year": "2020" }, { "authors": "G Zhai; X Min", "journal": "Science China Information Sciences", "ref_id": "b69", "title": "Perceptual image quality assessment: a survey", "year": "2020" }, { "authors": "C Zhang; A Liu; X Liu; Y Xu; H Yu; Y Ma; T Li", "journal": "IEEE Transactions on Image Processing", "ref_id": "b70", "title": "Interpreting and improving adversarial robustness with neuron sensitivity", "year": "2020" }, { "authors": "L Zhang; Y Shen; H Li", "journal": "IEEE Transactions on Image processing", "ref_id": "b71", "title": "Vsi: A visual saliency-induced index for perceptual image quality assessment", "year": "2014" }, { "authors": "L Zhang; Y Shen; H Li", "journal": "IEEE Transactions on Image processing", "ref_id": "b72", "title": "Vsi: A visual saliency-induced index for perceptual image quality assessment", "year": "2014" }, { "authors": "R Zhang; P Isola; A A Efros; E Shechtman; O Wang", "journal": "", "ref_id": "b73", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "R Zhang; A Saran; B Liu; Y Zhu; S Guo; S Niekum; D Ballard; M Hayhoe", "journal": "NIH Public Access", "ref_id": "b74", "title": "Human gaze assisted artificial intelligence: A review", "year": "2020" }, { "authors": "W Zhang; H Liu", "journal": "Neurocomputing", "ref_id": "b75", "title": "Learning picture quality from visual distraction: Psychophysical studies and computational models", "year": "2017" }, { "authors": "W Zhang; H Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b76", "title": "Toward a reliable collection of eye-tracking data for image quality research: challenges, solutions, and applications", "year": "2017" }, { "authors": "W Zhang; K Ma; J Yan; D Deng; Z Wang", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b77", "title": "Blind image quality assessment using a deep bilinear convolutional neural network", "year": "2018" }, { "authors": "Y Zhang; H Foroosh; P David; B Gong", "journal": "", "ref_id": "b78", "title": "Camou: Learning physical vehicle camouflages to adversarially attack detectors in the wild", "year": "2018" }, { "authors": "Z Zhao; Z Liu; M Larson", "journal": "", "ref_id": "b79", "title": "Towards large yet imperceptible adversarial image perturbations with perceptual color distance", "year": "2020" }, { "authors": "Z Zhao; S Xu; C Zhang; J Liu; J Zhang; P Li", "journal": "", "ref_id": "b80", "title": "Didfuse: Deep image decomposition for infrared and visible image fusion", "year": "2020" }, { "authors": "Z Zhao; S Xu; J Zhang; C Liang; C Zhang; J Liu", "journal": "IEEE Trans. Circuits Syst. 
Video Technol", "ref_id": "b81", "title": "Efficient and modelbased infrared and visible image fusion via algorithm unrolling", "year": "2022" }, { "authors": "Z Zhao; J Zhang; S Xu; Z Lin; H Pfister", "journal": "IEEE", "ref_id": "b82", "title": "Discrete cosine transform network for guided depth map super-resolution", "year": "2022" }, { "authors": "B Zhou; A Khosla; A Lapedriza; A Oliva; A Torralba", "journal": "", "ref_id": "b83", "title": "Learning deep features for discriminative localization", "year": "2016" }, { "authors": "H Zhu; L Li; J Wu; W Dong; G Shi", "journal": "", "ref_id": "b84", "title": "Metaiqa: Deep meta-learning for no-reference image quality assessment", "year": "2020" } ]
[ { "formula_coordinates": [ 6, 104.8, 77.36, 52.25, 61.19 ], "formula_id": "formula_0", "formula_text": "Input Backbone 𝑓(•)" }, { "formula_coordinates": [ 6, 62.12, 357.16, 224.24, 26.56 ], "formula_id": "formula_1", "formula_text": "p (x, z) = exp(f θ (x) • z /||f θ (x)|| • ||z ||) L j=1 exp(f θ (x) • z j /||f θ (x)|| • ||z j || ,(1)" }, { "formula_coordinates": [ 6, 58.62, 450.27, 219.24, 30.55 ], "formula_id": "formula_2", "formula_text": "L R = KL(p(x, z)||r) = L =1 p (x, z) log p (x, z) r ," }, { "formula_coordinates": [ 6, 88.83, 646, 197.54, 33.76 ], "formula_id": "formula_3", "formula_text": "A(x, ŷ) = 1 Z ReLU   i,j,k ∂ ŷ ∂A k ij A k ij   ,(3)" }, { "formula_coordinates": [ 6, 343.4, 274.88, 201.72, 30.55 ], "formula_id": "formula_4", "formula_text": "∂ ŷ ∂A k ij = L =1 ∂ ŷ ∂p • ∂p ∂A k ij = L =1 s • ∂p ∂A k ij .(4)" }, { "formula_coordinates": [ 6, 325.84, 386.54, 219.28, 33.76 ], "formula_id": "formula_5", "formula_text": "A(x, p) = 1 Z ReLU   i,j,k p • ∂p ∂A k ij A k ij   .(5)" }, { "formula_coordinates": [ 6, 378.54, 466.4, 166.58, 12.69 ], "formula_id": "formula_6", "formula_text": "L A = ||A(x, p) -S|| 2 F .(6)" }, { "formula_coordinates": [ 6, 327.75, 567.91, 117.2, 14.56 ], "formula_id": "formula_7", "formula_text": "L S = 1 N N n=1 ||ŷ n -y n || 2" }, { "formula_coordinates": [ 6, 379.66, 629.15, 165.45, 14.66 ], "formula_id": "formula_8", "formula_text": "min θ,z L S + λL R + γL A ,(7)" } ]
10.1145/3587716.3587743
2023-10-14
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b22", "b39", "b35", "b10", "b44", "b12", "b14", "b32", "b36", "b10" ], "table_ref": [], "text": "Large language models (LLMs) capable of following natural language instructions have exhibited tremendous success in generalizing zero-shot to new tasks (Mishra et al., 2022;Wei et al., 2022a). Due to various concerns, the most advanced LLMs, such as ChatGPT (OpenAI, 2022) and GPT-4 (Ope-nAI, 2023) that boasting billions of parameters, are Figure 1: An illustration of the distinction between our approach and earlier ones. Previous methods facilitate a one-way knowledge transfer from the teacher to the student (solid arrow). Our approach, however, incorporates an innovative step (dashed arrow) that completes a loop: it enables the feedback\"-identifying the student model's weaknesses-to be relayed back to the teacher, in order to foster tailored learning.\ntypically proprietary, comprising both the model parameter and the training data. To foster increased transparency regarding their intricate operational mechanics, a surge in research efforts focusing on knowledge distillation from a proprietary \"teacher\" LLM to an open-source \"student\" LLM. This is typically accomplished by aligning the responses of the student model with those of the teacher model to a set of instructions, which can be manually or automatically generated (Wang et al., 2022;Taori et al., 2023;Chiang et al., 2023;Xu et al., 2023).\nHowever, previous works employ a unidirectional approach to knowledge transfer (solid arrow in Figure 1), where the teacher imparts knowledge to the student without considering any \"feedback\".\nTo better illustrate this using a tangible classroom scenario, the \"feedback\" refers to identifying the \"hard\" examples or problems where the student's performance falls short. This feedback guarantees that the teacher can provide bespoke training that centers on \"hard\" examples, thereby paving the way for more effective and tailored learning experiences for the student.\nInspired by adversarial knowledge distillation (AKD), which aims to iteratively improve the student model's performance by learning from generated hard samples (Fang et al., 2019;Micaelli and Storkey, 2019a;Heo et al., 2019), we propose an adversarial framework for distilling a proprietary LLM into a compact student model. Nevertheless, these AKD methodologies necessitate accessibility to the weights or gradients of the teacher model, which cannot be directly adapted to our setting. To circumvent this problem, we leverage the unparalleled role adaptability of LLMs, which can be effectively employed through a diverse range of prompts (Sanh et al., 2022). In particular, we prompt the proprietary teacher LLM to serve as a \"referee\" to discriminate hard instructions where there exists a significant performance discrepancy between the teacher's and student's responses, and serve as a \"generator\" to produce new instructions that emulate the data distributions corresponding to the discriminated hard instructions. Our framework, as depicted in Figure 2, consists of three stages in an iteration: 1) an imitation stage to align the student's response with the teacher's response; 2) a discrimination stage to identify hard instructions; 3) A generation stage to produce new hard instructions for escalating the challenges presented to the student model. 
In essence, our adversarial framework forms a positive feedback loop that efficiently bootstraps the student model's proficiency.\nTo verify the efficiency and efficacy of our method, we apply our AKD framework to transfer the knowledge of ChatGPT2 onto an open-source foundation LLM, known as LLaMA (Touvron et al., 2023). We select Alpaca's training data (generated from only 175 manually selected seed instructions) as the initial training instructions and execute three iterations of AKD, resulting in a total of 70K data that our model is trained on. We've christened our model Lion, drawing inspiration from the art of \"distillation\". By conducting extensive experiments on open-ended generation and reasoning datasets, which include a total of 40 sub-tasks, our Lion-13B showcases superior performance, surpassing instruction-tuned baseline models such as Vicuna (Chiang et al., 2023). Our main contributions are as follows:\n• Our work is the first attempt to adopt the idea of adversarial knowledge distillation to large language models.\n• The versatility of our framework allows for broad application: it is not exclusive to ChatGPT but can be conveniently adapted to suit a variety of other proprietary LLMs.\n2 Related Work" }, { "figure_ref": [], "heading": "Instruction-Following Language Models", "publication_ref": [ "b3", "b2", "b18", "b30", "b1", "b32", "b1", "b35", "b10", "b44", "b36" ], "table_ref": [], "text": "With the impressive ability of instruction-following large language models such as ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023), the techniques of instruction tuning (Wei et al., 2022b) have attracted a lot of attention (Wei et al., 2022c;Bubeck et al., 2023;Bang et al., 2023;Kocon et al., 2023;Chan et al., 2023a). Early research on instruction tuning aimed to enhance the generalization ability of language models, allowing these models to perform new tasks by comprehending task descriptions without relying on a few exemplars. By fine-tuning these instruction-following language models (e.g., T5 (Raffel et al., 2020), FLAN (Aribandi et al., 2022), T0 (Sanh et al., 2022), and ExT5 (Aribandi et al., 2022)) on multi-task datasets in the form of natural language phrased as instructions, these models have been shown to perform well on unseen tasks when given only the instructions. However, these models are only fine-tuned on simple task-specific instructions, and it remains challenging for them to comprehend the sophisticated and diverse intent of users in real-world scenarios. Therefore, InstructGPT (Wei et al., 2022b), ChatGPT (OpenAI, 2022), and GPT-4 (OpenAI, 2023) were trained on diverse forms and abundant task types of human-crafted instructions annotated by a considerable number of annotators. Since these instructions were not open-sourced, recent works such as Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), and WizardLM (Xu et al., 2023) investigate how to generate high-quality instructions and fine-tune the open-source large language model LLaMA (Touvron et al., 2023) with them to approach the performance of ChatGPT." 
}, { "figure_ref": [], "heading": "Knowledge Distillation", "publication_ref": [ "b15", "b29", "b9", "b26", "b28", "b0", "b12", "b45", "b8", "b11", "b17", "b37" ], "table_ref": [], "text": "Knowledge Distillation (KD) (Hinton et al., 2015;Radosavovic et al., 2018;Chen et al., 2019) represents a crucial strategy within the sphere of model compression and acceleration, wherein a compact student model is instructed to emulate the performance traits of a more cumbersome teacher model. In practical contexts, the availability of training data is often constrained due to concerns regarding privacy, legality, security, or confidentiality. To address the absence of training data, data-free KD methods were proposed to align the student model to the teacher model, capitalizing on either related proxy data (Orekondy et al., 2019;Papernot et al., 2017) or synthetic data generated by learnable generators (e.g., Generative Adversarial Network (GAN)) (Addepalli et al., 2020;Fang et al., 2019;Micaelli and Storkey, 2019b) or teacher model inversions (Yin et al., 2020;Chawla et al., 2021;Fang et al., 2022). Nevertheless, these KD methodologies necessitate the accessibility to the weights or gradients of the teacher model. Consequently, an alternative line of research, commonly denoted as data-free model extraction (or stealing), endeavors to bridge this gap by employing zero-order estimation methodologies to approximate the authentic gradients of the teacher model to guide the update of the optimized generators (Kariyappa et al., 2021;Truong et al., 2021). However, adapting these methods to our distillation task presents two main hurdles. First, these techniques are primarily designed for image-based classification tasks, assuming access to a continuous softmax vector from the teacher model. Estimating zero-order gradients becomes problematic in our case, as responses are typically sequence-oriented. Second, developing an effective instruction generator capable of producing diverse, high-quality instructions that mirror the teacher model's training data distribution proves more challenging than in the image domain." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology", "publication_ref": [ "b12", "b14" ], "table_ref": [], "text": "Harnessing the learned knowledge of a sophisticated teacher model T (x; θ T ) where the parameter θ T is inaccessible, our goal is to craft a more lightweight student model S(x; θ S ). Ideally, a student model is optimal if the expectation of model discrepancy (which indicates the prediction differences between teacher T and student S) on the uniform data distribution is minimized. Inspired by the success of adversarial knowledge distillation (AKD) (Fang et al., 2019;Micaelli and Storkey, 2019a;Heo et al., 2019), we turn to optimize an upper bound of the expectation -the expectation of the model discrepancy on \"hard samples\", where the teacher T and the student S have a relatively large performance gap. These \"hard samples\" are inclined to dominate the expectation of the model discrepancy. Thus, the overall expected model discrepancy can be effectively and efficiently reduced by optimizing the student model S on these \"hard samples\". 
The underlying rationale is rather straightforward and can be analogized to a real-world educational scenario: continuously concentrating on the \"hard\" knowledge that the student finds challenging to grasp is the most effective manner of enhancing a student's proficiency.\nHowever, in the process of training the student model S, hard samples will be mastered by the student and converted into easy samples. Hence we need a mechanism to continuously generate hard samples, which can be achieved by an adversarial framework.\nThe whole framework of our Adversarial Knowledge Distillation is depicted in Figure 2, which contains three stages in an iteration: 1) an imitation stage to align the student's response with the teacher's response; 2) a discrimination stage to identify hard samples; 3) a generation stage to produce new hard samples for escalating the challenges presented to the student model." }, { "figure_ref": [ "fig_0" ], "heading": "Initialization", "publication_ref": [ "b36" ], "table_ref": [], "text": "As shown in Figure 2, four roles and two data pools are established in our framework, and we will comprehensively illustrate their functions later. We initialize our student model S using a foundation LLM such as LLaMA (Touvron et al., 2023). We initialize our teacher model T, referee R, and generator G using the same proprietary LLM, such as ChatGPT (OpenAI, 2022). The multiple roles that this proprietary LLM serves are accomplished through the use of varied prompt templates. We start the iteration from a given initial Train Pool X^A = {x^A_i}_{i∈[1,N^A]}, where x^A_i is the i-th instruction in X^A, and N^A is the number of samples in X^A. The Cache Pool X^B is initialized as identical to X^A, consisting of instructions to evaluate the performance of S and T." }, { "figure_ref": [], "heading": "Imitation Stage", "publication_ref": [ "b35", "b10" ], "table_ref": [ "tab_1" ], "text": "To impart the knowledge of the teacher to the student, we construct the instruction-response data {x^A_i, T(x^A_i)}_{i∈[1,N^A]} by forward propagating instructions in the Train Pool X^A through the teacher T. The prompt template used for model inference is shown in Table 10. Like the imitation training of previous work (Taori et al., 2023;Chiang et al., 2023), we fine-tune our student model S to align with the responses of the teacher model by optimizing the autoregressive language modeling objective." }, { "figure_ref": [ "fig_0" ], "heading": "Discrimination Stage", "publication_ref": [ "b10", "b38" ], "table_ref": [ "tab_1" ], "text": "Figure 2 demonstrates that the discrimination stage starts from the Cache Pool, denoted as X^B. Even though this pool begins with the same initialization as the Train Pool, their uses diverge. The Train Pool is rejuvenated by replacing its existing instructions with freshly generated instructions, whereas the Cache Pool is enriched by incorporating these generated instructions. As a result, the growing storage capacity of the Cache Pool provides a more extensive space for evaluating the performance gap between teacher T and student S. This allows for more thorough detection of hard instructions.\nIn the discrimination stage, we ask the proprietary LLM to serve as a \"referee\", which quantifies the performance gap between T and S. Specifically, we feed each instruction x^B_i in the Cache Pool X^B through both the teacher T and student S to generate the outputs T(x^B_i) and S(x^B_i), respectively. 
Then we ask the referee R to quantitatively measure the quality difference between the teacher's response T(x^B_i) and the student's response S(x^B_i), conditioned on x^B_i:\nd_i = R(T(x^B_i), S(x^B_i) | x^B_i) (1)\nThe above process is conducted by using the prompt template (as shown in Table 11) inspired by (Chiang et al., 2023), which requires the LLM to consider the helpfulness, relevance, accuracy, and level of detail of the two responses and to output two scores. To mitigate the positional bias (Wang et al., 2023) of the LLM referee, we conduct two runs by exchanging the positions of the teacher's response and the student's response and compute the final score as the average of the two runs. Then d_i is calculated as the difference between the teacher's score and the student's score. By setting a threshold τ (1.0 used in our experiments), we discriminate hard instructions as those instructions with d_i ≥ τ, and the others are identified as easy ones. Compared with the instructions in the Cache Pool (Figure 3a), the distribution of the identified hard instructions (Figure 3b) is quite different, focusing more on complex tasks such as math, coding, etc." }, { "figure_ref": [ "fig_2" ], "heading": "Generation Stage", "publication_ref": [ "b44" ], "table_ref": [ "tab_3", "tab_4" ], "text": "After carefully discerning the hard instructions, the generation stage aims to produce samples that mirror the data distributions corresponding to these challenging directives. This process is achieved by employing the proprietary LLM as a generator, denoted as G, leveraging its exceptional prowess in content creation. Inspired by (Xu et al., 2023), we randomly sample an instruction from the hard instructions and prompt the generator G to generate a new instruction. The newly generated instruction is required to pertain to the same domain and match the task type of the sampled instruction. The template utilized for this prompt is exhibited in Table 12. As shown in Figure 3c, the distribution of the newly generated hard instructions appears to be comparable to that of the previously identified hard instructions. To mitigate the issue of catastrophic forgetting and to augment the diversity of the generated instructions, we also randomly sample an instruction from the easy instructions and prompt the generator G to generate a new instruction that belongs to the same domain as the sampled one but exhibits a more long-tailed distribution. The template we use to prompt this process is displayed in Table 13.\nIn each iteration, we define N as the total count of newly generated instructions and maintain a 1:1 ratio r between the generated hard instructions and the generated easy instructions. To promote diversity, a new instruction is deemed valid only if its ROUGE-L overlap with any existing instruction in the Cache Pool is below 0.7. Finally, as aforementioned in Section 3.3, we proceed to rejuvenate the Train Pool, replacing its existing instructions with freshly generated ones. Concurrently, we enrich the Cache Pool by incorporating these newly generated instructions." }, { "figure_ref": [], "heading": "Min-Max Game Interpretation", "publication_ref": [], "table_ref": [], "text": "Our adversarial knowledge distillation framework can be interpreted as a dynamic min-max game: in the imitation stage, we fine-tune our student to minimize the model discrepancy between itself and the teacher on hard samples; in the discrimination and generation stages, we craft new hard samples to maximize the model discrepancy, based on the learning progress of the student model. 
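Putting the three stages together, one adversarial iteration can be sketched roughly as follows. This is a minimal illustration only: the helpers named below (fine_tune, ask_referee, generate_instruction) are hypothetical placeholders for the actual training run and the Table 11/12/13 prompts, this is not the authors' released code, and the ROUGE-L diversity filter is omitted for brevity.

```python
import random

def adversarial_iteration(train_pool, cache_pool, teacher, student,
                          fine_tune, ask_referee, generate_instruction,
                          tau=1.0, n_new=6000):
    # 1) Imitation: align the student with the teacher's responses on the Train Pool.
    student = fine_tune(student, [(x, teacher(x)) for x in train_pool])

    # 2) Discrimination: score the teacher-student gap d_i on the Cache Pool,
    #    averaging two referee runs with swapped answer positions.
    hard, easy = [], []
    for x in cache_pool:
        t1, s1 = ask_referee(x, teacher(x), student(x))   # teacher's answer shown first
        s2, t2 = ask_referee(x, student(x), teacher(x))   # positions swapped
        d_i = (t1 + t2) / 2.0 - (s1 + s2) / 2.0
        (hard if d_i >= tau else easy).append(x)

    # 3) Generation: produce new instructions, half conditioned on hard ones
    #    and half on easy ones (ratio r = 1:1).
    new_instructions = []
    for _ in range(n_new // 2):
        new_instructions.append(generate_instruction(random.choice(hard), style="hard"))
        new_instructions.append(generate_instruction(random.choice(easy), style="easy"))

    # Rejuvenate the Train Pool and enrich the Cache Pool.
    train_pool = new_instructions
    cache_pool = cache_pool + new_instructions
    return train_pool, cache_pool, student
```

Keeping the referee and generator behind single function calls mirrors how the same proprietary LLM is reused for different roles purely by swapping prompt templates.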
This dialectic framework propels the student model towards uncovering otherwise hidden knowledge, paving the way to complete understanding. As the training progresses through several iterations, the system should ideally achieve equilibrium. This is the point where the student model has mastered all the hard samples and the referee R can no longer distinguish between the student S and teacher T models. At this juncture, S becomes functionally indistinguishable from T ." }, { "figure_ref": [], "heading": "Experiments Setting", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [], "table_ref": [], "text": "In our experiments, we implemented a comprehensive LLM evaluation protocol that considers a diverse range of abilities, such as writing, coding, commonsense, math, and logical reasoning. The datasets we utilized can be classified into two main categories: open-ended generation and reasoning." }, { "figure_ref": [], "heading": "Open-ended Generation Datasets", "publication_ref": [ "b10", "b10", "b38" ], "table_ref": [], "text": "Vicuna-Instructions (Chiang et al., 2023) is a set of 80 questions spanning 9 distinct task categories. This dataset has gained extensive usage in evaluating the capabilities of LLMs. Within our work, we examine LLMs' performance on this dataset in two different settings:\n• Setting1: Following Vicuna (Chiang et al., 2023), we leverage GPT-4 to automatically assess the quality of responses (rated on a scale of 1 to 10) between a reference model (ChatGPT) and a candidate model. Subsequently, we calculate the candidate model's performance as the percentage of the total score it achieves compared to the reference model.\n• Setting2: A recent work (Wang et al., 2023) pointed out that a systematic bias may exist in the above-mentioned GPT-4 automatic evaluation. To mitigate this, they propose two strategies, namely Multiple Evidence Calibration and Balanced Position Calibration, to obtain closer alignment with human judgments." }, { "figure_ref": [], "heading": "Reasoning Datasets", "publication_ref": [ "b46", "b34", "b33", "b46", "b23" ], "table_ref": [ "tab_7", "tab_9" ], "text": "AGIEval (Zhong et al., 2023) is a well-known benchmark that quantifies the reasoning capability of foundation models in the context of humancentric standardized exams, including college entrance exams, math competitions, lawyer qualification tests, etc. We choose all English multiplechoice questions (8 tasks, 2,546 samples) among AGIEval for our experiments. The data statistics are shown in Table 6.\nBIG-Bench Hard (BBH) (Suzgun et al., 2022) consists of a suite of challenging tasks from BIG-Bench (Srivastava et al., 2022), designed to assess the capabilities and limitations of large language models. These are the tasks on which prior language models underperform the average human rater. We choose all tasks that can be formatted into multiple-choice questions (23 tasks, 5,511 samples) among BBH for our experiments. The data statistics are shown in Table 7.\nSetting We evaluate reasoning capabilities under a zero-shot setting without any exemplars and without Chain-of-Thought (CoT). For both AGIEval and BBH, we use the prompt format and parsing following (Zhong et al., 2023;Mukherjee et al., 2023). Given the free-form response from the generative models, only the first capital character in the response is considered to compare with the gold answer (exact match). The result we report is accuracy (%)." 
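As a rough illustration of the parsing rule just described (our reading of it; the authors' actual evaluation script may differ in details), the exact-match scoring can be written as:

```python
import re

def extract_choice(response: str):
    """Return the first capital letter (A-Z) appearing in a free-form response,
    or None if no capital letter is present."""
    match = re.search(r"[A-Z]", response)
    return match.group(0) if match else None

def zero_shot_accuracy(responses, gold_labels):
    """Exact-match accuracy (%) between parsed choices and gold answer labels."""
    correct = sum(extract_choice(r) == g for r, g in zip(responses, gold_labels))
    return 100.0 * correct / len(gold_labels)

# e.g. zero_shot_accuracy(["(C) 20%", "B is correct"], ["C", "B"]) -> 100.0
```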
}, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b36", "b35", "b44", "b10" ], "table_ref": [], "text": "We select five superior LLMs as baselines, including LLaMA (Touvron et al., 2023), Alpaca (Taori et al., 2023), WizardLM (Xu et al., 2023), Vicuna (Chiang et al., 2023), and ChatGPT (Ope-nAI, 2022). It is worth noting that Vicuna has consistently ranked as the top open-source language model on multiple leaderboards, such as Chatbot Arena3 . Therefore, we will conduct a comprehensive comparison with Vicuna. See detailed descriptions of these baselines in Appendix B." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b35" ], "table_ref": [], "text": "Training Details Our student model is initialized using the pre-trained LLaMA. The Train Pool and Cache Pool are initialized with the 52K automatically generated instructions from Alpaca (Taori et al., 2023). The total number of iterations is set to 3, with 6K newly generated instructions added at each iteration. This results in a total of 70K data that our model is trained on in order to make a fair comparison with current SOTA baselines, including WizardLM and Vicuna. The training hyperparameters are listed in Appendix C.\nInference Details To draw inferences from Lion and ChatGPT, we calibrated the temperature to 0.7 and set the maximum generation length at 1024. All other parameters adhere to their default settings. For LLaMA, Alpaca, WizardLM, and Vicuna, we configured their inference parameters in line with the specifications given in their respective original papers. When engaging with the gpt-3.5-turbo API for various roles, we employ an array of hyperparameters, the specifics of which can be located in Appendix C." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "Results for Open-ended Generation", "publication_ref": [ "b10", "b44" ], "table_ref": [ "tab_1" ], "text": "Table 1 shows the performance comparison of various models against ChatGPT as the reference model, where GPT-4 is used as a referee/rater. Our Lion-7B and Lion-13B remarkably outperform their counterparts under two evaluation settings. Noticeably, Lion-13B shows an 8-point improvement over Vicuna-13B on aggregate, achieving 98.38% capabilities of ChatGPT.\nTo comprehensively compare with other baseline models on the capability to generate high-quality responses on various types of instruction, the relative response quality (Setting2) among different task categories is depicted in Figure 4. Our model impressively and slightly surpasses ChatGPT in the generic, knowledge, common-sense, and counterfactual task categories. Furthermore, for the two difficulty task categories described in the previous study (Chiang et al., 2023;Xu et al., 2023), our model significantly outperforms other baseline models with at least 32.32% relative score in the math task category while exceeding most of the baseline in the coding generation task category." }, { "figure_ref": [], "heading": "Results for Reasoning", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "AGIEval Results Table 2 presents the standard zero-shot performance comparison between Lion and baseline models on the AGIEval benchmark for multiple-choice English questions. Lion demonstrates significantly stronger performance compared to Vicuna, surpassing it in most task cate-gories and achieving an average relative improvement of over 16%. 
However, Lion-13B still significantly lags behind ChatGPT, only retaining 72.5% of its reasoning capability." }, { "figure_ref": [], "heading": "BIG-Bench Hard Results", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Table 3 displays the zero-shot performance comparison between Lion and baseline models on BIG-Bench Hard with standard zero-shot prompting. Similar to AGIEval, Vicuna exhibits poor performance on sophisticated reasoning tasks within this benchmark, while Lion substantially surpasses Vicuna by around 50% on average. Particularly, Lion demonstrates significant performance enhancements of over 100% on tasks involving data understanding, semantic understanding (Disambiguation QA and Snarks), logical and geometric reasoning (Logical Deduction and Geometric Shapes), and position reasoning (Tracking Shuffled Objects). Despite achieving an average ability of nearly 74% compared to Chat-GPT on BBH, Lion-13B surpasses ChatGPT in several tasks, including Movie Recommendation, Snarks (identifying sarcastic sentences from two nearly-identical ones), and Tracking Shuffled Objects. This demonstrates the effectiveness of our method." }, { "figure_ref": [], "heading": "Analyses", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_5", "tab_6" ], "text": "The threshold τ for distinguishing between hard and easy instructions We systematically explored τ ranging from 0.0 to 2.0 and documented its influence on average performance across three datasets. Table 4 reveals an optimal range of τ between 1.0 and 1.5 for all datasets. Notably, elevating τ from 0.0 to 1.0 consistently enhances performance across all datasets, indicating effective differentiation between hard and easy instructions. However, a continuous increase from 1.0 to 2.0 gradually degrades performance due to decreased diversity in hard instructions. The ablation results demonstrate that our method is not quite sensitive to a large value of τ .\nThe ratio r of generated hard and easy instructions We change the ratio of generated hard instructions to generated easy instructions from 1:0 (all hard) to 0:1 (all easy) and investigate its impact on average performance across three datasets. It can be seen from Table 5 that higher ratios of hard to easy instructions generally lead to improved performance, with a balanced ratio of 1:1 yielding the highest average scores." }, { "figure_ref": [ "fig_4", "fig_2" ], "heading": "The Learning Dynamics of Lion", "publication_ref": [], "table_ref": [], "text": "In Figure 5, we delve into the learning dynamics of Lion by visualizing its performance on AGIEval and BBH throughout the training iterations. The results clearly demonstrate that our adversarial knowledge distillation framework consistently enhances the performance of the student model as the iterations progress. Notably, the most significant improvement in capability occurs in the first iteration, suggesting the usefulness of the identification of challenging example patterns (refer Figure 3b)." }, { "figure_ref": [], "heading": "Case Studies", "publication_ref": [], "table_ref": [], "text": "To clearly compare the generated response quality between our model and other baselines, we provide nine case studies sampled from Vicuna-instruction, AGIEval, and BBH in Appendix E. 
the other hand, offered a more detailed and engaging response that explored different possibilities such as the development of biophysics or discovering new principles that could be applied to both fields. Lion's response also considered the potential implications of Newton's work on motion, force, gravity, and thermodynamics in biology, providing a more comprehensive answer." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents an innovative adversarial knowledge distillation framework for distilling a proprietary LLM into a compact, open-source student model. While previous methodologies have concentrated on unidirectional knowledge transfer, our approach seeks to integrate \"feedback\" into the learning process. Leveraging the versatile role adaptability of LLMs, we prompt the proprietary model to identify \"hard\" instructions and generate new \"hard\" instructions for the student model, creating a three-stage adversarial loop of imitation, discrimination, and generation. This approach al-lows us to refine the student model's performance iteratively, efficiently bootstrapping its proficiency.\nWe aspire that our model, named Lion, may serve as a baseline to reflect the performance of ChatGPT, especially the open-source instruction-following language model baseline for our community." }, { "figure_ref": [], "heading": "Limitations and Discussions", "publication_ref": [ "b13", "b44", "b35", "b44", "b39" ], "table_ref": [], "text": "The Model Capability We have identified that Lion is subject to certain constraints: 1) A recent study (Gudibande et al., 2023) asserts that \"model imitation is a false promise\" since imitation models are adept at mimicking ChatGPT's style but fall short in improving LMs across more challenging tasks. While Lion still lags behind its teacher model ChatGPT in handling intricate reasoning tasks (as shown in our experiments), it demonstrates promising improvements compared to previous imitation models. Therefore, our adversarial knowledge distillation framework may provide a more effective way for knowledge transfer. 2) Since our training data doesn't encompass dialogues, Lion struggles to manage multi-turn conversations. 3) Due to computational resource constraints, Lion's maximum sequence length is limited to 1024. Consequently, it faces challenges when dealing with long documents. Despite these limitations, we envision Lion serving as an accessible springboard for future research endeavors aimed at addressing these limitations.\nThe Training Process To train a single student model, we request the gpt-3.5-turbo API around 450k times, a number that is roughly 70% of the WizardLM's usage of 624k (Xu et al., 2023).\nNonetheless, this utilization incurs a considerable expense, nearing $900. In contrast to methods like Alpaca (Taori et al., 2023) and WizardLM (Xu et al., 2023), which only fine-tune the student model once, our adversarial knowledge distillation method employs iterative parametric updates to the student model. While this iterative approach inevitably leads to slower iteration speed, it offers additional benefits. Finally, different from traditional adversarial knowledge distillation where the weights of the generator are iteratively updated, we use a black-box and parameter-frozen LLM (Chat-GPT in our paper) to serve the role. 
Therefore, the quality of the LLM is quite essential in the generation of new instructions.\nThe Evaluation Metrics Though automated evaluations leveraging GPT-4 have showcased promising prospects in appraising chatbot performance, the technique is yet to reach a level of maturity and accuracy, especially considering the propensity of large language models to generate non-existent or \"hallucinated\" information. Evaluating the efficacy of LLM across various tasks presents a considerable challenge since different tasks require quite different expertise (Wang et al., 2022). Therefore, the creation of a comprehensive, standardized evaluation system for chatbots is a prevailing research challenge that demands additional exploration and study." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b31", "b19", "b35", "b27" ], "table_ref": [], "text": "Inherited Biases It is important to consider that the behavior of our distilled student models may exhibit potential toxicity, biases, or privacy issues (Ray, 2023;Li et al., 2023) inherited from the larger teacher LLM. We anticipate that the advancements made in reducing anti-social behaviors in LLMs can also be utilized to enhance student language models.\nLicense and Legality Based on Stanford Alpaca's guidelines (Taori et al., 2023), we have determined that the weights of Lion will be exclusively licensed for research purposes in the future. Utilizing Lion's weights alongside LLaMA's original weights must adhere to Meta's LLaMA License Agreement. Users are responsible for acquiring and utilizing LLaMA in accordance with the license agreement.\nSafety Unlike ChatGPT (OpenAI, 2022), Lion does not rely on human feedback to mitigate unde-sired behaviors. Instead, Lion learns to avoid such behaviors by imitating ChatGPT. However, it is important to acknowledge the potential risks associated with using Lion for malicious purposes, especially upon releasing its weights in the future. For future work, we aim to incorporate the technique of Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) to enhance access control. Additionally, Meta has implemented an access application process that can help regulate the distribution of LLaMA models and minimize the potential risks associated with their usage, providing an alternative option. " }, { "figure_ref": [], "heading": "A Data Statistics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B Baselines", "publication_ref": [ "b36" ], "table_ref": [], "text": "• LLaMA (Touvron et al., 2023) is a collection of foundation language models ranging from 7B to 65B parameters. It is trained on trillions of tokens from publicly available datasets and is demonstrated to outperform larger-size LLMs such as GPT-3 (175B) across a multitude of benchmarks. We use the official code from LLaMA4 . " }, { "figure_ref": [], "heading": "D Prompt Templates for Our Adversarial Distillation Framework", "publication_ref": [ "b16" ], "table_ref": [ "tab_1", "tab_1", "tab_3", "tab_4" ], "text": "Fine-tuning an LLM (i.e. ChatGPT) is costly and intricate, human-tailored prompt templates are utilized to solve various tasks (Wei et al., 2022d;Chan et al., 2023b,c;Jiang et al., 2022;Chan and Chan, 2023). The prompt template of the Teacher for generating responses is shown in Table 10. The prompt template of the Referee for comparing the quality of two responses generated by two AI assistants is shown in Table 11. 
The prompt templates of the Generator for generating new hard instructions and new easy instructions are shown in Table 12 and Table 13, respectively." }, { "figure_ref": [], "heading": "E Case Studies", "publication_ref": [], "table_ref": [ "tab_5", "tab_1", "tab_1", "tab_3", "tab_4" ], "text": "Here we show 3 cases in Table 14, 15, and 16 to clearly compare the open-ended generation performance among various models, including our Lion-13B, LLaMA-13B, Alpaca-13B, Vicuna-13B, and ChatGPT.\nBesides, we show 6 cases in Table 16, 17, 18, 19, 20, and 21 to clearly compare the reasoning capability among various models, including our Lion-13B, Vicuna-13B, and ChatGPT. We utilize ✓ and ✗ to denote whether the response is correct or incorrect, respectively. system content You are a helpful assistant that generates a response to a given task instruction. Table 10: Prompt template of gpt-3.5-turbo for generating responses. Note that the original instruction in Alpaca is composed of an instruction prompt and an instance input. For example, the instruction prompt is \"write an abstract about the following method\", and the instance input is \"knowledge distillation\". For better adaptation to real-world scenarios, we concatenate the instruction prompt and the instance input into one instruction using a line break.\nsystem content You are a helpful and precise assistant for checking the quality of the answer. [System] We would like to request your feedback on the performance of two AI assistants in response to the user instruction and input displayed above.\nPlease rate the helpfulness, relevance, accuracy, and level of detail of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment. Then, output two lines indicating the scores for Assistant 1 and 2, respectively.\nOutput with the following format: Evaluation evidence: <your evaluation explanation here> Score of the Assistant 1: <score> Score of the Assistant 2: <score> Table 11: Prompt template of gpt-3.5-turbo for comparing the quality of two responses generated by two AI assistants.\nsystem content You are a helpful assistant. user content I want you to act as an Instruction Creator. Your goal is to draw inspiration from the #Given Instruction# to create a brand new instruction. This new instruction should belong to the same domain and the same task type as the #Given Instruction#. The LENGTH and difficulty level of the #Created Instruction# should be similar to that of the #Given Instruction#. The #Created Instruction# must be reasonable and must be understood and responded to by humans. '#Given Instruction#', '#Created Instruction#', 'given instruction' and 'created instruction' are not allowed to appear in #Created Instruction#.\n#Given Instruction#: {instruction} #Created Instruction#:\nTable 12: Prompt template of gpt-3.5-turbo for generating new hard instructions. system content You are a helpful assistant. user content I want you to act as an Instruction Creator. Your goal is to draw inspiration from the #Given Instruction# to create a brand new instruction. This new instruction should belong to the same domain as the #Given Instruction# but be even more rare. 
The LENGTH and difficulty level of the #Created Instruction# should be similar to that of the #Given Instruction#. The #Created Instruction# must be reasonable and must be understood and responded to by humans. '#Given Instruction#', '#Created Instruction#', 'given instruction' and 'created instruction' are not allowed to appear in #Created Instruction#.\n#Given Instruction#: {instruction} #Created Instruction#:\nTable 13: Prompt template of gpt-3.5-turbo for generating new easy instructions." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "W. Wang was also affiliated with Guangzhou Municipal Key Laboratory of Materials Informatics, The Hong Kong University of Science and Technology (Guangzhou), China. He was supported by HKUST(GZ) Grant G0101000028, GZU-HKUST Joint Research Collaboration Grant GZU22EG04, CCF-HuaweiDBC202302, and Guangzhou Municipal Science and Technology Project (No. 2023A03J0003)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Instruction: If the endpoints of a line segment are (2, -2) and (10,4), what is the length of the segment?" }, { "figure_ref": [], "heading": "Lion", "publication_ref": [], "table_ref": [], "text": "To find the length of the line segment with endpoints (2, -2) and (10, 4), we can use the distance formula. The distance formula is derived from the Pythagorean theorem and is given as:\nwhere (x1, y1) and (x2, y2) are the coordinates of the endpoints of the line segment.\nSubstituting the given values, we get:\nSimplifying this expression, we get:\nTherefore, the length of the line segment with endpoints (2, -2) and (10, 4) is 10 units. ✓" }, { "figure_ref": [], "heading": "LLaMA", "publication_ref": [], "table_ref": [], "text": "The length of the line segment is 10 units. ✓" }, { "figure_ref": [], "heading": "Alpaca", "publication_ref": [], "table_ref": [], "text": "The length of the line segment is 8 units. ✗" }, { "figure_ref": [], "heading": "Vicuna", "publication_ref": [], "table_ref": [], "text": "The length of the line segment is the distance between the two points. To find the length, you can use the Pythagorean theorem, which states that the sum of the squares of the distances from a point to the two endpoints is equal to the square of the distance between the two endpoints.\nIn this case, the distance from the point (2, -2) to the point (10, 4) is given by the formula:\n((x2 -x1) 2 + (y2 -y1) 2 ) = ((10 -2) 2 + (4 -(-2)) 2 ) = (18 2 + 4 2 ) = (324 + 16) = (340) = 17.973 So the length of the line segment is 17.973. ✗" }, { "figure_ref": [], "heading": "ChatGPT", "publication_ref": [], "table_ref": [], "text": "To find the length of a line segment with two endpoints (x1, y1) and (x2, y2), we use the distance formula:\nPlugging in the coordinates of the given endpoints, we get:\nTherefore, the length of the line segment is 10 units. ✓ Instruction: What if Isaac Newton had focused on biology instead of physics?" }, { "figure_ref": [], "heading": "Lion", "publication_ref": [], "table_ref": [], "text": "Isaac Newton is renowned for his contributions to physics, particularly his laws of motion and universal gravitation. However, what if Newton had instead chosen to focus on biology? Newton's work in physics was heavily influenced by his fascination with the natural world and his desire to understand how it worked. Similarly, had he chosen to study biology, he may have made significant contributions to the field. 
Newton's mathematical abilities and his interest in the natural world would have made him a formidable biologist.\nOne area where Newton may have made a significant impact in biology is in the field of taxonomy. Newton was a keen observer of nature and had a particular interest in plants. He spent much of his time studying and cataloguing the various species of plants around him. He even developed a system for classifying plants based on their physical characteristics.\nIn addition to his work in taxonomy, Newton may have also made important contributions to the field of physiology. He was interested in the workings of the human body and may have studied the mechanics of muscles and the circulatory system. Overall, while Isaac Newton is most well-known for his work in physics, his interests and abilities would have made him a formidable biologist as well. Who knows what discoveries he might have made had he chosen to focus on the natural world in a different way." }, { "figure_ref": [], "heading": "LLaMA", "publication_ref": [], "table_ref": [], "text": "I'm not sure. If he did so, he wouldn't have discovered the law of gravity." }, { "figure_ref": [], "heading": "Alpaca", "publication_ref": [], "table_ref": [], "text": "If Isaac Newton had focused on biology instead of physics, he might have made important contributions to the field of biology, such as developing a scientific theory of evolution or explaining the processes of cell growth and development. He might also have studied and discovered new principles of genetics and developed methods to study and treat diseases." }, { "figure_ref": [], "heading": "Vicuna", "publication_ref": [], "table_ref": [], "text": "Isaac Newton is widely regarded as one of the most influential scientists in history for his work in physics, particularly for his development of the laws of motion and universal gravitation. However, it is worth noting that Newton also made significant contributions to other fields, including mathematics, optics, and natural philosophy.\nIf Newton had decided to focus on biology instead of physics, it is possible that he would have made significant contributions to the field. However, it is difficult to say exactly what those contributions would have been without knowing more about his specific interests and areas of focus.\nNewton was a prolific scientist and wrote extensively on a wide range of topics, including natural philosophy, mathematics, and theology. He also conducted extensive research in optics, including the study of light and color, and made important contributions to the field of mathematics, including the development of calculus.\nIt is possible that if Newton had focused on biology, he would have made significant contributions to the understanding of living organisms and their behavior. However, it is also possible that his work in physics and mathematics would have had a greater impact on the world." }, { "figure_ref": [], "heading": "ChatGPT", "publication_ref": [], "table_ref": [], "text": "If Isaac Newton had focused on biology instead of physics, the scientific world may be vastly different today. His advancements in mathematics, such as calculus, would have certainly been useful in the field of biology. Newton's curiosity, analytical mind, and dedication to scientific inquiry would have been equally effective in biology as they were in physics. 
He may have made significant contributions to the field of genetics or evolutionary theory.\nOn the other hand, his contributions to physics were so groundbreaking that the subject may have been hindered without his contributions. It's difficult to say exactly what would have happened, but it's likely that the course of scientific history would have been different.\nTable 15: A counterfactual case from Vicuna-Instructions.\nInstruction: How can I develop my critical thinking skills?" }, { "figure_ref": [], "heading": "Lion", "publication_ref": [], "table_ref": [], "text": "Developing critical thinking skills is an important aspect of personal growth and professional development. Critical thinking enables us to analyze and evaluate information, ideas, and arguments objectively, make informed decisions, and solve complex problems. Here are some strategies that can help you develop your critical thinking skills:\n1. Read widely and deeply: Reading widely exposes you to different perspectives and ideas, while reading deeply helps you to understand and analyze complex concepts. Choose books that are challenging and outside your comfort zone, and take notes as you read.\n2. Ask questions: Asking questions is an essential part of critical thinking. Start by asking \"why\" and \"how\" questions to gain a deeper understanding of a topic. Then, ask \"what if\" questions to explore possible outcomes and consequences.\n3. Practice analysis and evaluation: Critical thinking involves analyzing and evaluating information and arguments. Start by breaking down complex ideas into smaller parts, and then evaluate each part based on evidence, reasoning, and credibility.\n4. Seek diverse perspectives: Surround yourself with people who have different backgrounds, experiences, and perspectives. Listen to their ideas and opinions, and challenge your own assumptions and biases.\n5. Practice decision-making: Critical thinking is essential in decision-making. Practice making decisions by weighing the pros and cons, considering potential outcomes, and seeking advice from others.\n6. Take courses and workshops: Enrolling in courses and workshops that focus on critical thinking, problem-solving, and decision-making can provide you with the tools and techniques to develop your critical thinking skills.\nBy implementing these strategies, you can develop your critical thinking skills and become better equipped to analyze, evaluate, and make informed decisions in your personal and professional life." }, { "figure_ref": [], "heading": "LLaMA", "publication_ref": [], "table_ref": [], "text": "Critical thinking is a vital skill to have, but it can be difficult to develop. Here are some ways you can start developing your critical thinking skills:\n1. Analyze and interpret: When you're presented with information, try to analyze and interpret it on your own terms. When you start to analyze and interpret information on your own, it will become easier to do so over time.\n2. Assess your information: When you receive new information, try to assess whether it's valid or reliable. This is important for evaluating and forming your own opinions.\n3. Use your intuition: Use your intuition to complement your critical thinking skills. When you pay attention to your gut feelings, they'll usually tell you when you need to double-check your information. 4. Learn from your mistakes: When you make a mistake, try not to beat yourself up about it. Instead, learn from your mistakes and use them to your advantage."
}, { "figure_ref": [], "heading": "Alpaca", "publication_ref": [], "table_ref": [], "text": "Developing critical thinking skills involves actively questioning assumptions, weighing evidence, and thinking analytically. It also involves being open-minded and considering various perspectives, as well as thinking logically and systematically. Practicing problem-solving and creative thinking, working through scenarios and dilemmas, and engaging in thoughtful conversations are all ways to improve critical thinking. Additionally, reading, reflecting, and writing can help increase your capacity for critical thinking." }, { "figure_ref": [], "heading": "Vicuna", "publication_ref": [], "table_ref": [], "text": "Developing your critical thinking skills can help you to better analyze information, make more informed decisions, and solve complex problems. Here are some tips for improving your critical thinking skills: 1. Practice active listening: Pay attention to what others are saying and try to understand their perspective. 2. Ask questions: Be curious and ask questions to clarify information and better understand a situation.\n3. Analyze information: Take time to carefully analyze information and consider different viewpoints. 4. Seek out diverse viewpoints: Expose yourself to a variety of viewpoints and ideas to challenge your own assumptions and biases. 5. Practice logic and reasoning: Use logic and reasoning to evaluate arguments and make decisions. 6. Learn from your mistakes: Be open to learning from your mistakes and using them as opportunities for growth. 7. Read and watch a variety of sources: Read and watch a variety of sources to expand your knowledge and expose yourself to different perspectives. 8. Engage in debates and discussions: Engage in debates and discussions with others to practice critical thinking and develop your skills. 9. Take breaks and reflect: Take breaks and reflect on your thinking process to improve your critical thinking skills. 10. Practice mindfulness: Mindfulness can help you to be more aware of your thoughts and biases, which can help you to think more critically." }, { "figure_ref": [], "heading": "ChatGPT", "publication_ref": [], "table_ref": [], "text": "Sure, here are a few practical tips to develop critical thinking skills: 1. Ask questions: Don't simply accept information at face value. Ask questions that allow you to evaluate the reason, perspective and logic behind the information.\n2. Analyze: Take the time to analyze and evaluate the information that you receive. This will help you to identify assumptions, biases, strengths and weaknesses of an argument.\n3. Consider different perspectives: When analyzing information, try to consider different perspectives to gain a wider understanding of any topic.\n4. Research and read extensively: Read a wide range of sources to get a better understanding of different viewpoints and opinions.\n5. Practice logic and reasoning: Practice thinking through problems logically by breaking them down into smaller parts.\n6. Establish a point of view: Consider developing an opinion or thesis statement, and then collect evidence to support that position.\n7. Reflect on your own thinking: Consider how you arrived at a conclusion, and whether you have any biases or assumptions that may have influenced your thinking.\nWith time, and practicing these skills consistently, you can become a better critical thinker. 
" }, { "figure_ref": [], "heading": "Lion", "publication_ref": [], "table_ref": [], "text": "To calculate the reduction percent, we need to divide the difference between the original number and the reduced number by the original number and then multiply by 100.\nHere, the original number is 120 and the reduced number is 96. So, the difference between the two numbers is 120 -96 = 24.\nTo calculate the percentage reduction, we divide the difference by the original number and then multiply by 100:\n(24 / 120) x 100 = 20% Therefore, the correct answer is option (C) 20%.\nOption (A) 30%, option (B) 40%, option (D) 10%, and option (E) 5% are all incorrect. ✓ Vicuna A. 30% ✗" }, { "figure_ref": [], "heading": "ChatGPT", "publication_ref": [], "table_ref": [], "text": "To calculate the reduction percentage, we need to find the difference between the original value and the reduced value, and then divide it by the original value.\nThe difference between 120 and 96 is 120 -96 = 24. Now, divide the difference by the original value and multiply by 100 to get the percentage reduction:\n(24 / 120) * 100 = 20%\nTherefore, the correct answer is (C) 20%. ✓\nTable 17: An AQuA-RAT case from AGIEval.\nInstruction: Q: not ( True ) and ( True ) is? Answer Choices: (A) True (B) False A: Among A through B, the answer is" }, { "figure_ref": [], "heading": "Lion", "publication_ref": [], "table_ref": [], "text": "The expression \"not (True) and (True)\" can be simplified as \"(not True) and (True)\" or \"False and True\".\nThe \"and\" operator has a higher precedence than the \"not\" operator, so the first expression is evaluated as \"False and True\", which is false.\nTherefore, the correct answer is option (B) False. ✓\nThis statement is false.\nIf the given statement is \"not (True)\" and \"True\", it would be equivalent to \"not (True) and True\", which would be true.\n(This is because negation of \"True\" is \"False\", and \"True\" and \"False\" evaluate to \"True\".)\nTherefore, the correct answer is (A) True. ✗ ChatGPT (A) True. ✗ Instruction: Q: If you follow these instructions, do you return to the starting point? Take 8 steps. Turn around. Take 8 steps. Answer Choices: (A) Yes (B) No A: Among A through B, the answer is" }, { "figure_ref": [], "heading": "Lion", "publication_ref": [], "table_ref": [], "text": "According to the instructions given, if you follow them, you will return to the starting point. Therefore, the correct answer is Option A, \"Yes\". By taking 8 steps and turning around, you will end up back where you started. It is important to follow instructions carefully to achieve the desired outcome. In this case, following these instructions will ensure that you return to the starting point. ✓ Instruction: Q: Which statement is sarcastic? Answer Choices: (A) Hey just be happy then you won't be depressed anymore (B) Hey just be happy that you won't be depressed anymore A: Among A through B, the answer is" }, { "figure_ref": [], "heading": "Lion", "publication_ref": [], "table_ref": [], "text": "The correct answer is (B). The statement \"Hey just be happy that you won't be depressed anymore\" is sarcastic because it implies that being happy is an easy and straightforward solution to depression, which is a complex mental health issue that requires professional help and support. The use of the word \"just\" also adds a dismissive tone to the statement, suggesting that the speaker doesn't take depression seriously. 
It's important to avoid using sarcasm when talking about mental health issues, as it can be hurtful and invalidating to those who are struggling. Instead, it's important to offer compassion and support and to encourage individuals to seek professional help if needed. ✗ Vicuna (B) Hey just be happy that you won't be depressed anymore ✗ ChatGPT The sarcastic statement is (A) \"Hey just be happy then you won't be depressed anymore.\" ✓ Instruction: Q: In the following sentences, explain the antecedent of the pronoun (which thing the pronoun refers to), or state that it is ambiguous. Sentence: The scientist collaborated with the artist, and he shared a story. Answer Choices: (A) The scientist shared a story (B) The artist shared a story (C) Ambiguous A: Among A through C, the answer is" }, { "figure_ref": [], "heading": "Lion", "publication_ref": [], "table_ref": [], "text": "In the given sentence, the antecedent of the pronoun \"he\" is ambiguous. \"He\" could refer to either the scientist or the artist, as both of them shared a story. Therefore, option (C) is the correct answer. ✓ Vicuna (C) Ambiguous ### Explanation:\nThe pronoun \"he\" is ambiguous, as it could refer to either the scientist or the artist. Without more context, it is impossible to determine which person is being referred to. Therefore, the answer is \"C -Ambiguous.\" ✓ ChatGPT (C) Ambiguous.\nIn the given sentence, the antecedent of the pronoun \"he\" is unclear. It could refer to either the scientist or the artist because there is no clear indication of which person shared the story. Therefore, the sentence is ambiguous. ✓ " } ]
The practice of transferring knowledge from a sophisticated, proprietary large language model (LLM) to a compact, open-source LLM has garnered considerable attention. Previous works have focused on unidirectional knowledge distillation, aligning the responses of the student model with those of the teacher model on a set of instructions. Nevertheless, they overlooked the possibility of incorporating any "feedback"-identifying challenging instructions where the student model's performance falls short-to boost the student model's proficiency iteratively. To this end, we propose a novel adversarial distillation framework for more efficient knowledge transfer. Leveraging the versatile role adaptability of LLMs, we prompt the teacher model to identify "hard" instructions and generate new "hard" instructions for the student model, creating a three-stage adversarial loop of imitation, discrimination, and generation. By applying this adversarial framework, we successfully transfer knowledge from ChatGPT to a student model (named Lion), using a mere 70k training samples. Our results show that Lion-13B not only achieves open-ended generation capability comparable to ChatGPT but also surpasses conventional state-of-the-art (SOTA) instruction-tuned models like Vicuna-13B by 55.4% on challenging zero-shot reasoning benchmarks such as BIG-Bench Hard (BBH) and by 16.7% on AGIEval.
Lion: Adversarial Distillation of Proprietary Large Language Models
[ { "figure_caption": "Figure 2 :2Figure 2: The overview of our adversarial distillation framework, where we craft a compact Student LLM S based on a superior proprietary LLM that serves three roles: the Teacher T , the Referee R, and the Generator G. From left to right, there are three stages in an iteration: 1) Imitation; 2) Discrimination; 3) Generation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3bprovides a clear and intuitive demonstration of which kinds of instructions are discriminated as hard in the first iteration. Compared with the instructions in the Cache Pool (Figure3a), the dis-(a) Instructions of the Cache Pool in the first iteration. (b) Identified hard instructions in the first iteration. (c) Generated hard instructions in the first iteration.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The top 20 most common root verbs (inner circle) and their top 4 direct noun objects (outer circle) in the instructions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Relative response quality against ChatGPT on diverse task categories of Vicuna-Instructions.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance of Lion-7B and Lion-13B on AGIEval and BBH through the training iterations.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "of Assistant 1's Answer] {answer_1} [The End of Assistant 1's Answer] [The Start of Assistant 2's Answer] {answer_2} [The End of Assistant 2's Answer]", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Relative response quality (%) against ChatGPT (assessed by GPT-4) on Vicuna-Instructions.", "figure_data": "ModelSetting1 Setting2Avg.LLaMA-7B58.4659.1258.79Alpaca-7B69.2967.2068.25WizardLM-7B89.2986.6787.98Vicuna-7B87.7989.9688.88Lion-7B94.7492.8893.81LLaMA-13B69.2368.2168.72Alpaca-13B76.8774.6975.78Vicuna-13B92.2592.9792.61Lion-13B96.57100.1898.38", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Zero-shot performance comparison of ChatGPT, Vicuna, and Lion on AGIEval (multiple-choice English questions). We report the performance of Human, ChatGPT, and Vicuna from(Mukherjee et al., 2023). 
Performance improvements obtained by Lion over Vicuna are shown in parenthesis.", "figure_data": "TaskChatGPT Vicuna-7B Lion-7BVicuna-13B Lion-13BBoolean Expressions82.839.255.2 (40.8%)40.865.6 (60.8%)Causal Judgement57.239.750.3 (26.7%)42.243.9 (4.0%)Date Understanding42.88.634.0 (295.3%)10.040.4 (304.0%)Disambiguation QA57.215.235.6 (134.2%)18.444.8 (143.5%)Formal Fallacies53.640.046.0 (15.0%)47.252.4 (11.0%)Geometric Shapes25.63.68.8 (144.4%)3.68.8 (144.4%)Hyperbaton69.242.851.6 (20.6%)44.056.8 (29.1%)Logical Deduction (5 objects)38.84.819.6 (308.3%)4.820.8 (333.3%)Logical Deduction (7 objects)39.61.214.4 (1100.0%) 1.221.2 (1666.7%)Logical Deduction (3 objects)60.419.640.4 (106.1%)16.838.0 (126.2%)Movie Recommendation55.424.426.8 (9.8%)43.457.6 (32.7%)Navigate55.643.649.2 (12.8%)46.445.2 (-2.6%)Penguins in a Table45.917.524.7 (41.1%)15.126.7 (76.8%)Reasoning about Colored Objects47.614.015.2 (8.6%)12.017.6 (46.7%)Ruin Names56.012.214.4 (18.0%)15.729.2 (86.0%)Salient Translation Error Detection40.82.012.0 (500.0%)2.012.4 (520.0%)Snarks59.028.056.2 (100.7%)28.161.2 (117.8%)Sports Understanding79.640.448.4 (19.8%)48.451.6 (6.6%)Temporal Sequences35.621.224.4 (15.1%)16.010.4 (-35.0%)Tracking Shuffled Objects (5 objects) 18.46.414.4 (125.0%)9.224.8 (169.6%)Tracking Shuffled Objects (7 objects) 15.24.013.6 (240.0%)5.613.2 (135.7%)Tracking Shuffled Objects (3 objects) 31.626.834.0 (26.9%)23.234.4 (48.3%)Web of Lies56.049.447.2 (-4.5%)41.254.8 (33.0%)Average48.921.932.0 (45.9%)23.336.2 (55.4%)", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Zero-shot performance comparison of ChatGPT, Vicuna, and Lion on BIGBench Hard (multiple-choice questions) without CoT. We report the performance of ChatGPT and Vicuna from(Mukherjee et al., 2023). Performance improvements obtained by Lion over Vicuna are shown in parenthesis.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table14showcases the responses of various models to a math instruction. It can be seen that only Lion and Chat-GPT provide the correct answer and follow the correct problem-solving steps. A counterfactual case is shown in Table15, where ChatGPT provides a relevant answer that considers the potential impacts of Newton focusing on biology instead of physics, but it lacked details and depth. Lion, on Ablation study of the threshold τ for Lion-7B.", "figure_data": "Threshold τ Vicuna-Instructions (Avg.) AGIEval (Avg.) BBH (Avg.)0.089.5822.426.50.592.1623.529.81.093.8126.332.01.594.0925.731.62.092.2324.631.3Ratio r Vicuna-Instructions (Avg.) AGIEval (Avg.) 
BBH (Avg.)1:089.6024.330.82:192.9525.733.11:193.8126.332.01:291.7723.929.60:190.0222.124.3", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study of the ratio r for Lion-7B.", "figure_data": "383634Accuracy (%)26 28 30 3220 22 2452k58k Training Data Size 62k70k AGIEval (Lion-7B) AGIEval (Lion-13B) BBH (Lion-7B) BBH (Lion-13B)", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Table 7 show the data statistics of AGIEval and BIG-Bench Hard, respectively.", "figure_data": "Task# Examples # ChoicesAQuA-RAT2545LogiQA6514LSAT-AR2305LSAT-LR5105LSAT-RC2695SAT-Math2204SAT-English2064SAT-English (w/o Psg.)2064", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Statistics of AGIEval dataset.", "figure_data": "Task# Examples# ChoicesBoolean Expressions2502Causal Judgement1872Date Understanding2506Disambiguation QA2504Formal Fallacies2502Geometric Shapes25011Hyperbaton2502Logical Deduction (5 objects)2505Logical Deduction (7 objects)2507Logical Deduction (3 objects)2503Movie Recommendation2505Navigate2502Penguins in a Table1465Reasoning about Colored Objects25018Ruin Names25011Salient Translation Error Detection2506Snarks1782Sports Understanding2502Temporal Sequences2504Tracking Shuffled Objects (5 objects)2505Tracking Shuffled Objects (7 objects)2507Tracking Shuffled Objects (3 objects)2503Web of Lies2502", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Statistics of BIG-Bench Hard dataset.", "figure_data": "", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Training hyperparameters.", "figure_data": "• Alpaca (Taori et al., 2023) is a project initi-ated by Stanford University with the objec-tive of developing and disseminating an open-source model that adeptly follows instructions.It is based on LLaMA and fine-tuned on 52Kinstruction-following examples generated by", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Hyperparameters for querying OpenAI gpt-3.5-turbo API under different roles.", "figure_data": "", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" } ]
Yuxin Jiang; Chunkit Chan; Mingyang Chen; Wei Wang
[ { "authors": "Sravanti Addepalli; Gaurav Kumar Nayak; Anirban Chakraborty; Venkatesh Babu Radhakrishnan", "journal": "", "ref_id": "b0", "title": "Degan: Data-enriching gan for retrieving representative samples from a trained classifier", "year": "2020" }, { "authors": "Vamsi Aribandi; Yi Tay; Tal Schuster; Jinfeng Rao; Huaixiu Steven Zheng; Sanket Vaibhav Mehta; Honglei Zhuang; Q Vinh; Dara Tran; Jianmo Bahri; Jai Ni; Kai Prakash Gupta; Sebastian Hui; Donald Ruder; Metzler", "journal": "", "ref_id": "b1", "title": "Ext5: Towards extreme multitask scaling for transfer learning", "year": "2022-04-25" }, { "authors": "Yejin Bang; Samuel Cahyawijaya; Nayeon Lee; Wenliang Dai; Dan Su; Bryan Wilie; Holy Lovenia; Ziwei Ji; Tiezheng Yu; Willy Chung; Quyet V Do; Yan Xu; Pascale Fung", "journal": "", "ref_id": "b2", "title": "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity", "year": "2023" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott M Lundberg; Harsha Nori; Hamid Palangi; Marco Túlio Ribeiro; Yi Zhang", "journal": "", "ref_id": "b3", "title": "Sparks of artificial general intelligence: Early experiments with GPT-4", "year": "2023" }, { "authors": "Chunkit Chan; Tsz Ho; Chan ", "journal": "ACM", "ref_id": "b4", "title": "Discourse-aware prompt for argument impact classification", "year": "2023-02-17" }, { "authors": "Chunkit Chan; Jiayang Cheng; Weiqi Wang; Yuxin Jiang; Tianqing Fang; Xin Liu; Yangqiu Song", "journal": "", "ref_id": "b5", "title": "a. Chatgpt evaluation on sentence level relations: A focus on temporal, causal, and discourse relations", "year": "2023" }, { "authors": "Chunkit Chan; Xin Liu; Tsz Ho Chan; Jiayang Cheng; Yangqiu Song; Ginny Y Wong; Simon See", "journal": "", "ref_id": "b6", "title": "Self-consistent narrative prompts on abductive natural language inference", "year": "2023" }, { "authors": "Chunkit Chan; Xin Liu; Jiayang Cheng; Zihan Li; Yangqiu Song; Ginny Y Wong; Simon See", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Discoprompt: Path prediction prompt tuning for implicit discourse relation recognition", "year": "2023-07-09" }, { "authors": "Akshay Chawla; Hongxu Yin; Pavlo Molchanov; Jose Alvarez", "journal": "", "ref_id": "b8", "title": "Data-free knowledge distillation for object detection", "year": "2021" }, { "authors": "Yen-Chun Chen; Zhe Gan; Yu Cheng; Jingzhou Liu; Jingjing Liu", "journal": "", "ref_id": "b9", "title": "Distilling knowledge learned in bert for text generation", "year": "2019" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b10", "title": "Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "Gongfan Fang; Kanya Mo; Xinchao Wang; Jie Song; Shitao Bei; Haofei Zhang; Mingli Song", "journal": "", "ref_id": "b11", "title": "Up to 100x faster data-free knowledge distillation", "year": "2022" }, { "authors": "Gongfan Fang; Jie Song; Chengchao Shen; Xinchao Wang; Da Chen; Mingli Song", "journal": "Bard", "ref_id": "b12", "title": "Data-free adversarial distillation", "year": "2019" }, { "authors": "Arnav Gudibande; Eric Wallace; Charlie Snell; Xinyang Geng; Hao Liu; Pieter Abbeel; Sergey Levine; Dawn Song", "journal": "", "ref_id": "b13", 
"title": "The false promise of imitating proprietary llms", "year": "2023" }, { "authors": "Byeongho Heo; Minsik Lee; Sangdoo Yun; Jin Young Choi", "journal": "AAAI Press", "ref_id": "b14", "title": "Knowledge distillation with adversarial samples supporting decision boundary", "year": "2019-01-27" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b15", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Yuxin Jiang; Linhan Zhang; Wei Wang", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Improved universal sentence embeddings with promptbased contrastive learning and energy-based learning", "year": "2022-12-07" }, { "authors": "Sanjay Kariyappa; Atul Prakash; Moinuddin K Qureshi", "journal": "", "ref_id": "b17", "title": "Maze: Data-free model stealing attack using zeroth-order gradient estimation", "year": "2021" }, { "authors": "Jan Kocon; Igor Cichecki; Oliwier Kaszyca; Mateusz Kochanek; Dominika Szydlo; Joanna Baran; Julita Bielaniewicz; Marcin Gruza; Arkadiusz Janz; Kamil Kanclerz; Anna Kocon; Bartlomiej Koptyra; Wiktoria Mieleszczenko-Kowszewicz; Piotr Milkowski; Marcin Oleksy; Maciej Piasecki; Lukasz Radlinski; Konrad Wojtasik; Stanislaw Wozniak; Przemyslaw Kazienko", "journal": "", "ref_id": "b18", "title": "Chatgpt: Jack of all trades, master of none", "year": "2023" }, { "authors": "Haoran Li; Dadi Guo; Wei Fan; Mingshi Xu; Yangqiu Song", "journal": "", "ref_id": "b19", "title": "Multi-step jailbreaking privacy attacks on chatgpt", "year": "2023" }, { "authors": "Paul Micaelli; Amos J Storkey", "journal": "", "ref_id": "b20", "title": "Zero-shot knowledge transfer via adversarial belief matching", "year": "2019-12-08" }, { "authors": "Paul Micaelli; Amos J Storkey", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Zero-shot knowledge transfer via adversarial belief matching", "year": "2019" }, { "authors": "Swaroop Mishra; Daniel Khashabi; Chitta Baral; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Cross-task generalization via natural language crowdsourcing instructions", "year": "2022-05-22" }, { "authors": "Subhabrata Mukherjee; Arindam Mitra; Ganesh Jawahar; Sahaj Agarwal; Hamid Palangi; Ahmed Hassan; Awadallah ", "journal": "", "ref_id": "b23", "title": "Orca: Progressive learning from complex explanation traces of GPT-4", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b24", "title": "GPT-4 technical report", "year": "2023" }, { "authors": " Tb Openai", "journal": "OpenAI", "ref_id": "b25", "title": "Chatgpt: Optimizing language models for dialogue", "year": "2022" }, { "authors": "Tribhuvanesh Orekondy; Bernt Schiele; Mario Fritz", "journal": "", "ref_id": "b26", "title": "Knockoff nets: Stealing functionality of blackbox models", "year": "2019" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b27", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Nicolas Papernot; Patrick Mcdaniel; Ian Goodfellow; Somesh Jha; Z Berkay Celik; Ananthram Swami", "journal": "", "ref_id": "b28", "title": "Practical black-box 
attacks against machine learning", "year": "2017" }, { "authors": "Ilija Radosavovic; Piotr Dollár; Ross Girshick; Georgia Gkioxari; Kaiming He", "journal": "", "ref_id": "b29", "title": "Data distillation: Towards omni-supervised learning", "year": "2018" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b30", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Partha Pratim; Ray ", "journal": "Internet of Things and Cyber-Physical Systems", "ref_id": "b31", "title": "Chatgpt: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope", "year": "2023" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b32", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022-04-25" }, { "authors": "Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; Abu Awal; Md Shoeb; Abubakar Abid; Adam Fisch; Adam Adam R Brown; Aditya Santoro; Adrià Gupta; Garriga-Alonso", "journal": "", "ref_id": "b33", "title": "Beyond the imitation game: Quantifying and extrapolating the capabilities of language models", "year": "2022" }, { "authors": "Mirac Suzgun; Nathan Scales; Nathanael Schärli; Sebastian Gehrmann; Yi Tay; Hyung Won Chung; Aakanksha Chowdhery; Quoc V Le; Ed H Chi; Denny Zhou; Jason Wei", "journal": "", "ref_id": "b34", "title": "Challenging big-bench tasks and whether chain-of-thought can solve them", "year": "2022" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b35", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b36", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Jean-Baptiste Truong; Pratyush Maini; Robert J Walls; Nicolas Papernot", "journal": "", "ref_id": "b37", "title": "Data-free model extraction", "year": "2021" }, { "authors": "Peiyi Wang; Lei Li; Liang Chen; Dawei Zhu; Binghuai Lin; Yunbo Cao; Qi Liu; Tianyu Liu; Zhifang Sui", "journal": "", "ref_id": "b38", "title": "Large language models are not fair evaluators", "year": "2023" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b39", "title": "Self-instruct: Aligning language model with self generated instructions", "year": "2022" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; 
Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b40", "title": "a. Finetuned language models are zero-shot learners", "year": "2022-04-25" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b41", "title": "Finetuned language models are zero-shot learners", "year": "2022-04-25" }, { "authors": "Jason Wei; Yi Tay; Rishi Bommasani; Colin Raffel; Barret Zoph; Sebastian Borgeaud; Dani Yogatama; Maarten Bosma; Denny Zhou; Donald Metzler; Ed H Chi; Tatsunori Hashimoto; Oriol Vinyals; Percy Liang; Jeff Dean; William Fedus", "journal": "", "ref_id": "b42", "title": "Emergent abilities of large language models", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b43", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Can Xu; Qingfeng Sun; Kai Zheng; Xiubo Geng; Pu Zhao; Jiazhan Feng; Chongyang Tao; Daxin Jiang", "journal": "", "ref_id": "b44", "title": "Wizardlm: Empowering large language models to follow complex instructions", "year": "2023" }, { "authors": "Pavlo Hongxu Yin; Jose M Molchanov; Zhizhong Alvarez; Arun Li; Derek Mallya; Hoiem; K Niraj; Jan Jha; Kautz", "journal": "", "ref_id": "b45", "title": "Dreaming to distill: Datafree knowledge transfer via deepinversion", "year": "2020" }, { "authors": "Wanjun Zhong; Ruixiang Cui; Yiduo Guo; Yaobo Liang; Shuai Lu; Yanlin Wang; Amin Saied; Weizhu Chen; Nan Duan", "journal": "", "ref_id": "b46", "title": "Agieval: A human-centric benchmark for evaluating foundation models", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 70.87, 366.02, 88.81, 14 ], "formula_id": "formula_0", "formula_text": "X A = {x A i } i∈[1,N A ]" }, { "formula_coordinates": [ 4, 70.87, 489.3, 92.52, 14 ], "formula_id": "formula_1", "formula_text": "{x A i , T (x A i )} i∈[1,N A ]" }, { "formula_coordinates": [ 4, 351.12, 495.53, 174.02, 14.19 ], "formula_id": "formula_2", "formula_text": "d i = R(T (x B i ), S(x B i ) | x B i )(1)" } ]
2023-10-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b37", "b56", "b65", "b81", "b4", "b34", "b49", "b41", "b52", "b74", "b12", "b17", "b75", "b67", "b32" ], "table_ref": [], "text": "Many problems in science and engineering require solving complex boundary value problems. Most of the time, we are interested in solving a partial differential equation (PDE) for multiple values of input parameters such as material properties, boundary conditions, initial conditions, or geometrical parameters. Traditional numerical methods such as the finite element method, finite volume method, and finite differences require fine discretization of time and space in order to be accurate. As a result, these methods are often computationally expensive, especially when the boundary value problem needs to be repeatedly solved for extensive exploration of the input parameters space. To overcome this issue, machine and deep learning have been leveraged for various tasks in computational physics, namely, solving and learning solutions to PDEs [37,56,65,81], accelerating linear solvers [5,34], reducedorder modeling [49], domain decomposition [41], closure modeling [52], and topology optimization [74], to name a few. As reported in the review papers [13,17,75], most of the recent advances have been relying on deep neural networks for their flexibility and expressiveness. In this work, we focus on learning simulations of physical phenomena, that are discretized on a non-parameterized unstructured mesh. In this situation, traditional machine learning approaches cannot easily be leveraged as the inputs of the problem are given by graphs with different numbers of nodes and edges. In contrast, deep learning models such as graph neural networks (GNNs) [67] can easily overcome this limitation thanks to their ability to operate on meshes with different resolutions and topologies. While GNNs show promising results and their flexibility is highly appealing, they still suffer from a few shortcomings that prevent their deployement in engineering fields where decisions involve high stakes. Training GNNs usually requires large datasets and computational resources, and predicting their uncertainties is still an open and challenging problem of its own [32].\nWe propose a novel methodology, called Mesh Morphing Gaussian Process (MMGP), that relies on standard and well-known morphing strategies, dimensionality reduction techniques and finite element interpolation for learning solutions to PDEs with non-parameterized geometric variations. In contrast to deep learning methods, such as GNNs, the model can easily and efficiently be trained on CPU hardware and predictive uncertainties are readily available. Our method shares some limitations with any machine learning regressor for PDE systems: (i) within the predictive uncertainties, our method produces predictions with an accuracy lower than the reference simulations, (ii) unlike many methods used in reference simulators, like the finite element method, our method provides no guaranteed error bounds and (iii) our method requires a well sampled training dataset, which has a certain computational cost, so that the workflow becomes profitable only for many-query contexts where the inference is called a large number of times. 
Regarding (i), rough estimates may be sufficient in pre-project phases, and accuracy can be recovered by using the prediction as an initialization in the reference simulator, or by allowing the designer to run the reference simulator on the identified configuration if the regressor is used in an optimization task.\nWe start by providing the background and assumptions of our method while mentioning some related works in Section 2. Then, the proposed methodology is detailed in Section 3. Three numerical experiments are presented in Section 4. Finally, a conclusion is given in Section 5." }, { "figure_ref": [], "heading": "Preliminaries and related works", "publication_ref": [ "b82", "b66", "b44", "b63", "b33", "b10", "b9", "b58", "b61", "b63", "b28", "b6", "b39", "b53", "b54", "b19", "b30", "b51", "b32" ], "table_ref": [], "text": "Notations. Vectors and matrices are denoted with bold symbols. The entry i of a vector v and the entry i, j of a matrix M are respectively denoted v_i and M_{i,j}. The i-th row of a matrix M is denoted by M_i.\nBackground. Let U_true : Ω → R^d be a solution to a boundary value problem, where Ω ⊂ R^{d_Ω} denotes the physical domain of the geometry under consideration, and d_Ω = 2 or 3. The domain Ω is discretized into a conformal mesh M as M = \cup_{e=1}^{N_e} \Omega_e. In traditional numerical approaches such as the finite element method [82], an approximation U of the solution U_true is sought in the finite-dimensional space spanned by a family of trial functions, \{\varphi_I(x)\}_{I=1}^{N}, supported on the mesh M:\nU_k(x) = \sum_{I=1}^{N} U_{k,I} \, \varphi_I(x), \quad k = 1, \dots, d, \qquad (1)\nwhere N is the total number of nodes in the mesh M, U ∈ R^{d×N} is the discretized solution (featuring d fields), and x ∈ R^{d_Ω} denotes the spatial coordinates. For simplicity of the presentation and without loss of generality, we consider the particular case of a Lagrange P1 finite element basis, so that the solution is uniquely determined by its values at the nodes of M. In this setting, the basis \{\varphi_I\}_{I=1}^{N} spans the space \{v \in C^0(\mathcal{M}) : v|_{\Omega_e} \in P_1, \ \forall \Omega_e \in \mathcal{M}\}, and the discretized solution U is determined by solving the discretized weak formulation of the underlying boundary value problem. This problem also depends on some parameters µ ∈ R^p, such as material properties and boundary conditions. It is assumed that there are scalar output quantities of interest w ∈ R^q that depend on the discretized solution U, and possibly on M and µ. We restrict ourselves to stationary, time-independent scalars and fields of interest, which still fall within the scope of many industrial problems of interest. The learning task that we consider herein consists in learning the mapping\n\mathcal{F} : (\mu, \mathcal{M}) \mapsto (U, w). \qquad (2)\nFor this purpose, it is assumed that we are given a training set of size n made of input pairs (µ^i, M^i) of parameters and meshes, and output pairs (U^i, w^i) of discretized fields and scalars. Each input mesh M^i has a number of nodes denoted by N^i, and corresponds to a finite element discretization of an input geometry Ω^i. The associated discretized solution U^i is a matrix of size (d × N^i). For any i = 1, . . . , n, the mesh M^i can be represented as an undirected graph G^i = (V^i, E^i), where V^i denotes the set of nodes and E^i is the set of edges.\nAssumptions and limitations. We assume that the observed input geometries, Ω^1, . . . , Ω^n, share a common topology.
The parameterization that generates the input geometries is unknown, and we are left with the associated finite element meshes M 1 , . . . , M n . Being the discretization of physical domains involved in a boundary value problem, the input meshes inherit important features such as boundary conditions applied to subsets of nodes and elements. In finite element methods, error estimates strongly depend on the quality of the mesh [66]. Hence, in our context, it is assumed that the input meshes exhibit good quality in terms of elements aspect ratios and node densities, adapted to the regularity of the fields of interest. Our focus centers on the design optimization of industrial components with respect to specific physical phenomena. Consequently, we assume precise control over the geometry, free from any noise. Additionally, the employed geometrical transformations are constrained to avoid extreme distortions, as they are selected from sets of admissible designs that adhere to limitations on mass, volume, and mechanical resistance.\nRelated works. In recent years, there has been a substantial focus on advancing neural networks to emulate solutions to physical systems, either through the integration of domain-specific knowledge [44] or by devising efficient architectures for GNNs [63]. GNNs learn the mapping F by relying on the message passing framework introduced by Gilmer et al. [33] and extended by Battaglia et al. [11].\nIn the context of physical systems, only a few contributions address non-parameterized geometric variabilities. The early work of Baque et al. [10] explores the use of GNNs to emulate physics-based simulations in the presence of geometric variabilities by relying on geodesic convolutions [58,61].\nMore recently, Pfaff et al. [63] develop the MeshGraphNets (MGNs) model, a GNN that updates nodes and edges features in order to learn time-dependent simulations. Most notably, the model can handle various physics, three-dimensional problems, and non-parameterized geometric variabilities. Fortunato et al. [28] introduce MultiScale MGNs that relies on two different mesh resolutions in order to overcome the computational cost of the message passing algorithm on dense meshes, and to increase the accuracy of MGNs. The efficiency of MGNs has been illustrated by Allen et al. [7] for inverse problems, and by Harsch et al. [39] for time-independent systems. There exist several variants of such GNNs for learning mesh-based solutions. A multi-scale GNN that learns from multiple mesh resolutions has been proposed in Lino et al. [53] and is illustrated on two dimensional PDEs with geometric variabilities. Lino et al. [54] also devise a multi-scale and rotation-equivariant GNN that extrapolates the time evolution of the fluid flow, and Cao et al. [19] propose a novel pooling strategy that prevents loss of connectivity and wrong connections in multi-level GNNs. Regarding morphing strategies, Gao et al. [30] and Li et al. [51] deform irregular meshes into a reference one in order to learn solution of PDEs, but rely on complex coordinate transformation to compute a physical residual-based loss in the reference domain, and on input meshes with equal number of nodes. It is worth emphasizing that while the aforementioned works show promising results, they do not provide predictive uncertainties. 
There exist several methods for quantifying the uncertainties of deep neural networks [32], but it remains an open problem to provide well calibrated uncertainty estimates at a reasonable computational cost." }, { "figure_ref": [ "fig_0" ], "heading": "MMGP methodology", "publication_ref": [], "table_ref": [], "text": "The proposed methodology is based on two main ingredients that allow us to leverage classical machine learning methods for regression tasks in the context of non-parameterized geometrical variability: (i) the data is pretreated by morphing each input mesh into a reference shape, and resorting to finite element interpolation to express all fields of interest on a common mesh of this reference shape, and (ii) a low-dimensional embedding of the geometries is built by considering the coordinates of the nodes as a continuous input field over the meshes. Formally, the proposed methodology consists in constructing a graph kernel by relying on three well-chosen transformations such that the transformed inputs can be compared with any classical kernel function defined over Euclidean spaces. Figure 1 illustrates the proposed strategy for a two-dimensional problem where we aim at predicting output fields of interest. The first transformation morphs the input mesh onto a chosen common shape. The second transformation performs a finite element interpolation on the chosen reference mesh of the common shape. Finally, a dimensionality reduction technique is applied to obtain low-dimensional embeddings of the inputs and outputs. These three steps are all deterministic and described in the following subsections. The proposed kernel function can be plugged into any kernel method. Herein, we rely on Gaussian process regression in order to learn steady-state mesh-based simulations in computational fluid and solid mechanics. The lower rectangle in the illustration of the input of the GP represents the scalar inputs µ^i." }, { "figure_ref": [], "heading": "Deterministic preprocessings of the input meshes and fields of interest", "publication_ref": [ "b71", "b8", "b21" ], "table_ref": [], "text": "In this section, we describe the methodology for building low-dimensional representations of the input meshes and output fields.\nMesh morphing into a reference shape Ω. Each input mesh M^i, i = 1, . . . , n, is morphed onto a mesh associated to a fixed reference shape Ω. The morphed mesh has the same number of nodes and the same set of edges as the initial mesh M^i, but their spatial coordinates differ. In this work, we consider two morphing algorithms, namely, Tutte's barycentric mapping [71] onto the unit disk, and the Radial Basis Function (RBF) morphing [9,21] onto a chosen reference shape. Regardless of the morphing algorithm, physical features inherited from the boundary value problem are carefully taken into account. More precisely, points, lines and surfaces of importance in the definition of the physical problem are mapped onto their representatives on the reference shape. Doing so, rigid body transformations that may occur in the database are corrected in the mesh morphing stage, and boundary conditions of the same nature are matched together.\nCommon mesh M_c of the reference shape Ω. At this stage, it should be noted that although the morphed meshes are associated to a common reference shape Ω, they do not share the same nodes and edges.
This prevents us from measuring similarities between the input meshes and output fields with classical techniques. A strong advantage of the finite element method is that it provides accurate solution fields with a continuous description over the mesh, and a natural way to transfer fields from one mesh to another. This motivates us to introduce a common mesh M_c of the reference shape Ω, as a common support for all the sample field data. A possibility is to choose an input mesh in the training set, e.g. M^1, and define M_c as its morphing onto the chosen reference shape. The aim is twofold. First, it allows us to express the output fields on the common morphed mesh M_c, leading to vector representations of the same size. Second, the coordinate fields of the meshes M^i are also transferred onto this common mesh in order to build a shape embedding. These procedures rely on classical finite element interpolation that is described in the rest of this section.\nTransporting the fields of interest on the common mesh M_c. The discretized solution U^i, i = 1, . . . , n, is first transferred to the morphed mesh as follows:\nU^i_k(x) = \sum_{I=1}^{N^i} U^i_{k,I} \, \varphi^i_I(x), \quad k = 1, \dots, d, \qquad (3)\nwhere \{\varphi^i_I\}_{I=1}^{N^i} is the finite element basis associated to the morphed mesh. The transported fields U^1_k, . . . , U^n_k share the same geometric support (the reference shape). This implies that they can be interpolated onto the common mesh M_c using the finite element interpolation operator P defined as:\nP(U^i_k)(x) = \sum_{J=1}^{N_c} U^i_k(x^c_J) \, \varphi^c_J(x) = \sum_{I=1}^{N^i} \sum_{J=1}^{N_c} U^i_{k,I} \, \varphi^i_I(x^c_J) \, \varphi^c_J(x), \qquad (4)\nwhere \{\varphi^c_J\}_{J=1}^{N_c} is the finite element basis associated to M_c, x^c_J is the coordinates of the J-th node of M_c, and U^i_k(x^c_J) is evaluated using Equation (3). We are now in the much more favorable situation where all the fields of interest are expressed on a common mesh M_c. More specifically, for each field of interest k and input mesh M^i, we let U^i_k ∈ R^{N_c} be the transported output field on the common mesh, such that U^i_{k,I} = U^i_k(x^c_I). In this setting, the vector representations of the output fields now have the same size N_c. Notice that the derivation of the finite element interpolation is identical with higher-order Lagrange finite elements.\nTransporting the coordinate fields on the common mesh M_c. The same procedure can be applied to the coordinate fields of the input meshes in order to build a shape embedding of the input meshes. Let Z^i_ℓ be the ℓ-th component of the coordinate field over the mesh M^i, ℓ = 1, . . . , d_Ω. Using the finite element basis associated to the mesh M^i, the coordinate fields can be written as\nZ^i_\ell(x) = \sum_{I=1}^{N^i} Z^i_{\ell,I} \, \varphi^i_I(x),\nwhere Z^i_{ℓ,I} denotes the ℓ-th coordinate of the node I in the mesh M^i. Notice that the notation Z^i_{ℓ,I} is preferred to x^i_{ℓ,I}, since it denotes here the degrees of freedom of the coordinate fields defined over M^i, whose continuity property is essential for the finite element interpolation stage. Then, in the same fashion as for the fields of interest, the coordinate fields are transferred to the morphed mesh and interpolated on the common mesh M_c using the operator P given by Equation (4). For each coordinate ℓ = 1, . . . , d_Ω and input mesh M^i, we have the common representations Z^i_ℓ ∈ R^{N_c} of the coordinate fields on the common mesh M_c. Dimensionality reduction.
At this stage, the input coordinate fields of the meshes and the output fields are expressed on the same common mesh M_c, and can be compared using standard machine learning techniques. We propose to build low-dimensional embeddings of these quantities using Principal Component Analysis (PCA). For each output field, PCA is applied to the set of observations \{U^i_k\}_{i=1}^{n}, leading to low-dimensional field embeddings that we denote by \{\widehat{U}^i_k\}_{i=1}^{n}. Similarly, PCA is applied to the concatenated transported coordinate fields, \{(Z^i_1, \dots, Z^i_{d_\Omega})\}_{i=1}^{n}, leading to low-dimensional embeddings of the input geometries \{\widehat{Z}^i\}_{i=1}^{n}, that we refer to as the shape embeddings." }, { "figure_ref": [], "heading": "MMGP training", "publication_ref": [ "b76" ], "table_ref": [], "text": "Once the operations of mesh morphing, finite element interpolation on a common mesh and dimensionality reduction described in the previous subsection have been carried out, we are left with reduced-size objects of the same dimension.\nLet \{X^i\}_{i=1}^{n} \subset \mathbb{R}^{l_Z + p}, where p is the number of non-geometrical parameters and l_Z is the size of the shape embedding, be such that X^i = (\widehat{Z}^i, \mu^i). Denoting l_{U_k} the size of the embedding of field U_k, the machine learning task given by Equation (2) can be approximated by the following set of scalar and vector regression problems:\n\mathcal{F}_{\mathrm{scalar},m} : X^i \mapsto w^i_m \in \mathbb{R}, \quad m = 1, \dots, q, \qquad (5a)\n\mathcal{F}_{\mathrm{vector},k} : X^i \mapsto \widehat{U}^i_k \in \mathbb{R}^{l_{U_k}}, \quad k = 1, \dots, d. \qquad (5b)\nGaussian processes can be trained in a classical fashion to address the regression problems (5a)-(5b).\nMMGP for a scalar output. Let D = \{(X^i, w^i_{m_0})\}_{i=1}^{n} be a training dataset for one of the problems given by Equation (5a), i.e. for the m_0-th output scalar. It can be shown by standard conditioning [76] that the posterior mean and variance of the prediction at some given test input X_\star are given by\n\mathbb{E}[w_\star] = k_\star^T (K + \sigma^2 I)^{-1} w_{m_0}, \qquad \mathbb{V}[w_\star] = K_{\star\star} - k_\star^T (K + \sigma^2 I)^{-1} k_\star,\nwhere w_{m_0} = \{w^i_{m_0}\}_{i=1}^{n}, K is the Gram matrix such that K_{i,j} = c(X^i, X^j) for 1 ≤ i, j ≤ n, the vector k_\star is such that k_{\star j} = c(X_\star, X^j), and the scalar K_{\star\star} = c(X_\star, X_\star), with c denoting the chosen kernel function whose lengthscales are optimized, and σ denoting the optimized nugget parameter. This training procedure is repeated for the q scalar outputs.\nMMGP for an output field. Let D = \{(X^i, \widehat{U}^i_{k_0})\}_{i=1}^{n} be a training dataset for one of the problems given by Equation (5b), i.e. for the output field k_0. A multioutput GP is first trained to predict the output embeddings \widehat{U}_{k_0}. The predictions of the GP are then decoded with the inverse PCA mapping, and morphed back to the original input mesh M^i. Due to this last nonlinear operation, the posterior distribution of the output field of interest U_{k_0} is no longer Gaussian. The predictive uncertainties are thus obtained through Monte Carlo simulations. This training procedure is repeated for each of the d output fields." }, { "figure_ref": [], "heading": "Properties of the methodology", "publication_ref": [ "b59" ], "table_ref": [], "text": "The sequence of preprocessing operations, including mesh morphing, finite element interpolation, and PCA, leads to a non-linear dimensionality reduction. Leveraging these deterministic processes reduces the burden on the machine learning stage, potentially necessitating fewer training examples to achieve robust model performance on complex mesh-based data.
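To make the embedding-and-regression stage above more concrete, the following minimal sketch fits PCA embeddings of fields already transported to the common mesh and a Matern-5/2 Gaussian process on the resulting low-dimensional inputs. It uses numpy and scikit-learn rather than the GPy implementation employed in the experiments, and all array shapes, component counts and variable names (Z_common, U_common, mu) are illustrative placeholders, not quantities taken from the actual datasets.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)

# Illustrative data: n training samples, fields already transported to the
# common mesh with N_c nodes, plus p non-geometrical scalar parameters.
n, N_c, p = 200, 1500, 2
Z_common = rng.normal(size=(n, 2 * N_c))   # stacked coordinate fields on M_c
U_common = rng.normal(size=(n, N_c))       # one output field on M_c
mu = rng.normal(size=(n, p))               # scalar input parameters

# Shape embedding and field embedding by PCA (Section 3.1).
pca_shape = PCA(n_components=8).fit(Z_common)
pca_field = PCA(n_components=8).fit(U_common)
X = np.hstack([pca_shape.transform(Z_common), mu])   # GP inputs (Z_i, mu_i)
Y = pca_field.transform(U_common)                    # GP outputs U_i (embeddings)

# One anisotropic Matern-5/2 GP with a nugget term, in the spirit of Equation (5b).
kernel = Matern(length_scale=np.ones(X.shape[1]), nu=2.5) + WhiteKernel(1e-6)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y)

# Prediction for a new geometry/parameter pair, decoded back to the common mesh.
X_star = X[:1]
U_star_embedding = gp.predict(X_star)
U_star_common = pca_field.inverse_transform(U_star_embedding)
print(U_star_common.shape)  # (1, N_c): field predicted on the common mesh
```

The decoded field U_star_common lives on the common mesh and would still have to be morphed back to the original input mesh by the inverse of the morphing and interpolation steps.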
In the numerical experiments presented in Section 4, the morphing technique is chosen a priori, with ongoing research focused on optimizing this morphing to minimize the number of PCA modes, which leads to a highly nonlinear dimensionality reduction stage that is finely tuned to the specific characteristics of the data.\nGaussian process regression between the input and output embeddings has several advantages. From a theoretical perspective, there exists conditions on the features of a continuous kernel so that it may approximate an arbitrary continuous target function [59]. Gaussian processes also come with built-in predictive uncertainties, that are marginally valid under the a priori Gaussian assumption. Nevertheless, the proposed methodology can be combined with any other regressor such as a deep neural network instead of the Gaussian process.\nFor clarity of the presentation, the MMGP methodology is illustrated with very simple, if not the simplest, morphing and dimensionality reduction techniques. Alternatives are possible for each algorithm brick. In particular, the fixed topology restriction may be lifted with other morphing algorithms, see Appendix B for more details." }, { "figure_ref": [], "heading": "Numerical experiments", "publication_ref": [], "table_ref": [], "text": "Three regression problems are considered in order to assess the efficiency of the proposed methodology. The chosen datasets are first described in Section 4.1. The experimental setup is summarized in Section 4.2, and the results are discussed in Section 4.3." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b18", "b7", "b50", "b60", "b13", "b0", "b13" ], "table_ref": [ "tab_0" ], "text": "Three datasets in computational fluid and solid mechanics are considered, described below, and summarized in Table 1. All the considered datasets involve geometric variabilities and meshes with possibly different number of nodes and edges. Additional details can also be found in Appendix A. Rotor37 dataset. We consider a 3D compressible steady-state Reynold-Averaged Navier-Stokes (RANS) simulation solved with elsA [18] using the finite volumes method. The inputs are given by a mesh representing the surface of a 3D compressor blade [8], and two additional parameters that correspond to an input pressure and a rotation speed. The outputs of the problem are given by 4 scalars (massflow m, compression rate τ , isentropic efficiency η, polyentropic efficiency γ), and 2 fields on the surface of the input mesh (temperature T , pressure P ).\nTensile2d dataset. The second dataset corresponds to a 2D quasi-static problem in solid mechanics.\nThe geometrical support consists of a 2D square, with two half-circles that have been cut off in a symmetrical manner. The inputs are given by a mesh, a pressure applied on the upper boundary, and 5 material parameters modeling the nonlinear elastoviscoplastic law of the material [50]. The boundary value problem is solved with the finite element method and the Z-set software [60]. The outputs of the problem are given by 4 scalars (p max , v max , σ max 22 , and σ max v\n) and 6 fields of interest (u, v, p, σ 11 , σ 12 , and σ 22 ). For the sake of brevity, the reader is referred to Appendix A for a description of these quantities.\nAirfRANS dataset. The last dataset is made of 2D incompressible RANS around NACA profiles and taken from Bonnet et al. [14]. 
The inputs are given by a mesh of a NACA profile and two parameters that correspond to the inlet velocity and the angle of attack. The outputs are given by 2 scalars (drag C D and lift C L coefficients), and 3 fields (the two components of the fluid velocity u and v, and the pressure field p). An additional version of this dataset is also considered, where the input meshes have been coarsened using the MMG remesher [1], and the output fields have been transferred to the coarsened meshes. The output scalars are unchanged. Illustrations of the input meshes can be found in the original paper [14]." }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [ "b24", "b36", "b35", "b29", "b79", "b63", "b25", "b73", "b28" ], "table_ref": [], "text": "Morphings. The Tutte's barycentric mapping onto the unit disk is used for morphing the meshes in the Rotor37 and Tensile2d datasets. The input meshes in the AirfRANS dataset are morphed onto the first mesh using RBF.\nPCA embeddings. Embeddings of sizes 32 and 64 are retained for respectively the spatial coordinates and output fields in the Rotor37 and AirfRANS datasets. Smaller embeddings of sizes 8 are considered for both the spatial coordinates and output fields in the Tensile2d dataset. Note that for the Tensile2d and AirfRANS, a more effective variant of PCA has been used, which can easily deal with very large meshes (see, e.g., [24,36]), with up to hundreds of millions of degrees of freedom. Details about this variant can be found in Appendix C.\nGaussian processes. Anisotropic Matern-5/2 kernels and zero mean functions are used for all the Gaussian processes priors. The lengthscales and nugget parameter are optimized by maximizing the marginal log-likelihood function with a L-BFGS algorithm and 10 random restarts using the GPy package [35].\nBaselines. The performance of MMGP is compared with two baselines, namely, a graph convolutional neural network (GCNN) with a UNet-type architecture [29] and the GeneralConv [79], and MeshGraphNets (MGN) [63]. The hyperparameters of the GNNs are chosen by relying on a grid search. The GCNN and MGN models are implemented with PyTorch Geometric [25] and DGL [73], respectively. Additional details about the architectures and hyperparameters can be found in Appendix D. Due to the sizes of the input meshes in the AirfRANS dataset, the considered GNN-based baselines are prohibitively expensive. Similarly to the work of [28], the GNNs are trained using coarsened input meshes as described in Section 4.1. The output fields predicted on the coarse meshes are then transferred back on the original fine meshes thanks to finite element interpolation.\nEvaluation metrics. Accuracy of the trained models is assessed by computing relative RMSE errors. Let {U i ref } n⋆ i=1 and {U i pred } n⋆ i=1 be respectively test observations and predictions of a given field of interest. 
The relative RMSE considered herein is defined as\n\mathrm{RRMSE}_f(U_{\mathrm{ref}}, U_{\mathrm{pred}}) = \left( \frac{1}{n_\star} \sum_{i=1}^{n_\star} \frac{1}{N^i} \frac{\| U^i_{\mathrm{ref}} - U^i_{\mathrm{pred}} \|_2^2}{\| U^i_{\mathrm{ref}} \|_\infty^2} \right)^{1/2},\nwhere it is recalled that N^i is the number of nodes in the mesh M^i, and \| U^i_{\mathrm{ref}} \|_\infty denotes the maximum absolute entry of the vector U^i_{\mathrm{ref}}.\nSimilarly for scalar outputs, the following relative RMSE is computed:\n\mathrm{RRMSE}_s(w_{\mathrm{ref}}, w_{\mathrm{pred}}) = \left( \frac{1}{n_\star} \sum_{i=1}^{n_\star} \frac{| w^i_{\mathrm{ref}} - w^i_{\mathrm{pred}} |^2}{| w^i_{\mathrm{ref}} |^2} \right)^{1/2}.\nGiven that the input meshes may have different numbers of nodes, the coefficients of determination Q2 between the target and predicted output fields are computed by concatenating all the fields together.\nFor each of the considered regression problems, training is repeated 10 times in order to provide uncertainties over the relative RMSE and Q2 scalar regression coefficients." }, { "figure_ref": [ "fig_4" ], "heading": "Results and discussion", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Predictive performance. The relative RMSE and Q2 scalar regression coefficients are reported in the table below for the GCNN, MGN, and MMGP models; the values in parentheses correspond to the uncertainties obtained over the 10 training repetitions.\nQuantity | RRMSE GCNN | RRMSE MGN | RRMSE MMGP | Q2 GCNN | Q2 MGN | Q2 MMGP\nRotor37 dataset\nm | 4.4e-3 (5e-4) | 5.4e-3 (7e-5) | 5.0e-4 (3e-6) | 0.9816 (4e-3) | 0.9720 (5e-4) | 0.9998 (3e-6)\np | 4.4e-3 (5e-4) | 5.3e-3 (7e-5) | 4.8e-4 (1e-6) | 0.9803 (5e-3) | 0.9710 (9e-4) | 0.9998 (2e-6)\nη | 3.1e-3 (7e-4) | 7.2e-3 (7e-5) | 5.0e-4 (3e-6) | 0.9145 (4e-2) | 0.5551 (2e-3) | 0.9979 (1e-6)\nγ | 2.9e-3 (6e-4) | 6.5e-3 (2e-5) | 4.6e-4 (2e-7) | 0.9068 (4e-2) | 0.5257 (2e-3) | 0.9977 (2e-6)\nP | 1.7e-2 (8e-4) | 1.7e-2 (2e-3) | 7.2e-3 (5e-4) | 0.9863 (1e-3) | 0.9866 (3e-3) | 0.9973 (4e-4)\nT | 3.9e-3 (1e-4) | 1.4e-2 (2e-3) | 8.2e-4 (1e-5) | 0.9930 (5e-4) | 0.9956 (1e-3) | 0.9997 (1e-5)\nTensile2d dataset\np_max | 1.6e-0 (7e-1) | 2.7e-1 (4e-2) | 6.6e-1 (3e-1) | 0.4310 (2e-1) | 0.6400 (2e-1) | 0.9435 (2e-2)\nv_max | 4.4e-2 (7e-3) | 5.8e-2 (2e-2) | 5.0e-3 (3e-5) | 0.9245 (3e-2) | 0.9830 (1e-2) | 0.9999 (2e-5)\nσ22_max | 3.1e-3 (7e-4) | 4.5e-3 (1e-3) | 1.7e-3 (2e-5) | 0.9975 (1e-3) | 0.9958 (1e-3) | 0.9993 (2e-5)\nσv_max | 1.2e-1 (4e-2) | 2.4e-2 (9e-3) | 5.0e-3 (3e-5) | 0.9723 (2e-2) | 0.9801 (1e-2) | 0.9997 (7e-6)\nu | 4.5e-2 (1e-2) | 1.5e-2 (1e-3) | 3.4e-3 (4e-5) | 0.9623 (2e-2) | 0.9270 (1e-2) | 0.9997 (6e-6)\nv | 7.4e-2 (2e-2) | 9.7e-2 (7e-3) | 5.5e-3 (8e-5) | 0.9559 (3e-2) | 0.9322 (1e-2) | 0.9995 (1e-5)\np | 1.3e-1 (7e-2) | 1.1e-1 (2e-2) | 4.4e-2 (1e-2) | 0.5691 (1e-1) | 0.2626 (1e-1) | 0.7785 (9e-2)\nσ11 | 1.0e-1 (4e-2) | 2.8e-2 (3e-3) | 3.7e-3 (1e-4) | 0.9304 (4e-2) | 0.8693 (3e-2) | 0.9999 (2e-6)\nσ12 | 4.5e-2 (4e-3) | 7.5e-3 (4e-4) | 2.4e-3 (2e-5) | 0.9617 (5e-3) | 0.9868 (1e-3) | 0.9999 (1e-6)\nσ22 | 3.3e-2 (3e-3) | 2.7e-2 (1e-3) | 1.4e-3 (1e-5) | 0.9662 (6e-3) | 0.9782 (2e-3) | 0.9999 (1e-6)\nAirfRANS dataset\nC_D | 6.1e-2 (2e-2) | 4.9e-2 (7e-3) | 3.3e-2 (2e-3) | 0.9596 (2e-2) | 0.9743 (1e-2) | 0.9831 (2e-3)\nC_L | 4.1e-1 (1e-1) | 2.4e-1 (8e-2) | 8.0e-3 (6e-4) | 0.9776 (8e-3) | 0.9851 (1e-2) | 0.9999 (2e-6)\nu | 5.6e-2 (3e-3) | 8.3e-2 (2e-3) | 1.8e-2 (9e-5) | 0.9659 (3e-3) | 0.9110 (3e-3) | 0.9749 (8e-5)\nv | 4.2e-2 (2e-3) | 1.2e-1 (2e-3) | 1.5e-2 (3e-5) | 0.9683 (3e-3) | 0.7516 (5e-3) | 0.9806 (3e-5)\np | 8.5e-2 (7e-3) | 9.9e-2 (1e-2) | 5.1e-2 (2e-5) | 0.9602 (8e-3) | 0.9390 (2e-2) | 0.9934 (1e-5)\nUncertainty estimates. Once trained, the MMGP model provides access to predictive uncertainties for the output fields and scalars. Figure 4 shows an example of a predicted pressure field for an arbitrary test input mesh of the Rotor37 experiment, together with the predictive variance and the point-wise relative absolute error. High relative errors are localized where the pressure field exhibits a discontinuity, known as a shock in compressor aerodynamics. The predictive variance is also higher near this region, reflecting that the GP-based surrogate model is uncertain about its prediction of the shock position.
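As a complement to the uncertainty maps discussed above, the sketch below illustrates one possible way to propagate the Gaussian process posterior on the PCA coefficients to a point-wise predictive variance on the common mesh by Monte Carlo sampling, in the spirit of Section 3.2. It reuses the hypothetical gp and pca_field objects of the previous sketch, and is an assumption-laden illustration rather than the exact implementation behind Figure 4.

```python
import numpy as np

def field_uncertainty(gp, pca_field, X_star, n_samples=256, seed=0):
    """Monte Carlo estimate of the mean and variance of a predicted field.

    Draws samples from the GP posterior over the PCA coefficients and decodes
    each draw with the inverse PCA map, since this nonlinear decoding step
    makes the posterior of the field itself non-Gaussian.
    """
    rng = np.random.RandomState(seed)
    # sample_y draws from the GP posterior at the test inputs; for a
    # multi-output GP the result has shape (n_star, n_components, n_samples).
    coeff_samples = gp.sample_y(X_star, n_samples=n_samples, random_state=rng)
    decoded = np.stack(
        [pca_field.inverse_transform(coeff_samples[:, :, s]) for s in range(n_samples)],
        axis=-1,
    )  # (n_star, N_c, n_samples)
    return decoded.mean(axis=-1), decoded.var(axis=-1)

# Example usage with the objects from the previous sketch:
# mean_field, var_field = field_uncertainty(gp, pca_field, X_star)
```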
Computational times. The MMGP model can easily be trained on CPU hardware and with much lower computational times, see Table 3. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In summary, our proposed method presents an innovative approach to approximating field and scalar quantities of interest within the context of solving complex physics problems for design optimization. Our work introduces two key contributions: firstly, the utilization of mesh morphing pretreatment in conjunction with finite element interpolation, and secondly, the incorporation of shape embedding through dimensional reduction of coordinates, treating them as continuous fields over the geometric support. These innovations relieve the machine learning task of the challenges of handling variable-sized samples and of learning an implicit shape embedding. By reducing the dimensionality of inputs and outputs, our approach allows for the application of efficient Gaussian process regression. Notably, our MMGP model exhibits several key advantages. It can seamlessly handle very large meshes, is amenable to efficient CPU training, is fairly interpretable, demonstrates high accuracy in our experimental results, and provides readily available predictive uncertainties. Future works will explore the extension of our method to accommodate time-dependent quantities of interest and investigate the optimization of the morphing process to enhance data compression and overall performance. Our research opens exciting avenues for advancing the capabilities of machine learning in the realm of physics-based design optimization. " }, { "figure_ref": [ "fig_7", "fig_8", "fig_9" ], "heading": "B Morphing strategies", "publication_ref": [ "b71", "b8", "b21", "b78", "b31", "b21", "b15", "b23", "b5", "b70", "b80" ], "table_ref": [], "text": "In this section, we briefly describe Tutte's barycentric mapping [71] and the radial basis function morphing [9,21] used in the considered experiments.\nTutte's barycentric mapping. For this method, we are limited to connected triangular surface meshes of fixed topology, either in a 2D or in a 3D ambient space. Tutte's barycentric mapping starts by setting the value of the displacement of the boundary points of the mesh (usually onto the unit disk), and solves for the value at all the remaining nodes of the mesh. The physical features available on the mesh, and inherited from the problem, are used in the specification of the displacement of the boundary nodes.\nWe recall that x_I, I = 1 . . . N, denote the mesh node coordinates. We assume that the numbering of the nodes starts with the interior points of the mesh 1 . . . N_int, and ends with the N_b nodes on its boundary N_int + 1 . . . N. The morphed mesh node coordinates are denoted by x_I, I = 1 . . . N. The coordinates of the boundary of the morphed mesh being known, we denote x^b_I = x_{I+N_int}, I = 1 . . . N_b. Then the following sparse linear system is solved for the morphing of the interior points:\nx_I - \frac{1}{d(I)} \sum_{J \in N(I) \cap [\![1, N_{\mathrm{int}}]\!]} x_J = \frac{1}{d(I)} \sum_{J \in N(I) \cap [\![N_{\mathrm{int}}+1, N]\!]} x^b_{J - N_{\mathrm{int}}},\nwhere N(I) and d(I) are respectively the neighbors and the number of neighbors of the node I in the graph (or in the mesh following its connectivity).\nIn the 2D solid mechanics case Tensile2d, we know the rank of the point separating the left and the bottom faces, which we map onto the point (0, 1) of the target unit disk.
The linear density of nodes on the boundary of the target unit disk is chosen to be the same as the one of the mesh sample (relative to the length of the boundary), see Figure 8 for an illustration. From [27, corollary 2], the morphing described above is called a parametrization, and defines an isomorphic deterministic transformation of the considered triangular surface mesh M into a plane triangular mesh of the unit disk. Notice that although these morphing techniques are called "mesh parametrization", this does not mean that we need to know any parametrization of the shape: these are deterministic transformations of the meshes, requiring no other information than the node locations and the triangle connectivities. This method is taken from the computer graphics community and has been improved over the years. In [78], a quality indicator called the stretch metric is optimized during an iterative procedure, to obtain a more regular morphed mesh. Recently in [31], a procedure was proposed to drastically improve mesh parametrization, even in difficult cases where some triangles are overlapping. It should be noted that such iterative procedures come with the additional cost of solving a series of sparse linear systems.\nRadial Basis Function morphing. In the same fashion as Tutte's barycentric mapping, RBF morphing methods start by setting the value of the displacement at some particular nodes of the mesh (here the boundary points of the mesh, but interior points can be considered as well with RBF), and solve for the location of all the remaining nodes of the mesh. The physical features available on the mesh are also used in the specification of the displacement of the boundary points. RBF morphing methods are compatible with 2D and 3D structured and unstructured meshes, do not require any mesh connectivity information, and can be easily implemented in parallel for partitioned meshes.\nWe use the RBF morphing method as proposed in [21]. Once the mapping for the N_b boundary points of ranks N_int + 1 . . . N is fixed, the interior points 1 ≤ I ≤ N_int are mapped as\nx_I = \sum_{J=1}^{N_b} \alpha_J \, \phi(\| x_I - x^b_J \|), \quad 1 \le I \le N_{\mathrm{int}},\nwhere ϕ is a radial basis function with compact support and the α_J are determined by the interpolation conditions. More precisely, we choose the radial basis function with compact support ϕ(ξ) = (1 - ξ)^4 (4ξ + 1) and a support radius equal to half of the mesh diameter, and the interpolation conditions mean that the morphing is known at the boundary points:\nM_{\mathrm{RBF}} \, \alpha = x^b, \quad \text{where} \quad (M_{\mathrm{RBF}})_{I,J} = \phi(\| x^b_I - x^b_J \|), \quad 1 \le I, J \le N_b.\nFor the AirfRANS dataset, we make use of the physical properties of the boundary condition to morph each mesh onto the first geometry of the training set. Referring to Figure 9 (bottom), we know which nodes lie on the external boundary (in black), the airfoil extrados (in red), the airfoil intrados (in blue), and which nodes define the leading and trailing points (green crosses). We choose to keep the points at the external boundary fixed (zero mapping), map the leading and trailing edge to the ones of the mesh of the first training sample, and map the points on the extrados and intrados along the ones of the mesh of the first training sample while conserving local node density (relative to the length of the boundary). A zoom of the RBF morphing close to the airfoil for test sample 787 is illustrated in Figure 10.
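A minimal, self-contained sketch of the compact-support RBF mapping described above is given below: it assembles the dense interpolation system on the boundary nodes, solves for the coefficients, and evaluates the resulting displacement at the interior nodes. The node coordinates and prescribed boundary displacements are random placeholders, and the support-radius handling is simplified compared with the half-mesh-diameter choice used in the experiments.

```python
import numpy as np

def rbf_phi(xi):
    """Compact-support kernel phi(xi) = (1 - xi)^4 (4 xi + 1), zero for xi >= 1."""
    xi = np.clip(xi, 0.0, 1.0)
    return (1.0 - xi) ** 4 * (4.0 * xi + 1.0)

def rbf_morph(x_interior, x_boundary, boundary_displacement, support_radius):
    """Interpolate one displacement component from the boundary to the interior.

    Solves M_RBF alpha = boundary_displacement with
    M_RBF[I, J] = phi(||x^b_I - x^b_J|| / r), then evaluates
    sum_J alpha_J phi(||x_I - x^b_J|| / r) at the interior nodes.
    """
    d_bb = np.linalg.norm(x_boundary[:, None, :] - x_boundary[None, :, :], axis=-1)
    M = rbf_phi(d_bb / support_radius)
    alpha = np.linalg.solve(M, boundary_displacement)
    d_ib = np.linalg.norm(x_interior[:, None, :] - x_boundary[None, :, :], axis=-1)
    return rbf_phi(d_ib / support_radius) @ alpha

# Illustrative usage on random 2D point clouds.
rng = np.random.default_rng(0)
x_b = rng.uniform(size=(40, 2))        # boundary node coordinates
x_i = rng.uniform(size=(500, 2))       # interior node coordinates
u_b = 0.05 * rng.normal(size=40)       # prescribed x-displacement of the boundary
u_i = rbf_morph(x_i, x_b, u_b, support_radius=1.0)
x_morphed = x_i.copy()
x_morphed[:, 0] += u_i                 # apply the interpolated displacement
```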
Notice that while Tutte's barycentric mapping requires solving a sparse linear system of rather large size N_int, RBF morphing requires solving a dense linear system of smaller size N_b. RBF morphing methods dealing with complex non-homogeneous domains have been proposed in [15].\nOther methods. In [23], the morphing is computed by means of solving an elastic problem. See also [6,70] for literature reviews on mesh morphing methods. Mesh deformation algorithms compatible with topology changes have been proposed [80]." }, { "figure_ref": [], "heading": "C Dimensionality reduction", "publication_ref": [ "b20", "b22", "b62", "b68", "b48" ], "table_ref": [], "text": "The principal component analysis can be replaced by more effective dimensionality reduction techniques such as the snapshot-POD. The latter is a variant where the underlying ℓ2-scalar product used to compute the coefficients of the empirical covariance matrices is replaced by the L2(M_c)-inner product. Define the symmetric positive-definite matrix M ∈ R^{N_c × N_c}, such that\nM_{IJ} = \int_{\mathcal{M}_c} \varphi^c_I(x) \, \varphi^c_J(x) \, dx.\nIn general, a quadrature formula, in the form of a weighted sum over function evaluations on the common mesh, is chosen such that the integrals are computed exactly for functions in the span of the finite element basis. Then, the empirical covariance matrix is computed as\n\left( (U^i_k)^T M \, U^j_k \right)_{i,j} = \sum_{I,J=1}^{N_c} \int_{\mathcal{M}_c} U^i_k(x^c_I) \, \varphi^c_I(x) \, U^j_k(x^c_J) \, \varphi^c_J(x) \, dx = \int_{\mathcal{M}_c} P(U^i_k)(x) \, P(U^j_k)(x) \, dx,\nwhich corresponds to the continuous formula for the computation of the correlations of the fields of interest transported and interpolated on the common morphed mesh. Hence, the empirical covariance matrix can take into account any heterogeneity of the common morphed mesh, which may occur after morphing. The same construction can be made for the spatial coordinate field, although its derivation is more technical, because it involves vector fields instead of scalar fields. The computation of the empirical covariance matrix can easily be parallelized on numerous computer nodes, provided that the common morphed mesh has been partitioned into subdomains, which enables efficient dimensionality reduction for meshes up to millions of degrees of freedom, see [20].\nOther linear or nonlinear dimension reduction techniques can be considered, like mRMR feature selection [22,62], kernel-PCA [68] or neural network-based autoencoders [48]."
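To make the snapshot-POD variant above concrete, the following sketch computes the L2(M_c)-weighted correlation matrix with a lumped (diagonal) approximation of the mass matrix M and extracts a reduced basis by eigendecomposition. The nodal weights are arbitrary placeholders; in practice they would come from the finite element quadrature on the common mesh.

```python
import numpy as np

def snapshot_pod(snapshots, nodal_weights, n_modes=8):
    """Snapshot POD with an L2(M_c)-weighted inner product.

    snapshots: (n, N_c) fields transported to the common mesh.
    nodal_weights: (N_c,) diagonal (lumped) approximation of the mass matrix M.
    Returns the mean field, (n_modes, N_c) spatial modes, and (n, n_modes) coefficients.
    """
    mean = snapshots.mean(axis=0)
    A = snapshots - mean
    # Correlation matrix C[i, j] approximates the weighted inner product (U_i, U_j)_{L2(M_c)}.
    C = A @ (nodal_weights[None, :] * A).T
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1][:n_modes]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # M-orthonormal spatial modes and the corresponding snapshot coefficients.
    modes = (eigvecs.T @ A) / np.sqrt(np.maximum(eigvals, 1e-12))[:, None]
    coeffs = A @ (nodal_weights[None, :] * modes).T
    return mean, modes, coeffs

# Illustrative usage with random snapshots and uniform nodal weights.
rng = np.random.default_rng(0)
U = rng.normal(size=(100, 2000))
w = np.full(2000, 1.0 / 2000)
mean, modes, coeffs = snapshot_pod(U, w)
reconstruction = mean + coeffs @ modes   # rank-8 approximation of the snapshots
```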
}, { "figure_ref": [], "heading": "D Architectures and hyperparameters of GNN-based baselines D.1 Graph convolutional neural network", "publication_ref": [ "b67", "b26", "b29", "b29", "b47", "b40", "b42", "b57", "b45", "b55", "b72", "b79", "b16", "b77" ], "table_ref": [ "tab_4" ], "text": "A graph convolutional neural network (GCNN) [67] has been implemented using PyTorch Geometric [26] with the Graph U-Net [29] architecture and the following specifications: (i) top k pooling [29,47] layers with a pooling ratio of 0.5 to progressively aggregate information over the nodes of the graph, (ii) feature sizes progressively increased after each top k pooling, i.e., 16, 32, 64, 96 and 128, (iii) between each pooling, residual convolution blocks [40] are added to combine two consecutive normalization-activation-convolution layers, (iv) BatchNorm [42] layers are introduced, and (v) LeakyReLU [57] activations are used with a slope of 0.1 on negative values.\nA weighted multi-loss L that combines scalars and fields is used, defined as\n$\mathcal{L}\left((U, w), (U', w')\right) = \lambda_{scalars} \, \mathcal{L}_{MSE}(w, w') + \lambda_{fields} \sum_{k=1}^{d} \mathcal{L}_{MSE}(U_k, U'_k),$\nwhere λ_scalars and λ_fields are two positive hyperparameters. For gradient descent, an Adam optimizer [45] is used with a cosine-annealing learning rate scheduler [55]. The following hyperparameters are optimized by grid search: (i) the learning rate, 13 values between 1.0 and 0.0001, (ii) the weight λ_field ∈ {1, 10, 100, 1000}, and (iii) the type of convolution, chosen among GATConv [72], GeneralConv [79], ResGatedGraphConv [16] and SGConv [77]. There are many other hyperparameters that could be tuned, such as the number of layers or the number of features in each layer. The chosen hyperparameters are summarized in Table 4 for each experiment. In the case of the Rotor37 problem, the outward normals to the surface of the compressor blade are added as input features to the input graphs." }, { "figure_ref": [], "heading": "D.2 MeshGraphNets", "publication_ref": [ "b63", "b3", "b46" ], "table_ref": [], "text": "The MGN model [63] is taken from Nvidia's Modulus [4] package, which implements various deep surrogate models for physics-based simulations. The same set of hyperparameters is used for all the considered regression problems; it is chosen after conducting a grid search over the learning rate, the number of hidden node and edge features, and the number of processor steps. The learning rate is set to 0.001, and the numbers of hidden features hidden_dim_node_encoder, hidden_dim_edge_encoder, and hidden_dim_node_decoder are all set to 16. The number of processor steps is chosen as 10.\nThe rest of the MGN hyperparameters are left to the default values used in the Modulus package. The batch size is set to 1, the activation is chosen as the LeakyReLU activation with a 0.05 slope, and 1,000 epochs are performed for training the network. For scalar outputs, a readout layer taken from [46] is added to the model. The input node features are given by the spatial coordinates of the nodes, and possible additional fields such as the signed distance function (for the Tensile2d and AirfRANS problems), or the outward normals (for the Rotor37 problem). Given two node coordinates x_i and x_j, the edge features are chosen as $\exp(-\lVert x_i - x_j \rVert_2^2 / (2h^2))$, where h denotes the median of the edge lengths in the mesh.\nFor each considered regression problem, it is found more effective to train two MGN models, one dedicated to the output fields and the other specialized for the output scalars. 
Nevertheless, better hyperparameter tuning and more effective readout layers could lead to different conclusions regarding this matter." }, { "figure_ref": [ "fig_10" ], "heading": "D.3 Training on AirfRANS", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "As mentioned in Section 4.1, training GNNs on the AirfRANS dataset is computationally expensive due to the sizes of the input meshes. For this reason, the GNN-based baselines are trained on the AirfRANS-remeshed dataset (see Table 1), obtained by coarsening the input meshes (see Figure 11) and the associated output fields. Once trained, predictions on the initial fine meshes are obtained through finite element interpolation. It should be underlined that this strategy may hinder the performance of the GNN-based baselines, as the reconstructed fields are obtained by finite element interpolation." }, { "figure_ref": [], "heading": "E Additional results", "publication_ref": [], "table_ref": [], "text": "This section gathers additional results about the experiments considered in Section 4. " }, { "figure_ref": [ "fig_12" ], "heading": "E.1 Out-of-distribution inputs", "publication_ref": [], "table_ref": [], "text": "Figures 12, 13, and 14 show histograms of the logarithm of the predictive variance for the output scalars of interest on different sets of samples. The aim is to empirically assess whether the MMGP model is able to identify out-of-distribution (OOD) inputs by attributing higher predictive variances to them. In the case of the Rotor37 problem, three OOD samples are generated such that the supports of the covariates (µ1, µ2, and M) are disjoint from the supports of the training distributions. It can be seen that the variances of the OOD samples are higher than those of the in-distribution samples. Similar observations are made for the Tensile2d and AirfRANS problems. While such an analysis can help to identify OOD inputs, it should be underlined that the predictive uncertainties of Gaussian processes are only valid under the Gaussian a priori assumption, which may not be verified in practice. For instance, the ellipsoid geometry has a variance similar to that of the in-distribution samples in Figure 13." }, { "figure_ref": [ "fig_14", "fig_15", "fig_16", "fig_17", "fig_14", "fig_15", "fig_16", "fig_17", "fig_18", "fig_19", "fig_20", "fig_21", "fig_22" ], "heading": "E.2 Predicted output fields", "publication_ref": [ "b13", "b13", "b38", "b64", "b29", "b13", "b13", "b13", "b13" ], "table_ref": [], "text": "Tensile2d dataset. For reproducibility purposes, we mention that for the field p and the scalar p_max, the denominators in the formulae RRMSE_f and RRMSE_s have been replaced by 1 when their value is below 1e-6 to prevent division by zero, which corresponds to replacing the relative error by the absolute error for samples that do not feature plastic behaviors.\nIn Figures 15, 16, 17 and 18, we illustrate the MMGP prediction, variance and relative error for all the considered fields: u, v, p (evrcum), σ11, σ12 and σ22, for, respectively, the first training input, the first test input, and two out-of-distribution geometries (ellipsoid and wedge). In particular, the wedge cut-off geometry features stress concentrations that are not present in the training set. We notice that the predictions for the selected train and test inputs (Figures 15 and 16) are accurate, with small relative errors and relatively small predictive variances, except for some small areas where the considered fields have larger magnitudes. 
As expected, the predictions for the ellipsoid and wedge cases (Figures 17 and 18) are less accurate than for in-distribution shapes, but the predictive variances are also larger, which confirms that MMGP indicates where, locally, the prediction cannot be trusted. This phenomenon is particularly strong for the wedge case, which largely differs from the training set shapes.\nIn Figures 19 and 20, we consider all the output fields of interest. The 2D domain is visualized in 3D, in the form of three surfaces: a transparent blue one for the 0.025-quantile, a white one for the reference prediction and a transparent red one for the 0.975-quantile. The point-wise 95% confidence interval is the distance (along the out-of-plane axis) between the transparent blue and red surfaces. We notice that the 95% confidence intervals are very small for the train and test inputs, larger for the ellipsoid case, and much larger for the wedge case (in particular for σ22). Not surprisingly, for the OOD shapes ellipsoid and wedge, some surfaces intersect, meaning that, locally, the reference solution is not inside the 95% confidence interval.\nIn Figure 21, we illustrate the finite element error occurring when predicting σ11 with respect to the 95% confidence interval for samples taken from the training and testing sets. We notice that on the training set, the finite element error magnitude is comparable to the 95% confidence interval, which is very small on training samples. On the testing set, the 95% confidence interval is larger, and the finite element error magnitude can be neglected. AirfRANS dataset. Figures 22 and 23 illustrate the reference, the MMGP prediction and the relative errors for the fields of interest u, v and p on, respectively, test sample 430 and train sample 93. In the first row, zooms are provided close to the trailing edge to illustrate the accuracy of the prediction in the thin boundary layer. Relative errors have larger magnitudes on spatially restricted areas. We notice that on train sample 93, the areas with low relative error are larger than for test sample 430.\nIn Table 5, we compare MMGP and our trained GCNN and MGN, as well as the four models trained in [14], for the scalars of interest drag coefficient C_D and lift coefficient C_L, computed by post-processing the predicted fields instead of directly predicting them as output scalars. This post-processing consists in integrating the reference and predicted wall shear stress (from the velocity) and pressure fields around the surface of the airfoil. The models from [14] are an MLP (a classical Multi-Layer Perceptron), a GraphSAGE [38], a PointNet [64] and a Graph U-Net [29], and the corresponding results are taken from [14,Table 19] (the \"full dataset\" setting that we consider in this work). Refer to [14, appendix L] for a description of the architectures used. The limits of this comparison are that (i) the meshes supporting the fields are not the same (they have been coarsened in [14] by a process different from ours), and (ii) the scalar integration routines are not identical (we integrate using finite element representations). Within these limits, MMGP appears competitive with respect to the models of [14] and our trained GCNN and MGN.\nTable 5: (AirfRANS) Relative errors (Spearman's rank correlation) for the predicted drag coefficient C_D (ρ_D) and lift coefficient C_L (ρ_L) for the four models of [14,Table 19], as well as GCNN, MGN and MMGP. These scalars of interest are computed as a post-processing of the predicted fields (best is bold)."
}, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "Model | C_D (relative error) | C_L (relative error) | ρ_D (Spearman's correlation) | ρ_L (Spearman's correlation)\nMLP | 6.2e+0 (9e-1) | 2.1e-1 (3e-2) | 0.25 (9e-2) | 0.9932 (2e-3)\nGraphSAGE | 7.4e+0 (1e+0) | 1.5e-1 (3e-2) | 0.19 (7e-2) | 0.9964 (7e-4)\nPointNet | 1.7e+1 (1e+0) | 2.0e-1 (3e-2) | 0.07 (6e-2) | 0.9919 (2e-3)\nGraph U-Net | 1.3e+1 (9e-1) | 1.7e-1 (2e-2) | 0.09 (5e-2) | 0.9949 (1e-3)\nGCNN | 3.6e+0 (7e-1) | 2.5e-1 (4e-2) | 0.002 (2e-1) | 0.9773 (4e-3)\nMGN | 3.3e+0 (6e-1) | 2.6e-1 (8e-2) | 0.04 (3e-1) | 0.9761 (5e-3)\nMMGP | 7.6e-1 (4e-4) | 2.8e-2 (4e-5) | 0.71 (1e-4) | 0.9992 (2e-6)" }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [ "b2", "b1" ], "table_ref": [], "text": "Available dataset and code. The code corresponding to the two-dimensional solid mechanics case (Tensile2d) described in Section 4.1 is available at https://gitlab.com/drti/mmgp [3]. Documentation is available at https://mmgp.readthedocs.io/ [2], where details are provided on how to download the dataset Tensile2d and reproduce the corresponding numerical experiments.\nDetails regarding the datasets are provided in Appendix A. Morphing strategies and dimensionality reduction techniques are described in Appendices B and C. Details about the GNN baselines are given in Appendix D. Finally, additional results about the considered experiments are gathered in Appendix E." }, { "figure_ref": [], "heading": "A Datasets", "publication_ref": [ "b13" ], "table_ref": [], "text": "This section provides additional details regarding the synthetic datasets Tensile2d and Rotor37. Regarding the AirfRANS dataset, the reader is referred to [14]." }, { "figure_ref": [], "heading": "A.1 Rotor37 dataset", "publication_ref": [ "b43", "b7" ], "table_ref": [], "text": "Examples of input geometries are shown in Figure 6 together with the associated output pressure fields. While the geometrical variabilities are moderate, it can be seen that they have a significant impact on the output pressure field. A design of experiments for the input parameters of this problem is generated with the maximum projection LHS method [43]. For each input mesh and set of input parameters, a three-dimensional aerodynamics problem is solved with RANS, as illustrated in, e.g. [8]. The output scalars of the problem are obtained by post-processing the three-dimensional velocity.\nFigure 6: (Rotor37) Four geometries with their corresponding output pressure fields. The first panel shows the mesh, and the second to last panels show a superposition of the corresponding geometry and the mesh of the first one." }, { "figure_ref": [], "heading": "A.2 Tensile2d dataset", "publication_ref": [ "b11", "b50" ], "table_ref": [], "text": "Examples of input geometries are shown in Figure 7. A two-dimensional boundary value problem in solid mechanics is considered, under the assumption of small perturbations (see, e.g. [12]). The partial differential equation is supplemented with Dirichlet and Neumann boundary conditions: the displacement on the lower boundary is fixed, while a uniform pressure is applied at the top. The input parameters of the problem are chosen to be the magnitude of the applied pressure and 5 parameters involved in the elasto-visco-plastic constitutive law of the material [50]. The outputs of the problem are chosen as the components of the displacement field, u and v, the entries of the Cauchy stress tensor, σ11, σ22, σ12, and the cumulative plastic strain p. 
We also consider 4 output scalars obtained by post-processing the fields of interest: the maximum plastic strain p_max across the geometry, the maximum vertical displacement v_max at the top of the geometry, and the maximum normal stress σ_22^max and von Mises stress σ_v^max across the geometry. It is worth emphasizing that the cumulative plastic strain p is challenging to predict, as illustrated in Section E." } ]
When learning simulations for modeling physical phenomena in industrial designs, geometrical variabilities are of prime interest. While classical regression techniques prove effective for parameterized geometries, practical scenarios often involve the absence of shape parametrization during the inference stage, leaving us with only mesh discretizations as available data. Learning simulations from such mesh-based representations poses significant challenges, with recent advances relying heavily on deep graph neural networks to overcome the limitations of conventional machine learning approaches. Despite their promising results, graph neural networks exhibit certain drawbacks, including their dependency on extensive datasets and limitations in providing built-in predictive uncertainties or handling large meshes. In this work, we propose a machine learning method that does not rely on graph neural networks. Complex geometrical shapes and variations with fixed topology are dealt with using well-known mesh morphing onto a common support, combined with classical dimensionality reduction techniques and Gaussian processes. The proposed methodology can easily deal with large meshes without the need for explicit shape parameterization and provides crucial predictive uncertainties, which are essential for informed decision-making. In the considered numerical experiments, the proposed method is competitive with respect to existing graph neural networks, regarding training efficiency and accuracy of the predictions.
MMGP: A MESH MORPHING GAUSSIAN PROCESS-BASED MACHINE LEARNING METHOD FOR REGRESSION OF PHYSICAL PROBLEMS UNDER NON-PARAMETERIZED GEOMETRICAL VARIABILITY
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the MMGP inference workflow for the prediction of an output field of interest. The lower rectangle in the illustration of the input of the GP represents the scalar inputs µ i .", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: (Tensile2d) Test predictions versus test targets obtained for the output scalars of interest.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: (AirfRANS) Test sample 787, fields of interest u (U X), v (U Y ) and p: (left) reference, (middle) MMGP prediction, (right) relative error.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 55shows graphs of two output scalars with respect to the input pressure in the Tensile2d problem. The predictive intervals of the MMGP model are discriminative: they get wider as the input pressure falls out of the support of the training distribution. In order to assess the validity of the prediction intervals, we compute the prediction interval coverage probability (PICP), i.e. the average of test targets that fall into the 95% prediction interval. For the AirfRANS dataset, PICPs of 93.05% and 93.5% for respectively the outputs C L and C D are obtained by averaging the individual PICPs of 10 independent MMGP models. The prediction intervals are slightly over-confident but this could be corrected by e.g. conformalizing the Gaussian process[69].", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: (Rotor37) MMGP: prediction, predictive variance, and L 1 relative error of the pressure field for an arbitrary geometry in the test dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: (Tensile2d) MMGP: graphs of the predicted v max and σ max v with respect to the pressure, for four different test input meshes, and 11 values of input pressure that go beyond the training range (-50, -40), with 95% confidence intervals.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: (Tensile2d) Illustration of the four input meshes that are used in Figure 5.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: (Tensile2d) Illustration of the Tutte's barycentric mapping used in the morphing stage.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: (AirfRANS) RBF morphing for test sample 787; (top) complete mesh morphing, (bottom) illustration of the mapping of the boundary points.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: (AirfRANS) Zoom of the RBF morphing close to the airfoil for test sample 787.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: (AirfRANS) Example of an original mesh from the dataset (left) and the corresponding coarsened mesh in the AirfRANS-remeshed dataset (right).", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { 
"figure_caption": "Figure 12 :12Figure 12: (Rotor37) Histograms of log(variance) of MMGP predictions for the output scalars of interest on four sets: in grey the testing set (in distribution), in green and red respectively two sets where the input pressure µ 1 and rotation speed µ 2 are taken OOD, and in blue a set of geometry taken OOD.", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: (Tensile2d) Histograms of log(variance) of MMGP predictions in grey for the output scalars of interest on the testing set (in distribution). The variance of the MMGP prediction is identified for various configurations: the ellipsoid and wedge cases (where all the nongeometrical parameters are taken at the center of the training intervals), and 5 settings where the same geometry is taken in the testing set and the input pressure varies (the training interval is [-50, -40]).", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: (AirfRANS) Histograms of log(variance) of MMGP predictions in grey for the output scalars of interest on the testing set (in distribution). The variance of the MMGP prediction is identified for various configurations, where the same geometry is taken in the testing set and the inlet velocity (iv) and angle of attack (aoa) varies (only iv=45, 70, 90 and aoa=-0.04, 0.07, 0.18 are in the training intervals).", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: (Tensile2d) MMGP prediction for the first training input, where: U 1 , U 2 , evrcum, sig11, sig22, and sig12 correspond to u, v, p, σ 11 , σ 22 , and σ 12 , respectively.", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: (Tensile2d) MMGP prediction for the first test input, where: U 1 , U 2 , evrcum, sig11, sig22, and sig12 correspond to u, v, p, σ 11 , σ 22 , and σ 12 , respectively.", "figure_data": "", "figure_id": "fig_15", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: (Tensile2d) MMGP prediction for an OOD ellipsoid geometry, where: U 1 , U 2 , evrcum, sig11, sig22, and sig12 correspond to u, v, p, σ 11 , σ 22 , and σ 12 , respectively.", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: (Tensile2d) MMGP prediction for an OOD wedge cut-off geometry, where: U 1 , U 2 , evrcum, sig11, sig22, and sig12 correspond to u, v, p, σ 11 , σ 22 , and σ 12 , respectively.", "figure_data": "", "figure_id": "fig_17", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 :19Figure 19: (Tensile2d) MMGP: train0, test1, ellipsoid and wedge cases, confidence intervals for u, v and p visualized as surfaces (for each field, the deformation factor is taken identical through the cases).", "figure_data": "", "figure_id": "fig_18", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 20 :20Figure 20: (Tensile2d) MMGP: train0, test1, ellipsoid and wedge cases, confidence intervals for σ 11 , σ 12 and σ 22 visualized as surfaces (for each field, the deformation factor is taken identical through the cases).", "figure_data": "", "figure_id": "fig_19", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Figure 21 
:21Figure 21: (Tensile2d) Finite element interpolation error for the prediction of σ 11 compared to the 95% confidence interval for a sample from (left): the training set, (right) the testing set.", "figure_data": "", "figure_id": "fig_20", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Figure 22 :22Figure 22: (AirfRANS) Test sample 430, fields of interest u (U X), v (U Y ) and p: (left) reference, (middle) MMGP prediction, (right) relative error.", "figure_data": "", "figure_id": "fig_21", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "Figure 23 :23Figure 23: (AirfRANS) Train sample 93, fields of interest u (U X), v (U Y ) and p: (left) reference, (middle) MMGP prediction, (right) relative error.", "figure_data": "", "figure_id": "fig_22", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Summary of the considered datasets with d Ω : dimension of the physical problem, p: number of input scalars, d: number of output fields, m: number of output scalars.", "figure_data": "Datasetstrain/test sizes d Ω p d m Avg. # nodesRotor371000/20032 2 429, 773Tensile2d500/20026 6 49, 425AirfRANS800/20022 3 2179, 779AirfRANS-remeshed800/20022 3 219, 527", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "for all the considered experiments. While the GNN-based baselines achieve good performance, the MMGP model consistently outperforms them with lower errors. It is worth emphasizing that the field p and scalar p max of the Tensile2d dataset are particularly challenging, see Appendix E.2 for more details. They represent the accumulated plasticity in the mechanical piece, which are non-zero for a small fraction of the training set. Figure2shows the graphs of the output scalars predictions versus the targets on test set for the Tensile2d case. Figure3also shows examples of fields prediction in the case of the AirfRANS problem. The MMGP model is able to accurately reproduce the output fields, with relative errors mostly located near the tips of the airfoils.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Means and standard deviations (gray) of the relative RMSE and Q 2 scalar regression coefficients for all the considered datasets and quantities of interest (QoI) (best is bold).", "figure_data": "RRMSEQ 2QoIGCNNMGNMMGPGCNNMGNMMGP", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Training computational times: GCNN and MGN on a Nvidia A100 Tensor Core GPU (neural network training), MMGP on a 48 cores Intel Xeon Gold 6342 CPU (Gaussian process regressors training). Between parenthesis are indicated the numbers of trainings carried-out to optimize hyperparameters (best is bold).", "figure_data": "DatasetGCNNMGNMMGPRotor37(200 ×) 24 h(6 ×) 13 h 14 min (10 ×) 2 min 49 sTensile2d (200 ×) 1 h 25 min (6 ×) 6 h 50 min (10 ×) 1 min 38 sAirfRANS (200 ×) 5 h 15 min (6 ×) 5 h 00 min (10 ×) 5 min 47 s", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Chosen hyperparameters for the GCNN architectures. GeneralConv the outwards normals to the surface of the compressor blade are added as input features to input graphs. 
Similarly, for the Tensile2d and AirfRANS problems, the signed distance function is added as an input feature.", "figure_data": "DatasetLearning rate λ fieldConvolutionRotor370.0210.0 GeneralConvTensile2d0.01100.0 GeneralConvAirfRANS0.00510.0", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Fabien Casenave; Brian Staber; Xavier Roynard
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Mmg platform website", "year": "2023-01-19" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "MMGP documentation", "year": "2023-10-16" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "MMGP github repository", "year": "2023-10-16" }, { "authors": "", "journal": "", "ref_id": "b3", "title": "Modulus package website", "year": "2023-10-05" }, { "authors": "J Ackmann; P D Düben; T N Palmer; P K Smolarkiewicz", "journal": "", "ref_id": "b4", "title": "Machine-learned preconditioners for linear solvers in geophysical fluid flows", "year": "2020" }, { "authors": "M Alexa", "journal": "Computer graphics forum", "ref_id": "b5", "title": "Recent advances in mesh morphing", "year": "2002" }, { "authors": "K R Allen; T Lopez-Guevara; K Stachenfeld; A Sanchez-Gonzalez; P Battaglia; J Hamrick; T Pfaff", "journal": "", "ref_id": "b6", "title": "Physical design using differentiable learned simulators", "year": "2022" }, { "authors": "A Ameri", "journal": "", "ref_id": "b7", "title": "Nasa rotor 37 cfd code validation glenn-ht code", "year": "2009" }, { "authors": "N Arad; N Dyn; D Reisfeld; Y Yeshurun", "journal": "CVGIP: Graphical models and image processing", "ref_id": "b8", "title": "Image warping by radial basis functions: Application to facial expressions", "year": "1994" }, { "authors": "P Baque; E Remelli; F Fleuret; P Fua", "journal": "PMLR", "ref_id": "b9", "title": "Geodesic convolutional shape optimization", "year": "2018" }, { "authors": "P W Battaglia; J B Hamrick; V Bapst; A Sanchez-Gonzalez; V Zambaldi; M Malinowski; A Tacchetti; D Raposo; A Santoro; R Faulkner", "journal": "", "ref_id": "b10", "title": "Relational inductive biases, deep learning, and graph networks", "year": "2018" }, { "authors": "J Besson; G Cailletaud; J.-L Chaboche; S Forest", "journal": "Springer Science & Business Media", "ref_id": "b11", "title": "Non-linear mechanics of materials", "year": "2009" }, { "authors": "F E Bock; R C Aydin; C J Cyron; N Huber; S R Kalidindi; B Klusemann", "journal": "Frontiers in Materials", "ref_id": "b12", "title": "A review of the application of machine learning and data mining approaches in continuum materials mechanics", "year": "2019" }, { "authors": "F Bonnet; J Mazari; P Cinnella; P Gallinari", "journal": "", "ref_id": "b13", "title": "Airfrans: High fidelity computational fluid dynamics dataset for approximating reynolds-averaged navier-stokes solutions", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b14", "title": "", "year": "2022" }, { "authors": "M Botsch; L Kobbelt", "journal": "Blackwell Publishing, Inc", "ref_id": "b15", "title": "Real-time shape editing using radial basis functions", "year": "2005" }, { "authors": "X Bresson; T Laurent", "journal": "", "ref_id": "b16", "title": "Residual gated graph convnets", "year": "2017" }, { "authors": "S L Brunton; B R Noack; P Koumoutsakos", "journal": "Annual Review of Fluid Mechanics", "ref_id": "b17", "title": "Machine learning for fluid mechanics", "year": "2020" }, { "authors": "L Cambier; M Gazaix; S Heib; S Plot; M Poinot; J P Veuillot; J F Boussuge; M Montagnac", "journal": "Aerospace Lab", "ref_id": "b18", "title": "An overview of the multi-purpose elsa flow solver", "year": "2011" }, { "authors": "Y Cao; M Chai; M Li; C Jiang", "journal": "", "ref_id": "b19", "title": "Efficient learning of mesh-based physical simulation with bi-stride multi-scale graph neural network", "year": "2023" }, { "authors": "F 
Casenave; N Akkari; F Bordeu; C Rey; D Ryckelynck", "journal": "International Journal for Numerical Methods in Engineering", "ref_id": "b20", "title": "A nonintrusive distributed reduced-order modeling framework for nonlinear structural mechanics-application to elastoviscoplastic computations", "year": "2020" }, { "authors": "A De Boer; M S Van Der Schoot; H Bijl", "journal": "Computers & structures", "ref_id": "b21", "title": "Mesh deformation based on radial basis function interpolation", "year": "2007" }, { "authors": "C Ding; H Peng", "journal": "Journal of bioinformatics and computational biology", "ref_id": "b22", "title": "Minimum redundancy feature selection from microarray gene expression data", "year": "2005" }, { "authors": "R P Dwight", "journal": "Springer", "ref_id": "b23", "title": "Robust mesh deformation using the linear elasticity equations", "year": "2006" }, { "authors": "C Farhat; T Chapman; P Avery", "journal": "International Journal for Numerical Methods in Engineering", "ref_id": "b24", "title": "Structure-preserving, stability, and accuracy properties of the energy-conserving sampling and weighting method for the hyper reduction of nonlinear finite element dynamic models", "year": "2015" }, { "authors": "M Fey; J E Lenssen", "journal": "", "ref_id": "b25", "title": "Fast graph representation learning with PyTorch Geometric", "year": "2019" }, { "authors": "M Fey; J E Lenssen", "journal": "", "ref_id": "b26", "title": "Fast graph representation learning with PyTorch Geometric", "year": "2019" }, { "authors": "M S Floater", "journal": "Computer aided geometric design", "ref_id": "b27", "title": "Parametrization and smooth approximation of surface triangulations", "year": "1997" }, { "authors": "M Fortunato; T Pfaff; P Wirnsberger; A Pritzel; P Battaglia", "journal": "", "ref_id": "b28", "title": "Multiscale meshgraphnets", "year": "2022" }, { "authors": "H Gao; S Ji", "journal": "PMLR", "ref_id": "b29", "title": "Graph u-nets", "year": "2019" }, { "authors": "H Gao; L Sun; J.-X Wang", "journal": "Journal of Computational Physics", "ref_id": "b30", "title": "Phygeonet: Physics-informed geometry-adaptive convolutional neural networks for solving parameterized steady-state pdes on irregular domain", "year": "2021" }, { "authors": "V Garanzha; I Kaporin; L Kudryavtseva; F Protais; N Ray; D Sokolov", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b31", "title": "Foldover-free maps in 50 lines of code", "year": "2021" }, { "authors": "J Gawlikowski; C R N Tassi; M Ali; J Lee; M Humt; J Feng; A Kruspe; R Triebel; P Jung; R Roscher", "journal": "", "ref_id": "b32", "title": "A survey of uncertainty in deep neural networks", "year": "2021" }, { "authors": "J Gilmer; S S Schoenholz; P F Riley; O Vinyals; G E Dahl", "journal": "PMLR", "ref_id": "b33", "title": "Neural message passing for quantum chemistry", "year": "2017" }, { "authors": "M Götz; H Anzt", "journal": "IEEE", "ref_id": "b34", "title": "Machine learning-aided numerical linear algebra: Convolutional neural networks for the efficient preconditioner generation", "year": "2018" }, { "authors": " Gpy; Gpy", "journal": "", "ref_id": "b35", "title": "A Gaussian process framework in python", "year": "2012" }, { "authors": "S Grimberg; C Farhat; R Tezaur; C Bou-Mosleh", "journal": "International Journal for Numerical Methods in Engineering", "ref_id": "b36", "title": "Mesh sampling and weighting for the hyperreduction of nonlinear petrov-galerkin reduced-order models with local reduced-order bases", "year": 
"2021" }, { "authors": "X Guo; W Li; F Iorio", "journal": "", "ref_id": "b37", "title": "Convolutional neural networks for steady flow approximation", "year": "2016" }, { "authors": "W Hamilton; Z Ying; J Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b38", "title": "Inductive representation learning on large graphs", "year": "2017" }, { "authors": "L Harsch; S Riedelbauch", "journal": "", "ref_id": "b39", "title": "Direct prediction of steady-state flow fields in meshed domain with graph networks", "year": "2021" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b40", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "A Heinlein; A Klawonn; M Lanser; J Weber", "journal": "", "ref_id": "b41", "title": "Combining machine learning and domain decomposition methods-a review", "year": "2020" }, { "authors": "S Ioffe; C Szegedy", "journal": "PMLR", "ref_id": "b42", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "V R Joseph; E Gul; S Ba", "journal": "Biometrika", "ref_id": "b43", "title": "Maximum projection designs for computer experiments", "year": "" }, { "authors": "G E Karniadakis; I G Kevrekidis; L Lu; P Perdikaris; S Wang; L Yang", "journal": "Nature Reviews Physics", "ref_id": "b44", "title": "Physics-informed machine learning", "year": "2021" }, { "authors": "D P Kingma; J Ba; Adam", "journal": "", "ref_id": "b45", "title": "A method for stochastic optimization", "year": "2014" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b46", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "B Knyazev; G W Taylor; M Amer", "journal": "Advances in neural information processing systems", "ref_id": "b47", "title": "Understanding attention and generalization in graph neural networks", "year": "2019" }, { "authors": "M A Kramer", "journal": "AIChE Journal", "ref_id": "b48", "title": "Nonlinear principal component analysis using autoassociative neural networks", "year": "1991" }, { "authors": "K Lee; K T Carlberg", "journal": "Journal of Computational Physics", "ref_id": "b49", "title": "Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders", "year": "2020" }, { "authors": "J Lemaitre; J.-L Chaboche", "journal": "Cambridge university press", "ref_id": "b50", "title": "Mechanics of solid materials", "year": "1994" }, { "authors": "Z Li; D Z Huang; B Liu; A Anandkumar", "journal": "", "ref_id": "b51", "title": "Fourier neural operator with learned deformations for pdes on general geometries", "year": "2022" }, { "authors": "J Ling; R Jones; J Templeton", "journal": "Journal of Computational Physics", "ref_id": "b52", "title": "Machine learning strategies for systems with invariance properties", "year": "2016" }, { "authors": "M Lino; C Cantwell; A A Bharath; S Fotiadis", "journal": "", "ref_id": "b53", "title": "Simulating continuum mechanics with multi-scale graph neural networks", "year": "2021" }, { "authors": "M Lino; S Fotiadis; A A Bharath; C D Cantwell", "journal": "Physics of Fluids", "ref_id": "b54", "title": "Multi-scale rotation-equivariant graph neural networks for unsteady eulerian fluid dynamics", "year": "2022" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b55", "title": "Sgdr: Stochastic gradient descent with warm restarts", "year": "2016" }, { "authors": "L Lu; P 
Jin; G E Karniadakis", "journal": "", "ref_id": "b56", "title": "Deeponet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators", "year": "2019" }, { "authors": "A L Maas; A Y Hannun; A Y Ng", "journal": "", "ref_id": "b57", "title": "Rectifier nonlinearities improve neural network acoustic models", "year": "2013" }, { "authors": "J Masci; D Boscaini; M Bronstein; P Vandergheynst", "journal": "", "ref_id": "b58", "title": "Geodesic convolutional neural networks on riemannian manifolds", "year": "2015" }, { "authors": "C A Micchelli; Y Xu; H Zhang", "journal": "Journal of Machine Learning Research", "ref_id": "b59", "title": "Universal kernels", "year": "2006" }, { "authors": "", "journal": "", "ref_id": "b60", "title": "Zset: nonlinear material & structure analysis suite", "year": "" }, { "authors": "F Monti; D Boscaini; J Masci; E Rodola; J Svoboda; M M Bronstein", "journal": "", "ref_id": "b61", "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "year": "2017" }, { "authors": "H Peng; F Long; C Ding", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b62", "title": "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy", "year": "2005" }, { "authors": "T Pfaff; M Fortunato; A Sanchez-Gonzalez; P W Battaglia", "journal": "", "ref_id": "b63", "title": "Learning mesh-based simulation with graph networks", "year": "2021" }, { "authors": "C R Qi; H Su; K Mo; L J Guibas", "journal": "", "ref_id": "b64", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "M Raissi; P Perdikaris; G E Karniadakis", "journal": "Journal of Computational Physics", "ref_id": "b65", "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "year": "2019" }, { "authors": "R Rannacher; R Scott", "journal": "Mathematics of computation", "ref_id": "b66", "title": "Some optimal error estimates for piecewise linear finite element approximations", "year": "1982" }, { "authors": "F Scarselli; M Gori; A C Tsoi; M Hagenbuchner; G Monfardini", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b67", "title": "The graph neural network model", "year": "2009" }, { "authors": "B Schölkopf; A Smola; K.-R Müller", "journal": "Neural computation", "ref_id": "b68", "title": "Nonlinear component analysis as a kernel eigenvalue problem", "year": "1998" }, { "authors": "G Shafer; V Vovk", "journal": "Journal of Machine Learning Research", "ref_id": "b69", "title": "A tutorial on conformal prediction", "year": "2008" }, { "authors": "M L Staten; S J Owen; S M Shontz; A G Salinger; T S Coffey", "journal": "Springer", "ref_id": "b70", "title": "A comparison of mesh morphing methods for 3d shape optimization", "year": "2012" }, { "authors": "W T Tutte", "journal": "Proceedings of the London Mathematical Society", "ref_id": "b71", "title": "How to draw a graph", "year": "1963" }, { "authors": "P Veličković; G Cucurull; A Casanova; A Romero; P Lio; Y Bengio", "journal": "", "ref_id": "b72", "title": "Graph attention networks", "year": "2017" }, { "authors": "M Wang; D Zheng; Z Ye; Q Gan; M Li; X Song; J Zhou; C Ma; L Yu; Y Gai", "journal": "", "ref_id": "b73", "title": "Deep graph library: A graph-centric, highly-performant package for graph neural networks", "year": 
"2019" }, { "authors": "D A White; W J Arrighi; J Kudo; S E Watts", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b74", "title": "Multiscale topology optimization using neural network surrogate models", "year": "2019" }, { "authors": "J Willard; X Jia; S Xu; M Steinbach; V Kumar", "journal": "", "ref_id": "b75", "title": "Integrating physics-based modeling with machine learning: A survey", "year": "2020" }, { "authors": "C K I Williams; C E Rasmussen", "journal": "MIT press", "ref_id": "b76", "title": "Gaussian processes for machine learning", "year": "2006" }, { "authors": "F Wu; A Souza; T Zhang; C Fifty; T Yu; K Weinberger", "journal": "PMLR", "ref_id": "b77", "title": "Simplifying graph convolutional networks", "year": "2019" }, { "authors": "S Yoshizawa; A Belyaev; H.-P Seidel", "journal": "IEEE", "ref_id": "b78", "title": "A fast and simple stretch-minimizing mesh parameterization", "year": "2004" }, { "authors": "J You; R Ying; J Leskovec", "journal": "", "ref_id": "b79", "title": "Design Space for Graph Neural Networks", "year": "2020-11" }, { "authors": "A Zaharescu; E Boyer; R Horaud", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b80", "title": "Topology-adaptive mesh deformation for surface evolution, morphing, and multiview reconstruction", "year": "2010" }, { "authors": "Y Zhu; N Zabaras", "journal": "Journal of Computational Physics", "ref_id": "b81", "title": "Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification", "year": "2018" }, { "authors": "O C Zienkiewicz; P B Morice", "journal": "McGraw-hill London", "ref_id": "b82", "title": "The finite element method in engineering science", "year": "1971" } ]
[ { "formula_coordinates": [ 2, 221.57, 499.59, 284.54, 30.47 ], "formula_id": "formula_0", "formula_text": "U k (x) = N I=1 U k,I φ I (x) , k = 1, . . . , d ,(1)" }, { "formula_coordinates": [ 2, 258.33, 662.84, 247.77, 8.99 ], "formula_id": "formula_1", "formula_text": "F : (µ, M) → (U, w).(2)" }, { "formula_coordinates": [ 4, 122.91, 110.84, 332.34, 84.23 ], "formula_id": "formula_2", "formula_text": "Z 1 ℓ Z i ℓ U i k U 1 k U n k U n k U 1 k GP Z 1 ℓ Z n ℓ Z n ℓ . . . . . . . . . . . . Z i U i" }, { "formula_coordinates": [ 4, 218.03, 659.34, 288.08, 32.12 ], "formula_id": "formula_3", "formula_text": "U i k (x) = N i I=1 U i k,I φ i I (x) , , k = 1, . . . , d ,(3)" }, { "formula_coordinates": [ 5, 179.02, 89.14, 327.08, 32.12 ], "formula_id": "formula_4", "formula_text": "P (U i k )(x) = Nc J=1 U i k (x c J )φ c J (x) = N i I=1 Nc J=1 U i k,I φ i I (x c J )φ c J (x) ,(4)" }, { "formula_coordinates": [ 5, 172.11, 178.5, 66.72, 15.28 ], "formula_id": "formula_5", "formula_text": "U i k,I = U i k (x c I )." }, { "formula_coordinates": [ 5, 255.58, 277.01, 100.84, 32.12 ], "formula_id": "formula_6", "formula_text": "Z i ℓ (x) = N i I=1 Z i ℓ,I φ i I (x) ," }, { "formula_coordinates": [ 5, 106.56, 457, 400.53, 38.64 ], "formula_id": "formula_7", "formula_text": "{ U i k } n i=1 . Similarly, PCA is applied to concatenated transported coordinate fields, {( Z i 1 , . . . , Z i dΩ )} n i=1 , leading to low- dimensional embeddings of the input geometries { Z i } n" }, { "formula_coordinates": [ 5, 278.46, 550.67, 91.59, 12.32 ], "formula_id": "formula_8", "formula_text": "Let {X i } n i=1 ∈ R l Z +p" }, { "formula_coordinates": [ 5, 160.84, 601.31, 345.26, 12.69 ], "formula_id": "formula_9", "formula_text": "F scalar,m : X i → w i m ∈ R, m = 1, . . . , q ,(5a)" }, { "formula_coordinates": [ 5, 257.77, 618.3, 244.19, 12.69 ], "formula_id": "formula_10", "formula_text": "X i → U i k ∈ R l U k , k = 1, . . . , d .(5b" }, { "formula_coordinates": [ 5, 231.43, 691.71, 149.14, 28.83 ], "formula_id": "formula_11", "formula_text": "E[w ⋆ ] = k T ⋆ (K + σ 2 I) -1 w m0 , V[w ⋆ ] = K ⋆⋆ -k T ⋆ (K + σ 2 I) -1 k ⋆ ," }, { "formula_coordinates": [ 7, 175.54, 689.92, 158.54, 30.43 ], "formula_id": "formula_12", "formula_text": "RRMSE f (U ref , U pred ) = 1 n ⋆ n⋆ i=11" }, { "formula_coordinates": [ 7, 326.98, 687.51, 102.89, 31.04 ], "formula_id": "formula_13", "formula_text": "N i ∥U i ref -U i pred ∥ 2 2 ∥U i ref ∥ 2 ∞ 1/2" }, { "formula_coordinates": [ 8, 186.73, 103.55, 238.54, 32.84 ], "formula_id": "formula_14", "formula_text": "RRMSE s (w ref , w pred ) = 1 n ⋆ n⋆ i=1 |w i ref -w i pred | 2 |w i ref | 2 1/2 ." }, { "formula_coordinates": [ 16, 172.6, 386.58, 266.8, 27.27 ], "formula_id": "formula_15", "formula_text": "x I - 1 d(I) J∈N (I)∩ 1,Nint x J = - 1 d(I) J∈N (I)∩ Nint,N x b J-N int ," }, { "formula_coordinates": [ 17, 213.06, 248.19, 185.89, 30.65 ], "formula_id": "formula_16", "formula_text": "x I = N b J=1 α J ϕ(∥x I -x b J ∥), 1 ≤ I ≤ N int ," }, { "formula_coordinates": [ 17, 106.2, 331.78, 229.99, 24.46 ], "formula_id": "formula_17", "formula_text": "M RBF α = x b , where M RBF I,J = ϕ(∥x b I -x b J ∥), 1 ≤ I, J ≤ N b ." }, { "formula_coordinates": [ 18, 246.1, 373.06, 119.8, 19.31 ], "formula_id": "formula_18", "formula_text": "M IJ = M c φ c I (x)φ c J (x)dx ." 
}, { "formula_coordinates": [ 18, 118.18, 435.35, 381.6, 30.58 ], "formula_id": "formula_19", "formula_text": "( U i k ) T M U j k i,j = Nc I,J=1 M c U i k (x c I )φ c I (x)U j k (x c J )φ c J (x)dx = M c P (U i k )(x)P (U j k )(x)dx ," }, { "formula_coordinates": [ 19, 150.25, 90.29, 311.51, 30.55 ], "formula_id": "formula_20", "formula_text": "L ((U, w) , (U ′ , w ′ )) = λ scalars L MSE (w, w ′ ) + λ fields d k=1 L MSE (U k , U ′ k ) ," }, { "formula_coordinates": [ 22, 180.5, 558.9, 242.35, 24.87 ], "formula_id": "formula_21", "formula_text": "C D C L ρ D ρ L MLP 6" } ]
2023-12-09
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b12", "b30", "b21", "b18", "b14", "b28", "b34", "b36", "b22", "b20", "b1", "b32", "b0", "b11", "b27", "b28", "b39", "b27", "b28", "b14", "b1" ], "table_ref": [], "text": "Non-autoregressive neural machine translation (NAT) models achieve significant acceleration in inference, with even better translation performance, compared to auto-regressive translation (AT) models on sentence-level MT (Gu et al., 2018, 2019; Stern et al., 2019; Ma et al., 2019; Lee et al., 2020; Huang et al., 2022; Shao and Feng, 2022). However, real applications (e.g., Google Translation and ChatGPT) typically need to understand and respond in discourse, which requires quick processing of document-level content. Different from sentence-level MT, document-level MT considers a much larger inter-sentential context and models discourse dependence, such as anaphora, ellipsis, and lexical cohesion (Voita et al., 2019). The sentence-by-sentence translation done by NAT models cannot possibly produce contextually coherent translations, hindering the usage of NAT models in real scenarios.\nDocument-level context enhances the translation quality of AT models significantly (Werlen et al., 2018; Maruf et al., 2019; Liu et al., 2020; Bao et al., 2021; Sun et al., 2022; Bao et al., 2023). However, no prior research has explored the possibility of applying NAT models to document-level MT, which leaves open research questions including: 1) Can NAT models leverage document context to improve translation quality? 2) Can NAT models generate translations with cross-sentential cohesion and coherence? and 3) Can NAT models achieve the same performance as AT models? To address these questions, we investigate the challenges and opportunities of NAT models on document-level MT.\nNAT models primarily face two challenges: multi-modality (Gu et al., 2018), which causes failure when one source has multiple possible translations, and misalignment (Saharia et al., 2020; Shao and Feng, 2022), which causes repetition and disorder in translations. Previous studies reduce modalities using knowledge distillation (Zhou et al., 2019) and improve alignment using new loss functions (Saharia et al., 2020; Shao and Feng, 2022; Huang et al., 2022). These methods have proven effective on the sentence level, while their efficacy is uncertain on the document level. First, document-level MT necessitates the modeling of cross-sentential coherence and discourse structure. Second, the extended length of input and output sequences increases the number of potential modalities. Last, the alignment between the source and target becomes harder because the enlarged input and output sequences increase the alignment space exponentially.\nIn this paper, we first assess recent NAT models on document-level MT tasks to understand their abilities and limitations, where we observe model failures and unstable performance. According to these observations, we further introduce sentence alignment to NAT models, which equips the encoder and decoder of the NAT models with group-attention (Bao et al., 2021), restricting the target-to-source attention to each aligned pair of target and source sentences. Since each target sentence is aligned with a source sentence, we do not need to consider the possibility of aligning the target sentence with other source sentences. 
Therefore, the alignment space between the input and output sequences is significantly reduced.\nExperiments on three benchmark datasets TED, News, and Europarl show that NAT models achieve translation speeds more than 30 times faster than their AT counterparts on document-level MT, but still lag behind in translation performance. Sentence alignment significantly enhances the translation performance of NAT models, closing the gap from 2.59 to 1.46 d-BLEU (document-level BLEU score) compared to the best AT results. To the best of our knowledge, we are the first to discuss the challenges and opportunities of NAT models in the context of document-level MT. We release our code and data to stimulate future research." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b11", "b19", "b25", "b14", "b9", "b30", "b12", "b4", "b26", "b11", "b11", "b19", "b10", "b29", "b28", "b8", "b7", "b37", "b22", "b38", "b20", "b1", "b32", "b0", "b3", "b16", "b35", "b6" ], "table_ref": [], "text": "Non-Autoregressive Machine Translation. Existing NAT models include three types: fully NAT models (Gu et al., 2018; Libovickỳ and Helcl, 2018; Qian et al., 2021; Huang et al., 2022), iterative NAT models (Ghazvininejad et al., 2019; Stern et al., 2019; Gu et al., 2019; Chan et al., 2020), and semi-autoregressive models (Ran et al., 2020). All these models generate tokens in parallel at some level. The fully NAT models generate tokens in a single forward pass, while the other two types generate tokens in multiple iterations or steps. In this paper, we take the fully NAT models to investigate document-level MT.\nNAT models face two fundamental challenges: the multi-modality issue and the misalignment issue. The multi-modality issue happens when one source has multiple possible translations, breaking the conditional independence assumption. For example, \"thank you .\" in English can be translated into \"Vielen Dank .\" or \"Danke .\" in German. The second output token could either be \"Dank\" or \".\", which cannot be determined without a condition on the first token. The multi-modality issue causes vanilla NAT models to fail on complex datasets. Previous research leverages knowledge distillation to reduce the modalities of a dataset, so that the conditional independence assumption is more likely to be satisfied (Gu et al., 2018). They generate a translation for each training sample using an AT model and then train NAT models on this generated training set. In this paper, we compare NAT models trained on both raw and knowledge-distilled data.\nThe misalignment issue elicits various NAT techniques. Vanilla NAT (Gu et al., 2018) uses an implicit token-alignment approach, enforcing a monotonic alignment between source and target by copying encoder outputs to decoder inputs. NAT+CTC (Libovickỳ and Helcl, 2018) introduces connectionist temporal classification (CTC) (Graves et al., 2006) to model the monotonic alignment explicitly. Others introduce non-monotonic alignment between source and target, such as n-gram matching (Shao et al., 2020; Shao and Feng, 2022), aligned cross-entropy (AXE) (Ghazvininejad et al., 2020), and order-agnostic cross-entropy (OAXE) (Du et al., 2021). In this paper, we investigate NAT models using the first two techniques. Document-Level Machine Translation. Previous methods on document-level MT are dominated by auto-regressive models, which can be categorized into two approaches. 
The first approach splits a document into sentences and translates each sentence using its context as additional inputs (Zhang et al., 2018; Maruf et al., 2019; Zheng et al., 2021). The second approach takes a document as a whole translation unit and translates the document in one beam search (Liu et al., 2020; Bao et al., 2021; Sun et al., 2022; Bao et al., 2023). In this paper, we follow the second approach for our investigation of NAT models.\nEfficient Model for Long Sequence. Recent advances in efficient models such as Longformer (Beltagy et al., 2020), Reformer (Kitaev et al., 2020), Linformer (Wang et al., 2020), and FlashAttention (Dao et al., 2022) improve the time and space complexity of the attention mechanism, which may affect the inference speed of both the auto-regressive and non-autoregressive MT models. In this paper, we focus on the relative inference acceleration of NAT compared to AT models with standard multi-head attention, leaving advanced attention mechanisms for the future." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "AT Baselines for Document-Level MT", "publication_ref": [ "b33", "b1", "b39" ], "table_ref": [], "text": "Document-level MT can be formulated as a seq2seq problem, where the input document x and output document y are represented as token sequences. The auto-regressive factorization is\n$p_\theta(y|x) = \prod_{t=1}^{T} p_\theta(y_t | y_{<t}, x), \quad (1)$\nwhere T denotes the number of target tokens and y_t conditions on all previous tokens y_{<t}. We choose two AT baselines.\nTransformer (Vaswani et al., 2017) is the standard encoder-decoder model, which is widely used as a baseline. We represent each document as a sequence of tokens and train the model to do the seq2seq mapping.\nG-Transformer (Bao et al., 2021) extends Transformer by separating the self-attention and cross-attention into sentence-level group-attention and document-level global-attention, as Figure 1 illustrates. The group-attention enables sentence alignment between input and output sequences.\nKnowledge Distillation (KD). In addition to the raw data, we also experiment with KD data for training NAT models (Zhou et al., 2019), following previous NAT studies. We use G-Transformer fine-tuned on sentence Transformer as the teacher to generate distilled document translations." }, { "figure_ref": [], "heading": "Existing NAT Models", "publication_ref": [ "b14", "b25", "b23", "b11", "b25", "b2", "b19", "b10", "b25", "b14" ], "table_ref": [], "text": "Due to the large number of NAT models (a non-exhaustive search shows more than 60 recent NAT models proposed between 2020 and 2023), we follow the common practice (Huang et al., 2022; Qian et al., 2021) to choose representative NAT models for our investigation, leaving more complex iterative methods, semi-autoregressive methods, and pre-training settings for future work. Specifically, we select representative fully NAT models for document-level experiments, where all the models are transformer-based and implemented in Fairseq (Ott et al., 2019).\nVanilla NAT (Gu et al., 2018) is the earliest NAT model, which factorizes translation into two parts\n$p_\theta(y|x) = p_\theta(T|x) \cdot \prod_{t=1}^{T} p_\theta(y_t | x), \quad (2)$\nwhere the first part predicts the length T of the target y, the second part predicts target tokens y_t given the length, and x denotes the source input. 
For simplicity, we take the model as a whole and use θ to denote all the parameters, which include the parameters of the length model and the translation model.\nAn implicit token alignment is used between the source and target by copying the outputs of the encoder to the inputs of the decoder; if the source and target have different lengths, the decoder inputs are interpolated uniformly. During training, the decoder is initialized with T token positions, where T denotes the real number of target tokens. During inference, the model first predicts a length T using p_θ(T|x), and then initializes the decoder with T token positions to predict target tokens.\nGLAT (Qian et al., 2021) improves vanilla NAT with a glancing mechanism, which adopts an adaptive glancing sampling strategy, exposing some target fragments to the decoder inputs so that the decoder can fit the training target more easily. The glancing mechanism reduces the difficulty of training target tokens independently and significantly improves performance on sentence-level MT.\nLatent-GLAT (Bao et al., 2022) further improves GLAT using latent variables, which are intended to alleviate the multi-modality problem. It represents the modality and alignment in the latent variables, dividing the learning of the target tokens into the modeling of the latent variables and the reconstruction of the target sequence.\nNAT+CTC (Libovickỳ and Helcl, 2018) applies connectionist temporal classification (CTC) (Graves et al., 2006) alignment to the decoder outputs\n$$p_\theta(y|x) = \sum_{a \in \beta(y)} \prod_{i=1}^{M} p_\theta(a_i \mid x), \qquad (3)$$\nwhere a denotes an alignment between y and the M reserved token positions in the decoder, and β(y) denotes all possible alignments. The alignment a is assumed to be conditionally independent given the source input x. The number of reserved token positions is usually set to a multiple of the source input length.\nThe CTC alignment in Eq. 3 requires a summation over all possible alignments, which is generally intractable. However, with the conditional independence assumption for a, the summation can be computed using dynamic programming.\nGLAT+CTC (Qian et al., 2021) combines GLAT with CTC alignment.\nDA-Transformer (Huang et al., 2022) represents the output alignment in a directed acyclic graph (DAG), which contains vertices and edges. A path in the DAG represents a possible alignment. It models translation as\n$$p_\theta(y|x) = \sum_{a \in \beta(y)} p_\theta(a|x)\, p_\theta(y|a, x), \qquad (4)$$\nwhere a denotes a path represented as a sequence of vertex indices and β(y) here denotes all paths having the same length as the target y. The right-hand side of Eq. 4 can be further expanded as\n$$p_\theta(a|x) = \prod_{i=1}^{M-1} p_\theta(a_{i+1} \mid a_i, x), \qquad (5)$$\n$$p_\theta(y|a, x) = \prod_{i=1}^{M} p_\theta(y_i \mid a_i, x), \qquad (6)$$\nwhere the a_i depend on each other in a linear chain and the y_i are independent of each other given a_i and x. Similar to CTC alignment, DA-Transformer also adopts dynamic programming to marginalize over all possible paths. The model provides three decoding strategies: greedy, lookahead, and beam search. In this paper, we choose lookahead decoding for balanced performance and inference speed." }, { "figure_ref": [ "fig_0" ], "heading": "NAT Models with Sentence Alignment", "publication_ref": [ "b1" ], "table_ref": [], "text": "We propose novel NAT models with sentence alignment, as shown in Figure 2. The sentence alignment is a key feature of the G-Transformer (Bao et al., 2021), which is implemented through a group-attention module (a schematic sketch of the group-tag attention masks follows).
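The group-attention can be pictured as sentence-local attention masks derived from per-token group tags, where the same tag on a source token and a target token marks the aligned sentence pair. The sketch below is our schematic reconstruction of that idea, not the released G-Transformer code; the tensor shapes, the convention of reserving tag 0 for padding, and all names are assumptions.

# Schematic sketch (assumption, not the released G-Transformer code) of how
# per-token group tags can induce sentence-local "group-attention" masks.
import torch

def group_attention_masks(src_tags: torch.Tensor, tgt_tags: torch.Tensor):
    """src_tags: [S], tgt_tags: [T]; integer sentence indices (1-based, 0 = pad).

    Returns boolean masks where True marks positions that may be attended to:
      self_mask  [T, T]: a target token attends only to targets of the same sentence,
      cross_mask [T, S]: a target token attends only to sources of the same sentence.
    No causal constraint is applied, which is what the NAT variants below require.
    """
    self_mask = tgt_tags.unsqueeze(1) == tgt_tags.unsqueeze(0)   # same target sentence
    cross_mask = tgt_tags.unsqueeze(1) == src_tags.unsqueeze(0)  # aligned source sentence
    not_pad = ~tgt_tags.eq(0).unsqueeze(1)                       # padded queries attend nowhere
    return self_mask & not_pad, cross_mask & not_pad

# Two source sentences (3 + 2 tokens) aligned to two target sentences (2 + 3 tokens).
src_tags = torch.tensor([1, 1, 1, 2, 2])
tgt_tags = torch.tensor([1, 1, 2, 2, 2])
self_mask, cross_mask = group_attention_masks(src_tags, tgt_tags)
print(self_mask.int())
print(cross_mask.int())

Following the figure caption, the lower layers would use such sentence-local masks while only the top layers attend globally across the document.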
The group-attention stabilizes self-attention and cross-attention in the model over long sequences. Inspired by this success, we adopt it in our NAT models to reduce the alignment space.\nSpecifically, we adapt the G-Transformer encoder and decoder layers by removing the causal mask from self-attention in the decoder layers. We replace the encoder and decoder layers in NAT models with G-Transformer layers and redesign the initial output to include the special begin-of-sentence token "<s>" and end-of-sentence token "</s>" for each target sentence.\nSubstituting the causal masking layer in G-Transformer is only a necessary change at the attention level to facilitate sentence alignment. Its effectiveness hinges on both model design and training loss. Specifically, we introduce per-sentence length prediction in G-Trans+GLAT, adapt the CTC loss to sentence alignment in G-Trans+GLAT+CTC, and modify the DAG design to restrict transitions to within each sentence in G-Trans+DA-Trans, as follows.\nG-Trans+GLAT predicts the target length per sentence instead of for the whole sequence. It factorizes the translation as\n$$p_\theta(y|x) = \prod_{j=1}^{K} \left[ p_\theta(T_j \mid x_j) \cdot \prod_{t=1}^{T_j} p_\theta(y_{j,t} \mid x_j, x) \right], \qquad (7)$$\nwhere K denotes the number of sentences. We predict the length T_j of the j-th target sentence and generate tokens y_{j,t} accordingly. In contrast to Eq. 2, where the source sentence corresponding to y_t is unknown, y_{j,t} is additionally conditioned on the source sentence x_j.\nFor each source sentence x_j, we predict the length T_j of the target sentence using a linear classifier based on the mean pool of the output features of the source sentence tokens. Consequently, we calculate the length T of the target sequence by aggregating the sentence lengths, T = \sum_{j=1}^{K} T_j.\nG-Trans+GLAT+CTC integrates the sentence alignment with the CTC alignment. The default CTC algorithm aggregates all possible latent alignments across the entire sequence, which may align a source sentence to a wrong target sentence (e.g., the first source sentence to the second target sentence). Such global alignment not only slows down the training process but also causes unstable model performance. Different from the global alignment in Eq. 3, we apply CTC alignment to each sentence\n$$p_\theta(y|x) = \prod_{j=1}^{K} \left[ \sum_{a_j \in \beta(y_j)} \prod_{i=1}^{M_j} p_\theta(a_{j,i} \mid x_j, x) \right], \qquad (8)$$\nwhere M_j denotes the number of reserved token positions for the j-th target sentence. Since the CTC alignment is restricted to each target sentence, the alignment space β(y_j) is enormously reduced compared to the β(y) in Eq. 3.\nG-Trans+DA-Trans introduces the sentence alignment into the output DAG of the DA-Transformer. The default DAG models the whole sequence, which allows transitions from a vertex in one sentence to a vertex in another sentence, so that the transition space grows linearly with the sequence length.\nTo address the issue, we enforce a constraint that isolates the vertex transitions of each sentence, forcing the path a within each sentence to start with a special vertex "<s>" and end with a special vertex "</s>". A transition between sentences only happens from the vertex "</s>" of the previous sentence to the vertex "<s>" of the current sentence.\nFormally, we use the same factorization as Eq. 4 but with a different collection β(y) of paths. At the implementation level, we simply mask the transition matrix to disable transitions from one sentence to another, so that the dynamic programming algorithm remains unchanged (a sketch of such a sentence-restricted mask is given below)."
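The sentence-restricted transition mask described in the last paragraph can be sketched as follows. This is our reconstruction for illustration, not the authors' implementation; the vertex layout, tag conventions, and names are assumptions. True entries mark allowed transitions: any forward move within a sentence, plus the single bridge from the "</s>" vertex of one sentence to the "<s>" vertex of the next.

# Illustrative sketch (our reconstruction, not the released code) of the
# sentence-restricted transition mask for the DAG of G-Trans+DA-Trans.
import torch

def sentence_block_transition_mask(vertex_tags: torch.Tensor,
                                   is_bos: torch.Tensor,
                                   is_eos: torch.Tensor) -> torch.Tensor:
    """vertex_tags: [M] sentence index of each DAG vertex (non-decreasing);
    is_bos / is_eos: [M] booleans marking the "<s>" / "</s>" vertex of a sentence.
    Returns an [M, M] boolean mask; True means the transition i -> k is allowed."""
    M = vertex_tags.numel()
    forward = torch.arange(M).unsqueeze(1) < torch.arange(M).unsqueeze(0)  # DAG: only move right
    same_sent = vertex_tags.unsqueeze(1) == vertex_tags.unsqueeze(0)
    next_sent = vertex_tags.unsqueeze(1) + 1 == vertex_tags.unsqueeze(0)
    # Within a sentence any forward transition is allowed; across sentences only
    # "</s>" of sentence j may jump to "<s>" of sentence j+1.
    bridge = is_eos.unsqueeze(1) & is_bos.unsqueeze(0) & next_sent
    return forward & (same_sent | bridge)

# Two sentences with 4 and 3 vertices; the first/last vertex of each is <s>/</s>.
tags   = torch.tensor([1, 1, 1, 1, 2, 2, 2])
is_bos = torch.tensor([1, 0, 0, 0, 1, 0, 0], dtype=torch.bool)
is_eos = torch.tensor([0, 0, 0, 1, 0, 0, 1], dtype=torch.bool)
mask = sentence_block_transition_mask(tags, is_bos, is_eos)
# transition scores would then be masked as: scores.masked_fill(~mask, float("-inf"))
print(mask.int())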
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b22", "b20" ], "table_ref": [], "text": "We evaluate the models using document-level MT benchmark (Maruf et al., 2019), which includes three datasets TED, News, and Europarl, representing three domains and various data scales for English-German translation. More details about each dataset and the preprocessing are in Appendix A.1. We follow Liu et al. (2020), evaluating the model performance in sentence-level BLEU score (s-BLEU) and document-level BLEU score (d-BLEU), which are explained in detail as Appendix A.2. We experiment on Base model (where Big model does not provide a stronger baseline) and evaluate the speedup using 1 GPU and 4 CPUs of a Tesla V100 environment as Appendix A.3." }, { "figure_ref": [], "heading": "Overall Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "As shown in Table 1, NAT models achieve high speedup but still suffer from a significant performance gap with their AT counterparts.\nSpeedup. The inference accelerations of NAT models on the document level are between 25x and 41x, which surpasses the accelerations on the sentence level (between 2x to 15x) by a big margin. Specifically, NAT and GLAT provide the biggest acceleration by around 40x. More complex models, such as Latent-GLAT, DA-Transformer, and G-Trans+GLAT, accelerate inference by approximately 30x. CTC-based models have lower accelerations between 20x and 27x. On average, the acceleration on the document level is about 30x, which means that document-level MT systems could potentially save 96% computational resources and energy consumption by using NAT models.\nPerformance. Though NAT models underperform G-Transformer, some outpace the Transformer. The lengthy text sequence challenges both the AT models and the NAT models, resulting in exceptionally low d-BLEU scores (<10) in some settings. We treat these low scores as model failures and investigate them in section 5.1.\nWith the help of sentence alignment, the performance gap between NAT models and AT baselines is largely reduced, especially when trained on KD data. For example, G-Trans+GLAT+CTC achieves an average d-BLEU of 28.05 on KD data, which is only 0.51 points lower than the d-BLEU of 28.56 of G-Transformer. However, its performance on Raw data is 4.14 points lower than G-Transformer, suggesting NAT models experience severe challenges with raw data. These results demonstrate that even though knowledge distillation and sentence alignment enhance NAT models largely, there is still a gap between the NAT models and the strongest AT baseline." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Breakdown Results", "publication_ref": [ "b14", "b14", "b13" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Sentence-level alignment. Sentence alignment substantially enhances the performance of NAT models. Specifically, for GLAT, sentence alignment enhances the performance on KD data from an average of 3.59 to 26.24 d-BLEU of G-Trans+GLAT. For GLAT+CTC, sentence alignment elevates the performance on KD data from 24.93 to 28.05 d-BLEU on average. This score is comparable to G-Transformer on KD data, leaving a minor gap of 0.51 d-BLEU on average. The better results of G-Trans+GLAT+CTC than G-Trans+GLAT suggest that sentence alignment complements output alignment techniques such as CTC.\nThe sentence alignment brings more benefits to models trained on raw data. 
As the average scores in Table 1 show, G-Trans+GLAT+CTC outperforms GLAT+CTC by 15.80 points on raw data and 3.12 points on KD data. G-Trans+DA-Trans outperforms DA-Transformer by 3.48 points on raw data and 0.73 points on KD data. These results suggest that sentence alignment mitigates the challenge brought by more modalities in raw data.\nToken-level alignment. NAT models with implicit alignment such as vanilla NAT, GLAT, and Latent-GLAT fail on document-level MT, resulting in messy translations (repetitions, wrong words, and meaningless phrases). Conversely, NAT models with explicit alignments, such as NAT+CTC, GLAT+CTC, and DA-Transformer, produce superior document-level performance. DA-Transformer delivers the best overall performance among existing NAT models, even outperforming Transformer on Europarl (KD). These results suggest that tokenlevel alignment plays an important role in NAT models to achieve good performance.\nHowever, we also observe that training with CTC loss on documents is not always stable, leading to occasional failure and a high variance in their performance (e.g., +/-2.66 d-BLEU on raw data and +/-0.84 on KD data for GLAT+CTC). The failure happens more frequently on raw data than on KD data. We speculate that this instability is caused by the increased alignment space on long sequences, which can be mitigated by sentence alignment. Experiments on G-Trans+GLAT+CTC show stable training and lower variance in performance (+/-0.19 d-BLEU). These results suggest that tokenlevel alignment alone is not enough to achieve stable training and good performance.\nKnowledge distillation. Intuitively, documents have more modalities because they are much longer than sentences. If it is the case, knowledge distillation will become more critical for training NAT models on document-level MT than on sentencelevel MT. We compare NAT models trained with and without knowledge distillation.\nAs Table 1 shows, NAT models trained on KD data generally outperform the same model trained on raw data. The improvement is especially significant for NAT models with explicit alignment. For example, DA-Transformer obtains an average of 26.84 d-BLEU on KD data, which is 5.24 points higher than 21.60 d-BLEU on raw data. In contrast, DA-Transformer on sentence-level MT achieves similar results on raw data and on KD data (Huang et al., 2022). The results suggest that compared to sentences, documents have more severe multimodality issues.\nAlthough knowledge distillation enhances the performance of NAT models, it also sets a ceiling to the performance on document-level MT. Comparing the performance of G-Transformer on KD data to raw data, we see that KD data downgrades the performance by about 1 d-BLEU on average (from 29.51 to 28.56). These results differ from previous results on sentence-level MT (Huang et al., 2022), where knowledge distillation maintains or even enhances the performance of AT models. The discrepancy calls for further research on techniques to reduce the modalities on the document level. Speedup on different sequence lengths. We evaluate GLAT and GLAT+CTC on various sequence lengths, including the single sentence and segments with a maximum length of 64, 128, 256, and 512. As Figure 3 shows, the speedup displays consistent trends among different datasets and NAT models. The GLAT generally has a higher speedup than GLAT-CTC, especially when the segment length goes above 256. 
The speedups on TED and News are almost identical, which is expected because the time costs are supposed to be irrelevant to the data domains. The trends suggest that we benefit more from the inference acceleration on longer documents, where document-level MT tasks provide NAT models with the best scenario.\nSpeedup on different batch sizes. The previous study (Helcl et al., 2022) reports that NAT models have limited acceleration on the sentence-level MT in parallel inference settings. We evaluate the models in the settings with different batch sizes, including 1, 2, 4, and 8 instances per batch. Given that document-level MT sequences are much longer than sentence-level MT sequences, we do not evaluate using a batch size larger than 8.\nAs Figure 4 shows, when we increase the batch size from 1 to 8, the overall speedup of GLAT decreases from 40x to 9x. However, if we consider a more strict calculation of the speedup, excluding the time for initializing the models (which takes almost constant time) from the total evaluation time, the inference speedup of GLAT is 125x and 44x for the batch size of 1 and 8, respectively. These results suggest that although the speedup ratio decreases for bigger batch sizes, NAT models on documentlevel MT still show significant acceleration in the parallel inference settings." }, { "figure_ref": [], "heading": "The Challenge of Long Sequence", "publication_ref": [], "table_ref": [], "text": "The long input and output in document-level MT bring unexplored challenges to NAT models. In section 4, we learn that NAT models have a significant performance gap with their AT counterparts, and various NAT models even fail in some settings. We investigate these challenges in this section, leaving other discussions in Appendix D." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4" ], "heading": "The Failure of NAT Models", "publication_ref": [ "b1", "b1" ], "table_ref": [ "tab_0" ], "text": "Previous study (Bao et al., 2021) suggests that Transformer fails on long sequences because of local minima of loss values during the training process. We first investigate this possibility for the cause of the failure on NAT models. We evaluate GLAT, GLAT+CTC, Transformer, and G-Transformer on different lengths of sequences, as shown in Figure 5a. Overall, the four models produce different patterns of trends. The d-BLEU score of the Transformer rises as long as the sequence length increases until the failure happens at the length of 512 tokens. In contrast, the d-BLEU scores of the GLAT and GLAT+CTC descend progressively as long as the sequence length increases, which suggests a performance decline instead of a training failure on NAT models. Further evidence on the bigger dataset Europarl confirms that the low performance of NAT models is different from the failure of Transformer. Bao et al. (2021) suggests that the local minima encountered by Transformer can be escaped by using a bigger dataset, resulting in normal scores on Europarl. We evaluate Transformer, GLAT, and GLAT+CTC on Europarl, obtaining d-BLEU scores of 33.41, 2.13, and 0.00, respectively. We could see that GLAT and GLAT+CTC still give low scores on the bigger dataset, suggesting a different cause from the local minima.\nWe look into the d-BLEU score details, where BLEU-n (n ∈ [1, 2, 3, 4]) denotes the precision on n-gram. As the d-BLEU scores on GLAT in Figure 5b shows, the BLEU-3/4 decreases rapidly, reaching almost 0.0 at the lengths of 256 and 512. 
The BLEU-1 remains relatively high at 27.3 on the length of 256, resulting in messy translations, where the generated tokens are related to the content but are repeated, disordered, or in the wrong collocation as shown in Table 2. In comparison, the d-BLEU scores on GLAT+CTC in Figure 5c decrease slowly, where the BLEU-1 even increases on the length of 512. We speculate the rapid decrease in BLEU-3/4 on GLAT is caused by the multi-modality and misalignment issues, which can be mitigated by explicit alignments, such as CTC.\nThe exceptional zero scores. Table 1 reveals some unexpected results. Both NAT+CTC and GLAT+CTC score a 0.00 d-BLEU on the raw Europarl dataset, a performance notably inferior to that of NAT and GLAT. This unprecedented anomaly may stem from the challenges of applying CTC loss on long sequences. The table further indicates that both combinations, NAT/GLAT+CTC and NAT/GLAT, experience training failures on the raw Europarl dataset. These failures are likely due to multi-modality and misalignment issues. These results empirically demonstrate that, while CTC loss can be effective, its application to documentlevel training is not consistently stable. We hypothesize that this instability arises from the expanded alignment space between lengthy input and output sequences, as detailed in the token-level alignment Source: companies with a high percentage of floating rate debt stand to lose the most, Goldman said. outside pure stock plays, consumers stand to benefit as well through the rising dollar.\nTarget: Unternehmen mit einem hohen Anteil an flexiblen Zinsen werden am meisten verlieren, sagte Goldman. außerhalb der reinen Aktienspiele werden Verbraucher ebenfalls durch den steigenden Dollar profitieren.\nGLAT: Unternehmen mit einem hohen , freien freien freien erverlieren die die meisten meisten , , . . . te te dazu kräften profitieren profitieren profitieren profitieren profitieren durch durch steigenden steigenden profitieren profitieren .\nGLAT+CTC: Unternehmen mit hohen der Schulden , die verlieren . außerhalb spielt Verbraucher den profitieren .\nTable 2: GLAT and GLAT+CTC produce good translation at the beginning but downgrade later with repetitions and missing translations.\ndiscussion in Section 4.3." }, { "figure_ref": [], "heading": "Document Context", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We compare AT and NAT models with document context to the Transformer baseline without document context (trained on sentence) in Appendix B. The results suggest that document context enhances AT model (G-Transformer) by 0.70 s-BLEU. However, NAT models with document context still underperform the Transformer baseline, indicating that current NAT models have a limited ability to utilize document context.\nWe further apply an ablation study on the document context to quantify its contribution. As Table 3 illustrates, the performance of G-Trans+GLAT+CTC without document context drops by approximately 0.28 s-BLEU on average over the three benchmarks. Specifically, the targetside context contributes merely 0.01, while the source-side context contributes 0.27. The context contributions in G-Trans+GLAT+CTC are less than that in G-Transformer (0.23 and 0.70 s-BLEU on target and source contexts, respectively). The minor contribution from the target-side context is expected, given that NAT models predict target sentences independently. 
The relatively low contribution of source-side context indicates that NAT models do not fully exploit the source-side contextual information.\nDoes source context information leak into adjacent sentences? We observe serious repetitions in the translations generated by NAT models without sentence alignment, raising the concern that the information in a source sentence may leak into its adjacent sentences during translation. We measure the repetitions of NAT models with sentence alignment, where the sentence boundaries are assured by the model design. We find that these models do not generate obvious cross-sentential repetitions. For example, on the TED test set, G-Trans+GLAT generates translations with repetition ratios of 0.14 and 0.02 for 1-grams and 2-grams, respectively, which are almost identical to the ratios of the reference." }, { "figure_ref": [], "heading": "Discourse Phenomena", "publication_ref": [ "b34", "b1" ], "table_ref": [ "tab_2" ], "text": "We assess the discourse ability of NAT models using a human-annotated test suite in English-Russian MT (Voita et al., 2019). We train both the AT baselines and NAT models using the 1.5M document pairs (each document containing only 4 sentences). In contrast to previous work (Bao et al., 2021), we do not use the additional 6M sentence pairs for training, in order to highlight the models' discourse capabilities.\nAs Table 4 shows, the NAT models significantly underperform G-Transformer across the four discourse phenomena. The performance of G-Trans+GLAT+CTC matches the Transformer baseline (without document context) on deixis and lexical cohesion (lexcoh), but excels on the ellipsis of inflection (el.infl.) and ellipsis of verb phrase (el.VP). G-Trans+DA-Trans achieves a relatively higher deixis score than G-Trans+GLAT+CTC because its DAG links model the target-side dependence to some extent. These results suggest that current NAT models have some discourse abilities but still struggle with handling discourse phenomena." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "We investigated NAT models on document-level MT, revealing that the multi-modality and misalignment issues affect documents more severely than sentences. We proposed NAT models with sentence alignment, reducing the possible alignment space and achieving the best results across three benchmarks. Our experiments show that NAT models significantly accelerate text generation in documents while their performance still lags behind their AT counterparts. Further analysis shows that fully NAT models underutilize document context, leading to loose discourse relations.\nAs the first study of NAT models on document-level MT, we hope this work can stimulate future research on reducing the modalities, exploiting document contexts, and modeling discourse dependence." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We do not enumerate all recent NAT models. For the purpose of our investigation, we only evaluate the pure (fully) NAT models, leaving other NAT models, such as semi-autoregressive and iterative NAT models, out of our scope." }, { "figure_ref": [], "heading": "C Case Study.", "publication_ref": [], "table_ref": [], "text": "As the case in Table 7 shows, even though the source is not a lengthy document, sentence alignment enhances the translation quality significantly.
Specifically, on the one hand, G-Trans+GLAT and G-Trans+GLAT+CTC estimate the target length more accurately than GLAT and GLAT+CTC because of their fine-grained length prediction on each target sentence. On the other, G-Trans+GLAT and G-Trans+GLAT+CTC show consistent translation quality between the beginning and the end of the document, while GLAT and GLAT+CTC show good quality at the beginning and poor quality at the end. This case illustrates the challenge of long text sequences to NAT models that the translation quality degrades as long as the distance to the beginning of the document increases, and demonstrates the advantage of sentence alignment to stop the degradation." }, { "figure_ref": [], "heading": "D More Discussion", "publication_ref": [], "table_ref": [], "text": "Can a larger model handle longer input sequences? A larger model does not necessarily solve the longer input sequence issue given the same amount of training data. For pre-trained language models, a larger model typically shows a stronger ability to handle long sequences because the increased parameters enable the model to capture more complex long-context dependencies. However, for a non-pretraining setting, a larger model does not necessarily show better performance on the current document-level MT benchmarks. The longer input makes the model more likely to overfit and a larger model will make it worse given the limited training corpus. Take the AT baseline G-Transformer as an example. When we increase the model size from Base to Big, the d-BLEU scores decline by 0.36, 1.41, and 0.10 on TED, News, and Europarl, respectively. When we further increase the model size to Large, the d-BLEU scores further decline by 16.53, 8.49, and 0.56 on TED, News, and Europarl, respectively.\nCan the model handle more complex sentence alignments, such as one source sentence divided into multiple target simple sentences? Actually, the current model can handle the case. The sentence-level alignment between the source and target is achieved by the same group tag assigned to the source tokens and the target tokens. We can treat the multiple simple sentences as a whole translation unit, assigning them the same group tag to map them to a single complex sentence." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank the anonymous reviewers for their valuable feedback. This work is funded by the China Strategic Scientific and Technological Innovation Cooperation Project (grant No. 2022YFE0204900) and the National Natural Science Foundation of China (grant NSFC No. 62161160339). Zhiyang Teng is partially supported by CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ-2021-046A)." }, { "figure_ref": [], "heading": "A Experimental Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Datasets", "publication_ref": [ "b22", "b17", "b1" ], "table_ref": [], "text": "We evaluate the models using document-level MT benchmark (Maruf et al., 2019), which includes three datasets covering three domains and various data scales for English-German translation. TED is from IWSLT17, which is transcribed from TED talks that each talk forms a document. tst2016-2017 is used to test the model, and the rest for development.\nNews is from News Commentary v11 for training set. 
For testing and development sets, it uses newstest2016 and newstest2015, respectively.\nEuroparl is from Europarl v7, where the training, development, and testing sets are randomly split.\nTable 5 shows the detailed statistics. We preprocess the documents by tokenizing and truecasing using MOSES (Koehn et al., 2007) and applying BPE with 30,000 merge operations. We follow Bao et al. (2021) to split each document into segments with a maximum length of 512 tokens." }, { "figure_ref": [], "heading": "A.2 Evaluation Metrics", "publication_ref": [ "b20", "b24" ], "table_ref": [], "text": "We follow Liu et al. (2020), evaluating the model performance in s-BLEU and d-BLEU.\ns-BLEU is calculated on each pair of sentences, which are obtained using the sentence alignment between source and target documents.\nd-BLEU is calculated on each pair of segments, taking the whole segment as a translation unit to compute the BLEU score.\nWe calculate the BLEU score (Papineni et al., 2002) using sacreBLEU on the detokenized cased words." }, { "figure_ref": [], "heading": "A.3 Model Configuration", "publication_ref": [], "table_ref": [], "text": "All the experiments are run on the Base model, which has 6 layers, 8 heads, 512 embedding dimensions, and 2048 hidden dimensions. We train the models on 4 Tesla V100/A100 GPUs for both AT and NAT. By default, we use the Tesla V100, but in case out-of-memory happens, we switch to Tesla A100 to re-train the model. We do not change the code of existing NAT models, and we obtain the default training and testing arguments from their official code. We update the arguments maxsource-positions and max-target-positions to fit the enlarged input and output sequences. We run all main experiments three times and report the median.\nWe assess the speedup on the test set using a batch size of 1 within a virtual environment equipped with 1 GPU and 4 CPUs of a Tesla V100." }, { "figure_ref": [], "heading": "B Contribution of Document Context", "publication_ref": [ "b37", "b38", "b1" ], "table_ref": [], "text": "Previous studies demonstrate that document context can significantly enhance MT performance (Zhang et al., 2018;Zheng et al., 2021;Bao et al., 2021). We evaluate its contribution as shown in Table 6. Comparing Transformer on sentencelevel MT and G-Transformer on document-level MT, we can see that document context enhances s-BLEU by 0.49, 0.55, and 1.05 on TED, News, and Europarl, respectively. However, the best NAT results produced by G-Trans+GLAT+CTC on KD data are still lower than the Transformer baseline by 0.47, 0.88, and 0.77 on TED, News, and Europarl," } ]
Non-autoregressive translation (NAT) models achieve comparable performance and superior speed compared to auto-regressive translation (AT) models in the context of sentence-level machine translation (MT). However, their abilities are unexplored in document-level MT, hindering their usage in real scenarios. In this paper, we conduct a comprehensive examination of typical NAT models in the context of document-level MT and further propose a simple but effective design of sentence alignment between source and target. Experiments show that NAT models achieve high acceleration on documents, and sentence alignment significantly enhances their performance. However, current NAT models still have a significant performance gap compared to their AT counterparts. Further investigation reveals that NAT models suffer more from the multi-modality and misalignment issues in the context of document-level MT, and current NAT models struggle with exploiting document context and handling discourse phenomena.
Non-Autoregressive Document-Level Machine Translation
[ { "figure_caption": "Figure 2 :2Figure 1: G-Transformer with sentence alignment between source and target documents, where the low layers use the group-attention and only the top 2 layers use the global-attention.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Speedup rises as long as the sequence length grows, evaluated on TED and News.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Speedup with various batch sizes evaluated on News, where the \"/ex\" conducts a strict calculation of speedup by excluding the time cost for model initialization from both the AT baseline and the NAT models.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "GLAT+CTC detailed BLEU scores on different sequence lengths.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: NAT vs. AT models on trends evaluated on News, where Transformer fails abruptly at the length of 512 while NAT models decrease progressively.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Table 7 :7Case study. GLAT and GLAT-CTC show good translation quality at the beginning but poor quality at the end of the document. G-Trans+GLAT and G-Trans+GLAT+CTC show more consistent translation quality for sentences at different positions.respectively, not to mention the G-Transformer baseline. These results indicate that NAT models have a limited ability to utilize document context.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Main results on raw data (Raw) and knowledge distilled data (KD), where the failures are marked and investigated in section 5.1. We use the official code from the respective papers of NAT models, except NAT+CTC, which is implemented by ourselves based on the GLAT code. \"*\" denotes a statistical significance at the level of p < 0.01 using a t-test, compared to the corresponding baseline NAT model without sentence alignment.", "figure_data": "TEDNewsEuroparlAverageMethod(d-BLEU)(d-BLEU)(d-BLEU)(d-BLEU)SpeedRawKDRawKDRawKDRaw KD-upAT BaselinesG-Transformer (Bao et al., 2021)27.23 26.4227.22 26.3834.09 32.8729.51 28.56-Transformer (Vaswani et al., 2017)0.6926.040.230.4333.41 26.4311.44 17.631.0xExisting NAT ModelsVanilla NAT (Gu et al., 2018)0.450.340.120.611.302.430.62 1.1341.2xGLAT (Qian et al., 2021)1.770.030.012.562.138.171.30 3.5940.0xLatent-GLAT (Bao et al., 2022)2.101.870.302.305.264.252.55 2.8129.2xNAT+CTC (Libovickỳ and Helcl, 2018)21.54 24.9815.95 24.030.0031.5812.50 26.86 27.7xGLAT+CTC (Qian et al., 2021)18.49 25.3110.23 20.010.0029.479.57 24.93 26.3xDA-Transformer (Huang et al., 2022)20.47 25.0213.99 23.3730.34 32.1421.60 26.84 30.8xNAT Models with Sentence AlignmentG-Trans+GLAT (ours)19.96* 24.23* 15.14* 23.04* 25.67* 31.45* 20.26 26.24 30.0xG-Trans+GLAT+CTC (ours)24.09* 26.31* 21.68* 25.61* 30.35* 32.24* 25.37 28.05 20.0xG-Trans+DA-Trans (ours)23.45* 25.73* 21.43* 24.70*30.36 32.2925.08 27.57 25.1x", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Impact of document context, evaluated in s-BLEU. ♢ -the scores are from the paper report, where the model is trained on the raw datasets. 
The NAT models are trained on the KD datasets.", "figure_data": "Method (s-BLEU)TED News Europarl DropG-Transformer ♢25.12 25.5232.39--target-side context25.05 25.4132.16-0.14-source-side context24.56 24.5831.39-0.70G-Trans+GLAT+CTC 24.16 24.0930.57--target-side context24.31 24.0030.47-0.01-source-side context23.96 24.0030.02-0.27", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Discourse phenomena. el.infl. -ellipsis of inflection. el.VP -ellipsis of verb phrase. lexcoh -lexical cohesion. ♡ -Transformer baseline trained on sentences. ♢ -G-Transformer baseline trained on documents.", "figure_data": "Methoddeixis el.infl. el.VP lexcohTransformer (sent) ♡50.053.028.445.9G-Transformer ♢87.182.479.858.6G-Trans+GLAT+CTC50.055.246.645.9G-Trans+DA-Trans56.733.021.045.2", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
Guangsheng Bao; Zhiyang Teng; Hao Zhou; Jianhao Yan; Yue Zhang
[ { "authors": "Guangsheng Bao; Zhiyang Teng; Yue Zhang", "journal": "", "ref_id": "b0", "title": "Target-side augmentation for document-level machine translation", "year": "2023" }, { "authors": "Guangsheng Bao; Yue Zhang; Zhiyang Teng; Boxing Chen; Weihua Luo", "journal": "", "ref_id": "b1", "title": "G-transformer for document-level machine translation", "year": "2021" }, { "authors": "Yu Bao; Hao Zhou; Shujian Huang; Dongqi Wang; Lihua Qian; Xinyu Dai; Jiajun Chen; Lei Li", "journal": "", "ref_id": "b2", "title": "latent-glat: Glancing at latent variables for parallel text generation", "year": "2022" }, { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b3", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "William Chan; Chitwan Saharia; Geoffrey Hinton; Mohammad Norouzi; Navdeep Jaitly", "journal": "", "ref_id": "b4", "title": "Imputer: Sequence modelling via imputation and dynamic programming", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b5", "title": "", "year": "" }, { "authors": "Tri Dao; Dan Fu; Stefano Ermon; Atri Rudra; Christopher Ré", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b6", "title": "Flashattention: Fast and memory-efficient exact attention with io-awareness", "year": "2022" }, { "authors": "Cunxiao Du; Zhaopeng Tu; Jing Jiang", "journal": "PMLR", "ref_id": "b7", "title": "Orderagnostic cross entropy for non-autoregressive machine translation", "year": "2021" }, { "authors": "Marjan Ghazvininejad; Vladimir Karpukhin; Luke Zettlemoyer; Omer Levy", "journal": "PMLR", "ref_id": "b8", "title": "Aligned cross entropy for non-autoregressive machine translation", "year": "2020" }, { "authors": "Marjan Ghazvininejad; Omer Levy; Yinhan Liu; Luke Zettlemoyer", "journal": "", "ref_id": "b9", "title": "Mask-predict: Parallel decoding of conditional masked language models", "year": "2019" }, { "authors": "Alex Graves; Santiago Fernández; Faustino Gomez; Jürgen Schmidhuber", "journal": "", "ref_id": "b10", "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "year": "2006" }, { "authors": "Jiatao Gu; James Bradbury; Caiming Xiong; O K Victor; Richard Li; Socher", "journal": "", "ref_id": "b11", "title": "Non-autoregressive neural machine translation", "year": "2018" }, { "authors": "Jiatao Gu; Changhan Wang; Junbo Zhao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Levenshtein transformer", "year": "2019" }, { "authors": "Jindřich Helcl; Barry Haddow; Alexandra Birch", "journal": "", "ref_id": "b13", "title": "Non-autoregressive machine translation: It's not as fast as it seems", "year": "2022" }, { "authors": "Fei Huang; Hao Zhou; Yang Liu; Hang Li; Minlie Huang", "journal": "", "ref_id": "b14", "title": "Directed acyclic transformer for nonautoregressive machine translation", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b15", "title": "", "year": "" }, { "authors": "Nikita Kitaev; Łukasz Kaiser; Anselm Levskaya", "journal": "", "ref_id": "b16", "title": "Reformer: The efficient transformer", "year": "2020" }, { "authors": "Philipp Koehn; Hieu Hoang; Alexandra Birch; Chris Callison-Burch; Marcello Federico; Nicola Bertoldi; Brooke Cowan; Wade Shen; Christine Moran; Richard Zens", "journal": "", "ref_id": "b17", "title": "Moses: Open source toolkit for statistical machine translation", "year": "2007" }, { "authors": "Jason Lee; 
Elman Mansimov; Kyunghyun Cho", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement", "year": "2020" }, { "authors": "Jindřich Libovickỳ; Jindřich Helcl", "journal": "", "ref_id": "b19", "title": "End-toend non-autoregressive neural machine translation with connectionist temporal classification", "year": "2018" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b20", "title": "Multilingual denoising pretraining for neural machine translation", "year": "2020" }, { "authors": "Xuezhe Ma; Chunting Zhou; Xian Li; Graham Neubig; Eduard Hovy", "journal": "", "ref_id": "b21", "title": "Flowseq: Nonautoregressive conditional sequence generation with generative flow", "year": "2019" }, { "authors": "Sameen Maruf; André Ft Martins; Gholamreza Haffari", "journal": "", "ref_id": "b22", "title": "Selective attention for context-aware neural machine translation", "year": "2019" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "", "ref_id": "b23", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b24", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Lihua Qian; Hao Zhou; Yu Bao; Mingxuan Wang; Lin Qiu; Weinan Zhang; Yong Yu; Lei Li", "journal": "", "ref_id": "b25", "title": "Glancing transformer for non-autoregressive neural machine translation", "year": "2021" }, { "authors": "Yankai Qiu Ran; Peng Lin; Jie Li; Zhou", "journal": "", "ref_id": "b26", "title": "Learning to recover from multi-modality errors for non-autoregressive neural machine translation", "year": "2020" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Mohammad Norouzi", "journal": "", "ref_id": "b27", "title": "Non-autoregressive machine translation with latent alignments", "year": "2020" }, { "authors": "Chenze Shao; Yang Feng", "journal": "", "ref_id": "b28", "title": "Non-monotonic latent alignments for ctc-based non-autoregressive machine translation", "year": "2022" }, { "authors": "Chenze Shao; Jinchao Zhang; Yang Feng; Fandong Meng; Jie Zhou", "journal": "", "ref_id": "b29", "title": "Minimizing the bagof-ngrams difference for non-autoregressive neural machine translation", "year": "2020" }, { "authors": "Mitchell Stern; William Chan; Jamie Kiros; Jakob Uszkoreit", "journal": "", "ref_id": "b30", "title": "Insertion transformer: Flexible sequence generation via insertion operations", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b31", "title": "", "year": "" }, { "authors": "Zewei Sun; Mingxuan Wang; Hao Zhou; Chengqi Zhao; Shujian Huang; Jiajun Chen; Lei Li", "journal": "", "ref_id": "b32", "title": "Rethinking document-level neural machine translation", "year": "2022" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b33", "title": "Attention is all you need", "year": "2017" }, { "authors": "Elena Voita; Rico Sennrich; Ivan Titov", "journal": "", "ref_id": "b34", "title": "When a good 
translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion", "year": "2019" }, { "authors": "Sinong Wang; Belinda Z Li; Madian Khabsa; Han Fang; Hao Ma", "journal": "", "ref_id": "b35", "title": "Linformer: Self-attention with linear complexity", "year": "2020" }, { "authors": "Lesly Miculicich Werlen; Dhananjay Ram; Nikolaos Pappas; James Henderson", "journal": "", "ref_id": "b36", "title": "Document-level neural machine translation with hierarchical attention networks", "year": "2018" }, { "authors": "Jiacheng Zhang; Huanbo Luan; Maosong Sun; Feifei Zhai; Jingfang Xu; Min Zhang; Yang Liu", "journal": "", "ref_id": "b37", "title": "Improving the transformer translation model with document-level context", "year": "2018" }, { "authors": "Zaixiang Zheng; Xiang Yue; Shujian Huang; Jiajun Chen; Alexandra Birch", "journal": "", "ref_id": "b38", "title": "Towards making the most of context in neural machine translation", "year": "2021" }, { "authors": "Chunting Zhou; Graham Neubig; Jiatao Gu", "journal": "", "ref_id": "b39", "title": "Understanding knowledge distillation in non-autoregressive machine translation", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 117.92, 172.85, 171.95, 33.58 ], "formula_id": "formula_0", "formula_text": "p θ (y|x) = T t=1 p θ (y t |y <t , x),(1)" }, { "formula_coordinates": [ 3, 106.24, 707.88, 183.62, 33.58 ], "formula_id": "formula_1", "formula_text": "p θ (y|x) = p θ (T |x) • T t=1 p θ (y t |x),(2)" }, { "formula_coordinates": [ 3, 349.15, 541.97, 175.99, 34.42 ], "formula_id": "formula_2", "formula_text": "p θ (y|x) = a∈β(y) M i=1 p θ (a i |x),(3)" }, { "formula_coordinates": [ 4, 101.31, 350.75, 188.56, 22.79 ], "formula_id": "formula_3", "formula_text": "p θ (y|x) = a∈β(y) p θ (a|x)p θ (y|a, x),(4)" }, { "formula_coordinates": [ 4, 117.91, 450.51, 171.96, 33.71 ], "formula_id": "formula_4", "formula_text": "p θ (a|x) = M -1 i=1 p θ (a i+1 |a i , x),(5)" }, { "formula_coordinates": [ 4, 107.32, 488.84, 182.55, 33.71 ], "formula_id": "formula_5", "formula_text": "p θ (y|a, x) = M i=1 p θ (y i |a i , x),(6)" }, { "formula_coordinates": [ 4, 325.79, 573.19, 199.22, 29.93 ], "formula_id": "formula_6", "formula_text": "p θ (y|x) = K j=1   p θ (Tj|xj) • T j t=1 p θ (yj,t|xj, x)   , (7)" }, { "formula_coordinates": [ 5, 96.34, 471.17, 193.39, 31.52 ], "formula_id": "formula_7", "formula_text": "p θ (y|x) = K j=1   a j ∈β(y j ) M j i=1 p θ (aj,i|xj, x)   ,(8)" } ]
10.1103/PhysRevPhysEducRes.18.010141
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b5", "b6", "b7", "b6" ], "table_ref": [], "text": "Using artificial intelligence (AI) machine learning (ML) models in education can have many potential benefits, including inferring student knowledge, computer adaptive testing, predicting possible school dropouts and providing insight into drivers of student boredom [1], to name just a few. A key hurdle to developing models is having enough labelled data (i.e., graded student responses) necessary to fine-tune models, which typically require thousands of examples (at a minimum) to reach acceptable levels of performance. Creating large, labelled datasets of educational data is challenging as most education data requires complex, holistic or subjective judgements to be made (e.g., evaluating student responses to open-ended questions) [2]. For these types of tasks, raters must use judgement, making inter-rater reliability (IRR) the key measure of data quality [3]. Achieving sufficiently high IRR is traditionally achieved by using expert raters to assign rubric-based scores, which can be expensive and time-consuming.\nA potentially promising approach for labelling large-scale data sets for educational AI models is working with crowdworkers but asking them to make preference-based or comparative judgements, rather than categorical ratings. Research from machine learning and the social sciences suggests that this approach can be easier for raters and improve inter-rater reliability [4,5]. Reinforcement learning from human feedback (RLHF) led to a decisive improvement in ChatGPT and is now rapidly being adopted as an approach [6]. RLHF is based on the insight that human feedback is critical to improving model outputs when aspects of these outputs are hard to define (for example, how helpful, honest, or harmful outputs are [6]. Researchers have begun to converge on the benefits of asking human raters to make relative judgements about which output they preferred (called preference-based rating). Making relative judgements is easier for raters when hard-to-define criteria are being used to rate outputs and generally leads to high inter-rater reliability [7].\nInterestingly, this dovetails with well-established work from educational psychology, which has established that there can be numerous benefits to making relative judgements when evaluating student work. Comparative judgement (the term used for making relative rather than absolute judgements) has a long history in psychology and is widely understood to improve rater accuracy and inter-rater reliability, lower training requirements and be less cognitively strenuous for raters than making absolute judgements [8]. Comparative judgement involves asking raters to compare and rank two or more pieces of work, rather than assigning them a score based on a predetermined scale or standard (as raters need to do when making absolute judgements) [7]. While there is substantial research both on using comparative judgement to evaluate educational data, and in using preference-based rating approaches when asking crowdworkers to make complex and or holistic judgements, there is very little research that combines these approaches. This paper aims to address that gap." 
}, { "figure_ref": [], "heading": "PRIOR WORK", "publication_ref": [ "b2", "b8", "b3", "b9", "b10", "b11", "b12", "b5", "b5", "b12", "b12", "b13", "b14", "b15", "b12", "b6", "b6", "b16", "b7", "b7" ], "table_ref": [], "text": "Crowdsourcing, which involves \"the outsourcing of a piece of work to a crowd of people via an open call for contributions\" is commonly used in machine learning research [3]. The benefits of crowdsourcing are numerous, including speed, scale, the ability to obtain judgments from much larger and more representative portions of the population, and the potential to harness the 'wisdom of crowd' effect [9].Crowdsourcing can include both paid and volunteer work and be done by experts or non-experts, but as crowdsourcing has become a more established method, the practice of contracting individuals to complete labelling, evaluation, and other tasks on online platforms, also known as crowdwork, has become increasingly common [4]. However, labelling examples of more complex tasks, such as evaluating the truthfulness or toxicity of a statement, or determining which product review is more helpful, are both more time-consuming to complete, and more difficult to achieve high levels of accuracy and reliability [10]. This has raised questions about how to improve the accuracy and interrater reliability of non-expert evaluation of these more complex tasks. Various ways to improve the accuracy of human judgements on complex tasks have been proposed, including offering raters initial practice and feedback, financial incentivization, pooling of judgments, and selectively screening raters to ensure attentiveness and quality [11,12]. Asking raters to make relative rather than absolute judgements (preference-based rating in machine learning or comparative judgement in educational psychology) is another way to improve rater accuracy and inter-rater reliability.\nRecent research has used reinforcement learning from human feedback (RLHF) to improve model performance across a range of tasks [13,6]. In the case of InstructGPT, one of the models behind chat GPT, RLHF was used during fine-tuning to better align model output with the type of response that users were likely to prefer. As part of this approach, labelers and researchers were asked to rank model outputs from best to worst based on which response they preferred. This data was then used to train a reward model, which was used to fine-tune the model to the stated preferences of the human labelers [6]. Incorporating crowdworkers preferences (i.e. asking them to make relative judgements) of the quality and appropriateness of model outputs was a key component of the increased relevance and naturalness seen in ChatGPT as compared to GPT-3, which uses the same underlying statistical model [13]. Furthermore, Stiennon et al (2022) found that comparative or preference-based approaches can lead to agreement rates between expert and non-expert raters that are almost as high as expert-toexpert agreement rates [13].\nUnlike absolute or categorical judgments, comparative judgement asks evaluators to make relative comparisons between two or more pieces of work, rather than scoring them against a predetermined standard. Psychology research dating back to Thurstone's law of comparative judgement has consistently shown that humans are better at making comparative ratings than they are at scale or absolute ratings (for example estimates of quantity, the intensity of sound or the order of weights [14,15,16]. 
This finding has now been echoed by recent machine learning research on reinforcement learning from human feedback [13]. In educational assessment, evidence suggests that using multiple pairwise ratings can result in very high reliability, equal to or exceeding that of area experts who undergo extensive calibration exercises [7]. This is especially the case for complex judgements that require a degree of holistic judgement, like evaluating open-response answers to reading comprehension questions [7]. Other research has also shown that the accuracy of expert raters of examination scripts can be higher when making comparative rather than absolute judgements [17]. Steedle & Ferrara (2016) point to three additional advantages of comparative judgement: improved accuracy, lower training requirements and higher efficiency [8]." }, { "figure_ref": [], "heading": "CURRENT STUDY 3.1 Overview", "publication_ref": [], "table_ref": [], "text": "We conducted two experiments intended to establish proof-of-principle for using crowdworkers to label educational data. In these experiments crowdworkers were hired to evaluate student responses to two different methods of assessing reading ability. Our goal was to investigate both the overall accuracy and agreement levels of non-expert raters, and to test whether asking them to evaluate student responses comparatively rather than categorically improved accuracy and inter-rater reliability.\nTo answer this question, raters were assigned to one of two conditions: categorical judgement or comparative judgement. In the categorical judgement condition, the raters were asked to make an absolute judgement about a piece of student work. In the comparative condition, a different group of raters were given the same task and were asked to decide which answer was more correct. In both conditions, the tasks were identical, with the only difference being the type of judgement they were asked to make. Across both experiments, a total of approximately 300 crowdworkers were recruited through the online research platform Prolific. This experimental design allowed us to control for any unintended differences in difficulty between correct and incorrect candidate answers, as well as the impact of the order of questions on the cognitive load associated with the task. The assignments were also quasi-random, and the tasks were posted simultaneously on the Prolific platform, making it likely that the difference in accuracy and reliability levels is attributable to the differences between comparative and categorical judgments." }, { "figure_ref": [], "heading": "Rating Tasks and Datasets", "publication_ref": [], "table_ref": [], "text": "We asked raters to evaluate student responses from two commonly used methods for assessing reading ability: (a) short-answer responses to reading comprehension questions, and (b) oral reading fluency. These specific tasks were selected both because they are widely used for formative assessment of reading ability and because they are time-consuming for teachers to grade. We also selected these two tasks because they differ in the complexity of the rating process. While evaluating short-answer responses to reading comprehension questions is a relatively straightforward task, evaluating oral reading fluency, in particular prosody, is considered a task that requires highly trained educators.\nFor the short-answer task, approximately 40 raters were recruited.
They evaluated 10 examples, each one consisting of a short nonfiction passage, a reading comprehension question, and a candidate answer. In the categorical judgement condition, the raters were asked to decide if the candidate answer was correct or incorrect (equivalent to a two-way classification). In the comparative condition, a different group of raters were given the same passage, a question, and two candidate answers from the curated set (one correct and one incorrect) and were asked to decide which answer was more correct. In both conditions the passages, questions and candidate answers were identical and the only difference was the type of judgement they were asked to make. The passages and questions for this task were taken from SQUAD 2.0, a corpus of passages taken from Wikipedia, each accompanied by a direct question about the passage, and a correct answer.\nPassage: Soon after the Normans began to enter Italy, they entered the Byzantine Empire and then Armenia, fighting against the Pechenegs, the Bulgars, and especially the Seljuk Turks. Norman mercenaries were first encouraged to come to the south by the Lombard's to act against the Byzantines, but they soon fought in Byzantine service in Sicily, and then in Greek service. They were prominent alongside Varangian and Lombard contingents in the Sicilian campaign of George Maniacs in 1038 AD.\nQuestion: Who was the Normans' main enemy in Italy, the Byzantine Empire and Armenia?" }, { "figure_ref": [], "heading": "Experimental Conditions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Categorical Comparative", "publication_ref": [], "table_ref": [], "text": "Based on the passage is this a correct answer to the question?\n-------------------------------\"The Seljuk Turks, as well as the Pechenegs\" (a) yes\n(b) no ------------------------------\nBased on the passage which answer is more correct?\n-------------------------------" }, { "figure_ref": [], "heading": "(a) The Seljuk Turks, as well as the Pechenegs", "publication_ref": [], "table_ref": [], "text": "The Greeks, Italian, and Sicilians. " }, { "figure_ref": [], "heading": "Rater Selection and Quality Control", "publication_ref": [ "b3", "b4", "b17" ], "table_ref": [], "text": "As prior research indicated that the rater's native language and previous task completion success rate were associated with performance on the rating tasks, we recruited candidates from the US, UK, and Canada who had a historically successful completion rate of at least 85%. We also employed best practice quality control measures, including embedded attention checks and minimum completion time to ensure raters were diligent [4,5]. While the use of crowdworkers can be an efficient and cost-effective method for data annotation, prior research highlighted the importance of researchers taking steps to ensure that crowdworkers are treated fairly [18]. For this study raters were recruited from the online platform Prolific, known for its ethical practices including dispute resolution protocol, and were paid a benchmarked rate of GBP 8.5 per hour." }, { "figure_ref": [], "heading": "Reliability Measures", "publication_ref": [ "b18", "b19", "b20", "b20" ], "table_ref": [], "text": "A critical consideration for labelling and using educational data is ensuring that a given approach has high reliability, i.e., that the data would receive the same rating across various settings, timeframes, and by distinct raters. 
The simplest method for evaluating reliability with nominal data involves calculating the observed agreement. However, this metric does not account for the agreement expected by chance, introducing a bias towards evaluation tasks with fewer categories [19]. To circumvent this issue, alternative reliability measures have been introduced, most notably Cohen's kappa [20], which corrects observed agreement for chance agreement. However, Cohen's kappa is restricted to the special case of exactly two raters, an uncommon scenario in large-scale data annotation. Consequently, another measure, Krippendorff's alpha [21], was developed, offering significant flexibility regarding the measurement scale and the number of raters. This has become the favored metric for assessing inter-rater reliability in various data labelling tasks. The value of Krippendorff's alpha can range from -1 to +1 and can be interpreted similarly to a correlation coefficient, with -1 being perfect disagreement, 0 being complete random chance, and 1 being perfect agreement. Adequate reliability benchmarks vary but are typically above 0.5. Landis and Koch [22] define reliability coefficients larger than 0.6 as substantial, and Krippendorff claims that alpha scores between 0.67 and 0.80 can be used for drawing provisional conclusions." }, { "figure_ref": [], "heading": "Significance Testing", "publication_ref": [ "b22", "b22", "b22" ], "table_ref": [], "text": "In the context of this study there are two questions relating to statistical significance: (a) are the reliability levels themselves statistically significant, and (b) is any observed difference in the reliability levels based on the methods of rating (i.e., categorical and comparative) significant?
For reliability measures, the confidence interval establishes a range within which the actual coefficient is likely to be found with a specified probability and hence can be used for hypothesis testing. For instance, if the goal is to demonstrate that a reliability score is greater than chance at a 95% confidence level, the lower bound of the two-sided 95% confidence interval must exceed 0. Because Krippendorff's alpha already accounts for the level of agreement due to chance alone, in nearly all situations where even a minimally small sample is used and the alpha value is positive, the results are statistically significant [23].
Perhaps more interestingly, if the objective is to demonstrate that the reliability exceeds a specific benchmark then the lower limit of the confidence interval must be greater than this benchmark. For instance, to demonstrate substantial reliability (as defined by Landis and Koch), the lower limit of the confidence interval would need to be greater than alpha = 0.6. Confidence intervals can also be used for hypothesis testing to determine if the difference between two agreement levels is statistically significant. If the two confidence intervals do not overlap then the null hypothesis is rejected. For Krippendorff's alpha, the theoretical distribution of values necessary to calculate the confidence interval is not known [23]. However, the empirical distribution can be obtained by the bootstrap approach, for which Krippendorff proposed an algorithm in the original paper. This method has been used by various other researchers, and we adopt the implementation and parameters laid out by Zapf et al. [23]."
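To make the reliability machinery above concrete, the following is a minimal Python/NumPy sketch of Krippendorff's alpha for nominal ratings, together with a percentile-bootstrap confidence interval obtained by resampling rated units with replacement. It is an illustration only, not the exact bootstrap algorithm of Krippendorff or the implementation of Zapf et al. adopted in this study; the toy rating matrices, function names, number of bootstrap replicates, and the unit-level resampling scheme are assumptions made for the example.

```python
import numpy as np

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.
    ratings: array of shape (n_raters, n_units); np.nan marks a missing rating."""
    ratings = np.asarray(ratings, dtype=float)
    cats = np.unique(ratings[~np.isnan(ratings)])
    idx = {c: i for i, c in enumerate(cats)}
    o = np.zeros((len(cats), len(cats)))  # coincidence matrix

    for unit in ratings.T:                # one column = one rated unit
        vals = unit[~np.isnan(unit)]
        m = len(vals)
        if m < 2:
            continue                      # units with a single rating are not pairable
        for a in vals:
            for b in vals:
                o[idx[a], idx[b]] += 1.0 / (m - 1)
        for a in vals:                    # remove the self-pairings added above
            o[idx[a], idx[a]] -= 1.0 / (m - 1)

    n = o.sum()
    n_c = o.sum(axis=0)
    d_observed = n - np.trace(o)                         # observed disagreement
    d_expected = (n ** 2 - (n_c ** 2).sum()) / (n - 1)   # disagreement expected by chance
    return 1.0 if d_expected == 0 else 1.0 - d_observed / d_expected

def bootstrap_alpha_ci(ratings, n_boot=2000, level=0.99, seed=0):
    """Percentile bootstrap CI for alpha, resampling units (columns) with replacement."""
    rng = np.random.default_rng(seed)
    ratings = np.asarray(ratings, dtype=float)
    n_units = ratings.shape[1]
    stats = [krippendorff_alpha_nominal(ratings[:, rng.integers(0, n_units, n_units)])
             for _ in range(n_boot)]
    lo, hi = np.percentile(stats, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Toy data: rows are raters, columns are student responses (1 = correct, 0 = incorrect).
categorical = np.array([[1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1], [1, 1, 1, 1, 0, 0]], float)
comparative = np.array([[0, 1, 1, 0, 1, 1], [0, 1, 1, 0, 1, 1], [0, 1, 1, 0, 0, 1]], float)
for name, data in [("categorical", categorical), ("comparative", comparative)]:
    lo, hi = bootstrap_alpha_ci(data)
    print(f"{name}: alpha={krippendorff_alpha_nominal(data):.2f}, 99% CI=({lo:.2f}, {hi:.2f})")
```

With the real judgement data, non-overlapping 99% intervals between the two conditions are read as a statistically significant difference, which is the comparison reported in the Results section.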
}, { "figure_ref": [], "heading": "RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Short Answer", "publication_ref": [], "table_ref": [], "text": "To evaluate the accuracy of the ratings we compared the crowdworker answer with the correct answer to the question as determined by ourselves. When using categorical judgment, the crowdwokers had an accuracy level of was 73%, which improved to 86% when they were presented the same student responses but were asked to make comparative judgment.\nThe results for the reliability of short-answer ratings are summarized in Table1. Krippendorf's alpha is interpreted similarly to most other inter-rater reliability scores, with 0 being perfect disagreement, and 1 being perfect agreement, with scores above 0.6 typically overlap. considered to have substantial reliability. Raters asked to rate responses categorically achieved an alpha of .66, whereas raters asked to rate responses comparatively had an alpha of .80. confidence interval equivalent to p < .01; a more stringent benchmark than the conventional p < .05. The 0.14 increase in. We can have high confidence in these reliability scores, based on the 99% inter-rater reliability is both large and highly statistically significant, as the respective 99% confidence intervals do not overlap. " }, { "figure_ref": [], "heading": "Oral Fluency", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Because evaluating the prosody of students reading, does not have an objectively correct score, it was not possible to calculate absolute accuracy for this tasks. The results for the reliability of the rating are summarized below in Table 2. Raters asked to evaluate the prosody of students' reading achieved had an inter-rater reliability score of 0.7, whereas raters asked to rate responses comparatively had an alpha of 0.78. We can have high confidence in these IRR estimates, based on the 99% confidence interval. The 0.08 increase in the reliability score is moderately large and highly statistically significant, as the respective 99% confidence intervals do not overlap. " }, { "figure_ref": [], "heading": "Interpretation", "publication_ref": [ "b9", "b23" ], "table_ref": [], "text": "The improvements in accuracy and reliability reported above can be considered substantial highly statistically significant. In the case of the accuracy of ratings for the short answer tasks, the floor for accuracy would be 50% (i.e., the coworker randomly guessing). Hence the 13-percentage point improvement in accuracy resulting from shifting the task from comparative to categorical, is ¼ of the total possible improvement on this task -moving from pure chance to perfect performance. The improvements to interrater reliability, 0.14 and 0.08, respectively, are highly statistically significant add larger than the increases in IRR produces by other commonly used methods to improve the quality crowd workers ratings such as such as candidate screening, pre-training, or incentive payments [10].\nIn addition to the improvement being substantial, the absolute level of reliability reaches or exceed commonly used benchmarks. To put these values in context, he NAEP, a gold-standard assessment of reading ability, directly assesses prosody with highly trained expert raters using a rubric -the same rubric used by raters in this study.\nVarious studies have reported inter-rater reliability ranging between 0.0 and 0.80 [24]. 
Hence the results of these two experiments suggest that (a) structuring rating tasks as comparative rather than categorical judgments can substantially improve reliability, and (b) crowdworkers can achieve moderate to high levels of inter-rater reliability, equivalent to those of highly trained expert raters." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Despite the high levels of statistical significance reported above, to attribute the increase in agreement to the change of the rating task from categorical to comparative, we must assume that the items being rated were the same (which is true) and that the raters were substantially similar. While we believe that quasi-random assignment of raters controlled for rater-specific differences, the lack of truly random assignment means we cannot definitively conclude this. Another limitation is that this effect was only demonstrated on two specific tasks, so it is yet to be determined whether it generalizes to other tasks." }, { "figure_ref": [], "heading": "DISCUSSION AND IMPLICATIONS", "publication_ref": [ "b13", "b14", "b15", "b3" ], "table_ref": [], "text": "In industry settings, especially within machine learning, using crowdsourcing to label data is de rigueur. However, it is much less commonly used in the social sciences, particularly in education research, to evaluate student data. This is likely due to a variety of reasons, one of the most important being concerns about the accuracy and reliability of non-experts evaluating student work. This study indicates (1) that it is possible to achieve high levels of inter-rater reliability with non-expert crowdworkers, (2) that reliability levels can be meaningfully increased if raters are asked to make comparative rather than categorical judgements, and (3) that these high levels of IRR can be attained at a relatively modest cost and in a rapid timeframe.
These findings are in line with prior research from psychology on the benefits of using comparative judgement [14,15,16], and with current trends in machine learning where crowdsourcing is routinely used to label complex data [4]. However, this study is one of the first, to our knowledge, to directly investigate the impact of combining these two approaches and to explore their potential for education research and assessment. If our results are confirmed through further research, there would be several interrelated implications. First, it could establish proof-of-principle for crowdworkers making comparative judgements to label and/or evaluate student work. This in turn would allow for the creation of larger, more nuanced, and representative educational datasets that could have a variety of beneficial applications." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This study aims to examine the feasibility of using crowdsourcing approaches with comparative judgement to create a scalable approach for evaluating and labelling educational data. The results from our two experiments suggest that using comparative judgement instead of categorical judgement to make holistic evaluations of complex educational data could improve accuracy and inter-rater reliability. Furthermore, they suggest that non-expert crowdworkers can label and evaluate education data with high accuracy when asked to make comparative judgments.
These results are novel as they are among the first, to our knowledge, to investigate the potential of using comparative judgement with non-expert crowdworkers for evaluating educational data. These early results suggest that further research exploring the effect of combining crowdsourcing and comparative judgement would be valuable. Furthermore, the findings highlight the potential to combine machine learning and educational research methods, and they provide a framework for further interdisciplinary research in this field." } ]
Machine Learning models have many potentially beneficial applications in education settings, but a key barrier to their development is securing enough data to train these models. Labelling educational data has traditionally relied on highly skilled raters using complex, multi-class rubrics, making the process expensive and difficult to scale. An alternative, more scalable approach could be to use non-expert crowdworkers to evaluate student work, however, maintaining sufficiently high levels of accuracy and inter-rater reliability when using non-expert workers is challenging. This paper reports on two experiments investigating using non-expert crowdworkers and comparative judgement to evaluate complex student data. Crowdworkers were hired to evaluate student responses to open-ended reading comprehension questions. Crowdworkers were randomly assigned to one of two conditions: the control, where they were asked to decide whether answers were correct or incorrect (i.e., a categorical judgement), or the treatment, where they were shown the same question and answers, but were instead asked to decide which of two candidate answers was more correct (i.e., a comparative/preference-based judgement). We found that using comparative judgement substantially improved inter-rater reliability on both tasks. These results are in-line with well-established literature on the benefits of comparative judgement in the field of educational assessment, as well as with recent trends in artificial intelligence research, where comparative judgement is becoming the preferred method for providing human feedback on model outputs when working with non-expert crowdworkers. However, to our knowledge, these results are novel and important in demonstrating the beneficial effects of using the combination of comparative judgement and crowdworkers to evaluate educational data.
Leveraging Human Feedback to Scale Educational Datasets Combining Crowdworkers and Comparative Judgement
[ { "figure_caption": "Fig. 1 .Fig. 212Fig. 1. Example of short-answer rating task", "figure_data": "", "figure_id": "fig_0", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Rating TypePoint Estimate99% confidence(Krippendorf's alpha)intervalCategorical0.660.64 -0.67Comparative0.800.78 -0.82Change+ 0.14", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Rating TypePoint Estimate99% confidence(Krippendorf's alpha)intervalCategorical0.70.77 -0.79Comparative0.780.68 -0.73Change+ 0.08", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Owen Henkel; Libby Hills
[ { "authors": "Ryan S Baker", "journal": "blockchain, and robots", "ref_id": "b0", "title": "Artificial intelligence in education: Bringing it all together", "year": "2021" }, { "authors": "J Wilson; B Pollard; J M Aiken; M D Caballero; H J Lewandowski", "journal": "Physical Review Physics Education Research", "ref_id": "b1", "title": "Classification of open-ended responses to a research-based assessment using natural language processing", "year": "2022" }, { "authors": "J Belur; L Tompson; A Thornton; M Simon", "journal": "Sociological Methods & Research", "ref_id": "b2", "title": "Interrater Reliability in Systematic Review Methodology: Exploring Variation in Coder Decision-Making", "year": "2021" }, { "authors": "F Daniel; P Kucherbaev; C Cappiello; B Benatallah; M Allahbakhsh", "journal": "ACM Computing Surveys", "ref_id": "b3", "title": "Quality Control in Crowdsourcing", "year": "2018" }, { "authors": "Jennifer Vaughan; Wortman", "journal": "J. Mach. Learn. Res", "ref_id": "b4", "title": "Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Re-search", "year": "2017" }, { "authors": "Long Ouyang; Jeff Wu; Jiang; Xu; Almeida; Diogo; Carroll L Wainwright; Mishkin; Pamela; Chong Zhang", "journal": "", "ref_id": "b5", "title": "Training language models to follow instruction with human feedback", "year": "2022" }, { "authors": "Joshua A Mcgrane", "journal": "", "ref_id": "b6", "title": "Comparative judgment", "year": "2023" }, { "authors": "Jeffrey T Steedle; Steve Ferrara", "journal": "Applied Meas-urement in Education", "ref_id": "b7", "title": "Evaluating comparative judgment as an approach to essay scoring", "year": "2016" }, { "authors": "Jeff Howe", "journal": "Wired magazine", "ref_id": "b8", "title": "The rise of crowdsourcing", "year": "2006" }, { "authors": "Nikita Nangia; Saku Sugawara; Harsh Trivedi; Alex Warstadt; Clara Vania; Samuel R Bowman", "journal": "", "ref_id": "b9", "title": "What ingredients make for an effective crowdsourcing protocol for difficult NLU data collection tasks?", "year": "2021" }, { "authors": "James Surowiecki", "journal": "Economies, Societies and Nations", "ref_id": "b10", "title": "The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business", "year": "2004" }, { "authors": "David Q Sun; Christopher Hadas Kotek; Mayank Klein; William Gupta; Jason D Li; Williams", "journal": "", "ref_id": "b11", "title": "Improving human-labeled data through dynamic automatic conflict resolution", "year": "2020" }, { "authors": "Nisan Stiennon; Long Ouyang; Jeffrey Wu; Daniel Ziegler; Ryan Lowe; Chelsea Voss; Alec Radford; Dario Amodei; Paul F Christiano", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Learning to summarize with human feedback", "year": "2020" }, { "authors": "L L Thurstone", "journal": "Psychological Review", "ref_id": "b13", "title": "A law of comparative judgement", "year": "1927" }, { "authors": "Alastair Pollitt", "journal": "International Journal of Technology and Design Education", "ref_id": "b14", "title": "Comparative judgement for assessment", "year": "2012" }, { "authors": "Kate Gordon", "journal": "Journal of Experimental Psychology", "ref_id": "b15", "title": "Group judgments in the field of lifted weights", "year": "1924" }, { "authors": "Tim Gill; Tom Bramley", "journal": "Assessment in Education: Principles, Policy & Practice", "ref_id": "b16", "title": "How accurate are examiners' holistic judgements of script quality?", 
"year": "2013" }, { "authors": "M Silberman; Bill Six; Rochelle Tomlinson; Joel Laplante; Lilly Ross; Andrew Irani; Zaldivar", "journal": "Communications of the ACM", "ref_id": "b17", "title": "Responsible research with crowds: pay crowdworkers at least minimum wage", "year": "2018" }, { "authors": "W A Scott", "journal": "Public Opinion Quarterly", "ref_id": "b18", "title": "Reliability of content analysis: the case of nominal scale coding", "year": "1955" }, { "authors": "Mousumi Banerjee; Michelle Capozzoli; Laura Mcsweeney; Debajyoti Sinha", "journal": "Canadian journal of statistics", "ref_id": "b19", "title": "Beyond kappa: A review of interrater agreement measures", "year": "1999" }, { "authors": "K Krippendorff", "journal": "", "ref_id": "b20", "title": "Computing Krippendorff's Alpha-Reliability", "year": "2011" }, { "authors": "J R Landis; G G Koch", "journal": "Biometrics", "ref_id": "b21", "title": "The measurement of observer agreement for categorical data", "year": "1977" }, { "authors": "A Zapf; S Castell; L Morawietz", "journal": "BMC Med Res Methodol", "ref_id": "b22", "title": "Measuring interrater reliability for nominal data -which coefficients and confidence intervals are appropriate?", "year": "2016" }, { "authors": "Grant S Smith; David D Paige", "journal": "Reading Psychology", "ref_id": "b23", "title": "A Study of Reliability Across Multiple Raters When Using the NAEP and MDFS Rubrics to Measure Oral Reading Fluency", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 337.64, 691.12, 80.6, 20.15 ], "formula_id": "formula_0", "formula_text": "(b) no ------------------------------" } ]
2023-05-22
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b21" ], "table_ref": [], "text": "algorithm (Murdoch et al., 2018) and propose that the information flow in GNN's message propagation mechanism is decomposable. Then we design the decomposition schemes for the most commonly used layers and operations in GNNs, so as to isolate the information flow from distinct node groups. Furthermore, we explore the subgraph-level explanation via an aggregation algorithm that utilizes DEGREE and the structural information of the input graph to construct a series of subgraph sets as the explanation. DEGREE guarantees explanation fidelity by directly analyzing GNN feed-forward propagation, instead of relying on input perturbation or the use of alternative models. DEGREE is non-additive and can therefore uncover non-linear relationships between nodes. We quantitatively and qualitatively evaluate the DEGREE on both synthetic and real-world datasets to validate the effectiveness of our method. The contributions of this work are summarized as follows:\n• We propose a new explanation method (DEGREE) for GNNs, from the perspective of decomposition. By elucidating the feed-forward propagation mechanism within GNN, DEGREE allows capturing the contribution of individual components of the input graph to the final prediction. • We propose an aggregation algorithm that provides important subgraphs as explanation in order to mine the complex interactions between graph components. We combine the property of the message propagation mechanism to further reduce the computation. • We evaluate DEGREE on both synthetic and real-world datasets. The quantitative experiments show that our method could provide faithful explanations. The qualitative experiments indicate that our method may capture the interaction between graph components." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b7", "b9", "b33", "b37", "b37", "b2", "b26", "b29", "b27", "b0", "b22", "b25", "b24", "b34", "b32", "b17" ], "table_ref": [], "text": "Despite the great success in various applications, the black-box nature of deep models has long been criticized. Explainable Artificial Intelligence (XAI) tries to bridge the gap by understanding the internal mechanism of deep models (Du et al., 2019). Meanwhile, the need to tackle non-Euclidean data, such as geometric information, social networks, has given rise to the development of GNNs. Similar to the tasks on image or text data, GNNs focus on node classification (Henaff et al., 2015), graph classification (Xu et al., 2018;Zhang et al., 2018), and link prediction (Zhang & Chen, 2018;Cai & Ji, 2020). Message passing mechanism allows the information to flow from one node to another along edges, and empowers GNNs with convolutional capabilities for graph data.\nWhile the explainability in image and text domains is widely studied (Shrikumar et al., 2017;Sundararajan et al., 2017;Simonyan et al., 2013), the explainability of GNN is on the rise. First, some recent work adapts the interpretation methods used for traditional CNNs to GNNs (Baldassarre & Azizpour, 2019;Pope et al., 2019). They employ gradient values to investigate the contribution of node features to the final prediction. However, these methods ignore the topological information, which is a crucial property of graph data. Second, some methods trace the model prediction back to the input space in a backpropagation manner layer by layer (Schwarzenberg et al., 2019;Schnake et al., 2020). 
Third, some methods define a perturbation-based interpretation whereby they perturb node, edge, or node features and identify the components that affect the prediction most. Specifically, GNNExplainer and PGExplainer (Ying et al., 2019) maximize the mutual information between perturbed input and original input graph to identify the important features. Causal Screening (Wang et al., 2021) searches for the important subgraph by monitoring the mutual information from a cause-effect standpoint. CF-GNNExplainer (Lucic et al., 2021) proposes to generate counterfactual explanations by finding the minimal number of edges to be removed such that the prediction changes. In addition, XGNN (Yuan et al., 2020) builds a model-level explanation for GNNs by generating a prototype graph that can maximize the prediction. Moreover, due to the discrete and topological nature of graph data, XGNN defines graph generation as a reinforcement learning task instead of gradient ascent optimization.\nMany previous explanation methods for GNNs suffer from adversarial triggering issues, faithfulness issues and additive assumptions. To this end, we propose a decomposition based explanation for GNNs (DEGREE) to remedy these problems. DEGREE enables to track the contribution of the components from the input graph to the final prediction by decomposing a trained GNN. Thus, DEGREE guarantees the integrity of the input and eliminates the adversarial triggering issue of the perturbation-based approach. Since no surrogate models are used, DEGREE guarantees its faithfulness. Meanwhile, by integrating the decomposition to the normal layer, DEGREE does not have any additional training process. " }, { "figure_ref": [], "heading": "DEGREE: DECOMPOSITION BASED EXPLANATION FOR GNNS", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the details of the proposed explanation method. First, we introduce the notations and problem definition. Then, we discuss the general idea of decomposition based explanation. Finally, we develop concrete decomposition schemes for different GNNs layers." }, { "figure_ref": [], "heading": "PROBLEM DEFINITION", "publication_ref": [], "table_ref": [], "text": "We first introduce the notations used in this work. Given a graph G = (V, E), where V is the set of nodes and E is the set of edges between nodes. The adjacency matrix of G is denoted as A ∈ R N ×N , where N = |V| is the number of nodes, so V = {v 1 , v 2 , ..., v N }. The nodes are associated with features, and the feature matrix is denoted as X ∈ R N ×F , where F is the feature dimension.\nIn this work, we focus on explaining GNN-based classification models. Let f denote the target GNN model. f computes the likelihood of a node or graph belonging to the target class, where f : G → R |V| or f : G → R, for node or graph classification respectively.\nThe goal of explanation is to find the most important subgraph in G given f (G), which requires measuring the contribution of all possible subgraphs and find the ones with high contribution scores. However, there are two challenges to be addressed. (1) Given any subgraph of interest, how to estimate its contribution without breaking up the input graph? (2) The number of all subgraphs in G is usually very large, so how to choose candidate subgraphs for improving explanation efficiency?\nWe tackle the first challenge in the following part of Sec 3 and solve the second one in Sec 4." 
}, { "figure_ref": [], "heading": "DECOMPOSITION BASED EXPLANATION", "publication_ref": [], "table_ref": [], "text": "In general, a prediction model f contains multiple layers L t , t ∈ {1, . . . , T }:\nf (X) = L T • L T -1 • • • • • L 2 • L 1 (X). (1) Let X[t] denotes the input to L t , so X[t + 1] = L t (X[t]) and X[1] = X.\nThe symbol • denotes function composition. Here X[t] ∈ R N ×Ft is the embedding matrix at t-th layer, where F t is the latent dimension. The embedding vector of the i-th node at t-th layer is denoted as\nX i [t].\nThe core idea of decomposition based explanation is that, given a target node group (or subgraph) of interest, we estimate its contribution score to model prediction merely through feed-forward propagation. We call the information propagated from the target group as target portion, and the rest of information is called background portion. It is worth noting that, a node is in the target group does not necessarily mean it is important, while it only means we are interested in its importance score. In the following, we use γ and β to denote the target and background portion, respectively. Let m ∈ {0, 1} N , where m i = 1 means v i belongs to the target group and otherwise m i = 0.\nThen, the decomposition is initialized from the layer of node features, where the target portion and background portion of the input feature matrix are: X γ = diag(m)X and X β = (I -diag(m))X, respectively. In a neural network, information from different parts of input are merged in the feedforward process into latent representations, which poses challenges for explanation. Suppose the target and background portion in X[t] are known from prior layer, we could explain the model if we can still distinguish the information flows of the two portions inside L t . That is, at layer L t , suppose its input can be decomposed as X\n[t] = X γ [t] + X β [t]\n, the following relations need to hold for explanation:\nL D t (X γ [t], X β [t]) =     Γ(X γ [t], X β [t]) X γ [t+1] , B(X γ [t], X β [t]) X β [t+1]     (2) L t (X[t]) = X[t + 1] = X γ [t + 1] + X β [t + 1],(3)\nwhere\nL D t (•, •) denotes the decomposed version of layer L t . Γ(•, •) and B(•,\n•) corresponds to the contribution of the target and the background portion to layer L t . X γ [t + 1] and X β [t + 1] denotes the target and background portion of X[t + 1] as the input to the next layer. The decomposition above goes from the input, through all intermediate layers, to the final prediction. If a target node group or subgraph is important, then it should contributes to most of the prediction, meaning that\nΓ(X γ [T ], X β [T ]) ≈ f (X)." }, { "figure_ref": [ "fig_0" ], "heading": "INTUITIONS BEHIND DECOMPOSITION BASED EXPLANATION FOR GNN", "publication_ref": [], "table_ref": [], "text": "The intuition behind decomposition based explanation could be summarized as two rules: (1) the target and background portion at a higher layer mainly comes from the target and background portion at the lower layer respectively; (2) ideally there should be little interaction between the target portion and the background portion. Please note that the partition is not dimension-wise, meaning that each latent dimension may contain information from both target and background portions.\nFigure 1 briefly illustrates the working principle of GNNs: the model computes neural message for each node pair and aggregates message for them from their neighbors. 
A major step of decomposing GNNs is the following: the target and background portions of a node are aggregated from the target and background portions of its neighbours, respectively. This can be easily illustrated by the distributive nature of the GNN information aggregation mechanism:

$$X[t+1] = AX[t] = A\big(X^\gamma[t] + X^\beta[t]\big) = \underbrace{AX^\gamma[t]}_{X^\gamma[t+1]} + \underbrace{AX^\beta[t]}_{X^\beta[t+1]}. \tag{4}$$

Nevertheless, the above equation is only a conceptual illustration. A real GNN model could consist of various layers, such as graph convolution layers, fully connected layers, activation layers and pooling layers. Several challenges still need to be tackled to develop an effective explanation method. First, how to design the decomposition scheme for different types of layers? Second, how to efficiently find the important nodes and subgraphs by choosing the appropriate target/background group among all possible node combinations?" }, { "figure_ref": [], "heading": "DECOMPOSING GNN LAYERS", "publication_ref": [ "b30" ], "table_ref": [], "text": "In this work, we consider the decomposition scheme for two commonly used GNN architectures: GCN (Kipf & Welling, 2016) and GAT (Veličković et al., 2017)." }, { "figure_ref": [], "heading": "DECOMPOSING GCNS", "publication_ref": [ "b28", "b20" ], "table_ref": [], "text": "The GCN architecture consists of graph convolution, fully connected layers, ReLU and maxpooling.
Graph Convolution Layer: The graph convolution operation passes messages between nodes:

$$X[t+1] = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X[t] W + b, \tag{5}$$

where $W$ and $b$ denote the trainable weights and bias. Here $b$ is optional. $\tilde{A} = A + I$ denotes the adjacency matrix with self-loops. The matrix $\tilde{D}$ with $\tilde{D}_{i,i} = \sum_j \tilde{A}_{i,j}$ is the diagonal degree matrix of $\tilde{A}$. The corresponding decomposition can be designed as follows:

$$\gamma[t] = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X^\gamma[t] W, \qquad \beta[t] = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X^\beta[t] W, \tag{6}$$

$$X^\gamma[t+1] = \gamma[t] + b \cdot \frac{\bar{\gamma}[t]}{\bar{\gamma}[t] + \bar{\beta}[t]}, \qquad X^\beta[t+1] = \beta[t] + b \cdot \frac{\bar{\beta}[t]}{\bar{\gamma}[t] + \bar{\beta}[t]}, \tag{7}$$

where $X^\gamma[t]$ and $X^\beta[t]$ are the target and background portions of $X[t]$, respectively. The derivation of $\gamma[t]$ and $\beta[t]$ is intuitive since graph convolution is a linear operation. Motivated by (Singh et al., 2018), $\gamma[t]$ and $\beta[t]$ have to compete for their share of $b$ as in Eq. 7. $\bar{\gamma}[t] \in \mathbb{R}^{F_{t+1}}$ measures the dimension-wise magnitude of $X^\gamma[t]$ after the linear mapping ($\bar{\beta}[t]$ is defined similarly).
Fully Connected Layer: A fully connected layer prevalent in the model is shown below:

$$X[t+1] = X[t]\Theta + b, \tag{8}$$

where $\Theta$ and $b$ denote trainable weights and bias. Structure-wise, it is very similar to the GCN. The decomposition can be designed as:

$$X^\gamma[t+1] = X^\gamma[t]\Theta + b \cdot \frac{\overline{X^\gamma[t]\Theta}}{\overline{X^\gamma[t]\Theta} + \overline{X^\beta[t]\Theta}}, \qquad X^\beta[t+1] = X^\beta[t]\Theta + b \cdot \frac{\overline{X^\beta[t]\Theta}}{\overline{X^\gamma[t]\Theta} + \overline{X^\beta[t]\Theta}}. \tag{9}$$

ReLU Activation: For the activation operator ReLU, we use the telescoping sum decomposition from Murdoch & Szlam (2017). We update the target term first and then obtain the background term by subtracting it from the total activation:

$$X^\gamma[t+1] = \mathrm{ReLU}\big(X^\gamma[t]\big), \qquad X^\beta[t+1] = \mathrm{ReLU}\big(X^\gamma[t] + X^\beta[t]\big) - \mathrm{ReLU}\big(X^\gamma[t]\big). \tag{10}$$

Maxpooling: We track the node indices selected by pooling in both the target and background portions." },
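As a concrete illustration of the layer-wise contract in Eqs. 2-3, the following minimal NumPy sketch decomposes a single graph convolution with the bias split of Eqs. 5-7, followed by the telescoping ReLU split of Eq. 10. It is an illustrative sketch rather than the released DEGREE implementation: the use of the column-wise sum of absolute values as the "dimension-wise magnitude", the epsilon guard, and all variable names are our assumptions.

```python
import numpy as np

def normalized_adjacency(adj):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a_tilde = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    return a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def decomposed_gcn_layer(x_gamma, x_beta, adj_norm, weight, bias, eps=1e-12):
    """One decomposed graph-convolution step (Eqs. 5-7) followed by the
    telescoping ReLU split (Eq. 10). Returns the new target/background portions."""
    gamma = adj_norm @ x_gamma @ weight   # Eq. 6, target share
    beta = adj_norm @ x_beta @ weight     # Eq. 6, background share

    # Dimension-wise magnitudes used to split the bias (Eq. 7). The column-wise
    # absolute sum is one plausible choice; the text only specifies "magnitude".
    g_bar = np.abs(gamma).sum(axis=0)
    b_bar = np.abs(beta).sum(axis=0)
    share = g_bar / (g_bar + b_bar + eps)

    z_gamma = gamma + bias * share
    z_beta = beta + bias * (1.0 - share)

    # ReLU decomposition (Eq. 10): activate the target part first, then assign
    # the remainder of the total activation to the background part.
    h_gamma = np.maximum(z_gamma, 0.0)
    h_beta = np.maximum(z_gamma + z_beta, 0.0) - h_gamma
    return h_gamma, h_beta

# Toy usage: 4 nodes, 3 input features, 2 output features; nodes {0, 1} are the target group.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))
mask = np.array([1, 1, 0, 0], dtype=float)
x_gamma, x_beta = np.diag(mask) @ x, np.diag(1 - mask) @ x
w, b = rng.normal(size=(3, 2)), rng.normal(size=2)

h_gamma, h_beta = decomposed_gcn_layer(x_gamma, x_beta, normalized_adjacency(adj), w, b)
full = np.maximum(normalized_adjacency(adj) @ x @ w + b, 0.0)
print(np.allclose(h_gamma + h_beta, full))  # the two portions sum to the ordinary forward pass
```

The final check verifies the completeness property of Eq. 3: the target and background portions sum exactly to the ordinary forward pass of the layer.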
{ "figure_ref": [], "heading": "DECOMPOSING GATS", "publication_ref": [], "table_ref": [], "text": "The graph attention layer in GAT is similar to Eq. 5, but uses the attention coefficients $\alpha_{i,j}$ to aggregate the information (an alternative way to understand Eq. 5 is that $\alpha_{i,j} = (\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}})_{i,j}$):

$$\alpha_{i,j} = \frac{\exp\Big(\mathrm{LeakyReLU}\big(\big[X_i[t]W \,\|\, X_j[t]W\big]\, a\big)\Big)}{\sum_{k \in \mathcal{N}_i \cup \{i\}} \exp\Big(\mathrm{LeakyReLU}\big(\big[X_i[t]W \,\|\, X_k[t]W\big]\, a\big)\Big)}, \tag{11}$$

where $\|$ represents the concatenation operation. $W$ and $a$ are parameters. $X_i[t]$ denotes the embedding of node $i$ at layer $L_t$. $\mathcal{N}_i$ denotes the neighbors of node $i$.
Therefore, a graph attention layer can be seen as consisting of four smaller layers: linear mapping, concatenation, LeakyReLU activation, and softmax operation. Decomposing a linear mapping is as trivial as decomposing an FC layer. To decompose the concatenation operator:

$$X_i[t] \,\|\, X_j[t] = \big(X_i^\gamma[t] \,\|\, X_j^\gamma[t]\big) + \big(X_i^\beta[t] \,\|\, X_j^\beta[t]\big). \tag{12}$$

For LeakyReLU, the idea of decomposition is the same as for ReLU. For the softmax operation, we split the coefficients proportionally to the exponential values of the target and the background terms of the input:

$$X^\gamma[t+1] = \mathrm{softmax}\big(X[t]\big) \cdot \frac{\exp\big(X^\gamma[t]\big)}{\exp\big(X^\gamma[t]\big) + \exp\big(X^\beta[t]\big)}, \qquad X^\beta[t+1] = \mathrm{softmax}\big(X[t]\big) \cdot \frac{\exp\big(X^\beta[t]\big)}{\exp\big(X^\gamma[t]\big) + \exp\big(X^\beta[t]\big)}. \tag{13}$$

Here we employ a motivation similar to the one used to split the bias term in Eq. 7, and let $X^\gamma[t]$ and $X^\beta[t]$ compete for the original value. The details of decomposing the attention coefficients can be found in Appendix B." }, { "figure_ref": [], "heading": "SUBGRAPH-LEVEL EXPLANATION VIA AGGLOMERATION", "publication_ref": [], "table_ref": [], "text": "Through decomposition, we could compute the contribution score of any given node group. However, this is not enough for explaining GNNs. Our goal of explanation is to find the most important subgraph structure, but it is usually impossible to exhaustively compute and compare the scores of all possible subgraphs. In this section, we design an agglomeration algorithm to tackle this challenge." }, { "figure_ref": [], "heading": "CONTEXTUAL CONTRIBUTION SCORE", "publication_ref": [], "table_ref": [], "text": "We first introduce a new scoring function to be used in our algorithm. Different from the absolute contribution scores provided by decomposition, in many scenarios we are more interested in the relative contribution of the target compared to its contexts. Let $V^\gamma \subset V$ be the target node group, and $f^D(\cdot)$ be the contribution score calculated from decomposition. The relative contribution of $V^\gamma$ averaged over different contexts is calculated as:

$$\phi(V^\gamma) \triangleq \mathbb{E}_{C \sim RW(\mathcal{N}_L(V^\gamma))}\big[ f^D(V^\gamma \cup C) - f^D(C) \big], \tag{14}$$

where $C$ is the context around $V^\gamma$, and $\mathcal{N}_L(V^\gamma)$ contains the neighboring nodes of $V^\gamma$ within $L$ hops. Here we use a random walk process $RW(\cdot)$ to sample $C$ within the neighborhood around $V^\gamma$. The reason for sampling within the neighborhood is based on information aggregation, where a node collects information from its neighbors within a certain number of hops constrained by the GNN depth." }, { "figure_ref": [], "heading": "SUBGRAPHS CONSTRUCTION VIA AGGLOMERATION", "publication_ref": [], "table_ref": [], "text": "Our agglomeration algorithm initializes from individual nodes and terminates when the whole graph structure is included. Specifically, the interpretation process constructs a series of intermediate subgraph sets $E = \{S_1, ..., S_I\}$, where $S_i = \{B_1, ..., B_{M_i}\}$ contains $M_i$ subgraphs. At each step, the algorithm searches for the candidate node or node group $v$ that most significantly affects the contribution of subgraph $B_m$, $m \in \{1, ..., M_i\}$, according to the ranking score $r(v)$:

$$s(v) \triangleq \phi\big(\{v\} \cup B_m\big) - \phi(B_m), \qquad r(v) \triangleq s(v) - \mathbb{E}_{v'}\big[s(v')\big], \quad \text{s.t. } v, v' \in \mathcal{N}(B_m), \tag{15}$$

where $\mathcal{N}(B_m)$ is the set of neighbor nodes or node groups to $B_m$.
Here s(v) measures the influence of v to B m , while r(v) further revises the value by considering the relative influence of v compared to other candidates v . At the beginning of our algorithm, B m = ∅. A node v is selected if r(v) ≥ q • max v r(v ), and we set q = 0.6 in experiments. The selected nodes are merged into subgraphs to form S i+1 . Small subgraphs will be merged into larger ones, so we have M i ≤ M j , i ≥ j.\nThe algorithm executes the above steps repeatedly and terminates until all nodes are included (i.e., M i = 1), or a certain pre-defined step budget is used up. Further details of the algorithm can be found in Section C of the Appendix." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "EXPERIMENTAL DATASETS", "publication_ref": [ "b34" ], "table_ref": [], "text": "Following the setting in previous work (Ying et al., 2019), we adopt both synthetic datasets and real-world datasets. The statistic of all datasets are given in Sec A in the Appendix." }, { "figure_ref": [], "heading": "SYNTHETIC DATASETS", "publication_ref": [ "b1" ], "table_ref": [], "text": "• BA-Shapes. BA-Shapes is a unitary graph based on a 300-node Barabási-Albert (BA) graph (Barabási & Albert, 1999). 80 five-node motifs are randomly attached to the base graph.\nThe motif is a \"house\" structured network in which the points are divided into top-nodes, middlenodes, or bottom-nodes. 10% of random edges are attached to perturb the graph. • BA-Community. BA-Community dataset is constructed by combining two BA-Shapes graphs.\nTo distinguish the nodes, the distribution of features of nodes in different communities differs. There are eight node classes based on the structural roles and community membership.\n• Tree-Cycles. The Tree-Cycles dataset germinates from an eight-level balanced binary tree base graph. The motif is a six-node cycle. 80 motifs are randomly added to the nodes of the base graph.\nThe nodes are classified into two classes, i.e., base-nodes and motif-nodes. • Tree-Grids. It is constructed in the same way as the Tree-Cycles dataset. The Tree-Grid dataset has the same base graph while replacing the cycle motif with a 3-by-3 grid motif." }, { "figure_ref": [], "heading": "REAL-WORLD DATASETS", "publication_ref": [ "b5", "b15", "b6" ], "table_ref": [], "text": "• MUTAG. It is a dataset with 4,337 molecule graphs. Every graph is labeled according to their mutagenic effect on the bacterium. As discussed in (Debnath et al., 1991), the molecule with chemical group NH 2 or NO 2 and carbon rings are known to be mutagenic. Since non-mutagenic molecules have no explicit motifs, only mutagenic ones are presented during the analysis. • Graph-SST2. It is a dataset of 70,042 sentiment graphs, which are converted through Biaffine parser (Liu et al., 2021). Every graph is labeled according to its sentiment, either positive or negative. The nodes denote words, and edges denote their relationships. The node features are initialized as the pre-trained BERT word embeddings (Devlin et al., 2019)." }, { "figure_ref": [], "heading": "EXPERIMENTAL SETUP", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "EVALUATION METRICS", "publication_ref": [], "table_ref": [], "text": "The interpretation problem is formalized as a binary classification problem distinguishing between important and unimportant structures (nodes or edges, depending on the nature of ground truth). 
A good explanation should assign high scores to the important structures and low scores to unimportant ones. We consider the nodes within the motif to be important for the synthetic dataset and the rest to be unimportant. In the MUTAG dataset, the \"N-H\" and \"N-O\" edges are important, and the rest are unimportant. We conduct quantitative experiments on the synthetic datasets and the MUTAG dataset, and qualitative experiments on the MUTAG dataset and the Graph-SST2 dataset. We adopt the Area Under Curve (AUC) to evaluate the performance quantitatively." }, { "figure_ref": [], "heading": "BASELINES METHODS AND IMPLEMENTATION DETAILS", "publication_ref": [ "b34", "b30", "b34", "b19", "b11" ], "table_ref": [], "text": "Baselines Methods. We compare with four baselines methods: GRAD (Ying et al., 2019), GAT (Veličković et al., 2017), GNNExplainer (Ying et al., 2019) and PGExplainer (Luo et al., 2020). ( 1) GRAD computes the gradients of GNN output with respect to the adjacency matrix or node features. (2) GAT averages the attention coefficients across all graph attention layers as edge importance.\n(3) GNNExplainer optimizes a soft mask of edges or node features by maximizing the mutual information. (4) PGExplainer learns an MLP (Multi-layer Perceptron) model to generate the mask using the reparameterization trick (Jang et al., 2017).\nConstruction of Target Models. We use all synthetic datasets together with the MUTAG dataset for quantitative evaluation experiments. We train a GCN and GAT model as the model to be explained for all datasets following the setup of previous work. Meanwhile, we construct DEGREE(GCN) and DEGREE(GAT) as the decomposed version for our method. We set the number of GNN layers to 3 for all datasets, except for the Tree-Grid dataset where it is 4. Since the 3-hop neighbors of some target nodes has only in-motif nodes (no negative samples). For the qualitative evaluation experiment, we use the MUTAG dataset and Graph-SST2 dataset. For all the model training, we use Adam optimizer. All the datasets are divided into train/validation/test sets.\nExplainer Setting. For all baseline methods, we keep the default hyper-parameters. For baselines (e.g., PGExplainer) who need training additional modules, we also split the data. We also split data for baselines requiring additional training (e.g. PGExplainer). We use all nodes in the motif for evaluation. For explainers that only provide node explanation, we average the score of its vertexes as edge explanation. The details of explainer settings can be found in Sec A in Appendix." }, { "figure_ref": [], "heading": "QUANTITATIVE EVALUATION", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In this section, we introduce experimental results on both synthetic datasets and the MUTAG dataset.\nFor node classification, the computation graph only contains nodes within l-hop from the target node, where l is the number of model layer. The reason is that the nodes outside the computation graph will not influence the final prediction. Table 1 shows the explanation AUC and time efficiency (the training time is shown outside the parentheses for PGExplainer). We have the following key findings. First, DEGREE achieves SOTA performances in most scenarios, showing its advantages in faithfulness over baseline methods. Second, the performance of DEGREE on GCN and GAT models can achieve similar high performance. This observation demonstrates the adaptability of our approach. 
Third, the improvement of AUC on BA-Community (˜9%) and MUTAG (˜5%) is more noticeable, where the two datasets distinguish themselves from others is that their features are not constants. It thus shows that our explanation method could well handle node features as they propagate through the graph structure. In terms of efficiency, DEGREE is implemented by decomposing the built-in forward propagation function, so there is no training process. The time cost is highly correlated to the complexity of the target model and the input size. We report further quantitative experiments in Appendix D." }, { "figure_ref": [ "fig_2", "fig_1", "fig_1", "fig_2" ], "heading": "QUALITATIVE EVALUATION", "publication_ref": [ "b5" ], "table_ref": [], "text": "In this section, we use Graph-SST2 and MUTAG datasets to visualize the explanation and demonstrate the effectiveness of our subgraph agglomeration algorithm in Sec 4.\n\"Maybe it's asking too much, but if a movie is truly going to inspire me, I want a little more than this.\"\n\"Though Ford and Neeson capably hold our interest, but it's just not a thrilling movie.\"\nFigure 3: The subgraph agglomeration results on the Graph-SST2 dataset. The first row shows an incorrect prediction, the second row shows the correct one. Red is negative, blue is positive.\nIn the first example, we show three visualizations from the MUTAG dataset in Figure 2. The first row represents a correctly predicted instance. Our model successfully identifies the \"NO2\" motif as a moderately positive symbol for mutagenicity. The \"H\" or the carbon ring is considered a negative sign for mutagenicity. Once the \"NO2\" and the ring join, they become a strong positive symbol for mutagenicity. This phenomenon is consistent with the knowledge that the carbon rings and \"NO2\" groups tend to be mutagenic (Debnath et al., 1991). We check instances with wrong predictions and show two representative examples. From the second row in Fig. 2, the GCN model precisely finds out the \"NH2\" motif with the ring motif as a strong mutagenic symbol. But another wandering part without connection shows a strong non-mutagenic effect, ultimately leading to an incorrect prediction. The second row shows another typical failure pattern. The model catches the \"NH2\" and part of the carbon ring as a mutagenic symbol, but the \"CH3\" on the bottom right shows a strong non-mutagenic effect. The model erroneously learns a negative interaction between them.\nIn the second example, we show two visualizations for the Graph-SST2 dataset in Figure 3. The sentence in the first row is labeled negative, yet its prediction is wrong. Our algorithm can explain the decision that the GNN model regards first half of the sentence (\"Maybe it's asking too much\") as negative, the second half (\"going to inspire me\", \"want a little more than this\") as positive. But the model can not tell the subjunctive tone behind the word \"if\", and consequently yields a positive but incorrect prediction. The sentence in the second row is negative, and the prediction is correct.\nOur algorithm precisely identifies the positive part (\"Though Ford and Neeson capably hold our interest\") and the negative part (\"but its just not a thrilling movie\"). Moreover, it reveals that the GCN model can correctly learn the transition relationship between these two components.\nWe observe that our method can detect non-linear interactions between subgraphs throughout the agglomeration process from above examples. 
It can help to diagnose the incorrect predictions and enhance the credibility of the model. More visualizations and efficiency study are in Appendix E." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this work, we present DEGREE which explains a GNN by decomposing its feedforward propagation process. After summarizing the fundamental rules for designing decomposition based explanation, we propose concrete decomposition schemes for those commonly used layers in GNNs.\nWe also design an algorithm to provide subgraph-level explanation via agglomeration, which efficiently employs the topological information in graphs. Experimental results show that DEGREE outperforms baselines in terms of faithfulness and can capture meaningful structures in graph data. " }, { "figure_ref": [], "heading": "A DATASETS AND EXPERIMENTAL SETTING", "publication_ref": [], "table_ref": [ "tab_2", "tab_3", "tab_4" ], "text": "In this section, we introduce the detail of the datasets as wall as the experimental setting. The code can be found at https://anonymous.4open.science/r/DEGREE-3128.\nModel setup: We adopt GCN model and GAT model architecture for corresponding experiments. We use GCN(#in channel, #out channel, activation) to denote a GCN layer, and similar notation for GAT layer and fully connected layer.\nThe structure of GCN model for node classification task is following: GCN(#feature, 20, ReLU ) -GCN(#feature, 20, ReLU ) -GCN(#feature, 20, ReLU ) -FC(20, 20, ReLU ) -FC(20, #label, Sof tmax).\nFor graph classification task, we adopt a Max-pooling layer before the FC layers: GCN(#feature, 20, ReLU ) -GCN(#feature, 20, ReLU ) -GCN(#feature, 20, ReLU ) -M axpooling -FC(20, 20, ReLU ) -FC(20, #label, Sof tmax).\nFor GAT model architecture, we replace all GCN layers with GAT layer and keep the remaining setting unchanged. For experiments on Tree-Grid dataset, we adopt 4-layer GCN/GAT model by adding one more GCN/GAT layer with same setting before FC layers.\nDataset statistic: The statistics of synthetic datasets and real-world datasets are reported in Table 2. Experimental setting: For all datasets, we use a train/validation/test split of 80%/10%/10%. For all synthetic datasets, the GCN model is trained for 1,000 epochs and the GAT model is trained for 200 epochs. For MUTAG dataset, the GCN and GAT model is trained 30 epochs. For Graph-SST2 dataset, the GCN model is trained 10 epochs. We use Adam optimizer and set the learning rate to 0.005, the other parameters remain at their default values. We also report the accuracy metric reached on each dataset in Table 3. 4.\nWe quantified the relationship between the size of the calculation graph and the time taken. The result is reported in Figure 4 Figure 4: The quantitative studies of efficiency for different datasets and models. For the synthetic datasets, the horizontal coordinate represents the number of nodes in computation graph. For the MUTAG dataset, the horizontal coordinate represents the number of edges in computation graph. " }, { "figure_ref": [], "heading": "E QUALITATIVE EXPERIMENTS EXAMPLES", "publication_ref": [ "b13" ], "table_ref": [], "text": "In this section, we report more qualitative evaluation results on the Graph-SST2 and the MUTAG datasets. 
The results are reported in Figure 5 and 6.We also investigate the time efficiency of our agglomeration algorithm in terms of the relationship between q, the node number and the time spent.\nWe also investigate the time efficiency of our agglomeration algorithm in terms of the relationship between q, the node number and the time spent. (2021). We use the MUTAG dataset and the Graph-SST2 dataset, as presented in Sec 5.1. The target model is the same as that introduced in Sec 5.2.2. Note that we modify our method to search only for nodes that boost the score of the class of interest. We employ the ACC Liang et al. (2020) as an evaluation metric. The ACC reflects the consistency of predictions based on the whole graph and between interpreted subgraphs. Thus, ACC does not " }, { "figure_ref": [], "heading": "B ATTENTION DECOMPOSITION", "publication_ref": [], "table_ref": [], "text": "To calculate the attention coefficient, we need to first calculate the pre-normalized attention coefficient between node i and node j as:\nAnd we use αi,j to denote a vector which consist of the pre-normalized attention coefficients between node i and its neighbors. Then we calculate the normalized attention coefficient of node i via Sof tmax over its neighbors:\nWe use Sof tmax(| • |) to measure the dimension-wise magnitude, and let them compete for the original value. The division between two vectors is element-wise." }, { "figure_ref": [], "heading": "C ALGORITHM", "publication_ref": [], "table_ref": [], "text": "We conclude the computation steps of subgraph-level explanation (Sec 4) Algorithm 1, 2 and 3.\nAlgorithm 1: The algorithm of subgraph agglomeration Data: Graph G = (V, E), Label y, Target model f , Hyperparameter q. Result: Explanation tree T . Score Metric Function: φ from Algorithm 3. Initialization: score queue ScoresQ ← ∅, explanation tree T ← ∅.\nPublished as a conference paper at ICLR 2022 (a) Well, it probably won't have you swinging from the trees hooting it's praises, but it's definitely worth taking a look.\n(b) Trouble every day is a success in some sense, but it's hard to like a film so cold and dead.\n(c) Makes an unusual but pleasantly haunting debut behind the camera.\n(d) Not everything in this ambitious comic escapade works, but Coppola, along with his sister, Sofia, is a real filmmaker require ground truth label. We further define the sparsity as the ratio the size of the explanation subgraph to the original graph. At the same sparsity, the higher the ACC, the better the interpretation.\nFigure 8 shows the ACC of DEGREE, GNN-LRP and SubgraphX under various sparsity. We can find that DEGREE has competitive performance compared to GNN-LRP and SubgraphX. Besides, DEGREE has better time efficiency." }, { "figure_ref": [], "heading": "F.2 QUALITATIVE COMPARISON", "publication_ref": [], "table_ref": [], "text": "In this section we make a qualitative comparison between DEGREE and SubgraphX. We randomly select a number of similar molecules and visualize the explanations generated by DEGREE and SubgraphX. We report them in the Figure 9. We can find that none of the subgraphs generated by SubgraphX include the 'N-H' or 'N-O'. They only select the carbon ring as the important part. In contrast, DEGREE can precisely indicate that the mutagenicity is caused by the 'N-H' or 'N-O'." }, { "figure_ref": [], "heading": "F.3 FORWARD-LOOKING EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "In this section, we present a simple prospective experiment from the early stages of this work. 
The dataset was generated by modifying the MUTAG dataset by selecting half of the graphs in the dataset and picking a node at random in each graph, giving it a special feature value of 1 while giving the other nodes a background white noise feature. Our task is to predict whether a graph contains special nodes or not. We train a 3-layer GCN which achieves 100% accuracy. We then use DEGREE to calculate the contribution score for each node. DEGREE is able to locate special nodes with 100% accuracy. Figure 10 shows the visualisation." } ]
Graph Neural Networks (GNNs) are gaining extensive attention for their application to graph data. However, the black-box nature of GNNs prevents users from understanding and trusting the models, thus hampering their applicability. While explaining GNNs remains a challenge, most existing methods fall into approximation-based and perturbation-based approaches, which suffer from faithfulness problems and unnatural artifacts, respectively. To tackle these problems, we propose DEGREE (Decomposition based Explanation for GRaph nEural nEtworks) to provide a faithful explanation for GNN predictions. By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction. Based on this, we further design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods. The efficiency of our algorithm can be further improved by utilizing GNN characteristics. Finally, we conduct quantitative and qualitative experiments on synthetic and real-world datasets to demonstrate the effectiveness of DEGREE on node classification and graph classification tasks.
DEGREE: DECOMPOSITION BASED EXPLANATION FOR GRAPH NEURAL NETWORKS
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the DEGREE for decomposing GCN. Node features or latent embeddings contain target portion (orange hemisphere) and an background portion (blue hemisphere). (a)-(c) show the workflow of the GCN, exhibiting only the messages aggregation for node A. (d) demonstrates message aggregation after decomposition. (e) demonstrates the decomposed message flow.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The subgraph agglomeration results on MUTAG dataset. The first row shows a correct prediction. The second and the third row report two typical examples of errors. Red is mutagenic, blue is non-mutagenic, gray is not selected. The colored edges link the selected nodes. The process goes from left to right. The graph on the far left in each row displays the score for individual nodes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 3 :3The algorithm of score computation φ Data:Graph G = (V, E), model f , NodeSet N Result: score φ(f, y, N ) Sample a context set S by Random Walk within the L-hop neighbor region of N . Return: φ(f D , y, N ) = 1 |S| s∈S (f (N ∪ s) -f (s))D EFFICIENCY STUDY DEGREE is achieved by decomposing the feedforward process of the target model. Thus the efficiency is highly dependent on the model structure. We report the statistic of time consuming on each dataset for GCN and GAT model in Table", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "FIn this section, we perform additional experiments comparing DEGREE with GNN-LRPSchnake et al. (2020) and SubgraphX Yuan et al.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: The subgraph agglomeration results on MUTAG dataset with GCN graph classifier. All instances are incorrectly predicted. Red is mutagenic, blue is non-mutagenic, gray is not selected. The colored edges link the selected nodes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Relationship between the time efficiency(s), graph size and q on the MUTAG dataset.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: ACC of DEGREE, GNN-LRP and SubgraphX on MUTAG and Graph-SST2.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Qualitative comparison of DEGREE and SubgraphX. The first row shows the interpretation generated by SubgraphX. The second row is generated by DEGREE. The red color indicates mutagenicity.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Visualization of the forward-looking experiment. 
DEGREE can locate the special node with 100% accuracy.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Quantitative Experiment Result.", "figure_data": "Explanation AUCTaskNode ClassificationGraph ClassificationDatasetBA-ShapesBA-Community Tree-CyclesTree-GridMUTAGGRAD0.8820.7500.9050.6120.787GAT0.8150.7390.8240.6670.763GNNExplainer0.8320.7500.8620.8420.742PGExplainer0.9630.8940.9600.9070.836DEGREE(GCN) 0.991±0.0050.984±0.0050.958±0.004 0.925±0.0400.875±0.028DEGREE(GAT)0.990±0.0080.982±0.0100.919±0.027 0.935±0.0310.863±0.042Time Efficiency(s)GNNExplainer0.650.780.690.720.43PGExplainer116.72(0.014)35.71(0.024)117.96(0.09) 251.37(0.011)503.52(0.012)DEGREE(GCN)0.441.020.250.370.83DEGREE(GAT)1.982.440.961.030.79", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Statistics of the datasetDataset# of nodes # of edges # of graphs # of labelsfeaturesBA-Shapes7004,11014ConstantBA-Community1,4008,92018Generated from LabelsTree-Cycles8711,95012ConstantTree-Grid1,2313,41012ConstantMUTAG131,488266,8944,3372Node ClassGraph-SST2714,3251,288,56670,0422BERT Word Embedding", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "We introduce the hardware that we use for the experiments.", "figure_data": ": Accuracy performance of GNN modelsDatasetBA-Shapes BA-Community Tree-CyclesTree-GridMUTAGGraph-SST2TaskNode ClassificationGraph ClassificationModelGCN GATGCN GATGCN GAT GCN GAT GCN GATGCNTraining0.96 0.980.99 0.830.91 0.930.85 0.830.80 0.810.91Validation0.97 0.960.88 0.850.90 0.920.84 0.840.78 0.800.90Testing0.93 0.940.87 0.830.89 0.920.81 0.800.77 0.790.88Hardware setting:", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Efficiency performance.", "figure_data": "DatasetBA-Shapes BA-Community Tree-CyclesTree-GridMUTAGModelGCN GATGCN GATGCN GAT GCN GAT GCN GATAvg. Time (s)0.44 1.981.02 2.440.25 0.960.37 1.030.83 0.79", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
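The caption of Algorithm 3 above describes how node-set scores φ are estimated: sample context sets by random walks around the target nodes and average the marginal effect of adding the target set on top of each context. A minimal Python sketch of that estimator follows; the callable interface `f_decomposed` (mapping a chosen target node set to the decomposed model's score for class y) and the adjacency-dict representation are assumptions for illustration, not the authors' released implementation.

```python
import random

def contribution_score(f_decomposed, target_nodes, adj, num_samples=20, walk_len=5):
    """Monte-Carlo estimate of phi(f, y, N) from the Algorithm-3 caption:
    average over random-walk contexts s of f(N ∪ s) - f(s).

    f_decomposed: callable taking a set of nodes assigned to the target portion
                  and returning the model score for the class of interest (assumed interface).
    adj: dict mapping node -> list of neighbour nodes.
    """
    target = set(target_nodes)
    scores = []
    for _ in range(num_samples):
        # Simplification: start each walk at a target node so the context stays
        # inside the neighbourhood region of N.
        cur = random.choice(list(target))
        context = set()
        for _ in range(walk_len):
            nbrs = adj.get(cur, [])
            if not nbrs:
                break
            cur = random.choice(nbrs)
            context.add(cur)
        context -= target  # the context set excludes the target nodes themselves
        scores.append(f_decomposed(target | context) - f_decomposed(context))
    return sum(scores) / max(len(scores), 1)
```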
Qizhang Feng; Ninghao Liu; Fan Yang; Ruixiang Tang; Mengnan Du; Xia Hu
[ { "authors": "Federico Baldassarre; Hossein Azizpour", "journal": "", "ref_id": "b0", "title": "Explainability techniques for graph convolutional networks", "year": "2019" }, { "authors": "Albert-László Barabási; Réka Albert", "journal": "science", "ref_id": "b1", "title": "Emergence of scaling in random networks", "year": "1999" }, { "authors": "Lei Cai; Shuiwang Ji", "journal": "", "ref_id": "b2", "title": "A multi-scale approach for graph link prediction", "year": "2020" }, { "authors": "Sergio Casas; Cole Gulino; Renjie Liao; Raquel Urtasun", "journal": "", "ref_id": "b3", "title": "Spatially-aware graph neural networks for relational behavior forecasting from sensor data", "year": "2019" }, { "authors": "Chun-Hao Chang; Elliot Creager; Anna Goldenberg; David Duvenaud", "journal": "", "ref_id": "b4", "title": "Explaining image classifiers by counterfactual generation", "year": "2018" }, { "authors": "Asim Kumar Debnath; Rosa L Lopez De Compadre; Gargi Debnath; Alan J Shusterman; Corwin Hansch", "journal": "Journal of medicinal chemistry", "ref_id": "b5", "title": "Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity", "year": "1991" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Mengnan Du; Ninghao Liu; Xia Hu", "journal": "Communications of the ACM", "ref_id": "b7", "title": "Techniques for interpretable machine learning", "year": "2019" }, { "authors": "Wenqi Fan; Yao Ma; Qing Li; Yuan He; Eric Zhao; Jiliang Tang; Dawei Yin", "journal": "", "ref_id": "b8", "title": "Graph neural networks for social recommendation", "year": "2019" }, { "authors": "Mikael Henaff; Joan Bruna; Yann Lecun", "journal": "", "ref_id": "b9", "title": "Deep convolutional networks on graph-structured data", "year": "2015" }, { "authors": "Qiang Huang; Makoto Yamada; Yuan Tian; Dinesh Singh; Dawei Yin; Yi Chang", "journal": "", "ref_id": "b10", "title": "Graphlime: Local interpretable model explanations for graph neural networks", "year": "2020" }, { "authors": "Eric Jang; Shixiang Gu; Ben Poole", "journal": "", "ref_id": "b11", "title": "Categorical reparameterization with gumbel-softmax", "year": "2017" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b12", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "Jian Liang; Bing Bai; Yuren Cao; Kun Bai; Fei Wang", "journal": "", "ref_id": "b13", "title": "Adversarial infidelity learning for model interpretation", "year": "2020" }, { "authors": "Cheng-Hao Liu; Maksym Korablyov; Stanisław Jastrzebski; Paweł Włodarczyk-Pruszyński; Yoshua Bengio; Marwin Hs Segler", "journal": "", "ref_id": "b14", "title": "Retrognn: Approximating retrosynthesis by graph neural networks for de novo drug design", "year": "2020" }, { "authors": "Meng Liu; Youzhi Luo; Limei Wang; Yaochen Xie; Hao Yuan; Shurui Gui; Haiyang Yu; Zhao Xu; Jingtun Zhang; Yi Liu; Keqiang Yan; Haoran Liu; Cong Fu; Bora Oztekin; Xuan Zhang; Shuiwang Ji", "journal": "", "ref_id": "b15", "title": "DIG: A turnkey library for diving into graph deep learning research", "year": "2021" }, { "authors": "Ninghao Liu; Qiaoyu Tan; Yuening Li; Hongxia Yang; Jingren Zhou; Xia Hu", "journal": "", "ref_id": "b16", "title": "Is a single vector enough? 
exploring node polysemy for network embedding", "year": "2019" }, { "authors": "Ana Lucic; Gabriele Maartje Ter Hoeve; Maarten Tolomei; Fabrizio De Rijke; Silvestri", "journal": "", "ref_id": "b17", "title": "Cfgnnexplainer: Counterfactual explanations for graph neural networks", "year": "2021" }, { "authors": "Scott Lundberg; Su-In Lee", "journal": "", "ref_id": "b18", "title": "A unified approach to interpreting model predictions", "year": "2017" }, { "authors": "Dongsheng Luo; Wei Cheng; Dongkuan Xu; Wenchao Yu; Bo Zong; Haifeng Chen; Xiang Zhang", "journal": "", "ref_id": "b19", "title": "Parameterized explainer for graph neural network", "year": "2020" }, { "authors": "James Murdoch; Arthur Szlam", "journal": "", "ref_id": "b20", "title": "Automatic rule extraction from long short term memory networks", "year": "2017" }, { "authors": "James Murdoch; Peter J Liu; Bin Yu", "journal": "", "ref_id": "b21", "title": "Beyond word importance: Contextual decomposition to extract interactions from lstms", "year": "2018" }, { "authors": "Soheil Phillip E Pope; Mohammad Kolouri; Charles E Rostami; Heiko Martin; Hoffmann", "journal": "", "ref_id": "b22", "title": "Explainability methods for graph convolutional neural networks", "year": "2019" }, { "authors": "Cynthia Rudin", "journal": "Nature Machine Intelligence", "ref_id": "b23", "title": "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead", "year": "2019" }, { "authors": "Thomas Schnake; Oliver Eberle; Jonas Lederer; Shinichi Nakajima; T Kristof; Klaus-Robert Schütt; Grégoire Müller; Montavon", "journal": "", "ref_id": "b24", "title": "Higher-order explanations of graph neural networks via relevant walks", "year": "2020" }, { "authors": "Robert Schwarzenberg; Marc Hübner; David Harbecke; Christoph Alt; Leonhard Hennig", "journal": "", "ref_id": "b25", "title": "Layerwise relevance visualization in convolutional text graph classifiers", "year": "2019" }, { "authors": "Avanti Shrikumar; Peyton Greenside; Anshul Kundaje", "journal": "PMLR", "ref_id": "b26", "title": "Learning important features through propagating activation differences", "year": "2017" }, { "authors": "Karen Simonyan; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b27", "title": "Deep inside convolutional networks: Visualising image classification models and saliency maps", "year": "2013" }, { "authors": "Chandan Singh; James Murdoch; Bin Yu", "journal": "", "ref_id": "b28", "title": "Hierarchical interpretations for neural network predictions", "year": "2018" }, { "authors": "Mukund Sundararajan; Ankur Taly; Qiqi Yan", "journal": "PMLR", "ref_id": "b29", "title": "Axiomatic attribution for deep networks", "year": "2017" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Lio; Yoshua Bengio", "journal": "", "ref_id": "b30", "title": "Graph attention networks", "year": "2017" }, { "authors": "N Minh; My T Vu; Thai", "journal": "", "ref_id": "b31", "title": "Pgm-explainer: Probabilistic graphical model explanations for graph neural networks", "year": "2020" }, { "authors": "Xiang Wang; Yingxin Wu; An Zhang; Xiangnan He; Tat Seng; Chua ", "journal": "", "ref_id": "b32", "title": "Causal screening to interpret graph neural networks", "year": "2021" }, { "authors": "Keyulu Xu; Weihua Hu; Jure Leskovec; Stefanie Jegelka", "journal": "", "ref_id": "b33", "title": "How powerful are graph neural networks?", "year": "2018" }, { "authors": "Rex Ying; Dylan Bourgeois; 
Jiaxuan You; Marinka Zitnik; Jure Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Gnnexplainer: Generating explanations for graph neural networks", "year": "2019" }, { "authors": "Jiliang Hao Yuan; Xia Tang; Shuiwang Hu; Ji", "journal": "", "ref_id": "b35", "title": "Xgnn: Towards model-level explanations of graph neural networks", "year": "2020" }, { "authors": "Haiyang Hao Yuan; Jie Yu; Kang Wang; Shuiwang Li; Ji", "journal": "", "ref_id": "b36", "title": "On explainability of graph neural networks via subgraph explorations", "year": "2021" }, { "authors": "Muhan Zhang; Yixin Chen", "journal": "", "ref_id": "b37", "title": "Link prediction based on graph neural networks", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 108, 601.66, 396, 26.44 ], "formula_id": "formula_0", "formula_text": "f (X) = L T • L T -1 • • • • • L 2 • L 1 (X). (1) Let X[t] denotes the input to L t , so X[t + 1] = L t (X[t]) and X[1] = X." }, { "formula_coordinates": [ 3, 439.21, 640.34, 23.6, 9.68 ], "formula_id": "formula_1", "formula_text": "X i [t]." }, { "formula_coordinates": [ 4, 272.94, 149.2, 84.18, 10.31 ], "formula_id": "formula_2", "formula_text": "[t] = X γ [t] + X β [t]" }, { "formula_coordinates": [ 4, 186.91, 174.67, 317.1, 63.61 ], "formula_id": "formula_3", "formula_text": "L D t (X γ [t], X β [t]) =     Γ(X γ [t], X β [t]) X γ [t+1] , B(X γ [t], X β [t]) X β [t+1]     (2) L t (X[t]) = X[t + 1] = X γ [t + 1] + X β [t + 1],(3)" }, { "formula_coordinates": [ 4, 135.44, 242.69, 282.9, 12.2 ], "formula_id": "formula_4", "formula_text": "L D t (•, •) denotes the decomposed version of layer L t . Γ(•, •) and B(•," }, { "formula_coordinates": [ 4, 108, 297.49, 109.83, 10.31 ], "formula_id": "formula_5", "formula_text": "Γ(X γ [T ], X β [T ]) ≈ f (X)." }, { "formula_coordinates": [ 4, 175.69, 464, 256.2, 27.01 ], "formula_id": "formula_6", "formula_text": "X[t + 1] = AX[t] = A X γ [t] + X β [t] = AX γ [t] X γ [t+1] + AX β [t] X β [t+1]" }, { "formula_coordinates": [ 4, 230.71, 678.73, 273.3, 12.33 ], "formula_id": "formula_7", "formula_text": "X[t + 1] = D-1 2 Ã D-1 2 X[t]W + b,(5)" }, { "formula_coordinates": [ 5, 184.5, 98.47, 319.51, 12.33 ], "formula_id": "formula_8", "formula_text": "γ[t] = D-1 2 Ã D-1 2 X γ [t]W , β[t] = D-1 2 Ã D-1 2 X β [t]W,(6)" }, { "formula_coordinates": [ 5, 149.01, 117.56, 355, 23.51 ], "formula_id": "formula_9", "formula_text": "X γ [t + 1] = γ[t] + b • γ[t] γ[t] + β[t] , X β [t + 1] = β[t] + b • β[t] γ[t] + β[t] ,(7)" }, { "formula_coordinates": [ 5, 258.46, 220.74, 245.54, 8.99 ], "formula_id": "formula_10", "formula_text": "X[t + 1] = X[t]Θ + b,(8)" }, { "formula_coordinates": [ 5, 112.98, 264.48, 391.02, 25.06 ], "formula_id": "formula_11", "formula_text": "X γ [t+1] = X γ [t]Θ+b• X γ [t]Θ X γ [t]Θ + X β [t]Θ , X β [t+1] = X β [t]Θ+b• X β [t]Θ X γ [t]Θ + X β [t]Θ . (9)" }, { "formula_coordinates": [ 5, 123.42, 344.31, 380.58, 11.03 ], "formula_id": "formula_12", "formula_text": "X γ [t + 1] = ReLU X γ [t] , X β [t + 1] = ReLU X γ [t] + X β [t] -ReLU X γ [t] .(10)" }, { "formula_coordinates": [ 5, 172.14, 446.71, 327.71, 37.88 ], "formula_id": "formula_13", "formula_text": "α i,j = exp LeakyReLU X i [t]W X j [t]W a k∈Ni∪{i} exp LeakyReLU X i [t]W X k [t]W a , (11" }, { "formula_coordinates": [ 5, 499.85, 460.41, 4.15, 8.64 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 5, 216.79, 562.96, 287.21, 13.68 ], "formula_id": "formula_15", "formula_text": "X i [t] X j [t] = X γ i [t] X γ j [t] + X β i [t] X β j [t].(12)" }, { "formula_coordinates": [ 5, 176.46, 612.4, 323.39, 73.67 ], "formula_id": "formula_16", "formula_text": "X γ [t + 1] = sof tmax X[t] • exp X γ [t] exp X γ [t] + exp X β [t] , X β [t + 1] = sof tmax X[t] • exp X β [t] exp X β [t] + exp X γ [t] . 
(13" }, { "formula_coordinates": [ 5, 499.85, 645.98, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 6, 200.07, 254.9, 299.78, 14.85 ], "formula_id": "formula_18", "formula_text": "φ(V γ ) E C∼RW (NL(V γ )) f D (V γ ∪ C) -f D (C) , (14" }, { "formula_coordinates": [ 6, 499.85, 257.3, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 6, 129.21, 428.02, 374.79, 9.65 ], "formula_id": "formula_20", "formula_text": "s(v) φ {v} ∪ B m -φ (B m ) , r(v) s(v) -E v s(v ) , s.t. v, v ∈ N (B m ),(15)" } ]
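Formulas (6)-(7) and (10) listed above specify how a GCN layer and the ReLU activation are decomposed into a target portion and a background portion. The PyTorch sketch below illustrates one layer of that decomposition; it is a minimal reading of those formulas rather than the released DEGREE code, and the small `eps` added to the bias-splitting denominator is our own numerical-stability assumption.

```python
import torch

def normalized_adj(adj: torch.Tensor) -> torch.Tensor:
    # \hat{A} = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix
    a_tilde = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_tilde * d_inv_sqrt.unsqueeze(0)

def decomposed_gcn_layer(x_gamma, x_beta, norm_adj, weight, bias, eps=1e-12):
    """One GCN layer applied to a (target, background) decomposition of the input.

    Propagates the two portions separately and splits the bias proportionally to
    each portion's contribution, as in formulas (6)-(7); the ReLU split follows (10).
    """
    gamma = norm_adj @ (x_gamma @ weight)
    beta = norm_adj @ (x_beta @ weight)
    share = gamma / (gamma + beta + eps)          # proportional bias split (eps is ours)
    x_gamma_next = gamma + bias * share
    x_beta_next = beta + bias * (1.0 - share)
    # ReLU rule: the target portion keeps relu(gamma); the background portion
    # absorbs the remainder so that the two portions still sum to the full activation.
    g = torch.relu(x_gamma_next)
    b = torch.relu(x_gamma_next + x_beta_next) - g
    return g, b
```

Stacking such layers and reading the target portion of the final logits gives the contribution of the chosen input nodes to the prediction, which is the quantity the agglomeration procedure ranks.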
2023-05-22
[ { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b7", "b12", "b11", "b4", "b13", "b1", "b17", "b15", "b14", "b24", "b27", "b23", "b19", "b19", "b2", "b8", "b20" ], "table_ref": [], "text": "Deep Neural Networks(DNN) have achieve high(often state-of-art) performance in various tasks. However, research about the robustness of DNNs [8] have shown that they are not reliable for outof-distribution inputs. In image classification tasks, we can reduce the classification success rate to a amazingly low level(sometimes even lower than random) by adding carefully crated noise to the input images. The noise added is confined to a certain degree, often quasi-perceptible by human. The existence of adversarial examples is a big threat to the reliability of DNNs. Therefore, it's crutial to expose as many blind spots of DNNs as possible.\nMany methods have been purposed. The most common method is to add small perturbations to the original images based on direct gradient ascent on the loss function(aka iterative/single step methods). Many iterative/single-step optimization methods have been purposed and achieve high success rate. For examples, BIM [13], I-FGSM [12], MI-FGSM [5], PGD [14] have all reached high performance. There are also generator-based methods that train a noise generator against a target model. Results ( [2], [18], [16] )have shown that generator-based methods have strong cross-model transferability. However, most prior works manipulate images in the pixel space, which often create spatially regular perturbation to the images. Though the perturbation is confined to a certain degree and is claimed imperceptible, human can still distinguish the pattern of noise in the adversarial examples. A preturbed image is given below as , with a common constraint l ∞ <= 16.\nRather than noising images directly in the pixel space, some works have explore adding noise beyond traditional methods. AdvGAN++ [15],ATGAN [25] explore the possibility to add noise in the down-sampled space. [28] explicitly search adversarial examples in the latent space created by GAN. [24] purpose to find adversarial examples in the latent space created by Variational Auto Encoder(VAE). However, we argue that these methods are purposed before a large, pre-trained encoder-decoder model exists. Thus the latent space they generated are relatively coarse and the adversarial examples they crafted are not as effective as those crafted in the pixel space. A more recent work [20] train a VAE model to project images into a latent space as part of the Stable Diffusion Model. Results in [20] shows the latent space created by the model is highly semantical and precise. We take advantage of them by utilizing their VAE as our pre-trained encoder-decoder structure. Therefore, we're able to make use of the semantic latent space and create purturbed images. In section 8.2 , we show that our method can achieve a comparable attack rate as those crafted in the pixel space, while our noise being much more imperceptible to human. A comparison is given below as figure 1. However, as shown in [3], preturbed images decoded from latent space are extremely hard to follow the common l p norm constraint. As illustrated in figure 2, the perturbation is almost imperceptible while the l ∞ distance is larger than the normal constraint. Thus a new evaluation metric is needed for the adversarial examples crated from latent space. 
Existing methods includes FID score [9], Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio(PGNR) [21]. In section6.1, we discuss the weakness of those pre-existing evaluation metrics, and purpose our novel metric based on SSIM loss and the fool rate. To the best of our knowledge, our work is the first to systemetically explore the quality of adversarial attack on latent space.\nOriginal Image PGD Attack Latent Attack PGD noise Latent Noise\nAs we are disturbing images in a semantic level, we expect the perturbation to have a high level of transferability. Thus in section8. We summarize our contributions as follows:\n• We give explanations about the reason to noise the images in the semantic latent space.\nBase on a pre-trained VAE from Stable Diffusion Model, we show that adversarial examples crafted from latent space can achieve a comparable fool rate as those crafted in the pixel space, while the perturbation being more imperceptible.\n• We purpose a novel evaluation metric for adversarial examples crafted from latent space.\nTo the best of our knowledge, our method is the first metric specifically designed for adversarial examples crafted from latent space.\n• We investigate the cross-model transferability of adversarial examples crafted from semantic latent space. We also investigate how the choice of loss funtions affects the transferability." }, { "figure_ref": [], "heading": "Original Image", "publication_ref": [], "table_ref": [], "text": "Latent Attack with 𝑙 ∞ = 0.82 PGD attack under 𝑙 ∞ < 0.82 " }, { "figure_ref": [], "heading": "background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Iterative Attack Methods", "publication_ref": [ "b11", "b12", "b4", "b13" ], "table_ref": [], "text": "In order to generate adversarial examples, many methods have been purposed. The most common method is to add small perturbations to the original images based on direct gradient ascent on the loss function(aka gradient-based methods). In the white box setting, many iterative optimization methods have been purposed and achieve high success rate. For examples, BIM [12], I-FGSM [13], MI-FGSM [5], PGD [14] have all reached high performance." }, { "figure_ref": [], "heading": "Variational Auto Encoder and Stable Diffusion Models", "publication_ref": [ "b10", "b19", "b19", "b3", "b23", "b16", "b6", "b27", "b2", "b21", "b2" ], "table_ref": [], "text": "Variational Auto Encoder(VAE) [11] is an encoder-decoder structure generative models for generating images. The encoder encodes image to a latent variable and the decoder decodes it back into pixel space. The sub-space that encoder encodes image into is called the latent space. As our research manipulate latent variables on latent space, the outcome of our experiments will strongly depend on a well-trained VAE that creates a proper latent space. After some pilot runs, we adopted the pre-trained VAE from Stable Diffusion Model [20] as our VAE module. Stable Diffusion Model is Latent Diffusion Model that use a U-Net to recurrently noise and denoise the latent variable image in the latent space. [20] purpose that the latent space created by LDMs eliminates imperceptible details of original images while maining the semantic information. The outstanding achievement of stable diffusion models strongly supports the authors' claim. 
Therefore, we adopt the pre-trained VAE from Stable Diffusion Model to create a strong semantic latent space for our research.\n3 Related Works 3.1 Adversarial Attack in the latent space [4] first purpose to generate adversarial examples by noising the latent variable created by VAE. They tends to learn a constant noise ∆z for all the latent variables. Later, an incremental work [24] trains a generator in the latent space to produce noised latent variables. [17] purpose AVAE, a model based on VAE and GAN [7] to produce adversarial examples. And [28] purposed to search for latent variables in a latent space created by GAN and create semantical change to the image. The forementioned works are in lack of quantitive analysis of the quality of the adversarial examples and the explanation of the reason to noise the latent space.\nNotably, a concurrent work [3] use Latent Diffusion Model to produce noised images, which also uses the same latent space as ours. They purpose to perform 5 steps of denoising process of DDIM [22] on the latent variable z = E(X) before decoding. They tends to maximizing the loss by updating z, which is much similar to ours method. However, we claim that though they take the structure into account by adding self-attention and crossattention loss, the denoising step could still change the semantic information to a human-perceptible extent. An example is given as figure 3. As their works manipulate the semantic meaning of the image on a high level that's visible to human, we argue that the denoising process of DDIM is too strong a perturbation to be used upon the latent variable. Thus we won't compare with their work in the experiment section, even though we reach comparable transferability as theirs.\nFigure 3: The image on the left is the original images, and the image on the right is produced using the open-source code of [3] by their default settings. As illustrated, the denoising process of DDIM has totally change the vegetable in the middle, which is a huge semantic change. The denoising process also purify the watermark on the background, which is also not expected.\nWe admit that our basic framework for crafting adversarial examples in the latent space is not novel and has much similarity with previous work. However, the explanation for the reason to noise the latent space and the evaluation metric we purpose are novel and important contributions of our work. Our experiments also illustrate the high transferability of adversarial examples crafted from latent space, which is a novel finding." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "We denote the target classification model as F . For the VAE we used, we seperately denote the encoder and the decoder as E and D. We denote the latent space as Z sem , our loss function as L In the noising phase, we assume to know every information about F. For a given dataset X, we first encode X into a latent variable z = E(X) ∈ Z sem The noising goal is to maximize:\nz ←-arg max z L(F(D(z )))\nThen the noised latent variable z is decoded back into pixel space as X = D(z ). The adversarial example is then generated." }, { "figure_ref": [], "heading": "Why Latent Space?", "publication_ref": [], "table_ref": [], "text": "Though crafting adversarial examples is a crutial topic in the field of adversarial machine learning, researches about crafting in the latent space is still relatively less explored. 
In this section, we explain the rationale and necessity of crafting adversarial examples in the latent space." }, { "figure_ref": [], "heading": "Comparison with Generator-Based Methods in the Pixel Space", "publication_ref": [ "b24" ], "table_ref": [], "text": "Much research has been done on training a noise generator to produce adversarial examples. The most common adversarial generator consists of a down-sampler, a bottleneck module, and an up-sampler. We denote the down-sampler as D, the up-sampler as U, and the bottleneck module as B, so the generator can be written as G = U • B • D. If we view D and U as an encoder-decoder model, then G is a generator that produces noise in the latent space created by U and D. In fact, ATGAN [25] explicitly proposes to add perturbations in the down-sampled space. Thus, we argue that traditional generator-based methods can be viewed as a special case of our method, where the latent space is created by U and D. However, since U and D are often trained under an l p norm limitation, the latent space they create is less semantic than an l p norm-free latent space but contains more spatial detail." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Making Use of the Feature of Human Recognition System", "publication_ref": [ "b18", "b7" ], "table_ref": [], "text": "The main difference between the latent space and the pixel space is that perturbations made in the latent space cause semantic changes, while those made in the pixel space cause changes with spatial patterns. We argue that, for the same extent of change, the human recognition system is more tolerant of semantic change than of spatial change.\nIt is widely acknowledged that humans recognize images at a semantic level. For example, the original image in figure 4 would be recognized by a human as a cat, a shoe, and a grass ground, along with their interactions with each other. It is shown in [19] that the CLIP model can encode the semantic information of an image into a latent variable with strong semantic content, which partially demonstrates that a feature extractor can extract semantic information from an image as well as a human does. However, neural networks have been shown to have a linear nature [8] in the feature space, while the human recognition system is strongly non-linear. As illustrated in figure 4, a slightly semantically perturbed cat would still be recognized as a cat by a human. Yet it is actually an unnatural being that should never have been seen before, since no cat has such a strange paw and no mouth. We believe the core reason behind this scenario is that humans retain strong robustness to semantically out-of-distribution inputs, while neural networks are easily fooled due to their linear nature.\nHumans are, however, much more sensitive to spatial change. We can easily recognize the spatial pattern of the noise, because to a human the noise is a new semantic object that is independent of the other objects in the image. Meanwhile, neural networks are equally easy to fool with both kinds of change. Thus, it is reasonable for attackers to manipulate images through semantic changes rather than spatial changes, as they are more covert and less perceptible to human beings." }, { "figure_ref": [], "heading": "Similarity-Delta Measure (SDM): A Metric for Evaluating Adversarial Attacks on Latent Space", "publication_ref": [], "table_ref": [], "text": "In practice, we found a lack of evaluation methods for adversarial attacks in the latent space. 
We argue that most of the existing evaluation methods are not suitable for evaluating adversarial attacks on the latent space and give our detailed analysis on existing metrics. After, we purpose a novel metric called SDM to quantitively measure the quality of a latent space adversarial attack. The noise produced by PGD as colorful stripes can be easily seen by human, while the noise produced by our method is semantically more natural. Please zoom in to see the details." }, { "figure_ref": [], "heading": "Analyzing Existing Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "Unlike the perturbation in the pixel space, the perturbation in the latent space hardly follows the l p norm constraint. Some other metrics have been used to evaluate the adversarial examples crafted from latent space. We analyze those metrics and provide reasons for why they are not suitable to evaluate adversarial attacks on latent space." }, { "figure_ref": [], "heading": "Deep Learning Based Metrics", "publication_ref": [ "b8", "b22", "b6", "b10", "b26", "b8" ], "table_ref": [], "text": "The most famous deep learning based metric is Frechet Inception Distance score(FID) [9]. FID is a metric to evaluate the similarity between two set of images based on the feature extracted from the Inception model [23]. The lower the FID score is, the more similar the two set of images are. FID is widely used in GAN [7] and VAE [11] to evaluate the quality of the generated images. However, we claim that FID is not suitable for evaluating adversarial examples. We take a special situation as example, where the adversarial example is crafted against the Inception model, which is also used to extract the feature in FID scoring. In this situation, the adversarial example is attacking the feature extractor itself, thereby affecting the features. Thus, the FID score will not be the actual perception distance between the adversarial example and the original image.\nFrom a broader perspective, for other deep learning based metrics(like lpips [27] ), what they actually evaluate is the distribution similarity [9] between two sets of images. However, the adversarial example is often designed to be perceptually similar to the original image, but is actually out of the original distribution. Thus from the nature of deep learning based metrics, they are unfair for adversarial examples." }, { "figure_ref": [], "heading": "Traditional Metrics", "publication_ref": [ "b25", "b9" ], "table_ref": [], "text": "For traditional metrics, the most commonly used ones are Structural Similarity Index Measure(SSIM ) [26] and Peak Signal-to-Noise Ratio(P SN R) [10]. SSIM is a metric to evaluate the similarity between two images. P SN R is a metric to evaluate the quality of the image. Both of them are widely used in image processing. However, only reporting those losses is not enough, as lower SSIM and higher P SN R could allow more perturbation to the images, which often indicates a better attack rate. Thus a single loss without baseline is meaningless. Meanwhile, an universal baseline for such losses is hard to measure. For example, different datasets could have a different baseline of SSIM and P SN R, which makes cross-dataset comparison unfair. Even for the same dataset, different encoder-decoder model could make the baseline different.\nParticularly we give more detailed introduction about SSIM , as it will be used as our base metric for evaluating adversarial attacks. 
SSIM is defined as:\nSSIM(x, y) = (2µ x µ y + C 1 ) (2σ xy + C 2 ) µ 2 x + µ 2 y + C 1 σ 2 x + σ 2 y + C 2\nwhere µ x and µ y are the mean of x and y respectively, σ 2 x and σ 2 y are the variance of x and y respectively, σ xy is the covariance of x and y, C 1 and C 2 are two variables to stabilize the division with weak denominator. SSIM is a value between 0 and 1, where 1 means the two images are identical.\nIn practice, SSIM is often applied to a sliding window of size w × w on both image, and takes the mean of all the SSIM values as the final SSIM score. We also add a gaussian filter to the sliding window to make the SSIM score more robust to the noise. For implementation details please check appendix .\n6.2 SDM :Similarity-Delta Measure" }, { "figure_ref": [], "heading": "Similarity Functions", "publication_ref": [], "table_ref": [], "text": "Though using SSIM or PSNR as metrics is not adaquate, we still believe that traditional similarity functions are good metrics to evaluate the adversarial examples and can be used for our new metric. We let our chosen similarity function as S. For S we make the following assumptions:\n• S is a function that takes two images as input and outputs a value between 0 and 1, where higher value means the two images are more perceptually similar.\n• S is symmetric, which means S(X, X ) = S(X , X)\n• S(X, X ) = 1 if and only if X = X Many similarity functions satisfy the forementioned properties. Additionally, we assume that stronger adversarial examples lead to lower S. And we want to find a balance between the adversarial examples' strength and perceptional similarity to the original image, while being data-independent and model-independent. Thus we propose a new metric based on SSIM, which is Similarity-Delta Measure(SDM). SDM is defined under a fixed dataset X ∼ D X and an encoder-decoder model M.\nAdditionally, we denote Acc as the accuracy of the adversarial examples crafted from X and S as the S score of the adversarial examples.\nAcc M is the baseline accuracy, defined as :\nAcc M = E X∼D X [F(X, D(E(X)) = labels)]\nS M is the baseline S score on model M, defined as:\nS M = E X∼D X [S(X, D(E(X)))]\nThen we define: The SDM is defined as:\nSDM = log(1 -(1 -γ) * ∆Acc) ∆S + ε\nwhere ε, γ are small numbers to prevent undefined math operations.\nWe set γ = 1e -3 and ε = 1e -7 in our experiments.\nTo evaluate traditional adversarial attacks, we set S M = 1." }, { "figure_ref": [], "heading": "Intuition behind the Form of SDM", "publication_ref": [], "table_ref": [], "text": "As our goal is to let SDM be a metrics that balance the adversarial examples' strength and perceptional similarity to the original image, it's natural to make a trade-off between Acc and S by using a ratio of them. However, simply use SDM = Acc S is not model and data independent, as the absolute value of Acc and S strongly relys on the model and dataset. Thus we use the ratio of the relative value of Acc and S to make SDM model and data independent. The first version of SDM is defined as:\nSDM = ∆Acc ∆S\nHowever, from practice we observe that ∆S does not have a linear relation with ∆Acc. As ∆Acc rises, ∆S rises more and more rapidly, as illustrated in figure 6.3.\nIt's clear that the relation between ∆Acc and ∆S is not linear. Thus we needs to find a proper function f to make the relation between f (∆Acc) and ∆S linear(or constant). Intuitionally, the attack rate is harder to increase when it is closer to 1. Thus we must pay a greater decrease in S as the \"price\". 
We can formulate this by a differential equation:\nd(∆Acc) d(∆S) = f (1 -∆Acc)\nIn this paper, we assume f to be an linear function, where means f (x) = Kx. Thus we can solve the differential equation and get:\n∆S = 1 K log(1 -∆Acc)\nWe notice that the value of K is important, as it determines the slope of the curve of ∆Acc and ∆S, which represents the strength of an adversarial attacks. If K 1 > K 2 , ∆Acc 1 = ∆Acc 2 , we have ∆S 1 < ∆S 2 . Thus a larger K indicates that the adversarial attack is more semantically similar to the original image than other attacks under the same attack rate, which reflects the strength of the adversarial attack. Therefore, we set SDM = K. Then we get:\nSDM = K = log(1 -∆Acc) ∆S\nTo avoid the denominator to be zero, we add a small number ε to the denominator. To avoid the numerator to be undefined when ∆Acc = 1, we add a small number γ to the numerator. Thus we get the final form of SDM :\nSDM = log(1 -(1 -γ)∆Acc) ∆S + ε\nIn practice, we found SDM of this form is more stable to various attack rate, further analysis is given in section 6.5." }, { "figure_ref": [], "heading": "Model-Independency and Dataset-Independency of SDM", "publication_ref": [], "table_ref": [], "text": "We claim that SDM is strongly model-independent and dataset-independent. Though the absolute value of Acc and S can differ dramatically, by comparing with the baseline of the given model and datasets, we can still get a fair cross-model and cross-dataset comparison. Below we give detailed analysis about the validity of the baseline chosen.\nWe claim that a well-trained encoder-decoder structure should satisfy the following properties:\n(1) The encoder-decoder is trained to minimize the reconstruction loss L recon ,which should be equivalent to maximize some similarity function S . We assume S also satisfy the forementioned properties of similarity functions. The training goal of the encoder-decoder can be formulated as:\narg min E,D L recon = arg max E,D E X∼D X [S (X, D(E(X)))] = arg max E,D E X∼D X [S (X, X )]\n(2) If the encoder-decoder structure is perfect(with zero loss on any data), then the S score of the original image and the reconstructed image should be 1(which means they are identical). Meanwhile, any other image should have a S score less than 1. That can be formulated as:\nS(X, X ) = 1 if X = D(E(X)) S(X , X ) < 1 otherwise\n(3) However, it's impossible for an encoder-decoder model to be perfect while maintaining a highly semantical latent space. But we claim that for a well-trained encoder-decoder model, the maximum S score could only be achieved by the image that's decoded from the encoded image of itself. That can be formulated as:\nAssume max\nX ∈D(Zsem) (S(X, X )) = C ≤ 1 Then S(X, X ) = C if and only if X = D(E(X))\nWe give a intuition of why this property holds. Assume:\n∃z ∈ Z sem , z = E(X), S(D(z), X) > S(D(E(X)), X)\nDue to the property (1) of similarity functions, S (D(z), X) > S (D(E(X)), X) should also holds, which contradicts with property (1). Thus such z could not exist.\nHowever, it's indeed possible to have z ∈ Z sem with S(D(z), X) = S(D(E(X)), X), but in real world it's nearly impossible to find such z that happens to be exactly identical.\nGiven the forementioned properties of a well-trained encoder-decoder structure, we can now validate S M , as any perturbation on the original image should lead to a lower S score. 
Thus:\nM -S + ε > 0\nAs for Acc, if the accuracy of the preturbed image is higher than the baseline, then the perturbation is not adversarial. Thus for a valid adversarial attack, we have:\nAcc M -Acc ≥ 0\nThus we have some properties of the SDM score:\n• SDM ≥ 0 for a well-trained encoder-decoder model and a valid adversarial attack on latent space.\n• SDM is larger when adversarial are stronger and more human imperceptible.\n• SDM is highly model-independent and dataset-independent." }, { "figure_ref": [ "fig_7", "fig_8" ], "heading": "Choice of Similarity Function:SSIM", "publication_ref": [], "table_ref": [], "text": "Though SDM is highly model-independent and dataset-independent, it's still sensitive to the choice of similarity function. In figure 6.3 a strong correlation between the accuracy and SSIM is shown. Thus we choose SSIM as the similarity function in our experiments. We also show that use SSIM as S brings a high level of stability to the SDM metric.\nTo show how SSIM evaluates the image, we craft two adversarial examples from a same image in the latent space, but with different extent of perturbations. The first one have a SSIM score of 0.3627, the second one have a SSIM score of 0.5652. As illustrated in figure 8, we can clearly see the difference between purturbed images grows as the SSIM score lowers.\nSSIM:1 SSIM:0.5632 SSIM:0.3867 As illustrated in figure 9, we show the SDM score of a perturbed image with different target models, but with same encoder-decoder structure and dataset. We can see that the SDM score is highly stable in a region of ∆Acc ∈ (0.6, 0.995)(denoted as L), which is marked in green. SDM score are unstable with smaller or greater ∆Acc, which is marked as red and blue respectively. However, we argue that most adversarial attempt (as shown in 10) should falls in ∆Acc ∈ L. Too low or too high ∆Acc means the perturbation is either too weak or over-fits the datasets. Thus we argue that SDM is a stable metric for adversarial attacks. " }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_10" ], "heading": "Framework", "publication_ref": [], "table_ref": [], "text": "We adopt the pre-trained VAE from Stable Diffusion Model as our VAE module. For a dataset X, we encode it into z = E(X) ∈ Z sem , then decode it back into pixel space as X = D(z). We then feed X into the target model and calculate the loss function L, backpropagate the loss to the latent space and update the latent variable z by gradient ascent. We repeat this process for T step with a learning rate lr. The algorithm for the noising phase is shown in Algorithm 1. The sketch of our framework is illustrated in figure 11. \nAlgorithm 1 Noising Phase Input: X, F, E, D, L, T , lr Output: X z ← E(X) for t = 1 to T do X ← D(z) loss ← L(F(X )) z ← z + lr • ∇ z loss end for z ← z X ← D(z )" }, { "figure_ref": [], "heading": "Loss Function and Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "We use the most common loss function for adversarial attack, the cross entropy loss(denoted as L ce ).\nAs for evaluation metrics, we use our SDM metric to evaluate the quality of our adversarial attack. We also apply SDM to traditional attacking methods by setting S M = 1 and compare them with our method.\n8 Experiments" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "We use the VAE from Stable Diffusion Model v2.0. 
As for dataset, we choose the validation set of ImageNet-1k. Due to our limited time and resources, in practice we randomly choose 1000 images from the validation set as our dataset. We choose several classification models as our target model including FocalNet, ConvNext, ResNet101 and ViT_B. For each image, we first encode it into the latent space, then we do one forward, one backpropagation and do gradient ascent on the latent variable with a fixed lr. For each image we run 30 iterations of gradient ascent, as we find that the attack rate nearly converges after 30 epochs. For all PGD attack we use in the experiments, we set the number of iterations to 50, as we found that the loss of PGD attack converges after 50 iterations.\nWe set lr = 5e -3 for every attack methods." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Attacking The Target Models", "publication_ref": [ "b0", "b5", "b13", "b7" ], "table_ref": [], "text": "As found in [1], the family of ViT [6] differs from the traditional CNN architectures when facing adversarial attacks. So we choose models from both family and test their performance.\nWe also apply traditional iteration/single step methods to attack the models and test their performance. We choose PGD [14],FGSM [8] and test their performance with our method, and proves that we can reach comparable performance with traditional methods. We set γ = 0.999 and ε = 1e -7 in our experiments.\nTo quantify our adversarial examples, we use our SDM metrics for evaluation. We report our top-1 accuracy and the corresponding SDM score if our method and compare it to the traditional methods, as shown in table 8.2.2 and 2.\nThough our method has only achieve the best performance in terms of top-1 accuracy in the white-box setting when the target model is ViT_B, we achieve the best SDM score by about 10x compared to the traditional methods. This shows that our method is a stronger attack to the target model. However, it is not fair to compare the SDM score of our method with the traditional methods, as the traditional method are not designed to maintain a high S like the semantic space does. But we can still see that the huge gap between the SDM score of our method and the traditional method, which to some extent shows the superiority of our method." }, { "figure_ref": [], "heading": "Cross-Model Transferability", "publication_ref": [], "table_ref": [], "text": "As the perturbation added on images is highly semantical, we expect the perturbation to be more transferable. We test our perturbation on different models but in the same dataset. In this paper we purpose a basic iteration-based framework to add perturbation on latent variables. However, we believe that more advanced update technique for latent variables can be explored. Future researches may works on more efficient and achieving ways to update latent variables, or try generator-based methods on latent space. We also show that the performance of attacks in latent space strongly depends on the quality of the latent space, which is determined by the encoder-decoder structure. Therefore, future researches may explore new encoder-decoder structure to improve the attack rate. As for our evaluation metrics SDM , it still faces the problem of sudden decay when attack rate is to high. Thus, future works may modify and improve the SDM metric to enhance the stability." 
}, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we explain the reasonability and neccessity to add perturbation in the semantic latent space. We also propose a basic iteration-based framework to add perturbation on latent variables on a highly semantical latent space created by a pre-trained VAE module of Stable Diffusion Model. As adversarial examples created in the latent space is hard to follow the l p norm constraint, we purpose a novel metric named SDM to measure the quality of adversarial examples. To the best of our knowledge, SDM is the first metric that is specifically designed to evaluate adversarial attacks in latent space. We also show that SDM is highly model-independent and dataset-independent, while maintaining a high level of stability. We also test the cross-model transferability of perturbations crafted in latent space and proves their comparable performance with traditional methods. We believe that our SDM metric can help future researches to evaluate the strength of latent spaces attack with a quantitive measure, thereby promoting the researches on latent space attack. Most importantly, we give an illustration of the potential of adversarial attacks in the semantic latent space, and we believe that adversarial attacks in such highly semantical space can be a promising research direction in the future." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Thanks to all of you! I'll fix the appendix later. And the acknowledgements if available :D." } ]
Adversarial attacks against Deep Neural Networks (DNNs) have been a crucial topic ever since [8] revealed the vulnerability of DNNs. However, most prior works craft adversarial examples in the pixel space, following the l p norm constraint. In this paper, we give an intuitive explanation of why crafting adversarial examples in the latent space is equally efficient and important. We propose a framework for crafting adversarial examples in a semantic latent space based on a pre-trained Variational Auto Encoder from the state-of-the-art Stable Diffusion Model [20]. We also show that adversarial examples crafted in the latent space can achieve a high fool rate. However, examples crafted in the latent space are often hard to evaluate, as they do not follow a certain l p norm constraint, which is a big challenge for existing research. To efficiently and accurately evaluate adversarial examples crafted in the latent space, we propose a novel evaluation metric based on SSIM [26] loss and fool rate. Additionally, we explain why FID [9] is not suitable for measuring such adversarial examples. To the best of our knowledge, it is the first evaluation metric specifically designed to evaluate the quality of an adversarial attack in latent space. We also investigate the transferability of adversarial examples crafted in the latent space and show that they have superiority over adversarial examples crafted in the pixel space.
Latent Magic: An Investigation into Adversarial Examples Crafted in the Semantic Latent Space
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison between adversarial examples crafted in pixel space and latent space. Under latent attack, the perturbation is more covert, and the noise is highly semantic. PGD attack is under a noise budget of l ∞ < 16", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "2.1 we investigate the cross-model transferability of adversarial examples crafted from semantic latent space. The results show that adversarial examples crafted from latent space have a high level of transferability, outperforming the transferability of adversarial examples crafted by PGD and FGSM.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: A adversarial example crafted from latent space. The perturbation is almost imperceptible, comparing to a normal PGD attack with l ∞ = 0.82", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: An illustration of the robustness of Human Recognition System to semantically out-of-box distribution. The PGD attack is under the l ∞ norm constraint with l ∞ < 16. The noise produced by PGD as colorful stripes can be easily seen by human, while the noise produced by our method is semantically more natural. Please zoom in to see the details.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :Figure 6 :56Figure 5: target model is ConvNext_Base.", "figure_data": "", "figure_id": "fig_5", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The relation between ∆Acc and ∆S", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Adversarial examples with different SSIM scores. When SSIM score is low, the perturbation is more human imperceptible.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: SDM score with unstable region marked in red and blue respectively.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: stable SDM score without sudden decay under different target models.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "FrozenFigure 11 :11Figure 11: The framework of our method.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "The results are shown in table 8.2.2.For the absolute attack rate, we can see PGD attack still outperforms others. However, our method achieve a very close performance to PGD attack, while being much transferable. Our method reaches the best performance in both the average attack rate and avarage ∆Acc except for the case when our training target model is ViT_B. However, we have the best attack rate and the best ∆Acc in this case. The results indicate our method has the best performance in transferability comparing to the traditional method. As expected, perturbation crafted in semantic latent space is much more transferable than regular pixel space-based attacks. 
Transferability comparison on classification with different models. Here we report the top-1 accuracy and average ∆Acc, average accuracy of four target models. The higher the ∆Acc the better. For the white-box situation where the surrogate model is the same as the target model, we set the background to be gray. The best results are bolded. All the traditional iteration/single step method are within the perturbation budget of l ∞ < 10.", "figure_data": "ConvNext_baseResNet101FocalNet_baseViT_BAverage ∆AccAverage AccClean0.83700.83400.84700.7410N/A0.8148PGD(ConvNext)0.01100.55700.40700.63500.49460.5397FGSM(ConvNext)0.39800.60300.51600.61400.34090.5325Ours(ConvNext)0.02000.39000.25900.54800.61580.3042PGD(FocalNet)0.45880.61500.01900.63100.46190.4309FGSM(FocalNet)0.55900.63800.42200.62600.30600.5613Ours(FocalNet)0.27900.41900.03000.53100.60310.3148PGD(ViT_B)0.76300.72400.76600.00300.32790.5640FGSM(ViT_B)0.63500.61100.63500.06300.35600.4860Ours(ViT_B)0.66900.66700.67300.00200.40090.5026", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "We report the SDM score with the highest attack rate of our method and traditional methods on different models. The higher the SDM score the better. The top 3 SDM score is bolded.", "figure_data": "PGD(ConvNext)Ours(ConvNext)PGD(FocalNet)Ours(FocalNet)PGD(ViT_B)Ours(ViT_B)SDM score22.48153.814.78149.821.68225.39 Future Work", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Boyang Zheng; Shanghai Jiao Tong
[ { "authors": "Ahmed Aldahdooh; Wassim Hamidouche; Olivier Deforges", "journal": "", "ref_id": "b0", "title": "Reveal of vision transformers robustness against adversarial attacks", "year": "2021" }, { "authors": "Nicholas Carlini; David Wagner", "journal": "", "ref_id": "b1", "title": "Towards evaluating the robustness of neural networks", "year": "2017" }, { "authors": "Jianqi Chen; Hao Chen; Keyan Chen; Yilan Zhang; Zhengxia Zou; Zhenwei Shi", "journal": "", "ref_id": "b2", "title": "Diffusion models for imperceptible and transferable adversarial attack", "year": "2023" }, { "authors": "Antonia Creswell; Anil A Bharath; Biswa Sengupta", "journal": "", "ref_id": "b3", "title": "Latentpoison -adversarial attacks on the latent space", "year": "2017" }, { "authors": "Yinpeng Dong; Fangzhou Liao; Tianyu Pang; Hang Su; Jun Zhu; Xiaolin Hu; Jianguo Li", "journal": "", "ref_id": "b4", "title": "Boosting adversarial attacks with momentum", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b5", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Ian J Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b6", "title": "Generative adversarial networks", "year": "2014" }, { "authors": "Ian J Goodfellow; Jonathon Shlens; Christian Szegedy", "journal": "", "ref_id": "b7", "title": "Explaining and harnessing adversarial examples", "year": "2015" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "", "ref_id": "b8", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2018" }, { "authors": " Instruments", "journal": "", "ref_id": "b9", "title": "Peak signal-to-noise ratio as an image quality metric", "year": "2013" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b10", "title": "Auto-encoding variational bayes", "year": "2022" }, { "authors": "Alexey Kurakin; Ian Goodfellow; Samy Bengio", "journal": "", "ref_id": "b11", "title": "Adversarial examples in the physical world", "year": "2017" }, { "authors": "Alexey Kurakin; Ian Goodfellow; Samy Bengio", "journal": "", "ref_id": "b12", "title": "Adversarial machine learning at scale", "year": "2017" }, { "authors": "Aleksander Madry; Aleksandar Makelov; Ludwig Schmidt; Dimitris Tsipras; Adrian Vladu", "journal": "", "ref_id": "b13", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2019" }, { "authors": "Puneet Mangla; Surgan Jandial; Sakshi Varshney; Vineeth N Balasubramanian", "journal": "", "ref_id": "b14", "title": "Advgan++ : Harnessing latent layers for adversary generation", "year": "2019" }, { "authors": "Muzammal Naseer; Salman H Khan; Harris Khan; Fahad Shahbaz Khan; Fatih Porikli", "journal": "", "ref_id": "b15", "title": "Cross-domain transferability of adversarial perturbations", "year": "2019" }, { "authors": "Antoine Plumerault; Hervé Le Borgne; Céline Hudelot", "journal": "", "ref_id": "b16", "title": "Avae: Adversarial variational auto encoder", "year": "2020" }, { "authors": "Omid Poursaeed; Isay Katsman; Bicheng Gao; Serge Belongie", "journal": "", "ref_id": "b17", "title": "Generative 
adversarial perturbations", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b18", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b19", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nitish Shukla; Sudipta Banerjee", "journal": "", "ref_id": "b20", "title": "Generating adversarial attacks in the latent space", "year": "2023" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b21", "title": "Denoising diffusion implicit models", "year": "2022" }, { "authors": "Christian Szegedy; Wei Liu; Yangqing Jia; Pierre Sermanet; Scott Reed; Dragomir Anguelov; Dumitru Erhan; Vincent Vanhoucke; Andrew Rabinovich", "journal": "", "ref_id": "b22", "title": "Going deeper with convolutions", "year": "2014" }, { "authors": "Ujjwal Upadhyay; Prerana Mukherjee", "journal": "IEEE Signal Processing Letters", "ref_id": "b23", "title": "Generating out of distribution adversarial attack using latent space poisoning", "year": "2021" }, { "authors": "Xiaosen Wang; Kun He; Chuanbiao Song; Liwei Wang; John E Hopcroft", "journal": "", "ref_id": "b24", "title": "At-gan: An adversarial generator model for non-constrained adversarial examples", "year": "2020" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R Sheikh", "journal": "CRC Press", "ref_id": "b25", "title": "Structural similarity based image quality assessment", "year": "2017" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b26", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Zhengli Zhao; Dheeru Dua; Sameer Singh", "journal": "", "ref_id": "b27", "title": "Generating natural adversarial examples", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 177.21, 298.19, 198.15, 78.58 ], "formula_id": "formula_0", "formula_text": "Original Image PGD Attack Latent Attack PGD noise Latent Noise" }, { "formula_coordinates": [ 4, 246.29, 707.91, 119.41, 17.43 ], "formula_id": "formula_1", "formula_text": "z ←-arg max z L(F(D(z )))" }, { "formula_coordinates": [ 7, 206.62, 148.95, 192.49, 25.4 ], "formula_id": "formula_2", "formula_text": "SSIM(x, y) = (2µ x µ y + C 1 ) (2σ xy + C 2 ) µ 2 x + µ 2 y + C 1 σ 2 x + σ 2 y + C 2" }, { "formula_coordinates": [ 7, 213.29, 550.58, 185.43, 17.29 ], "formula_id": "formula_3", "formula_text": "Acc M = E X∼D X [F(X, D(E(X)) = labels)]" }, { "formula_coordinates": [ 7, 238.24, 599.22, 135.53, 17.29 ], "formula_id": "formula_4", "formula_text": "S M = E X∼D X [S(X, D(E(X)))]" }, { "formula_coordinates": [ 8, 234.08, 278.18, 142.64, 23.78 ], "formula_id": "formula_5", "formula_text": "SDM = log(1 -(1 -γ) * ∆Acc) ∆S + ε" }, { "formula_coordinates": [ 8, 272.98, 454.64, 64.84, 23.54 ], "formula_id": "formula_6", "formula_text": "SDM = ∆Acc ∆S" }, { "formula_coordinates": [ 8, 253.07, 570.02, 107.06, 23.78 ], "formula_id": "formula_7", "formula_text": "d(∆Acc) d(∆S) = f (1 -∆Acc)" }, { "formula_coordinates": [ 8, 254.12, 638.39, 103.76, 23.78 ], "formula_id": "formula_8", "formula_text": "∆S = 1 K log(1 -∆Acc)" }, { "formula_coordinates": [ 9, 242.87, 83.15, 125.07, 23.79 ], "formula_id": "formula_9", "formula_text": "SDM = K = log(1 -∆Acc) ∆S" }, { "formula_coordinates": [ 9, 238.79, 161.5, 133.23, 23.79 ], "formula_id": "formula_10", "formula_text": "SDM = log(1 -(1 -γ)∆Acc) ∆S + ε" }, { "formula_coordinates": [ 9, 197.59, 362.55, 216.82, 40.12 ], "formula_id": "formula_11", "formula_text": "arg min E,D L recon = arg max E,D E X∼D X [S (X, D(E(X)))] = arg max E,D E X∼D X [S (X, X )]" }, { "formula_coordinates": [ 9, 237.55, 462.47, 143.18, 30.38 ], "formula_id": "formula_12", "formula_text": "S(X, X ) = 1 if X = D(E(X)) S(X , X ) < 1 otherwise" }, { "formula_coordinates": [ 9, 204.08, 555.69, 203.83, 38.89 ], "formula_id": "formula_13", "formula_text": "X ∈D(Zsem) (S(X, X )) = C ≤ 1 Then S(X, X ) = C if and only if X = D(E(X))" }, { "formula_coordinates": [ 9, 194.03, 625.96, 223.95, 17.29 ], "formula_id": "formula_14", "formula_text": "∃z ∈ Z sem , z = E(X), S(D(z), X) > S(D(E(X)), X)" }, { "formula_coordinates": [ 10, 273, 85.86, 66.01, 17.29 ], "formula_id": "formula_15", "formula_text": "M -S + ε > 0" }, { "formula_coordinates": [ 10, 270.26, 145.87, 71.47, 17.29 ], "formula_id": "formula_16", "formula_text": "Acc M -Acc ≥ 0" }, { "formula_coordinates": [ 11, 107.64, 582.9, 115.4, 128.08 ], "formula_id": "formula_17", "formula_text": "Algorithm 1 Noising Phase Input: X, F, E, D, L, T , lr Output: X z ← E(X) for t = 1 to T do X ← D(z) loss ← L(F(X )) z ← z + lr • ∇ z loss end for z ← z X ← D(z )" } ]
10.48550/ARXIV.2201.07198
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b30", "b27", "b16", "b5", "b7" ], "table_ref": [], "text": "Automatic text simplification (ATS) is the task of simplifying a text's lexical and structural complexity while preserving its original meaning. Easyto-read texts can help people with learning deficiencies or non-native speakers gain access to texts that they could not understand otherwise. On the one hand, ATS can be used to create assisting tools for people with reading disabilities or professional translators (Suárez-Figueroa et al., 2022). On the other hand, ATS can be applied as a preprocessing step for other natural language processing tasks such as machine translation or information retrieval to improve their performances (Štajner and Popovic, 2016), making it an important field of study.\nIn German, there exist multiple levels of simplified language. In contrast to the underspecified simple language, the so-called Leichte Sprache (Easy Language) enforces a very strong simplification level and follows predefined structural rules (Netzwerk Leichte Sprache, 2013). These rules include conveying only one message per sentence (structural simplification), restriction to common words (lexical simplification), and usage of simplified grammar (syntactical simplification). This simplified grammar breaks with standard German grammar, for example, by using dative instead of genitive to indicate possession. We consider Easy Language as a standalone language style. Therefore, we refer to Easy Language data as monolingual data in the further course of the paper, even though it is German as well.\nThis work shows the benefits of fine-tuning language models for specific styles and characteristics. We publish and discuss a collection of causal language models fine-tuned for German Easy Language. As shown in previous work (Gururangan et al., 2020), pre-training language models for specific domains can benefit the performances of downstream tasks in the respective domain. We extend this analysis to the language style of Easy Language. In addition, the fine-tuned models can be used to generate text with the specificities of Easy Language, for example, in data augmentation applications. Finally, we present how these models can serve as plug-in-decoders in BARTlike architectures (Lewis et al., 2020) to speed up and improve the training on sequence-to-sequence (seq2seq) tasks. Therefore, our contributions are the following:\n• We publish five German Easy Language causal language models and extensively evaluate their language style adaptions.\n• We assess the models' performance on the two downstream tasks of text complexity prediction and text simplification.\n• We suggest an ATS training process that exploits our pre-trained language models. This process reduces the number of trained param-eters by over 90% while preserving state-ofthe-art performance.\nWith the reduction of trainable parameters, less aligned data is needed to train an ATS system. Especially for languages other than English, where aligned data is sparse, pre-trained causal language models can improve ATS performance. We publish our code and results for further research and application1 ." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b18", "b21", "b9", "b17", "b8", "b31", "b11" ], "table_ref": [], "text": "Causal language models can complete text based on a prompt. 
In contrast to masked language models, where the models know about the context before and after a specific token, these causal language models rely only on the input and the previously outputted tokens. Therefore, they are called autoregressive models. The Generative Pre-trained Transformer (GPT) (Radford et al., 2019) is a prominent example of such an autoregressive language model. It was trained on a collection of web data and, thus, outputs text for general purposes. Previous work has fine-tuned GPT for multiple domains and tasks, such as the task of quest generation in games (?) or the medical domain (Schneider et al., 2021).\nIn addition to domain adaption, GPT was tailored to specific text styles and characteristics. These style transfer approaches include fine-tuning for poem generation (Liao et al., 2019) or the reduction of non-normative clauses (Peng et al., 2020). Li et al. (2022) trained a GPT model to mimic the language of people with dementia. By calculating the perplexities of texts with the fine-tuned and original version, they could distinguish samples from healthy and diseased people. Sun and Wan (2022) adapted a language model for simple language by only masking easy-tounderstand words in training. However, this model is a masked language model that can only fill in blanks and not generate text from scratch. Most similar to our work is the TransformerLM by Maruyama and Yamamoto (2019) trained for Japanese text simplification. The authors used a parallel corpus to directly fine-tune a GPT model for simplification. In contrast, our models are finetuned on monolingual Easy Language data. Therefore, they do not require alignments and can be used for a broader range of tasks." }, { "figure_ref": [], "heading": "German Text simplification", "publication_ref": [ "b32", "b20", "b25", "b19" ], "table_ref": [], "text": "In contrast to the English language, automatic text simplification in German has seen little research. The first system for Easy Language was proposed by Suter et al. (2016) and consisted of a collection of hand-crafted rules, including sentence splitting and paraphrasing. Säuberli et al. (2020) published the first neural simplification approach based on the transformer architecture, together with an aligned corpus. They discussed multiple data augmentation strategies, but their results lacked fluency and content preservation. Based on an extended version of this dataset, Spring et al. (2021) built a controllable simplification system that can output different simplification levels based on the Common European Framework of References for Languages (CEFR), but not specifically Easy Language. Finally, Rios et al. (2021) proposed a modified mBART architecture for document-level simplification. In our paper, we adopted their architecture to evaluate our language models on the downstream task of ATS." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b15" ], "table_ref": [], "text": "Several sources are available in Easy Language; however, they mostly encompass news websites, and only a few are aligned with articles in standard German. In the following sections, we detail the information on the data used in our training, including the Easy Language monolingual corpus utilized for fine-tuning German language models and the parallel corpus for the downstream task of text simplification. 
The dataset utilized for the downstream task of text complexity prediction is publicly available as a part of the GermEval 2022 shared task (Mohtaj et al., 2022) (refer to Subsection 5.4). We published scrapers to recreate our sources for the use of the academic community2 . We also provide an overview of available monolingual and parallel data sources for simplified German beyond our training data in Appendix A." }, { "figure_ref": [], "heading": "Monolingual corpus", "publication_ref": [ "b33", "b24", "b33" ], "table_ref": [], "text": "An overview of the available monolingual data can be found in Table 1. The publicly available Easy Language datasets are very limited: The Simple German corpus published by Toborek et al. (2022) contains texts on health and medication, public administration, politics, information texts for disabled people, and news articles. The second publicly available resource is a small corpus published by Siegel et al. (2019). It contains election programs, excerpts from the Bible, children's stories, and Red Cross documents.\nKurier, InfoEasy, and NDR are public broadcasting services in Austria, Switzerland, and northern Germany, respectively, and have specific columns in Easy Language. In addition, Hurraki and Lebenshilfe offer online dictionaries in Easy Language, while Einfachstars contains news articles about celebrities. These three data sources diversify our covered domains and styles of writing. More details about the data sources can be found in Table 8 Toborek et al. (2022) 28,356 misc." }, { "figure_ref": [], "heading": "Total 544,467", "publication_ref": [], "table_ref": [], "text": "Table 1: Overview of the monolingual data used for language model fine-tuning." }, { "figure_ref": [], "heading": "Parallel corpus", "publication_ref": [ "b19" ], "table_ref": [], "text": "For training the text simplification model, we used the publicly available 20 Minuten dataset 3 . The dataset consists of full articles paired with shortened, simplified summaries from the Swiss news magazine 20 Minuten. It comprises 17,905 article pairs in the training dataset and 200 pairs in the validation and test set each (Rios et al., 2021). The dataset's compression ratio (the reduction in the word count of simplified summaries) was estimated at 11%." }, { "figure_ref": [], "heading": "Preprocessing pipeline", "publication_ref": [], "table_ref": [], "text": "Analyzing the outputs of publicly available language models in standard German, we noticed that in many cases, especially for the news headline-like 3 https://github.com/ZurichNLP/20Minuten input, the output contained noise, such as HTML tags or URLs. For this reason, coupled with the fact that we obtained data from multiple sources using various formats, we built a shared preprocessing pipeline to standardize the input for the fine-tuning of the language models as well as the simplified parts in the aligned dataset. Our pipeline removed redundant tags and characters. Some Easy Language texts use bullet points to break down sentences. Since most of the data did not follow this guideline, we converted the existing bullet points into comma-separated phrases. Another feature of Easy Language is the hyphenation of compound nouns. We compiled a list of hyphenated nouns in the monolingual dataset and used it to replace equivalent non-hyphenated compound nouns." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Our approach is divided into two parts. 
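Before detailing the two parts, here is a minimal, illustrative sketch of the kind of cleanup steps in the preprocessing pipeline described above (tag and URL removal, bullet-point flattening, and dictionary-based hyphenation of compound nouns). The regular expressions and the `hyphen_lexicon` entry are assumptions for illustration, not the authors' released pipeline.

```python
import re

# Illustrative lexicon mapping non-hyphenated compounds to their hyphenated
# Easy Language form; in the paper this list is compiled from the corpus itself.
hyphen_lexicon = {"Bundestagswahl": "Bundestags-Wahl"}

def clean_text(text: str) -> str:
    text = re.sub(r"<[^>]+>", " ", text)       # drop leftover HTML tags
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    return re.sub(r"\s+", " ", text).strip()   # normalize whitespace

def flatten_bullets(lines):
    """Convert consecutive bullet-point lines into one comma-separated phrase."""
    bullets = [l.lstrip("•-* ").strip() for l in lines if l.strip()]
    return ", ".join(bullets)

def hyphenate_compounds(text: str) -> str:
    for plain, hyphenated in hyphen_lexicon.items():
        text = text.replace(plain, hyphenated)
    return text

sample = "<p>Siehe https://example.org</p> Die Bundestagswahl ist wichtig."
print(hyphenate_compounds(clean_text(sample)))
```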
First, we fine-tuned generative language models for German Easy Language. Then, we used these models as plug-in decoders in a BART-based simplification task." }, { "figure_ref": [], "heading": "Fine-tuning language models", "publication_ref": [ "b37", "b28" ], "table_ref": [ "tab_2" ], "text": "We selected five different pre-trained GPT-based models from Huggingface (Wolf et al., 2020) as the base for our language models, four German models, and one multilingual model. As shown in Table 2, the models differ in their original training data, initialization, and size. All German models use an embedding size of 1024, while mGPT has a size of 2048. To fine-tune the models, we used a NVIDIA A100 GPU. We trained for one epoch, with a learning rate of 1e -4 , a weight decay of 0.01, and a batch size of eight together with a gradient accumulation of four. However, due to the large model size, we had to decrease the batch size to one for mGPT. The dropout parameters for the embedding, the attention mechanism, and the fully connected layers were set to 0.1 each. Su et al. (2022) proposed a new learning objective for generative language models, the contrastive loss. This loss adds a similarity regularization to the cross entropy loss to enforce discriminative token representations. We used this loss function together with an AdamW optimizer for our finetuning. " }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Text simplification", "publication_ref": [ "b10", "b19", "b3", "b4", "b28" ], "table_ref": [], "text": "The simplification task can be considered as a translation-like seq2seq problem. Thus, we used an encoder-decoder architecture based on mBART's architecture (Liu et al., 2020). It consists of a BERT-like encoder and a GPT-like decoder. Additionally, mBART was pre-trained on multilingual data (including German) on a denoising objective and forms the current baseline for transformerbased German ATS (Rios et al., 2021). The baseline's mBART-encoder was modified to use sliding attention to be applied to article inputs. Thus, it was possible to use long input sequences efficiently. We adapted this architecture and replaced the mBARTdecoder with our fine-tuned GPT models. For the target text, we used the same preprocessing used for fine-tuning the decoder models. As our language models already output text in the desired style, no further training of the decoder was necessary. Therefore, we only trained the encoderdecoder cross attention to align the encoding of the complex articles with our language models. This was proven successful for machine translation with pre-trained language models by Gheini et al. (2021).\nTraining only the cross attention reduced the number of parameters to be updated, making the training of the simplification more efficient. In addition, the language models were not updated, and thus, we avoided catastrophic forgetting (Goodfellow et al., 2013) of their German language comprehension. We trained with the same hyperparameters as the baseline, except we set label smoothing to zero and added a contrastive part to the loss function (Su et al., 2022). We trained on a single NVIDIA TITAN X. Similar to the baseline, the training converged after 3 to 4 days according to validation loss, which means training for about 20 epochs. Due to hardware limitations, we trained with a batch size of one and a gradient accumulation of 32." 
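To make the parameter-efficient setup above concrete, the following is a minimal sketch of freezing everything except the encoder-decoder cross-attention in a Huggingface mBART-style model. It assumes the usual parameter naming convention in which cross-attention modules contain "encoder_attn"; it is not the authors' exact code, and swapping in one of the fine-tuned GPT decoders (as done in this work) is omitted here.

```python
from transformers import MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

# Freeze all parameters, then re-enable only the encoder-decoder cross-attention
# (this also keeps the associated layer norms trainable).
for name, param in model.named_parameters():
    param.requires_grad = "encoder_attn" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable / total:.1%} of {total} parameters")
```

Training only these modules is what allows the decoder to keep its Easy Language behavior unchanged while the encoder representations are aligned to it.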
}, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "This section describes four experiments to compare our fine-tuned (FT) models with their original (O) versions. First, we measured the models' perplexities on easy and normal texts and analyzed the readability of their outputs. In addition, the models were evaluated on two downstream tasks; text complexity prediction and automatic text simplification." }, { "figure_ref": [], "heading": "Perplexity scores", "publication_ref": [ "b36" ], "table_ref": [ "tab_4" ], "text": "The perplexity describes how likely a specific model will produce a given text. A lower perplexity score indicates a better match between the model and text. We evaluated how well our models adapt to the style of Easy Language. Therefore, the finetuned and original models' perplexities on easy and normal texts were compared. The data was collected from the MDR, a public broadcasting service in Germany that publishes news articles in Easy Language. We manually aligned 100 paragraphs from the easy and original articles. To calculate the perplexity of the data, we used the tutorial code from Huggingface (transformers, 2022) that implements perplexity as a sliding window over the input data. We adapted the code for a samplewise calculation and averaged the perplexity over all samples. Perplexity is highly dependent on the tokenization and the length of the samples (Wang et al., 2022). Therefore, we cannot determine the best fine-tuned models by selecting the model with the lowest perplexity. However, the fine-tuned and original versions of the models use the same tokenizers. Thus, we can compare their perplexities and assess the effects of fine-tuning.\nTable 3 shows the average perplexity values for the easy and normal texts. No model has seen any of the data before in training. All fine-tuned models show a lower perplexity for the Easy Language samples. In contrast, except for one model, the original models perform better on the normal texts. This suggests that the fine-tuned models match the specificities and structure of Easy Language better and, thus, that they are more likely to produce similar texts." }, { "figure_ref": [], "heading": "Easy text", "publication_ref": [], "table_ref": [], "text": "Normal " }, { "figure_ref": [], "heading": "Readability and Easy Language characteristics", "publication_ref": [ "b0" ], "table_ref": [ "tab_5", "tab_5", "tab_7" ], "text": "To evaluate the readability of the models' outputs, we compared the Flesch Reading Ease (FRE) scores (Amstad, 1978) of sample outputs. We prompted the models with six different inputs: \"Das\"(This), \"Heute\"(Today), \"Wir\"(We), \"Die Türkei\"(Turkey), \"Dieses Haus\"(This house), and \"Mein Vater\"(My father). The models had to output 100 new tokens, and we set a repetition penalty to enforce novel content in the output. Moreover, three different decoding strategies (contrastive search, sampling, and beam search) were used, resulting in 18 output texts per model. Finally, the FRE score was calculated for each of the model outputs. This score considers the average sentence length and the average number of syllables per word, which favors concise sentences with short words. Therefore, a higher score indicates a more accessible text. Table 4 shows each model's average FRE score. The fine-tuned models achieve a higher score, which implies that their output is more readable than their original's. In addition, we counted the number of suggested newline (\\n) tokens. 
As presented in Table 4, the fine-tuned models output this token more often. This shows that they adapted to the Easy Language characteristic of only writing one thought per line.\nAverage To further investigate this conformity with Easy Language, we gave the models the input sentence \"Heute scheint die Sonne\" (Today sun is shining) and let them predict the next token. As highlighted in Table 5, most of the fine-tuned models proposed to end the sentence, i.e., predicted a point or a modifier. In contrast, the original models added further information by continuing the sentence with a comma or an \"and\". The original models propose to continue the sentence, while the fine-tuned models only put one thought per sentence." }, { "figure_ref": [], "heading": "Suggested next token", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Human grammar evaluation", "publication_ref": [ "b4" ], "table_ref": [], "text": "Fine-tuning language models to a specific style can result in catastrophic forgetting (Goodfellow et al., 2013). To test if our fine-tuning for Leichte Sprache influences the output quality of the models, we asked human reviewers to rate the models' grammaticality. The reviewers were not paid for their review but participated voluntarily. We selected the outputs of the prompt \"Dieses Haus\"(This house) with decoding strategy contrastive from Section 5.2. Then, we presented the output of each original and its respective fine-tuned model side by side and asked the participants to select the candidate with fewer grammatical errors. Participants could also state that both models were equal. Overall, seven native speakers and one non-native speaker participated in the survey. The distribution of answers is shown in Figure 1. While most participants preferred the fine-tuned version of gerpt2 and mGPT, the fine-tuning of oscar decreased its grammar score. When averaging over all responses and models, the worsening of the grammaticality by fine-tuning the models on Leichte Sprache is neglectable." }, { "figure_ref": [], "heading": "Text complexity prediction", "publication_ref": [ "b5", "b15" ], "table_ref": [ "tab_9" ], "text": "Fine-tuning models for a specific domain improves their performance on different tasks within this domain (Gururangan et al., 2020). To test if this applies to our models, we evaluated them on the downstream task of text complexity prediction. Therefore, we added a linear layer on top of the language model heads and fine-tuned the models for the respective task. The data for this task came from the GermEval 2022 shared task on text complexity assessment (Mohtaj et al., 2022). This shared task's goal was to predict a sentence's complexity on a continuous scale between 1 and 7. We split the shared task's training data into train, evaluation, and test subsets with a ratio of 80:10:10 and fine-tuned our models for ten steps with a batch size of eight, i.e., on 80 samples total. Table 6 reports the mean squared errors on the unseen test set after the few-shot fine-tuning. The first two models have a high error for both the fine-tuned and original models. As the model only performed ten training steps, the results highly depend on the initialization. For the other three models, however, the fine-tuned models clearly outperform the original models. This gives evidence that with the fine-tuning on Easy Language data, the models get a better understanding of text complexity and, thus, can better discriminate easy from normal texts. 
Most of the fine-tuned models outperform their originals." }, { "figure_ref": [], "heading": "Mean squared error", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Text simplification", "publication_ref": [ "b19" ], "table_ref": [ "tab_11" ], "text": "We used our pre-trained language models as plug-in decoders in an mBART simplification model. As the decoders already know how to output Easy Language, we only trained the encoder-decoder cross attention. Due to computational limitations, we could not test all our language models on the text simplification downstream task. Therefore, we selected the two most promising ones, gerpt2 and german_gpt. Table 7 shows how our simplification models perform on the 20 Minuten test dataset compared to the baseline by Rios et al. (2021). To generate the simplifications, we used a beam size of four and calculated the metrics with Huggingface evaluate. Our models outperform the baseline on the SARI metric; however, they fall behind when comparing ROUGE-L and BLEU scores. All of these metrics assess how well the proposed output overlaps with a reference simplification and do not consider synonyms. SARI is a score explicitly tailored to the task of simplification, while BLEU and ROUGE-L are general translation/seq2seq metrics. Therefore, a better SARI score may be an indication that our models do more rephrasing than the baseline model and, thus, yield better simplifications." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "With this paper, we have published a collection of causal language models for German Easy Language. These models mimic the style of Easy Language and favor short and precise sentences. In addition, they adapt to the conventions of only conveying one thought per sentence and putting a line break after every sentence. We exploited these pre-trained models in a sequence-to-sequence text simplification task. As the models were already fine-tuned to the desired output style, we only had to train the encoder-decoder cross attention and, thus, reduced the number of trainable parameters by 93%. With this, training a style-transfer system becomes feasible for settings with little aligned data or a lack of computational power." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b12", "b2" ], "table_ref": [], "text": "This paper focuses on the style transfer of Easy Language for German. Due to their word inflections and high average word length, languages like German are harder to learn for language models (Mielke et al., 2019). Therefore, the proposed approach may work even better on easier-to-model languages, but we did not test any other language.\nIn addition, the style transfer of simplified language uses the same vocabulary as the original language and only reduces its diversity. Our approach has yet to be evaluated on other styles, for example, ones that introduce new words.\nWhen evaluating the influence of fine-tuning on the grammaticality of the model outputs, we found that even the original models were not perfect and produced grammatical errors. One possible reason is relying on GPT2-based models that are relatively small and, thus, perform worse than state-of-the-art language models like PaLM (Chowdhery et al., 2022). In addition, the German base models are often already fine-tuned versions of English models, and thus, may already suffer from catastrophic forgetting due to fine-tuning."
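For reference, the overlap metrics reported in Subsection 5.5 can be computed with the Huggingface evaluate library roughly as sketched below. The example sentences are placeholders, and the metric names assume the standard "sari", "rouge", and "bleu" metric scripts; SARI additionally needs the complex source sentences.

```python
import evaluate

sari = evaluate.load("sari")
rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

sources = ["Der Ausbau der Windenergie wird durch neue Vorschriften erheblich erschwert."]
predictions = ["Neue Regeln machen den Ausbau der Wind-Energie schwer."]
references = [["Neue Vorschriften erschweren den Ausbau der Windenergie."]]

# SARI compares the output against both the source and the references.
print(sari.compute(sources=sources, predictions=predictions, references=references))
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
print(bleu.compute(predictions=predictions, references=references)["bleu"])
```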
}, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "ATS systems can provide more accessible versions of texts, however, a good text simplification is targeted to the knowledge and language level of its audience. Therefore, to utilize these systems for the target group directly, the systems need to be deployed in a controllable setting where the user can set the level of simplification or ask for additional explanations if necessary. Nevertheless, there are also applications where ATS systems can increase the amount of accessible information on the internt withput being used by the target group directly. For example, these systems can yield a draft simplification for professional translators or can be helpful for public state authorities that are forced by law to offer online information in Easy Language. Another problem is the possible stigmatization of users if they request a simplified version of the data (Hansen-Schirra, 2020). Finally, the availability of information in Easy Language is very sparse; thus, it is hard to fact-check material on the internet with other sources. This makes the target group of Easy Language highly vulnerable to misinformation and fake news. Hence, our generative models must be used with care as they do not provide hallucination control.\nAmong the sources of our dataset, there is a significant bias towards news articles as well as some regional bias due to the large proportion of articles related to Austria, Switzerland, and northern Germany. As all sources are from official website articles, and the dataset does not include user comments, we expect the data to be unoffensive and of high quality. Nevertheless, we find topical biases such as the COVID-19 pandemic due to the years from which the articles were scraped. In respect of any intellectual property laws, we published the scrapers used to obtain the data but not the data itself." } ]
Automatic text simplification systems help to reduce textual information barriers on the internet. However, for languages other than English, only little parallel data to train these systems exists. We propose a two-step approach to overcome this data scarcity issue. First, we fine-tuned language models on a corpus of German Easy Language, a specific style of German. Then, we used these models as decoders in a sequence-to-sequence simplification task. We show that the language models adapt to the style characteristics of Easy Language and output more accessible texts. Moreover, with the style-specific pre-training, we reduced the number of trainable parameters in text simplification models. Hence, less parallel data is sufficient for training. Our results indicate that pre-training on unaligned data can reduce the required parallel data while improving the performance on downstream tasks.
Language Models for German Text Simplification: Overcoming Parallel Data Scarcity through Style-specific Pre-training
[ { "figure_caption": "Figure 1 :1Figure 1: Human grammar evaluation with a ranking task. Participants selected which model output of the fine-tuned and original versions showed fewer grammatical mistakes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "in Appendix A. Our fine-tuning data combines all sources included in Table1. The combined data was shuffled and randomly split into a training set containing 90% of the data and a validation set with 10% of the total.", "figure_data": "DatasetSentences DomainHurraki56,785 lexiconLebenshilfe7,144 lexiconEinfachstars129,674 newsNachrichtenleicht122,842 newsKurier67,827 newsNDR60,749 newsInfoEasy10,310 newsSiegel et al. (2019)4,210 misc.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Training setup and number of parameters for different German GPT2 models. These models were used as base for our Easy Language fine-tuning.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison of perplexity scores between easy and normal texts. Lower score means better match. The fine-tuned models fit easy German text better, while the original models favor normal texts.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Flesch Reading Ease score averaged over different prompts and decoding strategies, and total number of \\n tokens suggested. The fine-tuned models output more simple texts.", "figure_data": "ModelFTFRE \\n tokens O FT Ogerpt265.17 51.09 67 34german_gpt 75.09 70.89 79 74wechsel70.72 55.86 69 18oscar68.21 49.32 610mGPT72.16 55.30 106 29", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Suggested next token for the input sentence \"Heute scheint die Sonne\" (Today the sun is shining).", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Mean squared error after fine-tuning for continuous text complexity prediction on 80 sentences.", "figure_data": "", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "To achieve this result, our models needed training on only 7% of the trainable parameters of the baseline while preserving state-of-the-art performance.", "figure_data": "ScoreBaseline*gerpt2 german_gpt FT FTROUGE-L 19.9618.5217.93SARI33.2942.2542.74BLEU6.294.954.80#Params trained416M29M29M", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Text simplification performance on the 20 Minuten testset. For our models, only the cross attention was trained which reduced the number of trained parameters by far;", "figure_data": "", "figure_id": "tab_11", "figure_label": "7", "figure_type": "table" } ]
Miriam Anschütz; Joshua Oehms; Thomas Wimmer; Bartłomiej Jezierski; Georg Groh
[ { "authors": "Toni Amstad", "journal": "", "ref_id": "b0", "title": "Wie verständlich sind unsere Zeitungen?", "year": "1978" }, { "authors": "Dennis Aumiller; Michael Gertz", "journal": "", "ref_id": "b1", "title": "Klexikon: A german dataset for joint summarization and simplification", "year": "2022" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b2", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Mozhdeh Gheini; Xiang Ren; Jonathan May", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Cross-attention is all you need: Adapting pretrained Transformers for machine translation", "year": "2021" }, { "authors": "Ian J Goodfellow; Mehdi Mirza; Da Xiao; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b4", "title": "An empirical investigation of catastrophic forgetting in gradientbased neural networks", "year": "2013" }, { "authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "year": "2020" }, { "authors": "Silvia Hansen-Schirra", "journal": "Easy language research: text and user perspectives", "ref_id": "b6", "title": "Easy language, plain language, easy language plus: perspectives on comprehensibility and stigmatisation", "year": "2020" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Changye Li; David Knopman; Weizhe Xu; Trevor Cohen; Serguei Pakhomov", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "GPT-D: Inducing dementia-related linguistic anomalies by deliberate degradation of artificial neural language models", "year": "2022" }, { "authors": "Yi Liao; Yasheng Wang; Qun Liu; Xin Jiang", "journal": "", "ref_id": "b9", "title": "Gpt-based generation for classical chinese poetry", "year": "2019" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "Multilingual denoising 
pre-training for neural machine translation", "year": "2020" }, { "authors": "Takumi Maruyama; Kazuhide Yamamoto", "journal": "", "ref_id": "b11", "title": "Extremely low resource text simplification with pretrained transformer language model", "year": "2019" }, { "authors": "Sabrina J Mielke; Ryan Cotterell; Kyle Gorman; Brian Roark; Jason Eisner", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "What kind of language is hard to language-model", "year": "2019" }, { "authors": "Benjamin Minixhofer", "journal": "", "ref_id": "b13", "title": "GerPT2: German large and small versions of GPT2", "year": "2020" }, { "authors": "Benjamin Minixhofer; Fabian Paischer; Navid Rekabsaz", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models", "year": "2022" }, { "authors": "Salar Mohtaj; Babak Naderi; Sebastian Möller", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Overview of the GermEval 2022 shared task on text complexity assessment of German text", "year": "2022" }, { "authors": "Das Netzwerk; Leichte Sprache", "journal": "", "ref_id": "b16", "title": "Die regeln für leichte sprache", "year": "2013" }, { "authors": "Xiangyu Peng; Siyan Li; Spencer Frazier; Mark Riedl", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Reducing non-normative text generation from language models", "year": "2020" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b18", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Annette Rios; Nicolas Spring; Tannon Kew; Marek Kostrzewa; Andreas Säuberli; Mathias Müller; Sarah Ebling", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "A new dataset and efficient baselines for document-level text simplification in German", "year": "2021" }, { "authors": "Andreas Säuberli; Sarah Ebling; Martin Volk", "journal": "European Language Resources Association", "ref_id": "b20", "title": "Benchmarking data-driven automatic text simplification for German", "year": "2020-03" }, { "authors": "Elisa Terumi; Rubel Schneider; João Vitor Andrioli De Souza; Yohan Bonescki Gumiel; Claudia Moro; Emerson Cabrera; Paraiso ", "journal": "", "ref_id": "b21", "title": "A gpt-2 language model for biomedical texts in portuguese", "year": "2021" }, { "authors": "Stefan Schweter", "journal": "", "ref_id": "b22", "title": "German gpt-2 model", "year": "2020" }, { "authors": "Oleh Shliazhko; Alena Fenogenova; Maria Tikhonova; Vladislav Mikhailov; Anastasia Kozlova; Tatiana Shavrina", "journal": "", "ref_id": "b23", "title": "mgpt: Few-shot learners go multilingual", "year": "2022" }, { "authors": "Melanie Siegel; Dorothee Beermann; Lars Hellan", "journal": "", "ref_id": "b24", "title": "Aspects of linguistic complexity: A german -norwegian approach to the creation of resources for easy-to-understand language", "year": "2019" }, { "authors": "Nicolas Spring; Annette Rios; Sarah Ebling", "journal": "Held Online. 
IN-COMA Ltd", "ref_id": "b25", "title": "Exploring German multi-level text simplification", "year": "2021" }, { "authors": "Sanja Štajner; Marc Franco-Salvador; Paolo Rosso; Simone Paolo; Ponzetto ", "journal": "", "ref_id": "b26", "title": "Cats: A tool for customized alignment of text simplification corpora", "year": "2018" }, { "authors": "Sanja Štajner; Maja Popovic", "journal": "", "ref_id": "b27", "title": "Can text simplification help machine translation", "year": "2016" }, { "authors": "Yixuan Su; Tian Lan; Yan Wang; Dani Yogatama; Lingpeng Kong; Nigel Collier", "journal": "", "ref_id": "b28", "title": "A contrastive framework for neural text generation", "year": "2022" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b29", "title": "", "year": "" }, { "authors": "Mari ; Carmen Suárez-Figueroa; Isam Diab; Edna Ruckhaus; Isabel Cano", "journal": "Universal Access in the Information Society", "ref_id": "b30", "title": "First steps in the development of a support application for easy-to-read adaptation", "year": "2022" }, { "authors": "Renliang Sun; Xiaojun Wan", "journal": "", "ref_id": "b31", "title": "Simplebert: A pre-trained model that learns to generate simple words", "year": "2022" }, { "authors": "Julia Suter; Sarah Ebling; Martin Volk", "journal": "", "ref_id": "b32", "title": "Rulebased automatic text simplification for german", "year": "2016" }, { "authors": "Vanessa Toborek; Moritz Busch; Malte Boßert; Christian Bauckhage; Pascal Welke", "journal": "", "ref_id": "b33", "title": "A New Aligned Simple German Corpus", "year": "2022" }, { "authors": "", "journal": "Huggingface transformers", "ref_id": "b34", "title": "Perplexity of fixedlength models", "year": "2022" }, { "authors": "Susanna Värtinen; Perttu Hämäläinen; Christian Guckelsberger", "journal": "IEEE Transactions on Games", "ref_id": "b35", "title": "Generating role-playing game quests with gpt language models", "year": "2022" }, { "authors": "Yequan Wang; Jiawen Deng; Aixin Sun; Xuying Meng", "journal": "", "ref_id": "b36", "title": "Perplexity from plm is unreliable for evaluating text quality", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander M Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" } ]
[]
2023-05-22
[ { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b26", "b28", "b32", "b33", "b26", "b28", "b33", "b8", "b17", "b31", "b14", "b14", "b26", "b10", "b20", "b20" ], "table_ref": [], "text": "Semi-supervised learning (SSL) aims to learn from large amounts of unlabeled data together with a limited number of labeled data so as to mitigate the need for costly large-scale manual annotations. Although extensive studies have shown that deep neural networks can achieve high accuracy even with limited samples when trained in the semi-supervised manner [1,27,29,33,34], the majority of existing approaches assume that the distribution of labeled and unlabeled data are class-balanced. This is in stark contrast to realistic scenarios where data are oftentimes long-tailed, i.e., the majority of samples belong to a few dominant classes while the remaining classes have far fewer samples, as illustrated in Figure 1(a). * Corresponding author. Conventional SSL algorithms perform poorly on minority classes, with the help of BMB, the model exhibits significant accuracy increase for the minority classes, while maintaining comparable accuracy for the majority classes.\nThe long-tailed nature of classes makes it particularly challenging for SSL compared to conventional supervised training pipelines. This results from the fact that the mainstream of SSL relies on pseudo labels produced by teacher networks [27,29,34], which is trained with a handful of labeled samples that are drawn from a skewed class distribution. As a result, these generated pseudo labels are biased towards the majority classes and thus the class imbalance is further amplified, resulting in deteriorated performance particularly on minority classes, as shown in Figure 1(b).\nOne popular strategy to mitigate the class imbalance problem in long-tailed supervised learning (SL) is data re-sampling, which balances the training data by under-sampling the majority classes and over-sampling the minority classes. While seemingly promising, generalizing the re-sampling method from SL to SSL is nontrivial since the method requires knowledge about the labels and class distribution of training data, which are missing in SSL that mainly learns from unlabeled data. As a result, existing re-sampling approaches for SSL still produce relatively unsatisfactory performance [9,18,32]. It is clear that the SSL performance would be further improved with better-tailored re-sampling strategies that can bridge the gap mentioned above between SL and SSL. Motivated by this, we attempt to address the challenges encountered on re-sampling in SSL and demonstrate that re-sampling can also achieve good results in class-imbalanced SSL.\nWith this in mind, we introduce Balanced Memory Bank (BMB), a semi-supervised framework for long-tailed classification. BMB contains a balanced feature memory bank and an adaptive weighting module, cooperating with each other to re-calibrate the training process. In particular, the balanced feature memory bank stores historical features of unlabeled samples with their corresponding pseudo labels that are updated online. During training, a certain number of pseudo-annotated features are selected from the memory bank to supplement features in the current batch, and features of the minority classes are more likely to be chosen to enhance the classifier's capacity for classifying the tail categories. 
It is worth noting that when inserting features into the memory bank, we update the memory with only a subset of samples to keep the memory bank class-rebalanced instead of storing all samples from the current batch, ensuring the model to learn from a diverse set of data. In addition, the adaptive weighting module aims to address the class imbalance issue in SSL by assigning higher weights to the losses of samples from minority classes and lower weights to those from majority ones, which enables the model to learn a more balanced classifier.\nWe conduct experiments on the commonly-studied datasets CIFAR10-LT [15] and CIFAR100-LT [15] and show that BMB achieves better performance than previous state-of-the-arts. As demonstrated in Figure 1(b), with the help of BMB, the accuracy of minority classes exhibits significant boost compared to the baseline model [27]. Furthermore, we also conduct experiments on larger-scale datasets, ImageNet127 [11] and ImageNet-LT [21], which are more realistic and challenging. BMB outperforms state-of-the-art approaches with clear margins, highlighting its effectiveness in more practical settings. Specifically, compared to the previous state-of-the-arts, BMB achieved improvements of 8.2% on the 1% labeled subset of ImageNet127 (with a resolution of 64×64) and 4.3% on the 50% labeled subset of ImageNet-LT. It is worth pointing out that we are the first to evaluate class-imbalanced semi-supervised algorithms on ImageNet-LT [21], which is a more challenging benchmark with up to 1,000 categories, making it more difficult to handle the bias in pseudo labels. Besides, the imbalance in ImageNet-LT is more severe (the rarest class only contains 5 samples) which will be even fewer in semi-supervised setting. This makes the modeling for the minority class more difficult. We believe the class-imbalanced SSL should focus more on such realistic and challenging benchmarks to drive further progress.\nThe main contributions of this paper are summarized as follows:\n• We present BMB, a novel semi-supervised learning framework for class-imbalanced classification. It comprises a balanced memory bank and an adaptive weighting module, which work collaboratively to rebalance the learning process in class-imbalanced SSL. • We conduct extensive experiments on various datasets to verify the effectiveness of BMB, and achieve state-of-the-art results on several benchmarks. Notably, we pioneered the experimentation with ImageNet-LT, which provides a more challenging and realistic benchmark for future works." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Semi-supervised Learning", "publication_ref": [ "b15", "b22", "b28", "b16", "b33", "b32", "b33", "b26" ], "table_ref": [], "text": "To mitigate the expensive data annotation cost in SL, a range of approaches aim to learn from unlabeled data, in order to enhance the performance on limited labeled data. One widely used approach is consistency regularization [16,23,29] that enforces consistent predictions for similar inputs, serving as a regularization term during training. Pseudo labeling [17,34] is another line of research that assigns pseudo labels to unlabeled data based on the predictions of a teacher model. When pseudo labels are assigned by the model itself, this is generally known as self-training [33,34]. FixMatch [27] builds upon both consistency regularization and pseudo labeling, and presents state-of-the-art performance on classbalanced datasets, but produces limited results when the data distribution is imbalanced. 
Our approach differs from the standard SSL method that we wish to explicitly build a class-balanced classifier by a balanced memory bank that alleviates the difficulties of SSL under long-tailed datasets." }, { "figure_ref": [], "heading": "Class-imbalanced Supervised Learning", "publication_ref": [ "b2", "b5", "b19", "b1", "b3", "b7", "b3", "b5", "b11", "b34", "b21", "b27", "b18", "b30", "b35" ], "table_ref": [], "text": "Real-world data usually exhibit a long-tailed distribution, with a significant variance in the number of samples across different categories. To improve the performance of tail classes, re-weighting methods [3,6,20] assign a higher loss weight for the minority classes and a lower one for the majority classes, forcing the model to pay more attention to the minorities. Re-sampling approaches [2,4,8] attempt to achieve re-balancing at the sampling level, i.e., minority-classes are over-sampled or majority-classes are undersampled. However, this usually leads to overfitting or information loss [4,6]. In addition, two-stage training approaches [12,35] decouple the learning of representations and the classifiers. The feature extractor is obtained in the first stage, and a balanced classifier is trained in the second stage with the extractor fixed. More recently, logits compensation [22,28] and contrastive-based methods [19,31,36] also show promising performance. These methods resort to the known data distributions to achieve re-balancing among different classes. However, this information is unknown for unlabeled dataset in semi-supervised scenario." }, { "figure_ref": [], "heading": "Class-imbalanced Semi-supervised Learning", "publication_ref": [ "b12", "b31", "b23", "b6" ], "table_ref": [], "text": "There is a growing interest in the class-imbalanced problem for SSL. However, it is extremely challenging to deal with the classimbalanced data in SSL due to the unknown data distribution and the unreliable pseudo labels provided by a biased teacher model. DARP [13] formulates a convex optimization to refine inaccurate pseudo labels. CReST [32] selectively chooses unlabeled data to complement the labeled set, and the minority classes are selected with a higher frequency. DASO [24] introduces a semantic-aware feature classifier to refine pseudo labels. CoSSL [7] disentangles the training of the feature extractor and the classifier head, and introduces interaction modules to couple them closely. Unlike these approaches, we address class-imbalance through re-sampling with the help of a memory bank to update pseudo labels in an online manner, while estimating the distribution of the unlabeled data through a simple yet effective approach. This allows for an end-toend training pipeline in a single stage." }, { "figure_ref": [], "heading": "PRELIMINARY: A SEMI-SUPERVISED FRAMEWORK 3.1 Notation", "publication_ref": [], "table_ref": [], "text": "We assume a semi-supervised dataset contains 𝑁 labeled samples and 𝑀 unlabeled samples and refer to the labeled set as X = {(𝑥 𝑖 , 𝑦 𝑖 )} 𝑁 𝑖=1 and the unlabeled set as U = {𝑢 𝑗 } 𝑀 𝑗=1 , respectively. We use the index 𝑖 for labeled data, the index 𝑗 for unlabeled data and index 𝑘 for the label space. The number of training samples in the 𝑘-𝑡ℎ class is denoted as 𝑁 𝑘 and 𝑀 𝑘 for the labeled and unlabeled set, respectively, i.e., 𝐾 𝑘=1 𝑁 𝑘 = 𝑁 and 𝐾 𝑘=1 𝑀 𝑘 = 𝑀. Without loss of generality, we let 𝑁 1 ≥ 𝑁 2 ≥ • • • ≥ 𝑁 𝐾 for simplicity. 
We use 𝑓 (𝑥; 𝜃 ) to represent the mapping function of the model, 𝛼 and A to represent the weak augmentation and the strong augmentation respectively. We use 𝛾 𝑙 = 𝑁1 𝑁 𝐾 and 𝛾 𝑢 = 𝑀 1 𝑀 𝐾 to reflect the imbalance ratios for the labeled and unlabeled datasets respectively." }, { "figure_ref": [], "heading": "FixMatch", "publication_ref": [ "b26" ], "table_ref": [], "text": "FixMatch [27] is one of the most popular SSL algorithms that enables deep neural networks to effectively learn from unlabeled data. A labeled example 𝑥 𝑖 is first transformed to its weakly augmented version 𝛼 (𝑥 𝑖 ) and then taken as input by the model 𝑓 . The supervised loss during training is calculated following Eq. ( 1):\nL 𝑠 = 1 𝐵 𝐵 ∑︁ 𝑖=1 H(𝑦 𝑖 , 𝑓 (𝛼 (𝑥 𝑖 )))(1)\nwhere 𝐵 refers to the batch size, 𝑦 𝑖 is the label of 𝑥 𝑖 , and H(•, •) denotes the standard cross-entropy loss.\nGiven an unlabeled sample 𝑢 𝑗 , two different views A (𝑢 𝑗 ) and 𝛼 (𝑢 𝑗 ) are obtained by applying the strong augmentation A and the weak augmentation 𝛼 to the sample. The predicted probability vector on 𝑢 𝑗 is denoted as q 𝑗 = 𝑓 (𝛼 (𝑢 𝑗 )), which is then converted into a pseudo categorical label: q𝑗 = arg max(q 𝑗 ) as the supervisory signal for the unlabeled sample. Finally, a cross-entropy loss is computed on the prediction of the strongly augmented view A (𝑢 𝑗 ):\nL 𝑢 = 1 𝐵 𝐵 ∑︁ 𝑗=1 I(𝑚𝑎𝑥 (q 𝑗 ) ≥ 𝜏)H( q𝑗 , 𝑓 (A (𝑢 𝑗 )))(2)\nwhere 𝜏 denotes the threshold for filtering out those low-confidence and potentially noisy pseudo labels, and I is the indicator function.\nThe total training loss is the sum of both supervised and unsupervised 1 losses: L = L 𝑠 + 𝜆 𝑢 L 𝑢 , where 𝜆 𝑢 is a hyperparameter controlling the weight of the unsupervised loss." }, { "figure_ref": [], "heading": "OUR APPROACH", "publication_ref": [], "table_ref": [], "text": "Our goal is to develop a SSL framework for long-tailed classification with minimal surgery to the standard SSL training process, and effectively alleviating the issue of class imbalance. To this end, we present BMB, an effective framework with an online-updated memory bank storing class-rebalanced features and their corresponding pseudo labels. The carefully designed memory bank serves as an additional source of training data for the classifier to cope with imbalanced class distributions. To further emphasize the minority classes during training, we also utilize a re-weighting strategy to adaptively assign weights to the loss terms for different samples. This ensures a more stable memory updating process especially during the initial stage of training." }, { "figure_ref": [ "fig_2" ], "heading": "Overall Framework", "publication_ref": [ "b6", "b11", "b17", "b34" ], "table_ref": [], "text": "Previous studies [7,12] have shown that imbalanced training data have little impact on encoders (i.e., feature extractors), and the bias towards majority classes mainly occurs in classifiers. To balance the classifier, there are studies [18,35] introducing an additional branch to assist the learning process. Inspired by this, we build our BMB on top of the conventional SSL framework by equipping it with an extra classifier, and ensure it to be class-rebalanced through carefully designed techniques. As depicted in Figure 2, BMB comprises a shared feature encoder and two distinct classifiers.\nMore specifically, each classifier performs its own role in the whole framework, and with one referred to as the base classifier and another one as the auxiliary classifier, respectively. 
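As a concrete reference for the losses in Eq. (1) and Eq. (2) from Subsection 3.2, which the base classifier is trained with, a minimal PyTorch sketch of the confidence-thresholded FixMatch objective is given below. The model, augmentations, and batch tensors are placeholders, and this is an illustrative reimplementation rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def fixmatch_losses(model, x_weak, y, u_weak, u_strong, tau=0.95, lambda_u=1.0):
    # Supervised loss on weakly augmented labeled samples, Eq. (1).
    loss_s = F.cross_entropy(model(x_weak), y)

    # Pseudo labels from weak views of unlabeled samples, Eq. (2).
    with torch.no_grad():
        probs = torch.softmax(model(u_weak), dim=-1)
        max_probs, pseudo = probs.max(dim=-1)
        mask = (max_probs >= tau).float()  # keep only confident pseudo labels

    loss_u = (F.cross_entropy(model(u_strong), pseudo, reduction="none") * mask).mean()
    return loss_s + lambda_u * loss_u
```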
The base classifier aims to help the encoder to extract better features, and its training follows the traditional SSL methods without any additional re-balancing operation. In contrast, the auxiliary classifier is responsible for making a reliable prediction without biasing towards the majority classes. To make the auxiliary classifier more balanced, we introduce a memory bank that caches historical features to provide more balanced training data, and a loss re-weighting strategy is utilized to ensure the memory bank being well-initialized and maintained.\nThe training process of BMB is end-to-end, and all the components are jointly trained. During inference, the base classifier is discarded, and the output of the auxiliary classifier is used as the final prediction." }, { "figure_ref": [], "heading": "Balanced Feature Memory Bank", "publication_ref": [], "table_ref": [], "text": "We construct a memory bank structure with a fixed storage size to cache historical features and their corresponding pseudo labels. The memory bank consists of three key operations: enqueue, dequeue, and get, which are crucial for maintaining and utilizing the memory bank effectively. The enqueue operation adds features to the memory bank, while the dequeue operation eliminates unnecessary features when the memory bank reaches its maximum capacity.\nDuring training with the memory bank, the get operation retrieves data from the memory based on a predefined strategy to supplement the features in the current batch. Figure 3 illustrates the memory bank structure and these operations. " }, { "figure_ref": [], "heading": "Pseudo labels", "publication_ref": [], "table_ref": [], "text": "Figure 3: The maintenance mechanism of the feature memory bank. The 𝑒𝑛𝑞𝑢𝑒𝑢𝑒 and 𝑑𝑒𝑞𝑢𝑒𝑢𝑒 operation is based on the current class distribution in the memory bank, and their goal is to make the memory class-balanced. The 𝑔𝑒𝑡 operation samples features from the memory according to the estimated unlabeled data distribution, and the minority-class features are selected with a higher probability to complement features in the current batch.\ndenote the count of features in the memory belonging to the 𝑘-𝑡ℎ class, we aim to ensure the in-memory distribution (𝐶 1 , • • • , 𝐶 𝐾 ) is as uniform as possible. As such, we carefully design the updating strategy accomplished by the enqueue and dequeue operations.\nFor each training step, the enqueue operation adds the most recent features to the memory with a varying probability. Specifically, if a feature has been confidently pseudo-annotated as belonging to the 𝑘-th class category, it is put into the memory with a probability based on the number of features for the 𝑘-th class in the bank:\n𝑃 𝑖𝑛 𝑘 = 1 (𝐶 𝑘 ) 𝛽 (3\n)\nwhere 𝛽 is a hyperparameter larger than 0. With Eq. ( 3), features from categories that are seldom seen are more likely to be put into the memory bank. When the memory bank reaches its maximum capacity, incoming features and their pseudo labels will need to replace existing ones in the memory bank. In this case, we use the dequeue operation to discard a certain number of features and their pseudo labels. To maintain a class-balanced memory bank, we remove the majority features with a higher probability, while the minority features are removed with a lower probability calculated as follows:\n𝑃 𝑜𝑢𝑡 𝑘 = 1 - 1 (𝐶 𝑘 ) 𝛽(4)\nwhere 𝛽 > 0 is a coefficient that controls the balance level of the memory, and a larger value makes the memory more balanced.\nGet. 
After obtaining a class-rebalanced memory bank, we design an algorithm to perform re-sampling at the feature level via the get operation, aiming to balance the auxiliary classifier. We employed reversed sampling based on the distribution of training samples to compensate the imbalance in the current batch and thus eliminating the bias effects of the long-tail phenomenon. Specifically, features that belong to the 𝑘-th class will be sampled with the probability described in:\n𝑃 𝑔𝑒𝑡 𝑘 = 1 (𝑀 𝑘 ) 𝜆(5)\nwhere 𝑀 𝑘 refers to the number of unlabeled training data belonging to class 𝑘, and 𝜆 controls the level of reversed sampling. With a larger 𝜆, the minority classes will be over-sampled, which can compensate for the imbalanced data in current batch. The sampled features and the corresponding pseudo labels are used in the training process of the auxiliary classifier, with the corresponding loss term denoted as L 𝑚𝑒𝑚 .\nUnlabeled data distribution estimation. The re-sampling operation relies on the distribution information (i.e., the number of samples contained in each category) of the unlabeled data, which is not available in SSL. Therefore, it is necessary to estimate it appropriately. A straightforward approach is to use the labeled data distribution as a proxy, assuming that the training data are sampled from the same distribution, but this assumption may not hold when the distributions do not match. For a more accurate estimation, we use the number of accumulated pseudo labels to substitute 𝑀 𝑘 with M𝐾 as in Eq. ( 6):\nM𝑘 = | P | ∑︁ 𝑗=1 1(𝑝 𝑗 = 𝑘)(6)\nwhere P denotes all the pseudo labels of the unlabeled dataset, 𝑝 𝑗 is the 𝑗-th pseudo label and 1 is the indicator function. In this way, we can obtain the estimated distribution ( M1 , M2 , • • • , M𝑘 ) of the unlabeled dataset." }, { "figure_ref": [], "heading": "Adaptive Loss Re-weighting", "publication_ref": [], "table_ref": [], "text": "The class-rebalanced memory bank enables online re-sampling to alleviate the class imbalance issue. However, solely relying on the memory bank can be problematic since the pseudo labels may exhibit bias towards the majority classes in the early stage of training. This can lead to reduced effectiveness of the memory bank since the in-memory samples belonging to the minority classes is scarce and the pseudo labels are unreliable. Furthermore, enriching the batch with features from the memory bank itself may not be enough for perfectly balancing the majority and minority classes since the number of samples for each class within the batch is an integer, making the re-calibration during training \"discrete\" as opposed to a continuous process with more controllable variance. To this end, we propose an adaptive weighting method that not only ensures the memory bank being well-initialized and maintained, but also enables a flexible and continuous calibration with controllable variance that further mitigates the class imbalance. Formally, for each sample 𝑥 𝑖 from the labeled set, an adaptive loss weight 𝑊 (𝑥 𝑖 ) is generated to re-weight the loss for the auxiliary classifier 𝑓 𝑎 as below:\n𝑊 (𝑥 𝑖 ) = 𝑁 𝐾 𝑁 𝑦 𝑖 𝛼 (7)\nwhere 𝑁 𝐾 is the number of samples from the class with the least samples, and 𝑁 𝑦 𝑖 is the number of samples from class 𝑦 𝑖 . The weight is inversely proportional to the number of samples in class 𝑦 𝑖 and the hyper-parameter 𝛼 controls the variance of weights, where a larger value will lead to more diverse weights across different classes. 
The adaptive weights are then injected into the original supervised loss as follows:\nL 𝑎 𝑠 = 1 𝐵 𝐵 ∑︁ 𝑖=1 𝑊 (𝑥 𝑖 )H(𝑦 𝑖 , 𝑓 𝑎 (𝛼 (𝑥 𝑖 )))(8)\nFor the unlabeled sample 𝑢 𝑗 , the adaptive weight is computed in a similar way, except that we replace the number of samples in Eq. ( 7) with the estimated distribution M:\n𝑊 (𝑢 𝑗 ) = M𝐾 M q𝑗 𝛼(9)\nwhere q𝑗 = arg max(𝑓 𝑎 (𝛼 (𝑢 𝑗 ))) is the predicted pseudo label of 𝑢 𝑗 .\nThe unsupervised loss of the auxiliary classifier then becomes:\nL 𝑎 𝑢 = 1 𝐵 𝐵 ∑︁ 𝑗=1 𝑊 (𝑢 𝑗 )I(𝑚𝑎𝑥 (q 𝑗 ) ≥ 𝜏)H( q𝑗 , 𝑓 𝑎 (A (𝑢 𝑗 )))(10)" }, { "figure_ref": [], "heading": "Training and Inference", "publication_ref": [], "table_ref": [], "text": "BMB is an end-to-end trainable framework where all modules are trained collaboratively. The total loss, defined in Eq. ( 11), consists two parts: one for the base classifier and the other for the auxiliary classifier.\nL 𝑡𝑜𝑡𝑎𝑙 = L 𝑏𝑎𝑠𝑒 + L 𝑎𝑢𝑥(11)\nThe loss of the base classifier, denoted as L 𝑏𝑎𝑠𝑒 = L 𝑏 𝑠 + 𝜆 𝑢 L 𝑏 𝑢 , is simply the weighted sum of the original supervised and unsupervised losses described in Sec. 3.2, the superscript 𝑏 here is used to distinguish from the auxiliary classifier. As for the auxiliary classifier, the loss can be expressed as L 𝑎𝑢𝑥 = L 𝑎 𝑠 + 𝜆 𝑢 L 𝑎 𝑢 + 𝜆 𝑚 L 𝑚𝑒𝑚 . This is also a summation over the supervised and unsupervised losses, however, the weights are adaptively adjusted as described in Sec. 4.3. In addition, an extra term L 𝑚𝑒𝑚 is included to utilize the training samples selected from the class-rebalanced memory bank. There are two hyperparameters 𝜆 𝑢 and 𝜆 𝑚 used to control the weight of different part.\nDue to the absence of re-balancing adjustment for the base classifier, it is expected to be biased. Therefore, during inference, we ignore it and rely solely on the prediction from the auxiliary classifier, which is considered to be more class-balanced. However, this dose not imply that the base classifier is useless. As will be shown in Sec. 5.4, the base classifier helps extracting better features, which is crucial for the auxiliary classifier's training." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "This section describes our experimental evaluation, where we compare BMB with state-of-the-art methods and conduct ablation studies to validate the effectiveness of each design choice in BMB." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b20", "b10", "b14", "b20", "b25" ], "table_ref": [], "text": "To validate the effectiveness of BMB, we perform experiments on several datasets, including ImageNet-LT [21], ImageNet127 [11] and the long-tailed version of CIFAR [15]. [21] is constructed by sampling a subset from the orignial ImageNet [26] dataset following the Pareto distribution with power 𝛼=6. It contains 115.8K images across 1,000 categories, and the distribution is extremely imbalanced. The most frequent class has 1280 samples, while the least frequent class only has 5 samples. To create a semi-supervised version of this dataset, we randomly sample 20% and 50% of the training data to form the labeled set, while all remaining training data is used as the unlabeled set, with their labels ignored. Due to the challenging nature of this dataset, previous SSL algorithms have not been evaluated on it. Nevertheless, we believe that testing on such more realistic datasets is crucial." }, { "figure_ref": [], "heading": "ImageNet-LT. ImageNet-LT", "publication_ref": [ "b10", "b25", "b6", "b31" ], "table_ref": [], "text": "ImageNet127. 
ImageNet127 [11] is a large-scale dataset, which groups the 1,000 categories of ImageNet [26] into 127 classes based on their hierarchical structure in WordNet. It is naturally longtailed with an imbalance 𝛾 ≈ 256. The most majority class contains 277,601 images, while the most minority class only has 969 images. Following [7,32], we randomly select 1% and 10% of its training sample as the labeled set, with the remaining training samples treated as the unlabeled set. The test set is also imbalanced due to the category grouping, and we keep it untouched while reporting the averaged class recall as an evaluation metric." }, { "figure_ref": [], "heading": "CIFAR-LT.", "publication_ref": [ "b5", "b6", "b14" ], "table_ref": [], "text": "The original CIFAR dataset is class balanced, to achieve the predefined imbalance ratios 𝛾, we follow common practice [6,7] by randomly selecting samples for each class from the original balanced dataset [15]. Specifically, we select 𝑁 𝑘 = 𝑁 1 • 𝛾 𝜇 𝑘 labeled samples and 𝑀 𝑘 = 𝑀 1 • 𝛾 𝜇 𝑘 unlabeled samples for the 𝑘-𝑡ℎ class, where 𝜇 𝑘 = -𝑘-1 𝐾-1 . For CIFAR10 we set 𝑁 1 =1500, 𝑀 1 =3000, and for CIFAR100, we set 𝑁 1 =150 and 𝑀 1 =300. The test set remains untouched and balanced." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b9", "b6", "b12", "b31", "b24", "b6", "b12", "b31", "b13", "b6", "b24", "b0", "b6", "b12" ], "table_ref": [], "text": "Network architecture. On the ImageNet127 and ImageNet-LT datasets, we use ResNet50 [10] as the encoder, and train it from scratch. When conducting experiments on CIFAR10-LT and CIFAR100-LT, we follow the common practice in previous works [7,13,32], and use the randomly initialized WideResNet-28 [25] as the encoder. In all cases, the base classifier and the auxiliary classifier are both single-layer linear classifiers.\nTraining setups. For a fair comparison, we keep the training and evaluation setups identical to those in previous works [7,13,32]. Specifically, we train the models for 500 epochs on ImageNet127, CIFAR10-LT and CIFAR100-LT, and 300 epochs on ImageNet-LT, with each epoch consisting of 500 iterations. For all datasets, we utilize Adam [14] optimizer with a constant learning rate of 0.002 without any scheduling. The batch size is 64 for both labeled and unlabeled data across all datasets. The size of the class-rebalanced memory bank is 128 for CIFAR10-LT, 256 for CIFAR100-LT and ImageNet127, and 1024 for ImageNet-LT. At each training step, we select a certain proportion of features from the memory and use their pseudo-labels for training. Specifically, we select 50% of features on CIFAR and ImageNet127, and 25% on ImageNet-LT. More detailed hyperparameters setting can be found in the supplementary A. The partition intervals for the 50% subset can be found in the header of subtable (b).\nEvaluation metrics. For ImageNet127, we report the averaged class recall of the last 20 epochs due to the imbalanced test set. For ImageNet-LT, we save the checkpoint that achieves the best accuracy on the on validation set, and report its accuracy on a holdout test set. For CIFAR, we report the averaged test accuracy of the last 20 epochs, following the approaches in [7,25]. It is worth noting that we evaluate the performance using an exponential moving average of the parameters over training with a decay rate of 0.999, as is common practice in [1,7,13]." 
}, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b18", "b20", "b6", "b6", "b6", "b14", "b24", "b26", "b6", "b31", "b12", "b25" ], "table_ref": [], "text": "ImageNet-LT. We conduct experiments on the 20% and 50% labeled subsets of the original dataset. In the 20% subset, there has only one labeled sample in most scarce class, leading to an extremely difficult task. To gain a deeper understanding of each method, following previous works [19,21], we not only consider the overall top-1 accuracy across all classes but also evaluate the accuracy of three disjoint subsets: many-shot, medium-shot, and few-shot classes.\nThe experimental results and partitioning rules for these subsets can be found in Tab. 1. We can observe that BMB achieves an overall accuracy that exceeds other methods by 2.8% and 4.3% on the 20% and 50% subsets, respectively.\nImageNet127. To remain consistent with prior work [7] and save computational resources, we adopt the approach described in [ Table 3: Top-1 accuracy (%) on CIFAR10-LT and CIFAR100-LT with different imbalance ratio, and the test dataset is remain balanced. We reproduce all the algorithms using the codebase released by [7] for a fair comparison.\nto downsample the images in ImageNet to resolutions of 32 × 32 or 64 × 64, which was also employed by [7]. This yield a downsampled variant of ImageNet127 that we used for our experiments. The outcomes obtained under various resolutions and labeled subsets are summarized in Tab. 2. We can observe that our method outperforms other methods significantly in all settings, particularly in the 1% subset with a 64 × 64 resolution, where we surpass the second-best method by 8.2%.\nCIFAR10-LT and CIFAR100-LT. We also conduct experiments on CIFAR [15], assuming that the labeled and unlabeled datasets share the same distribution, i.e. 𝛾 = 𝛾 𝑙 = 𝛾 𝑢 . We report results with 𝛾 = 20 for CIFAR10-LT and 𝛾 = 10 for CIFAR100-LT. We run each experiment with three random seeds and report the means and standard deviations in Tab. 3. Our BMB show the best accuracy comparing with previous state-of-the-art methods, and achieve an improvement of 1.0% on CIFAR100-LT with 𝛾 = 10.\nResults under mismatched data distributions. In more realistic scenarios, labeled and unlabeled data may not share the same distribution, making it crucial to test method effectiveness when 𝛾 𝑙 ≠ 𝛾 𝑢 . The ImageNet-LT and ImageNet127 datasets are unsuitable for such testing since their imbalance ratios are fixed. Therefore, ImageNet (𝛾 𝑙 = 50)\n𝛾 𝑢 = 1 𝛾 𝑢 = 20 𝛾 𝑢 = 100\nVanilla [25] 33.2 34.7 34.1 FixMatch [27] 38.9 39.5 37.8 CoSSL [7] 39.6 39.3 38.1 CReST+ [32] 39.5 39.8 40.3 DARP [13] 46 we sample a subset from the original ImageNet dataset [26] using the same method as for constructing the long-tailed CIFAR. Specifically, we set 𝑁 1 = 600, 𝑀 1 = 300 and fix 𝛾 𝑙 = 50 while 𝛾 𝑢 varies between 1, 20 and 100. The experimental results presented in Tab. 4 demonstrate that our method achieves the highest accuracy across different settings. We attribute this to our method's ability to make no assumptions about the distribution of unlabeled data and estimate it through an effective method." }, { "figure_ref": [ "fig_7", "fig_6", "fig_5", "fig_5", "fig_5", "fig_5" ], "heading": "Ablation Studies", "publication_ref": [ "b29", "b6", "b11" ], "table_ref": [], "text": "To investigate the importance of different components and the settings of key hyperparameters, we conduct ablation experiments and related discussions in this section. 
The implementation details can be found in supplementary A.\nMain components of BMB. There are two main components in BMB: the class-rebalanced feature memory bank and the adaptive weighting module. To investigate the effectiveness of each component, we gradually add each one and present the experimental results in Tab. 5. We observe that our method only achieves a modest improvement of 1.0% over the baseline when using the memory bank alone. However, when the adaptive weighting module is attached to the memory, the accuracy is further improved by 2.2%.\nRebalancing degree of the memory bank. In the maintenance and updating of the memory bank, there is a critical parameter that controls the degree of rebalancing, namely the coefficient 𝛽 in Eq. ( 3) and Eq. ( 4). As we can see from the equations, a larger value of 𝛽 can lead to a more balanced distribution of features from different classes in the memory bank. When 𝛽 equals zero, the maintenance of the memory bank becomes random, and all features are added to or removed from the memory bank with equal probability, regardless of their category. We visualize the distribution of data in the memory bank under different values of 𝛽 in Fig. 6. When 𝛽 = 0, the data in the memory bank exhibits a imbalanced distribution, this is because the unlabeled data is inherently imbalanced. When 𝛽 = 1, the imbalance is significantly alleviated and the distribution is very close to the ideal balanced distribution (when 𝛽 = ∞). This indicates the our algorithm is effective in rebalancing the imbalanced data.\nTo further investigate how the in-memory data distribution affects model performance, we present the accuracy of the model at various values of 𝛽 in Fig. 5. It can be observed that the model performs poorly when the memory bank is imbalanced, and the accuracy increases as 𝛽 increases. However, when 𝛽 becomes too large, the performance starts to decline. We speculate that this is due to an excessive emphasis on data balance, which may affect the updating rate of data. The results indicate that maintaining a moderately balanced memory bank is necessary.\nNecessity of the base classifier. Two separate classifiers are used in the training process of BMB: a class-rebalanced auxiliary classifier and a vanilla base classifier. During inference, only the auxiliary classifier is utilized while the base classifier is discarded. To validate the need for the base classifier, we employ t-SNE [30] to visualize the representations extracted by the encoder trained with different classifier configurations in Fig. 4. As depicted in Fig. 4(a), the quality of the features extracted by the encoder is poor when only the auxiliary classifier is utilized. However, when the base classifier is incorporated on top of it (Fig. 4(c)), the extracted features are significantly enhanced. Meanwhile, as shown in Fig. 4(b), the quality of the extracted features is also decent when only the base classifier is used, which is in line with the findings in [7,12] that the imbalanced data has little effect on the encoder." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This work delved into the challenging and under-explored problem of class-imbalanced SSL. We proposed a novel approach named BMB, which centers on an online-updated memory bank. The memory caches the historical features and their corresponding pseudo labels, and a crafted algorithm is designed to ensure the inside data distribution to be class-rebalanced. 
Building on this well-curated memory, we apply a re-sampling strategy at the feature level to mitigate the impact of imbalanced training data. To better re-calibrate the classifier and ensure the memory bank being well-initialized and maintained, we also introduce an adaptive weighting module to assist the memory bank. With all the crafted components working in synergy, BMB successfully rebalanced the learning process of the classifier, leading to state-of-the-art performance across multiple imbalanced SSL benchmarks." }, { "figure_ref": [], "heading": "A IMPLEMENTATION DETAILS", "publication_ref": [], "table_ref": [], "text": "Before introducing the specific setting of different hyperparameters in our experiments, we first present the symbols and their corresponding definitions in Tab. 6. Throughout all experiments, we set the value of 𝜆 𝑢 =1, while the other parameters' specific settings will be explained below. Additionally, at the initial stage of training, we perform model warmup by disregarding the unlabeled data in the loss calculation because the pseudo labels are unreliable. Specifically, we carry out 10 epochs of warmup for ImageNet-LT and 20 epochs for other datasets.\nImageNet-LT. For both the 20% and 50% labeled subsets, we set 𝜏 to 0.7, 𝛽 to 3 and 𝜆 𝑚 to 0.75. Regarding the 20% labeled subset, we set 𝛼 to 0.5, whereas for the 50% labeled subset, we set it to 0.75. Similarly, we set 𝜆 to 1.25 for the 20% labeled subset and 0.75 for the 50% labeled subset.\nImageNet127. We assess BMB using the 1% and 10% labeled subsets of ImageNet127, with resolutions of 32×32 and 64×64. We maintain the value of 𝜏 to 0.95 for all experiments, while the other parameter settings for each setup are presented in Tab. 7.\nCIFAR-LT. We set 𝜏=0.95, 𝛼=1.5 and 𝛽=3 for both CIFAR10-LT and CIFAR100-LT. Additionally, we set 𝜆=0.75 and 𝜆 𝑚 =0.25 for CIFAR10-LT, and 𝜆=1.25 and 𝜆 𝑚 =1.25 for CIFAR100-LT.\nMismatched ImageNet. When conducting experiments on Ima-geNet with mismatched labeled and unlabeled sets, including 𝛾 𝑢 =1, 20 and 100, we set the following hyperparameters: 𝜏=0.7, 𝛼=1, 𝛽=3, 𝜆=0.5, and 𝜆 𝑢 =0.75.\nAblation studies. We carry out ablation studies on the CIFAR100-LT dataset with 𝛾 = 10. For these experiments, we maintain consistency with the main experiments except for the specific parameter being explored. " }, { "figure_ref": [ "fig_9" ], "heading": "B MORE EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "We conduct additional ablation experiments in this section to gain further insight into BMB.\nDegree of reversed sampling from the memory bank. When re-sampling from the memory bank, we adopt a reversed sampling strategy, where the parameter 𝜆 controls the extent of reversal. A larger value results in a higher probability of selecting minority classes, and the experimental results visualized in Fig. 7 are consistent with this. We plot the accuracy achieve with different values of 𝜆 in Fig. 8 and observe that a moderate value is suitable, as excessively large or small will lead to deteriorate results. Different configurations of the memory bank. The BMB employs a memory bank to cache the features of the unlabeled data along with their pseudo labels. By default, only the strongly augmented feature 𝐸 (A (𝑢 𝑗 ) is in the memory bank, where 𝐸 (•) denotes the feature extractor. However, each unlabeled sample undergoes two different augmentations, which produces two different versions of features, namely 𝐸 (A (𝑢 𝑗 )) and 𝐸 (𝛼 (𝑢 𝑗 )). 
Consequently, there are multiple configurations of the memory bank, and we can store only one version or both of them. As shown in Tab. 8, the model performs well in all cases, and achieves the best result when only 𝐸 (A (𝑢 𝑗 )) is stored." } ]
Exploring a substantial amount of unlabeled data, semi-supervised learning (SSL) boosts the recognition performance when only a limited number of labels are provided. However, traditional methods assume that the data distribution is class-balanced, which is difficult to achieve in reality due to the long-tailed nature of real-world data. While the data imbalance problem has been extensively studied in supervised learning (SL) paradigms, directly transferring existing approaches to SSL is nontrivial, as prior knowledge about data distribution remains unknown in SSL. In light of this, we propose Balanced Memory Bank (BMB), a semi-supervised framework for long-tailed recognition. The core of BMB is an online-updated memory bank that caches historical features with their corresponding pseudo labels, and the memory is also carefully maintained to ensure the data therein are class-rebalanced. Additionally, an adaptive weighting module is introduced to work jointly with the memory bank so as to further re-calibrate the biased training process. We conduct experiments on multiple datasets and demonstrate, among other things, that BMB surpasses state-of-the-art approaches by clear margins, for example 8.2% on the 1% labeled subset of Ima-geNet127 (with a resolution of 64 × 64) and 4.3% on the 50% labeled subset of ImageNet-LT.
BMB: Balanced Memory Bank for Imbalanced Semi-supervised Learning
[ { "figure_caption": "Long-tailed distribution of CIFAR10-LT. Accuracy of each category on CIFAR10-LT.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: (a) Both labeled and unlabeled data follow a longtailed distribution in imbalanced SSL. Classes with more samples are referred to as majority classes, while those with fewer samples are referred to as minority classes. (b)Conventional SSL algorithms perform poorly on minority classes, with the help of BMB, the model exhibits significant accuracy increase for the minority classes, while maintaining comparable accuracy for the majority classes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The overall framework of BMB, which consists of a shared encoder and two separate classifiers, i.e. an auxiliary and a base classifier, respectively. The auxiliary classifier is trained carefully to avoid being biased towards the majority classes. The base classifier is responsible for facilitating the training of the encoder to extract better features. During inference, only the balanced auxiliary classifier is used while the base classifier is discarded.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Enqueue and Dequeue. The basic intuition of maintaining the memory bank is to keep it category balanced. Specifically, let 𝐶 𝑘", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a) auxiliary classifier only (b) base classifier only (c) both auxiliary and base classifiers", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: T-SNE [30] visualization of the extracted representations learned under different classifier configurations: (a) only the auxiliary classifier, (b) only the base classifier and (c) both the auxiliary and the base classifier are included in the training process of BMB (the default configuration of BMB).", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The test accuracy under different 𝛽 values, indicating how the balance degree effects the model's performance.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The distribution of samples from different classes in the memory bank. 
The larger the value of 𝛽, the more balanced the distribution will be.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "symbol meaning 𝜏 the threshold above which we retain a pseudo label 𝛼 controls the variance of weights in adaptive weighting 𝛽 controls the re-balancing degree of the memory bank 𝜆 controls the degree of reversed sampling 𝜆 𝑢 the relative weight of the loss term L 𝑢 𝜆 𝑚 the relative weight of the loss term L 𝑚𝑒𝑚Table 6: A list of hyperparameters and their respective definitions.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The distribution of data sampled from the memory bank, a larger 𝜆 leads to a more reversed result.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :Table 8 :88Figure 8: The accuracy achieved with varying values of 𝜆.", "figure_data": "", "figure_id": "fig_10", "figure_label": "88", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "overallmany-shot medium-shot few-shot> 20≤20 & >4≤4Vanilla [25]12.524.15.81.0FixMatch [27] 16.532.47.21.2CReST+ [32]18.033.39.41.6CoSSL [7]19.134.511.01.9DARP [13]23.040.713.82.7BMB (ours)25.8 ↑2.8 41.6 ↑0.918.2 ↑4.45.7 ↑3.0(a) ImageNet-LT 20% labeled subsetoverallmany-shot medium-shot few-shot> 50≤50 & >10≤10Vanilla [25]20.936.313.52.6FixMatch [27] 25.244.215.93.0CReST+ [32]27.345.618.95.1CoSSL [7]28.646.920.64.7DARP [13]30.950.322.25.9BMB (ours)35.2 ↑4.3 51.2 ↑0.929.0 ↑6.812.0 ↑6.1(b) ImageNet-LT 50% labeled subset", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Averaged class recall (%) under different input resolutions and different scales of labeled data. We reproduce all the other algorithms using the same codebase released by[7] for a fair comparison. The best results are in bold.", "figure_data": "5]", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Top-1 (%) accuracy on the long-tailed version of Im-ageNet dataset[26], where distributions of the labeled and unlabeled datasets are mismatched.", "figure_data": ".746.746.9BMB (ours)49.948.748.4adaW memory accuracy (%)56.257.259.4", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "We incrementally introduced each module of BMB to evaluate their individual importance.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Hyperparameter settings for ImageNet127 dataset.", "figure_data": "labeled ratio resolution𝛼𝛽𝜆𝜆 𝑚1%32×3221 0.75 0.51%64×641.75 110.510%32×321.5 110.7510%64×641.5 1 1.251", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" } ]
Wujian Peng; Zejia Weng; Hengduo Li; Zuxuan Wu
[ { "authors": "David Berthelot; Nicholas Carlini; Ian Goodfellow; Nicolas Papernot; Avital Oliver; Colin A Raffel", "journal": "", "ref_id": "b0", "title": "Mixmatch: A holistic approach to semi-supervised learning", "year": "2019" }, { "authors": "Mateusz Buda; Atsuto Maki; Maciej A Mazurowski", "journal": "Neural networks", "ref_id": "b1", "title": "A systematic study of the class imbalance problem in convolutional neural networks", "year": "2018" }, { "authors": "Kaidi Cao; Colin Wei; Adrien Gaidon; Nikos Arechiga; Tengyu Ma", "journal": "", "ref_id": "b2", "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "year": "2019" }, { "authors": "Kevin W Nitesh V Chawla; Lawrence O Bowyer; Philip Hall; Kegelmeyer", "journal": "JAIR", "ref_id": "b3", "title": "SMOTE: synthetic minority over-sampling technique", "year": "2002" }, { "authors": "Patryk Chrabaszcz; Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b4", "title": "A Downsampled Variant of ImageNet as an Alternative to the CIFAR datasets", "year": "2017" }, { "authors": "Yin Cui; Menglin Jia; Tsung-Yi Lin; Yang Song; Serge Belongie", "journal": "", "ref_id": "b5", "title": "Classbalanced loss based on effective number of samples", "year": "2019" }, { "authors": "Dengxin Yue Fan; Anna Dai; Bernt Kukleva; Schiele", "journal": "", "ref_id": "b6", "title": "Cossl: Co-learning of representation and classifier for imbalanced semi-supervised learning", "year": "2022" }, { "authors": "Haibo He; Edwardo A Garcia", "journal": "TKDE", "ref_id": "b7", "title": "Learning from imbalanced data", "year": "2009" }, { "authors": "Ju He; Adam Kortylewski; Shaokang Yang; Shuai Liu; Cheng Yang; Changhu Wang; Alan Loddon; Yuille ", "journal": "", "ref_id": "b8", "title": "Rethinking Re-Sampling in Imbalanced Semi-Supervised Learning", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b9", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Minyoung Huh; Pulkit Agrawal; Alexei A Efros", "journal": "", "ref_id": "b10", "title": "What makes ImageNet good for transfer learning?", "year": "2016" }, { "authors": "Bingyi Kang; Saining Xie; Marcus Rohrbach; Zhicheng Yan; Albert Gordo; Jiashi Feng; Yannis Kalantidis", "journal": "", "ref_id": "b11", "title": "Decoupling representation and classifier for long-tailed recognition", "year": "2020" }, { "authors": "Jaehyung Kim; Youngbum Hur; Sejun Park; Eunho Yang; Sung Ju Hwang; Jinwoo Shin", "journal": "", "ref_id": "b12", "title": "Distribution aligning refinery of pseudo-label for imbalanced semi-supervised learning", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "CoRR", "ref_id": "b13", "title": "Adam: A Method for Stochastic Optimization", "year": "2014" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b14", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Samuli Laine; Timo Aila", "journal": "", "ref_id": "b15", "title": "Temporal ensembling for semi-supervised learning", "year": "2017" }, { "authors": "Dong-Hyun Lee", "journal": "", "ref_id": "b16", "title": "Pseudo-label: The simple and efficient semisupervised learning method for deep neural networks", "year": "2013" }, { "authors": "Hyuck Lee; Seungjae Shin; Heeyoung Kim", "journal": "", "ref_id": "b17", "title": "ABC: Auxiliary Balanced Classifier for Class-imbalanced Semi-supervised Learning", "year": "2021" }, { 
"authors": "Tianhong Li; Peng Cao; Yuan Yuan; Lijie Fan; Yuzhe Yang; S Rogerio; Piotr Feris; Dina Indyk; Katabi", "journal": "", "ref_id": "b18", "title": "Targeted supervised contrastive learning for long-tailed recognition", "year": "2022" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b19", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Ziwei Liu; Zhongqi Miao; Xiaohang Zhan; Jiayun Wang; Boqing Gong; Stella X Yu", "journal": "", "ref_id": "b20", "title": "Large-scale long-tailed recognition in an open world", "year": "2019" }, { "authors": "Aditya Krishna Menon; Sadeep Jayasumana; Ankit Singh Rawat; Himanshu Jain; Andreas Veit; Sanjiv Kumar", "journal": "", "ref_id": "b21", "title": "Long-tail learning via logit adjustment", "year": "2021" }, { "authors": "Takeru Miyato; Shin-Ichi Maeda; Masanori Koyama; Shin Ishii", "journal": "TPAMI", "ref_id": "b22", "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "year": "2018" }, { "authors": "Youngtaek Oh; Dong-Jin Kim; In So Kweon", "journal": "", "ref_id": "b23", "title": "DASO: Distribution-aware semantics-oriented pseudo-label for imbalanced semi-supervised learning", "year": "2022" }, { "authors": "Avital Oliver; Augustus Odena; Colin Raffel; Ekin Dogus Cubuk; Ian J Goodfellow", "journal": "", "ref_id": "b24", "title": "Realistic Evaluation of Deep Semi-Supervised Learning Algorithms", "year": "2018" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael S Bernstein; Alexander C Berg; Li Fei-Fei", "journal": "IJCV", "ref_id": "b25", "title": "ImageNet Large Scale Visual Recognition Challenge", "year": "2014" }, { "authors": "Kihyuk Sohn; David Berthelot; Nicholas Carlini; Zizhao Zhang; Han Zhang; Colin A Raffel; Ekin Dogus Cubuk; Alexey Kurakin; Chun-Liang Li", "journal": "", "ref_id": "b26", "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "year": "2020" }, { "authors": "Jingru Tan; Changbao Wang; Buyu Li; Quanquan Li; Wanli Ouyang; Changqing Yin; Junjie Yan", "journal": "", "ref_id": "b27", "title": "Equalization Loss for Long-Tailed Object Recognition", "year": "2020" }, { "authors": "Antti Tarvainen; Harri Valpola", "journal": "", "ref_id": "b28", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" }, { "authors": "Laurens Van Der Maaten; Geoffrey E Hinton", "journal": "JMLR", "ref_id": "b29", "title": "Visualizing Data using t-SNE", "year": "2008" }, { "authors": "Peng Wang; Kai Han; Xiu-Shen Wei; Lei Zhang; Lei Wang", "journal": "", "ref_id": "b30", "title": "Contrastive learning based hybrid networks for long-tailed image classification", "year": "2021" }, { "authors": "Chen Wei; Kihyuk Sohn; Clayton Mellina; Alan Yuille; Fan Yang", "journal": "", "ref_id": "b31", "title": "Crest: A class-rebalancing self-training framework for imbalanced semi-supervised learning", "year": "2021" }, { "authors": "Qizhe Xie; Zihang Dai; Eduard Hovy; Thang Luong; Quoc Le", "journal": "", "ref_id": "b32", "title": "Unsupervised data augmentation for consistency training", "year": "2020" }, { "authors": "Qizhe Xie; Minh-Thang Luong; Eduard Hovy; Quoc V Le", "journal": "", "ref_id": "b33", "title": "Self-training with noisy student improves imagenet classification", 
"year": "2020" }, { "authors": "Boyan Zhou; Quan Cui; Xiu-Shen Wei; Zhao-Min Chen", "journal": "", "ref_id": "b34", "title": "Bbn: Bilateralbranch network with cumulative learning for long-tailed visual recognition", "year": "2020" }, { "authors": "Jianggang Zhu; Zheng Wang; Jingjing Chen; Yi-Ping Phoebe Chen; Yu-Gang Jiang", "journal": "", "ref_id": "b35", "title": "Balanced contrastive learning for long-tailed visual recognition", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 388.29, 442.39, 169.91, 24.75 ], "formula_id": "formula_0", "formula_text": "L 𝑠 = 1 𝐵 𝐵 ∑︁ 𝑖=1 H(𝑦 𝑖 , 𝑓 (𝛼 (𝑥 𝑖 )))(1)" }, { "formula_coordinates": [ 3, 356.59, 588.38, 201.61, 24.75 ], "formula_id": "formula_1", "formula_text": "L 𝑢 = 1 𝐵 𝐵 ∑︁ 𝑗=1 I(𝑚𝑎𝑥 (q 𝑗 ) ≥ 𝜏)H( q𝑗 , 𝑓 (A (𝑢 𝑗 )))(2)" }, { "formula_coordinates": [ 4, 414.48, 426.97, 140.55, 21.24 ], "formula_id": "formula_2", "formula_text": "𝑃 𝑖𝑛 𝑘 = 1 (𝐶 𝑘 ) 𝛽 (3" }, { "formula_coordinates": [ 4, 555.03, 433.04, 3.17, 7.94 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 4, 405.34, 566.76, 152.86, 21.24 ], "formula_id": "formula_4", "formula_text": "𝑃 𝑜𝑢𝑡 𝑘 = 1 - 1 (𝐶 𝑘 ) 𝛽(4)" }, { "formula_coordinates": [ 5, 147.9, 95.59, 146.15, 21.14 ], "formula_id": "formula_5", "formula_text": "𝑃 𝑔𝑒𝑡 𝑘 = 1 (𝑀 𝑘 ) 𝜆(5)" }, { "formula_coordinates": [ 5, 141.66, 325.34, 152.39, 27.36 ], "formula_id": "formula_6", "formula_text": "M𝑘 = | P | ∑︁ 𝑗=1 1(𝑝 𝑗 = 𝑘)(6)" }, { "formula_coordinates": [ 5, 141.59, 639.49, 152.45, 22.16 ], "formula_id": "formula_7", "formula_text": "𝑊 (𝑥 𝑖 ) = 𝑁 𝐾 𝑁 𝑦 𝑖 𝛼 (7)" }, { "formula_coordinates": [ 5, 374.97, 125.15, 183.23, 24.75 ], "formula_id": "formula_8", "formula_text": "L 𝑎 𝑠 = 1 𝐵 𝐵 ∑︁ 𝑖=1 𝑊 (𝑥 𝑖 )H(𝑦 𝑖 , 𝑓 𝑎 (𝛼 (𝑥 𝑖 )))(8)" }, { "formula_coordinates": [ 5, 404.58, 191.19, 153.62, 27.23 ], "formula_id": "formula_9", "formula_text": "𝑊 (𝑢 𝑗 ) = M𝐾 M q𝑗 𝛼(9)" }, { "formula_coordinates": [ 5, 336.44, 250.79, 221.76, 24.75 ], "formula_id": "formula_10", "formula_text": "L 𝑎 𝑢 = 1 𝐵 𝐵 ∑︁ 𝑗=1 𝑊 (𝑢 𝑗 )I(𝑚𝑎𝑥 (q 𝑗 ) ≥ 𝜏)H( q𝑗 , 𝑓 𝑎 (A (𝑢 𝑗 )))(10)" }, { "formula_coordinates": [ 5, 395.74, 357.14, 162.46, 8.43 ], "formula_id": "formula_11", "formula_text": "L 𝑡𝑜𝑡𝑎𝑙 = L 𝑏𝑎𝑠𝑒 + L 𝑎𝑢𝑥(11)" }, { "formula_coordinates": [ 7, 419.52, 104.99, 91.07, 7.32 ], "formula_id": "formula_12", "formula_text": "𝛾 𝑢 = 1 𝛾 𝑢 = 20 𝛾 𝑢 = 100" } ]
10.3386/w27008
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b18", "b12", "b24", "b40", "b37", "b47", "b29", "b41", "b17", "b27" ], "table_ref": [], "text": "Experts in a field sometimes conduct historical studies to synthesize and document the key research ideas, topics of interest, methods, and datasets that shaped a field of study. They document how new research topics eclipsed older ones and contributed to shaping the trajectory of the research area (Kuhn, 1970). Aspiring scientists learn the craft of their discipline by delving into the examination of past scientific accomplishments documented in research papers. However, conducting such a historical study is challenging: Experts in a field rely on years of experience and peruse large amounts of past published articles to determine the chronological progression of a research field. Further, the exponential growth of scientific publications in recent years has rendered it arduous even for domain experts to stay current. Therefore, an automated method to track the temporal evolution of research topics can be beneficial in offering an overview of the field and assisting researchers in staying abreast of advancements more efficiently.\nIn this work, we propose a systematic framework to examine the evolutionary journey of research topics within the realm of Natural Language Processing (NLP), harnessing causal discovery and inference techniques. Prior research on historical analysis of NLP has predominantly concentrated on scrutinizing metadata associated with research papers (Hall et al., 2008;Mohammad, 2019;Uban et al., 2021;Singh et al., 2023;Wahle et al., 2023) such as number of citations, title, author profile, affiliation, and publication venue. These studies have examined the research trends through unigram or bigram frequency analysis, but they do not provide insights into the underlying causes propelling these research topics.\nOur study centers on four distinct fundamental types of entities in NLP research: tasks representing well defined problems; methods, signifying the solutions or approaches employed to tackle the tasks; datasets, indicating the relevant textual resources such as corpora and lexicons; and metrics, encompassing the evaluation techniques tailored to specific tasks. We abbreviate these types as TDMM for short. Specifically, we examine the interplay between an NLP task that is commonly viewed as a focused research topic (e.g., Machine Translation) and the key entities that exert pivotal influence on the target task (such as \"BLEU\" (Papineni et al., 2002) or \"Transformers\" (Vaswani et al., 2017)).\nOur goal is to identify the TDMM entities (E) associated with a specific task (t) and assess their causal influence on the task's research trends (TDMM-Task causal analysis). Specifically, we address the following key research questions associated with a task entity t: . Tables show the top causal entities/types for different periods (excluding 1979-1989 due to limited MT papers).\nt? (b) Are there discernible causal relationships between t and E? (c) What is the extent of the causal impact exerted by E on t? Unlike Uban et al. ( 2021) and Koch et al. (2021) that heavily rely on manual annotations and have limited coverage, our analysis is based on TDMM entities automatically extracted from 55K papers in the ACL Anthology 2 . 
Our framework not only recognizes the key entities driving the research direction of a research topic but also measures the causal effects of these entities on the target topic in an end-to-end fashion. Figure 1 shows the most influential entities for Machine Translation (MT) in different time periods. For instance, \"statistical models\" used to be the popular method for MT in 1990-2002, and the evaluation metric \"BLEU\" is one of the top causal entities driving the MT research in 2003-2017. In the era of pre-trained large language models (LLMs) starting from 2018, \"transformer\" has become the popular method for MT. For another research topic of \"Speech recognition\", our framework uncovers the influential role of \"language modeling\" between 1979 to 2022, where speech recognition models utilize probability scores from language models to recognize coherent text from speech (Negri et al., 2014).\nIn this work, we analyze 16 tasks from a diverse set of research areas identified by ACL 2018 organizers. Our framework is versatile and applicable to other tasks and domains, benefiting both young and experienced researchers. It can aid in litera-2 https://aclanthology.org/ ture surveys by identifying related research areas and enable young researchers to delve into new research focuses by establishing connections among different research areas.\nIn summary, we make three-fold contributions in this study: Firstly, we propose a framework to quantify research activities, including (1) trends and stability of an NLP research task, and (2) relation intensity between TDMM entities and NLP research tasks. Secondly, we employ causal analysis algorithms to uncover causal structures and measure effects between tasks and related TDMM entities (TDMM-Task causal analysis). To the best of our knowledge, this represents the first historical study of a scientific research anthology from a causal perspective. Finally, through extensive experiments on the ACL Anthology, we offer an empirical overview of the NLP research landscape. In the following sections, we will refer to TDMM-Task causal analysis as causal analysis." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b12", "b38", "b35", "b4", "b32" ], "table_ref": [], "text": "Scientific Trends Analysis The analysis of scientific trends has been a research focus since Hall et al. (2008). In the field of \"scientometrics\", extensive literature explores citation patterns and utilizes topological measures in citation networks for trend analysis (Small, 2006;Shibata et al., 2008;Boyack and Klavans, 2022).\nAnother line of research focuses on metadata and content analysis. For instance, Prabhakaran et al. (2016) employed rhetorical framing to exam-" }, { "figure_ref": [], "heading": "Papers from the ACL Anthology", "publication_ref": [ "b28", "b2", "b33", "b7", "b23", "b8", "b15", "b10", "b16" ], "table_ref": [], "text": "Causal Graph (DirectLiNGAM)\nStep 1: Pre-processing (2021) analyzed relationships between NLP research topics based on their co-occurrence in text and the degree of correlation between their popularity over time. In our work, we develop entity recognition models to extract TDMM entities from NLP research papers and focus on analyzing the causal relations between a task entity and its related TDMM entities.\nCausality in NLP Existing works on NLP applying causal analysis algorithms mainly focus on two directions. 
The first line of work discovers causal relations among textual features or expressions of events in texts and uses them in various downstream tasks, such as question answering (Oh et al., 2016), commonsense reasoning (Bosselut et al., 2019;Sap et al., 2019), and relation extraction (Do et al., 2011;Mirza and Tonelli, 2014;Dunietz et al., 2017).\nIn another avenue of this field, researchers represent causal elements using textual features (Jin et al., 2021;Fong and Grimmer, 2016;Veitch et al., 2020;Keith et al., 2020) and define the causal graph structure based on domain knowledge. Our work falls within this line of research, where we employ causal algorithms to analyze the trends in NLP research topics and the underlying causes." }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [ "b13" ], "table_ref": [], "text": "ACL Anthology Corpus Following prior work by Mohammad (2020), we utilize ACL Anthology as the source of NLP Research papers. For this work, we collect 55,366 NLP papers that belong to the \"ACL Events\" category3 from the ACL anthology published between 1979 and 2022. For each paper, we use GROBID (GRO, 2008(GRO, -2022) ) and the PDF table parser from Hou et al. (2019) to extract sentences from each of the individual sections as well as from the table and figure captions. In a post-processing step, we remove all the URLs from the extracted sentences. On average, we have 1,258 papers per year and 1,117 sentences per paper.\nIt is worth noting that certain NLP paper preprints may become accessible on preprint servers before they are officially published in the ACL Anthology. However, we argue that the peer review process in ACL Anthology serves as a robust quality assurance mechanism. Hence, we consider ACL Anthology a more reliable source compared to preprint servers." }, { "figure_ref": [], "heading": "TDMM Entity Extraction", "publication_ref": [ "b0", "b14", "b20", "b34" ], "table_ref": [ "tab_1" ], "text": "To identify tasks, datasets, metrics, and methods entities from NLP papers, we developed two entity taggers based on Flair (Akbik et al., 2018). The first tagger is based on the TDMSci annotations (Hou et al., 2021) (Luan et al., 2018) to extract method entities. On the testing datasets of TDMSci and SciERC, the two taggers achieve a micro-average F1 of 0.77 and 0.78 for the type partial match (Segura-Bedmar et al., 2013), respectively. In type partial match, a predicted entity is considered correct if it partially overlaps with a gold entity and has the same type. For example, \"Penn Treebank\" is counted as a correct prediction even if the corresponding gold annotation is \"Penn Treebank dataset\".\nTo further improve the precision of the TDMM taggers, we include only entities that appear in more than five papers in the dataset. For each paper, we collect the most frequent task mentions appearing in the title, abstract, experiment section, table, and figure captions to approximate the tasks that the paper has done research on.\nTaxonomy for Periods of Reference In order to facilitate in-depth analysis, in this paper, we adopt a taxonomy that partitions our reference time frame into four distinct intervals. Table 1 illustrates the defined intervals. These intervals have been designed to approximate the overarching trends observed in NLP research throughout the years, aligning with our perspective on the field's evolution. 
It is important to acknowledge that the exact boundaries and thematic emphases may differ based on varying perspectives and specific research areas within NLP. However, we highlight that our framework and methodologies are highly adaptable, allowing end users to effortlessly apply them to any desired time interval or a specific analysis." }, { "figure_ref": [], "heading": "Entity Influence in NLP Research: A Regression Analysis", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "Before conducting the causal analysis, we aim to identify the key variables that significantly impact the evolution of NLP Research. influence on the research direction of NLP. To achieve this understanding, we employ Multiple Linear Regression (see Appendix C for details), a widely utilized tool in economics research (Barrios and Hochberg, 2020). Figure 2 (step1/step2) illustrates the framework. Our analysis assumes that if the TDMM entities have played a role in the emergence or disappearance of task entities, this influence will be reflected in the number of unique task entities in subsequent years, which can be captured through regression analysis. While the study does not provide specific information on the precise influence of each TDMM entity on individual task entities, the partial regression coefficients shed light on the types of entities responsible for influencing the overall task entity landscape.\nMethod. Mathematically, we predict the number of task entities Y t in a given year t as a function of the cumulative counts of all types of entities {X\nt -1 i } (TDMM entities) until that year, t -1 , given by Y t = r 0 + i r i X t -1\ni . {r i } quantifies the relationship strength between the predicted variable (number of task entities) and the independent variables (number of TDMM entities).\nEvaluation. We evaluate the regression model using the R 2 measure (coefficient of determination) to assess the goodness of fit. Additionally, we perform a null hypothesis test to determine the statistical significance of the partial regression co-efficients.\nResults and Discussion. 1) Optimized Number of Variables. In our initial experiment, we determine the optimal number of variables and summarize the corresponding R 2 values in Table 2. Additionally, all regression coefficients are statistically significant at 5% level, indicating their strong relationship with the predicted variable. Discussion: The overall results indicate that the model achieves a good fit to the data when all four variables (number of tasks, datasets, metrics, and method entities) are used to predict the number of task entities in subsequent years. We also explore the possibility of reducing the number of variables while maintaining similar performance. Interestingly, using only one variable results in a significant drop of 0.1 in the R 2 value (R 2 value 0.87), indicating a poor fit to the model. Conversely, increasing the number of variables improves the model fit, suggesting the significance of all four variables in analyzing research trends (R 2 value 0.97). It is worth noting that we exhaustively explored various combinations of variables, including those presented in the table, and consistently obtained similar results.\n2) Influence of the Variables. In the second experiment, we assess the association between the target variable and each independent variable. In Table 3, we present the regression coefficients corresponding to each entity type. 
Larger values of regression coefficients indicate a stronger relationship between the target variable and the respective independent variable. Discussion: Overall, we note that the gradual emergence of newer tasks has been a driving force behind research progress. However, when we analyze the trends within each year interval, we uncover more nuanced patterns. During the Early Years (1979)(1980)(1981)(1982)(1983)(1984)(1985)(1986)(1987)(1988)(1989), when NLP was in its nascent stage as an independent research field, the focus was on creating new datasets to fuel research advancements. In the Formative Years (1990Years ( -2002)), we witnessed the introduction of new methods, particularly data-driven approaches, which played a crucial role in shaping the field. newer datasets in a relatively short span of time, driven by the research needs and the data requirements of deep learning models. These highlight key factors influencing research trajectory over time." }, { "figure_ref": [], "heading": "Causal Methodology for NLP Research Analysis", "publication_ref": [], "table_ref": [], "text": "Drawing on the insights gained from the Regression Analysis (Section 4), we now establish the cornerstone of our study by defining three causal variables that drive the causal analysis in the subsequent sections. Using causal discovery and inference techniques, we analyze the causal relationships among the variables and measure the impact of TDMM entities on target task entities based on these relationships. Figure 2 illustrates the architecture that underpins our framework." }, { "figure_ref": [], "heading": "Causal Variables", "publication_ref": [ "b39", "b32", "b48", "b26", "b21" ], "table_ref": [], "text": "Task Frequency Shift Value: Distinguishing from the previous approaches (Tan et al., 2017;Prabhakaran et al., 2016), that rely on word frequencies, we define task frequency f (y) t as the number of published papers focusing on a specific task y in a given year t, normalized by the total number of papers published on the same year. The task frequency shift value ∆f req t 2 t 1 (y) captures the average change in the number of published papers on y between two years t 1 < t 2 . This value serves as a measure of the research trends associated with the task during that time interval, indicating whether it experienced growth or decline. The frequency shift value is given by: ∆f req\nt 2 t 1 (y) = f (y)t 2 -f (y)t 1 t 2 -t 1 .\nTask Stability Value: We introduce the concept of task stability value to measure the change in the research context of a given task, y, between two years, t 1 < t 2 . This value quantifies the overlap in neighboring TDMM entities that appear in the same publication as y within the specified time interval.\nTo calculate task stability, we adapt the semantic stability approach of Wendlandt et al. (2018) to our setting and define it specifically for task entities. Following Mondal et al. (2021), we represent each paper in our dataset as a sequence of TDMM entity mentions, removing non-entity tokens. We then employ \"Skip-gram with negative sampling\" (Mikolov et al., 2013) to obtain embeddings from this representation. Formally, let e 1 , e 2 , ..., e n be this entity representation of a paper, and the objective of skip-gram is to maximize the mean log probability 1 n n i=1\n-c≤j≤c logp(e i+j |e i ), where c is called the context window size. 
Finally, the task stability value $\Delta stability_{t_1}^{t_2}(y)$ of $y$ between $t_1$ and $t_2$ is computed as the percentage overlap between the nearest $l$ neighboring entities of the given task in the two representation spaces. The stability value is given by: $\Delta stability_{t_1}^{t_2}(y) = \frac{|N_{t_1}^{l}(y) \cap N_{t_2}^{l}(y)|}{|N_{t_1}^{l}(y) \cup N_{t_2}^{l}(y)|}$, where $N_{t}^{l}(y)$ is the set of $l$ neighbours of $y$ in the representation space of year $t$. In this study, we consider the context window $c$ to encompass the entire document, and we set the value of $l$ to 5." }, { "figure_ref": [], "heading": "Entity Change Value:", "publication_ref": [], "table_ref": [], "text": "We use the entity change value to track the emergence and disappearance of specific TDMM entities associated with a task, quantifying these changes and capturing related entity occurrences within a specific time period. Put simply, we measure the difference in the co-occurrence frequency of a TDMM entity $x$ and a task $y$ between two years $t_1$ and $t_2$. When we identify a significant change in the co-occurrence frequency of $x$ and $y$ over this period, it likely signals a shift in the relation between $x$ and $y$ and, in turn, a shift in NLP research trends. We define the entity change value $\delta_y(x)_{t_1}^{t_2}$ of an entity $x$ of type $\tau(x) \in \{task, dataset, metric, method\}$ with respect to a task $y$ as the absolute difference in the frequencies of $x$ co-occurring with $y$ in the same sentence between years $t_1$ and $t_2$, normalized by the total number of entities of the same type as $x$ that co-occur with $y$ in both years. The entity change value is given by:
$\delta_y(x)_{t_1}^{t_2} = \frac{|C_{t_1}(x,y) - C_{t_2}(x,y)|}{\sum_{e : \tau(e) = \tau(x)} (C_{t_1}(e,y) + C_{t_2}(e,y))}$, where the frequency of $x$ co-occurring with $y$ in year $t$ is given by $C_t(x, y)$.
In summary, we quantify task trends and research context changes using the task frequency change and task stability values. Below, we explore the relationship between entity change values and these two variables and estimate the causal impact of TDMM entities on task research landscapes." }, { "figure_ref": [], "heading": "Causal Algorithms", "publication_ref": [ "b36", "b6", "b45" ], "table_ref": [], "text": "Causal Structure Discovery To uncover the causal structure among variables from observational data, we employ DirectLiNGAM (Shimizu et al., 2011), which assumes a non-Gaussian data-generating process. Since the variables in Section 5.1 come from non-Gaussian frequency distributions, DirectLiNGAM is suitable. It uses an entropy-based measure to successively subtract the effect of each independent variable. Unlike PC-Stable (Colombo and Maathuis, 2014), it does not require iterative search or algorithmic parameters. We apply DirectLiNGAM with a 5% significance level for causal discovery (see Appendix D for details).
Causal Inference Once the causal structure between the variables has been established, we leverage this structure to assess the causal effects. Specifically, we measure the causal effect of the entity change value of an entity $x$ on the frequency shift, and subsequently on the stability value, associated with a given task $y$. For this purpose, we use the probability density function instead of the probability mass, as all our causal variables are continuous in nature. We measure the causal effects in two steps: first, we estimate the probability density of the entity change variable using a linear regression model. In the next step, we regress the frequency shift and stability against the entity change value, weighted by the inverse probability densities obtained in the previous step.
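Before turning to the spline-based effect estimation described next, the following minimal sketch illustrates how the three causal variables defined above could be computed from simple per-year statistics (all inputs are toy values; in the study they are derived from the entity-tagged ACL Anthology):

# Minimal sketch of the three causal variables from Section 5.1.

def frequency_shift(f_t1: float, f_t2: float, t1: int, t2: int) -> float:
    # Delta freq_{t1}^{t2}(y) = (f(y)_{t2} - f(y)_{t1}) / (t2 - t1)
    return (f_t2 - f_t1) / (t2 - t1)

def task_stability(neigh_t1: set, neigh_t2: set) -> float:
    # overlap of the l nearest neighbours of y in the two embedding spaces
    return len(neigh_t1 & neigh_t2) / len(neigh_t1 | neigh_t2)

def entity_change(c_t1: dict, c_t2: dict, x: str) -> float:
    # c_t: co-occurrence counts C_t(e, y) for all entities e of the same type as x
    num = abs(c_t1.get(x, 0) - c_t2.get(x, 0))
    den = sum(c_t1.get(e, 0) + c_t2.get(e, 0) for e in set(c_t1) | set(c_t2))
    return num / den if den else 0.0

# toy example
print(frequency_shift(0.02, 0.05, 2003, 2017))
print(task_stability({"bleu", "wmt", "transformer", "attention", "beam_search"},
                     {"bleu", "transformer", "attention", "bert", "sacrebleu"}))
print(entity_change({"transformer": 3, "lstm": 40}, {"transformer": 90, "lstm": 10}, "transformer"))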
We model the functional form of this regression using a spline to avoid bias due to misspecification. Finally, we calculate the causal effect as Veitch and Zaveri (2020)\n: µ(∆f req t 2 t 1 (y)) = E[∆f req t 2 t 1 (y)|δ y (x) t 2 t 1 ] and similarly, µ(∆stability t 2 t 1 (y)) = E[∆stability t 2 t 1 (y)|δ y (x) t 2 t 1 ]." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "Correlation-based measures provide a simple way to quantify the association between variables. However, they fall short of explaining complex causeeffect relationships and can yield misleading results. Causality is essential for gaining a deeper understanding of variable relationships, enhancing the robustness and reliability of our findings beyond the limitations of correlation. We discuss more about the importance of causal methods over correlation-based measures in Section 7. In this section, our focus is on uncovering relationships among causal variables (Section 6.1) and measuring the impact of TDMM entities on target task entities (Section 6.2)." }, { "figure_ref": [ "fig_1", "fig_3" ], "heading": "Causal Relation between the Variables", "publication_ref": [ "b5" ], "table_ref": [], "text": "Figure 3 shows the discovered causal graph for the frequency shift of task entities. Overall, we observe that the entity change values of associated tasks, datasets, metrics, and methods have a direct causal effect on the frequency shift values of the target tasks. Since frequency shift value quantifies the trend in NLP research, we infer from the causal graph that the trend of a task is governed primarily by the life cycles of its associated TDMM entities. We see similar causal relation on task stability value (see Figure 4, Appendix A). Evaluation: We perform a sensitivity analysis of the causal graph by adding Gaussian noise with zero mean and unit variance to the entity change values in the data (Cinelli et al., 2019). This gives an estimate of the robustness of the graph in the presence of unobserved confounders. We observe that the graph is stable to unobserved confounding, giving all edge probabilities greater than 0.5." }, { "figure_ref": [], "heading": "Causal Impact of the Variables", "publication_ref": [ "b4", "b31", "b3" ], "table_ref": [ "tab_4" ], "text": "The organizers of ACL 20184 categorize NLP research into 21 areas, and provide a set of popular tasks for each area. Out of those, we curate 16 areas and select one task from each based on its frequency of occurrence in our corpus. We estimate the effect of TDMM entities (entity change value) behind the development of these tasks (frequency shift value) (see Section 5.1) and summarize the results in Table 4. Since we do not have confounders (Section 6.1), evaluating the causal effect reduces to estimating the conditional expectation of the frequency shift values given the entity change values. We present detailed results in Table 5. We examine the results by addressing the following set of inquiries. Named Entity Recognition (NER) has also been influenced by Hidden Markov Models, particularly in its early days (1990)(1991)(1992)(1993)(1994)(1995)(1996)(1997)(1998)(1999)(2000)(2001)(2002), as NER is often formulated as a sequence tagging problem. Various parser algorithms were employed to solve the problem in the period between 2003 and 2017.\nFor Semantic Parsing, parser algorithms have been instrumental and have had a significant impact on research in this area. 
Between 1979 and 1989, Grammar Induction techniques were used to elicit the underlying semantic parse trees.\nFrom 1990 to 2002, researchers employed various statistical models in Morphological Analysis, which is evident from our results.\nIn Semantic Role Labeling, Support Vector Machines and Neural Network Models have been widely used to solve this task.\nIn Co-reference Resolution, Neural Network models have gained prominence starting in 2018. However, from 2003 to 2017, Integer Linear Programming was also utilized to address this problem.\nPre-trained Language Models (LLMs) have demonstrated superior performance in several NLP tasks, including Question Answering. Researchers have also explored parsing algorithms to parse questions and align them with potential answers.\nFurthermore, Textual Entailment and Summarization have been heavily influenced by pre-trained LLMs between 2018 and 2022, as evident from our results. Q2. How have changes in data availability contributed to the NLP Research Tasks? High-quality datasets play a crucial role in advancing NLP research. While new methodologies are important, they cannot fully propel the field forward without the support of high-quality datasets. Researchers understand the significance of dataset quality and actively curate datasets to drive advancements in the field. Our findings further confirm the prevalence of this trend, highlighting the strong emphasis on dataset quality in NLP research.\nIn the early stages of deep neural models, such as Recurrent Neural Networks (RNNs), the creation of large datasets became essential for efficient model training. Between 2018 and 2022, several datasets were curated, with MultiWoz being the most widely used dataset for research in Dialogue Systems.\nIn the domain of Machine Translation, the significance of datasets in shaping research direction cannot be overlooked. The influence of WMT datasets on Machine Translation research is evident from our findings.\nFor Morphological Analysis, the Universal De-pendency Treebank dataset is frequently used as a benchmark, indicating its importance in driving research in this area.\nDuring the period of 1990-2002, the creation of the MUC-VI dataset played a crucial role in advancing research in Co-reference resolution.\nIn the field of Sentiment Analysis, the Twitter dataset holds significant importance in driving research in this domain.\nOverall, our analysis underscores the vital role of datasets in shaping and driving research across various NLP tasks. Q3. Do evaluation metrics drive paradigm shifts in NLP research? Most NLP tasks rely on a standard set of metrics borrowed from other domains, such as machine learning and computer vision, to evaluate system performance. However, there is limited research dedicated to improving these metrics within the field of NLP, as it often requires theoretical knowledge beyond the scope of NLP itself. Despite this, our analysis in Table 5 reveals some noteworthy exceptions. Metrics explicitly designed for evaluating NLP tasks, such as BLEU and METEOR, have demonstrated significant impact in advancing Machine Translation research. Similarly, the metric ROUGE has influenced research in the field of Summarization. While perplexity scores are commonly used to measure the generalization capabilities of probability distributions, they are predominantly utilized for evaluating language models in NLP tasks.\nQ4. What is the causal impact of cross-pollination of ideas between related NLP tasks? 
We consistently observe a pattern of related NLP tasks evolving in tandem, borrowing ideas and tech-niques from one another. This trend is clearly reflected in our findings. For instance, Speech Recognition and Machine Translation are linked as researchers explore end-to-end systems that translate speech, and our results show that Machine Translation has had the greatest influence on Speech Recognition research between 2003 and 2022.\nNamed Entity Recognition (NER) is commonly approached as a sequence tagging problem, and it is influenced by related tasks such as POS Tagging (2003-2017) and Relation Extraction (2018-2022), as these problems are often jointly solved. Similarly, POS Tagging initially posed as a text classification problem (1990)(1991)(1992)(1993)(1994)(1995)(1996)(1997)(1998)(1999)(2000)(2001)(2002), is significantly impacted by the word segmentation task, as evident from our results in the period of 2018-2022.\nIn recent years (2018)(2019)(2020)(2021)(2022), dependency and semantic parsing have been jointly solved using the same neural model, highlighting the influence of dependency parsing on research in semantic parsing. Sentiment Analysis has garnered considerable research interest and is commonly framed as a text classification problem. Additionally, Argument Mining, which involves understanding the sentiments behind arguments, is influenced sentiment analysis. Furthermore, the classification of various argument components, such as claims and evidence, is often approached as text classification problems, as evidenced by our results.\n7 Discussion: Correlation and Causation \"correlation does not imply causation\" - Pearson (1892) Causation and correlation, although related, are distinct concepts. While they can coexist, correlation does not simply imply causation. Causation signifies a direct cause-and-effect relationship, where one action leads to a specific outcome. In contrast, correlation simply indicates that two actions are related in some way, without one necessarily causing the other.\nIn our work, we focus on causal inference from data. While correlation-based measures provide a straightforward method for quantifying associations between variables, they often fall short when it comes to explaining complex cause-and-effect relationships.\nTo demonstrate the effectiveness of our framework, we establish a simple baseline using a PMIbased correlation measure (Bouma, 2009). For this analysis, we select Machine Translation as our tar-get task entity due to its prominent presence in our corpus and the NLP research landscape. We calculate the PMI scores of Machine Translation with all other TDMM entities. The PMI score represents the probabilities of co-occurrence between two entities in sentences from research papers, normalized by their individual occurrence probabilities.\nInterestingly, we find that accuracy, an entity of type metric, has the highest PMI score with Machine Translation among all other entities. However, it is important to note that accuracy is a widely used metric across various NLP tasks, and it is not specifically developed for machine translation, nor has machine translation influenced the concept of accuracy. This observation emphasizes the insufficiency of relying solely on correlation-based metrics to understand and analyze research influence on an entity.\nWe observe that relying solely on correlations can lead to misleading results and interpretations. 
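For reference, a minimal sketch of such a PMI-based baseline is shown below (the counts are toy values; in this setting they would come from sentence-level co-occurrences of tagged entities):

# Minimal sketch of the PMI baseline: PMI(x, y) compares the sentence-level
# co-occurrence probability of two entities with the product of their
# individual occurrence probabilities. Counts below are toy values.
import math

def pmi(count_xy: int, count_x: int, count_y: int, n_sentences: int) -> float:
    p_xy = count_xy / n_sentences
    p_x = count_x / n_sentences
    p_y = count_y / n_sentences
    return math.log2(p_xy / (p_x * p_y)) if count_xy else float("-inf")

# e.g. the task "machine translation" against the metric "accuracy"
print(pmi(count_xy=1200, count_x=15000, count_y=40000, n_sentences=2_000_000))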
Therefore, in order to understand the influence of associated TDMM entities on NLP Task entities, we utilize causal algorithms that enable us to gain insights into the cause-and-effect dynamics among the variables we study." }, { "figure_ref": [], "heading": "Concluding Remarks", "publication_ref": [ "b9" ], "table_ref": [ "tab_1" ], "text": "In this paper, we retrospectively study NLP research from a causal perspective, quantifying research trends of task entities and proposing a systematic framework using causal algorithms to identify key reasons behind the emergence or disappearance of NLP tasks. Our analysis reveals that tasks and methods are the primary drivers of research in NLP, with datasets following their influence, while metrics have minimal impact. It is important to note that in our analysis, we have structured the reference time into four distinct intervals (see Table 1); however, it can be applied to diverse timeframes, ranging from longer periods to brief intervals, including single years. This adaptability, in the context of rapid recent advancements in NLP, allows to zoom in on local trends and developments that might otherwise go unnoticed (such as the influence of in-context learning on NLP tasks).\nWe believe our causal analysis enhances understanding of the interplay of research entities in NLP, contributing to the growing body of work on causality and NLP (Feder et al., 2021). We provide with additional analysis and insights in Appendix B." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This work is centered on NLP research papers from ACL Anthology, with a focus on papers from the \"ACL Events\" category. The \"ACL Events\" category encompasses major conferences, workshops, and journals, including ACL, NAACL, EMNLP, EACL, AACL, CL, and TACL. We also include papers published at COLING from the \"non-ACL Events\" category. Nevertheless, it is important to acknowledge the presence of NLP papers beyond ACL Anthology in AI journals, regional conferences, and preprint servers. Furthermore, we recognize that certain NLP papers may become available on preprint servers before their official publication in peer-reviewed venues. In this study, we focus on ACL Anthology, which can introduce a time lag when assessing the early impact of influential papers released as preprints (e.g., BERT) or only on preprint servers (e.g., RoBERTa). To address such challenges, we leave the curation and inclusion of NLP research papers from these alternative sources for future works.\nOur framework requires research papers tagged with entities as input. Hence, the quality of the tags plays a crucial role in the causal inference of our proposed method. The taggers generate noisy outputs and, thus, might require human intervention to denoise the tags. Moreover, causal algorithms require a large amount of data to produce statistically significant results. Hence, research areas that are less explored or newly emerging may not always be suitable for this framework to be applied. Additionally, we highlight that in this work, we do not consider extra-linguistic factors like author affiliations, funding, gender, etc. We leave them for future research work. " }, { "figure_ref": [], "heading": "B Appendix: Supplementary Analysis", "publication_ref": [], "table_ref": [], "text": "In addition to the primary results presented in the paper (Section 6), in this section, we describe the supplementary analysis." 
}, { "figure_ref": [], "heading": "B.1 NLP Tasks and Their Dataset Evolution", "publication_ref": [], "table_ref": [], "text": "Frequently Pursued NLP Tasks. From Table 5 in our paper, we observe that overall (from 1979-2022), among all the tasks, \"Text Classification\" (column 6) holds a remarkable position. This prominence stems from the frequent usage of various NLP tasks being framed or aligned as \"Text Classification\" or borrowing concepts from it to address other tasks such as \"Sentiment Analysis\" or \"Word Sense Disambiguation.\" Additionally, our framework offers the flexibility to perform a similar analysis between any chosen periods.\nEvolution of Datasets in NLP Tasks. Referring to Table 5 in our paper, in the context of \"Speech Recognition,\" we observe a shift in influential datasets over different periods. Between 1990-2002, the \"WSJ Corpus\" took the lead, while in the subsequent period of 2003-2017, the \"ATIS Dataset\" had more influence. Interestingly, between 2018-2022, the trend shifted once again to the \"Switchboard Dataset\".\nA similar trend is reflected in the \"Summarization\" task as well: in the years 1990-2002, \"Wordnet\" played a significant role, while the \"Gigaword Dataset\" took over in 2003-2017. However, in the most recent period of 2018-2022, \"Pubmed\" emerged as the notable dataset for the \"Summarization\" task.\nCommon Datasets Across NLP Tasks. We observe from Table 5 (column 6) that across the entire span from 1979 to 2022, the \"Penn Treebank\" dataset emerged as a pivotal influence, significantly impacting tasks such as \"Language Modeling,\" \"POS Tagging,\" and \"Semantic Parsing.\" Using our framework, a similar analysis could also be done between any chosen periods." }, { "figure_ref": [ "fig_3" ], "heading": "B.2 Entitiy Influence on Task Frequency and Stability", "publication_ref": [ "b50" ], "table_ref": [], "text": "Influence of Research Entities on Task Stability.\nWe measure the causal effect of research entities on Task Stability Value (see Section 5.1). From the resulting causal graph (Figure 4), we observe that the entity change values of associated tasks, datasets, metrics, and methods directly impact the stability value of the target task, similar to the task frequency shift value.\nCorrelations Between Task Frequency Change and Stability. We observe a slightly positive correlation between frequency change and stability of research tasks with a Pearson coefficient of 0.08. This is because when a new task emerges, initially, a few researchers start working on it, which gradually increases its frequency of appearance. At the same time, researchers experiment with various methods and datasets to solve these newly emerged tasks, causing high instability (e.g., Math Problem Solving (Zhang et al., 2018)). On the contrary, the opposite is not always true: well-defined tasks are often the most researched, and yet researchers always explore new ideas on these tasks, which harms stability." }, { "figure_ref": [], "heading": "C Appendix: Multiple Linear Regression", "publication_ref": [ "b30" ], "table_ref": [], "text": "We use multiple linear regression to regress a variable on several variables (Pearl et al., 2016).\nFor instance, if we want to predict the value of a variable Y using the values of variables X 1 , X 2 , ..., X k-1 , X k , we perform multiple linear regression of Y on {X 1 , X 2 , ..., X k-1 , X k }, and estimate a regression relationship (Eqn. 
1), which represents an inclined plane through the (k + 1)-dimensional coordinate system.
$Y = r_0 + \sum_{i=1}^{k} r_i X_i \quad (1)$
The Gauss-Markov theorem (Williams and Rasmussen, 2006) simplifies the computation of the partial regression coefficients ($r_1, ..., r_k$ in Eqn. 1). It states that if we write $Y$ as a linear combination of $X_1, X_2, ..., X_{k-1}, X_k$ and a noise term $\epsilon$, i.e., $Y = r_0 + \sum_{i=1}^{k} r_i X_i + \epsilon$, then, regardless of the distributions of the variables $Y, X_1, X_2, ..., X_k$, the best least-squares coefficients are obtained when $\epsilon$ is uncorrelated with each regressor, i.e., $\mathrm{Cov}(\epsilon, X_i) = 0$ for every regressor $X_i$." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Ilia Kuznetsov for his feedback on the initial version of this work. We appreciate all the anonymous reviewers for their helpful comments and suggestions for further analysis. This work has been funded by the German Research Foundation (DFG) as part of the Research Training Group KRITIS No. GRK 2222." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "In this work, we use publicly available data from ACL Anthology and do not involve any personal data. It is important to recognize that, while our framework is data-driven, individual perspectives toward research are inherently subjective. Decisions involving science should consider data as well as ethical, social, and other qualitative factors. Furthermore, we underscore that the low influence of TDMM entities in our analysis should not be the sole reason for devaluing research papers or reducing their investments. Ethical and academic considerations should guide decisions on research evaluation and resource allocation.
Table 5: The primary reason behind the frequency shift of the tasks. We analyze the trends in four different periods of reference. The most influential Task (T), Dataset (D), Method (M) and Metric (m) are given in decreasing order of their influence. \"-\" means there are not enough data instances for the causal analysis." }, { "figure_ref": [], "heading": "D Appendix: Algorithms", "publication_ref": [ "b36" ], "table_ref": [], "text": "D.1 DirectLinGAM
Algorithm 1: Causal Graph Discovery: DirectLinGAM. 1 Given a p-dimensional random vector x, a set of its variable subscripts U and a p × n data matrix of the random vector as X, initialize an ordered list of variables K := ∅ and m := 1; 2 Repeat until p - 1 subscripts are appended to K: perform least-squares regression of x_i on x_j, ∀i ∈ U - K (i ≠ j), and compute the residual vectors r^(j) and the residual data matrix R^(j) from the matrix X, ∀j ∈ U - K. Find a variable x_m independent of its residuals and append m to the end of K; 3 Append the remaining variable to the end of K; 4 Construct a strictly lower triangular matrix B by following the order in K, and estimate the connection strengths b_ij by using some conventional covariance-based regression, such as least squares or maximum likelihood, on the original random vector x and the original data matrix X.
In Algorithm 1, we describe the DirectLinGAM algorithm (oracle version) at a high level, as described by Shimizu et al. (2011)." } ]
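As a practical note, the discovery step described in Algorithm 1 could be run as in the following sketch, assuming the open-source lingam Python package (pip install lingam); the synthetic data below is illustrative only:

import numpy as np
import lingam

rng = np.random.default_rng(0)
n = 1000
# columns: entity change values (tasks, datasets, methods, metrics) and frequency shift
x_task = rng.uniform(size=n)
x_data = rng.uniform(size=n)
x_meth = rng.uniform(size=n)
x_metr = rng.uniform(size=n)
# non-Gaussian noise, as assumed by DirectLiNGAM
freq_shift = 0.8 * x_task + 0.5 * x_meth + 0.1 * rng.uniform(size=n)

X = np.column_stack([x_task, x_data, x_meth, x_metr, freq_shift])

model = lingam.DirectLiNGAM()
model.fit(X)
print(model.causal_order_)       # estimated causal ordering of the variables
print(model.adjacency_matrix_)   # estimated connection strengths b_ij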
Understanding the fundamental concepts and trends in a scientific field is crucial for keeping abreast of its continuous advancement. In this study, we propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques. We define three variables to encompass diverse facets of the evolution of research topics within NLP and utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data. Subsequently, we leverage this structure to measure the intensity of these relationships. By conducting extensive experiments on the ACL Anthology corpus, we demonstrate that our framework effectively uncovers evolutionary trends and the underlying causes for a wide range of NLP research topics. Specifically, we show that tasks and methods are primary drivers of research in NLP, with datasets following, while metrics have minimal impact. 1
A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why?
[ { "figure_caption": "Figure 1 :1Figure 1: Evolution of Machine Translation (MT) research. Blue line: Number of MT papers (1979-2022). Tables show the top causal entities/types for different periods (excluding 1979-1989 due to limited MT papers).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Causal Graph of TDMM Entities (entity change values) and Task Entity Frequency Shift.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Q1.What role do the methodologies play in causally driving the shift in NLP tasks? New methodologies have a significant influence on research in various areas of Natural Language Processing (NLP). In the field of Language Modeling, we observe a shift in influence between different methodologies over time. Between 2003 and 2017, Recurrent Neural Networks (RNNs) had the most decisive impact on Language Modeling research. However, this trend shifted with the emergence of Transformers, which have since become the dominant influence in research on this task. Dialogue Systems, which involve automatic response generation, are closely related to Language Modeling. Therefore, research in this area is highly influenced by Generative Models. From 1990 to 2002, Probabilistic Models played a crucial role in shaping Dialogue Systems research, while RNNs took the lead between 2003 and 2017. Machine Translation, another task related to Language Modeling, requires the generation of the translated text. Naturally, we observe the influence of similar entities in Machine Translation research. Probabilistic Models had the most decisive impact between 1990 and 2002. In recent years (2018-2022), Transformers have emerged as the dominant influence in this research area. In the field of Speech Recognition, Hidden Markov Models (HMMs) have shown a significant influence. 
HMMs have played a crucial role in shaping Speech Recognition research between 1979 to 2002.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Causal Graph: The graph shows that the emergence and disappearance of TDMM entities (entity change values) have a direct causal effect on the stability of task entities.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "type of entities (X i ) influence the research trends of an NLP task?3.1 calculate variable values for each task y", "figure_data": "Grobid ParserPDF Table ParserTDMM Entity Taggers2018-2022TaskΔfreqẟ(Transformer)ẟ(Meteor)….Machine Translation….….….….….….….….….3.2 causal structure discovery3.3 causal inference analysisTarget Task2018-2022 Associated EntitiesCausal EffectTransformers0.2515Machine Translation Language Generation METEOR0.2503 0.2507WMT Dataset0.247Figure 2: System architecture.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Chronological Periods of NLP Research.", "figure_data": "for", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Variable Selection for Regression.", "figure_data": "VariablesR-Squared (↑)unique tasks0.87+ unique datasets0.91+ unique methods0.93+ unique metrics0.97", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Subsequently, from 2003 to 2017, statistical methods underwent a revolution, and later in the same period, neural network methods experienced a resurgence, indicating significant shifts in research trends. Now, in the present Deep Learning Era (2018-2022), we observe a rapid creation of Variables Influencing NLP task entities.", "figure_data": "YearsPartial Regression CoefficientTasks Datasets Methods Metrics1979-19890.352.240.210.021990-20020.820.892.860.812003-20175.376.267.000.692018-20221.473.381.790.411979 -2022 3.501.072.920.54", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Causal analysis identifies the main drivers (Methods, Tasks, Datasets) of frequency shifts in NLP tasks across four periods, with \"-\" indicating insufficient data for analysis.", "figure_data": "Primary CauseTask1979-19891990-20022003-20172018-20221979-2022Language Modeling--Recurrent Neural Networks MTransformers MTransformers MDialogue System-Probabilistic Generative Models M Recurrent Neural Networks MMultiWoz DMultiWoz DMachine Translation-Probabilistic Generative Models MWMT Data DTransformers MTransformers MSpeech RecognitionHidden Markov Models MHidden Markov Models MMachine Translation TMachine Translation THidden Markov Models MNamed Entity Recognition-Hidden Markov Models MPOS Tagging TRelation Extraction TPOS Tagging TPOS Tagging-Text Classification TParser Algorithms MWord Segmentation TWord Segmentation TSemantic ParsingGrammar Induction MParser Algorithms MParser Algorithms MDependency Parsing TParser Algorithms MMorphological Analysis-Statistical Models MDependency Parsing TUD Treebank DStatistical Models MSemantic Role Labeling--Support Vector Machines MNeural Network Models M Support Vector Machines MCo-reference Resolution-MUC-VI Text Collection DInteger Linear Programming M Neural Network Models MNeural Network Models MWord Sense Disambiguation-Wordnet DMaximum Entropy Models M Neural Network Models MWordnet DSentiment Analysis--Twitter Dataset DText Classification TText Classification 
TArgument Mining--Text Classification TSentiment Analysis TSentiment Analysis TQuestion AnsweringParsing Algorithms MInformation Extraction TInformation Extraction TPre-Trained LLMs MInformation Extraction TTextual Entailment--Stastical Models MPre-Trained LLMs MPre-Trained LLMs MSummarization-Wordnet DSentence Compression TPre-Trained LLMs MPre-Trained LLMs M", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Aniket Pramanick; Yufang Hou; Saif M Mohammad; Iryna Gurevych
[ { "authors": "Alan Akbik; Duncan Blythe; Roland Vollgraf", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Contextual string embeddings for sequence labeling", "year": "2018" }, { "authors": "M John; Yael Barrios; Hochberg", "journal": "National Bureau of Economic Research", "ref_id": "b1", "title": "Risk perception through the lens of politics in the time of the covid-19 pandemic", "year": "2020" }, { "authors": "Antoine Bosselut; Hannah Rashkin; Maarten Sap; Chaitanya Malaviya; Asli Celikyilmaz; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "COMET: Commonsense transformers for automatic knowledge graph construction", "year": "2019" }, { "authors": "Gerlof Bouma", "journal": "Proceedings of GSCL", "ref_id": "b3", "title": "Normalized (pointwise) mutual information in collocation extraction", "year": "2009" }, { "authors": "Kevin W Boyack; Richard Klavans", "journal": "Quantitative Science Studies", "ref_id": "b4", "title": "An improved practical approach to forecasting exceptional growth in research", "year": "2022" }, { "authors": "Carlos Cinelli; Daniel Kumor; Bryant Chen; Judea Pearl; Elias Bareinboim", "journal": "PMLR", "ref_id": "b5", "title": "Sensitivity analysis of linear structural causal models", "year": "2019" }, { "authors": "Diego Colombo; Marloes H Maathuis", "journal": "Journal of Machine Learning Research", "ref_id": "b6", "title": "Order-independent constraint-based causal structure learning", "year": "2014" }, { "authors": "Quang Do; Yee Seng Chan; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Minimally supervised event causality identification", "year": "2011" }, { "authors": "Jesse Dunietz; Lori Levin; Jaime Carbonell", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "The BECauSE corpus 2.0: Annotating causality and overlapping relations", "year": "2017" }, { "authors": "Amir Feder; Katherine A Keith; Emaad Manzoor; Reid Pryzant; Dhanya Sridhar; Zach Wood-Doughty; Jacob Eisenstein; Justin Grimmer; Roi Reichart; Margaret E Roberts; Brandon M Stewart; Victor Veitch; Diyi Yang", "journal": "", "ref_id": "b9", "title": "Causal inference in natural language processing: Estimation, prediction, interpretation and beyond", "year": "2021" }, { "authors": "Christian Fong; Justin Grimmer", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Discovery of treatments from text corpora", "year": "2016" }, { "authors": "Jonathan Grudin", "journal": "AI Magazine", "ref_id": "b11", "title": "AI and HCI: Two fields divided by a common focus", "year": "2009" }, { "authors": "David Hall; Daniel Jurafsky; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Studying the history of ideas using topic models", "year": "2008" }, { "authors": "Yufang Hou; Charles Jochim; Martin Gleize; Francesca Bonin; Debasis Ganguly", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Identification of tasks, datasets, evaluation metrics, and numeric scores for scientific leaderboards construction", "year": "2019" }, { "authors": "Yufang Hou; Charles Jochim; Martin Gleize; Francesca Bonin; Debasis Ganguly", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "TDMSci: A specialized corpus for scientific literature entity tagging of tasks datasets and metrics", "year": "2021" }, { "authors": "Zhijing Jin; 
Zeyu Peng; Tejas Vaidhya; Bernhard Schoelkopf; Rada Mihalcea", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Mining the cause of political decision-making from social media: A case study of COVID-19 policies across the US states", "year": "2021" }, { "authors": "Katherine Keith; David Jensen; Brendan O' Connor", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Text and causal inference: A review of using text to remove confounding from causal estimates", "year": "2020" }, { "authors": "Bernard Koch; Emily Denton; Alex Hanna; Jacob Gates; Foster ", "journal": "", "ref_id": "b17", "title": "Reduced, reused and recycled: The life of a dataset in machine learning research", "year": "2021" }, { "authors": " Thomas S Kuhn", "journal": "Chicago University of Chicago Press", "ref_id": "b18", "title": "The structure of scientific revolutions", "year": "1970" }, { "authors": "Shixia Liu; Yang Chen; Hao Wei; J Yang; Kun Zhou; Steven Mark Drucker", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b19", "title": "Exploring topical lead-lag across corpora", "year": "2015" }, { "authors": "Yi Luan; Luheng He; Mari Ostendorf; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction", "year": "2018" }, { "authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean", "journal": "", "ref_id": "b21", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "Paramita Mirza; Sara Tonelli", "journal": "Dublin City University and Association for Computational Linguistics", "ref_id": "b23", "title": "An analysis of causality between events and its relation to temporal information", "year": "2014" }, { "authors": "M Saif; Mohammad", "journal": "", "ref_id": "b24", "title": "The state of NLP literature: A diachronic analysis of the ACL anthology", "year": "2019" }, { "authors": "M Saif; Mohammad", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Examining citations of natural language processing literature", "year": "2020" }, { "authors": "Ishani Mondal; Yufang Hou; Charles Jochim", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "End-to-end construction of NLP knowledge graph", "year": "2021" }, { "authors": "Matteo Negri; Marco Turchi; G C José; Daniele De Souza; Falavigna", "journal": "Dublin City University and Association for Computational Linguistics", "ref_id": "b27", "title": "Quality estimation for automatic speech recognition", "year": "2014" }, { "authors": "Jong-Hoon Oh; Kentaro Torisawa; Chikara Hashimoto; Ryu Iida; Masahiro Tanaka; Julien Kloetzer", "journal": "AAAI Press", "ref_id": "b28", "title": "A semi-supervised learning approach to whyquestion answering", "year": "2016" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b29", "title": "BLEU: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Judea Pearl; Madelyn Glymour; Nicholas P Jewell", "journal": "John Wiley & Sons", "ref_id": "b30", "title": "Causal inference in statistics: A primer", "year": "2016" }, { "authors": "Karl Pearson", "journal": "Nature", 
"ref_id": "b31", "title": "The grammar of science", "year": "1892" }, { "authors": "William L Vinodkumar Prabhakaran; Dan Hamilton; Dan Mcfarland; Jurafsky", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Predicting the rise and fall of scientific topics from trends in their rhetorical framing", "year": "2016" }, { "authors": "Maarten Sap; Le Ronan; Emily Bras; Chandra Allaway; Nicholas Bhagavatula; Hannah Lourie; Brendan Rashkin; Noah A Roof; Yejin Smith; Choi", "journal": "", "ref_id": "b33", "title": "Atomic: An atlas of machine commonsense for ifthen reasoning", "year": "2019" }, { "authors": "Isabel Segura-Bedmar; Paloma Martínez; María Herrero-Zazo", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "SemEval-2013 task 9 : Extraction of drug-drug interactions from biomedical texts (DDIExtraction 2013)", "year": "2013" }, { "authors": "Naoki Shibata; Yuya Kajikawa; Yoshiyuki Takeda; Katsumori Matsushima", "journal": "Technovation", "ref_id": "b35", "title": "Detecting emerging research fronts based on topological measures in citation networks of scientific publications", "year": "2008" }, { "authors": "Shohei Shimizu; Takanori Inazumi; Yasuhiro Sogawa; Aapo Hyvarinen; Yoshinobu Kawahara; Takashi Washio; Patrik O Hoyer; Kenneth Bollen; Patrik Hoyer", "journal": "Journal of Machine Learning Research-JMLR", "ref_id": "b36", "title": "Directlingam: A direct method for learning a linear non-gaussian structural equation model", "year": "2011-04" }, { "authors": "Janvijay Singh; Mukund Rungta; Diyi Yang; Saif Mohammad", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Forgotten knowledge: Examining the citational amnesia in NLP", "year": "2023" }, { "authors": "Henry Small", "journal": "Scientometrics", "ref_id": "b38", "title": "Tracking and predicting growth areas in science", "year": "2006" }, { "authors": "Chenhao Tan; Dallas Card; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Friendships, rivalries, and trysts: Characterizing relations between ideas in texts", "year": "2017" }, { "authors": "Ana Sabina Uban; Cornelia Caragea; Liviu P Dinu", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Studying the evolution of scientific topics and their relationships", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b41", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b42", "title": "", "year": "" }, { "authors": "Victor Veitch; Dhanya Sridhar; David Blei", "journal": "", "ref_id": "b43", "title": "Adapting text embeddings for causal inference", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b44", "title": "", "year": "" }, { "authors": "Victor Veitch; Anisha Zaveri", "journal": "", "ref_id": "b45", "title": "Sense and sensitivity analysis: Simple post-hoc analysis of bias due to unobserved confounding", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b46", "title": "", "year": "" }, { "authors": "Jan Philip Wahle; Terry Ruas; Mohamed Abdalla; Bela Gipp; Saif M Mohammad", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "We are who we cite: Bridges of influence between natural 
language processing and other academic fields", "year": "2023" }, { "authors": "Laura Wendlandt; Jonathan K Kummerfeld; Rada Mihalcea", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Factors influencing the surprising instability of word embeddings", "year": "2018" }, { "authors": "K I Christopher; Carl Edward Williams; Rasmussen", "journal": "MIT press", "ref_id": "b49", "title": "Gaussian processes for machine learning", "year": "2006" }, { "authors": "Dongxiang Zhang; Lei Wang; Nuo Xu; Bing Tian Dai; Heng Tao Shen", "journal": "", "ref_id": "b50", "title": "The gap of semantic parsing: A survey on automatic math word problem solvers", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 306.14, 627.2, 218.26, 30.76 ], "formula_id": "formula_0", "formula_text": "t -1 i } (TDMM entities) until that year, t -1 , given by Y t = r 0 + i r i X t -1" }, { "formula_coordinates": [ 5, 416.59, 655.81, 95.58, 17.64 ], "formula_id": "formula_1", "formula_text": "t 2 t 1 (y) = f (y)t 2 -f (y)t 1 t 2 -t 1 ." }, { "formula_coordinates": [ 6, 225.2, 288.49, 61.32, 24.51 ], "formula_id": "formula_2", "formula_text": "|N l t 1 (y)∩N l t 2 (y)| |N l t 1 (y)∪N l t 2" }, { "formula_coordinates": [ 6, 70.87, 644.56, 155.44, 16.77 ], "formula_id": "formula_3", "formula_text": "δ y (x) t 2 t 1 = |Ct 1 (x,y)-Ct 2 (x,y)|" }, { "formula_coordinates": [ 6, 306.14, 550.3, 219.79, 55.33 ], "formula_id": "formula_4", "formula_text": ": µ(∆f req t 2 t 1 (y)) = E[∆f req t 2 t 1 (y)|δ y (x) t 2 t 1 ] and similarly, µ(∆stability t 2 t 1 (y)) = E[∆stability t 2 t 1 (y)|δ y (x) t 2 t 1 ]." }, { "formula_coordinates": [ 13, 372.98, 707.81, 152.16, 33.71 ], "formula_id": "formula_5", "formula_text": "Y = r 0 + k i=1 r i X i(1)" } ]
10.18653/v1/P19-1279
2024-01-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b26", "b28", "b48", "b19", "b7", "b3", "b19", "b2" ], "table_ref": [], "text": "Entity typing is a fundamental task in Natural Language Processing (NLP), with important applications to entity linking (Onoe and Durrett, 2020) and relation extraction (Peng et al., 2020;Zhong and Chen, 2021), among others. In recent years, the main focus has been on fine-grained entity typing (Ling and Weld, 2012;Gillick et al., 2014), where around 100 different entity types are considered, or even ultra-fine entity typing (Choi et al., 2018), where around 10000 types are considered. A key challenge then consists in compiling enough training data. This is particularly problematic because the distribution of entity types is highly skewed, with many types occurring only rarely in text. The main strategy thus far has been to create automatically labelled training sets. For instance, Ling and Weld (2012) relied on the fact that entity mentions in Wikipedia are linked to the article of the corresponding entity, which is in turn linked to Freebase (Bollacker et al., 2008). Entity mentions in Wikipedia can thus be linked to their Freebase types without any manual effort. However, these distantly supervised training sets are still highly skewed. As a result, models trained on such datasets may concentrate more on learning to recognise the most prevalent entity types than on deriving meaningful entity representations (i.e. embeddings which accurately capture semantic types of entities).\nFor this reason, we propose to first train a general-purpose entity encoder, which maps entity mentions to meaningful embeddings, independent of a particular label set. We can then train an entity type classifier in the usual way, using the embeddings from our encoder as input. Our approach relies on a supervision signal that has thus far remained largely unexplored for entity typing: coreference chains. In particular, we train an entity encoder with contrastive learning to represent coreferring entity mentions close to each other in the embedding space. While conceptually straightforward, this training signal forces the entity encoder to identify subtle cues in the context of an entity mention, to characterise the entity at a level which is sufficiently fine-grained to distinguish it from other entities. Our strategy only need access to an off-the-shelf coreference resolution system. This means that we can train the entity encoder on different genres of text and generate as much training data as is needed.\nFigure 1 illustrates the three main steps of our approach. In the first step, an off-the-shelf coreference resolution system is applied to a large collection of stories. Second, we use contrastive learning to train an entity encoder, which maps mentions from the same coreference chain to similar vectors, while mentions from different chains are mapped to dissimilar vectors. In the third step, to learn a fine-grained entity typing model, we simply train a Figure 1: Illustration of our proposed strategy. In the first step, an off-the-shelf coreference resolution method is used to identify coreference chains in stories. In the second step, we use contrastive learning to train an encoder which maps mentions from the same coreference chain to similar vectors. In the third step, we use standard training data to learn a linear classifier for each considered entity type. 
linear classifier in the resulting embedding space for each considered entity type.\nAn important challenge in implementing the proposed strategy is that coreference resolution systems are still far from perfect. Whenever two mentions are erroneously assumed to be referring to the same entity, the entity encoder is trained on a noisy signal, which has a detrimental impact on the overall performance of the method. In our experiments, we found that the success of our strategy indeed strongly depends on the quality of the coreference resolution system that is used. In fact, our best results are obtained by using two different systems, and only keeping coreference links that are predicted by both. When adopting this strategy, our model outperforms the current state-of-the-art in three entity typing benchmarks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b5", "b48", "b46", "b9", "b40", "b19", "b31", "b25", "b27", "b13", "b47", "b30", "b45", "b0", "b8", "b35", "b39", "b33", "b6", "b23", "b34", "b48" ], "table_ref": [], "text": "Entity Typing The standard approach to entity typing is to use a fine-tuned Language Model (LM) of the BERT family (Devlin et al., 2019) to obtain embeddings of entity mentions (Zhong and Chen, 2021;Ye et al., 2022) and then train a standard classifier on top of these embeddings. Some alternative strategies have also been explored. For instance, Li et al. (2022a) cast the problem of entity typing as a natural language inference (NLI) problem. The main drawback of the NLI approach is that it requires individual testing for every entity type, making it highly inefficient for fine-grained entity typing. Large Language Models (LLMs) are similarly impractical to use in most application settings. Even when disregarding efficiency concerns, the impact of LLMs on the task of entity typing has thus far been limited (Han et al., 2023). The most successful approaches use a form of multi-task finetuning to adapt LLMs to information extraction tasks, but they still fail to consistently outperform BERT (Wang et al., 2023).\nFine-grained Entity Typing Most work on finegrained entity typing uses distant supervision of some kind. As already mentioned in the introduction, one strategy is to rely on Wikipedia links in combination with an external knowledge base (Ling and Weld, 2012). A common problem with distantly supervised datasets is that they can be noisy: the fact that an entity has a particular type does not necessarily imply that this information is expressed in a given sentence mentioning that entity. To address this issue, several authors have proposed strategies for denoising distantly supervised datasets for entity typing (Ren et al., 2016;Onoe and Durrett, 2019;Pan et al., 2022). Given that two sentences referring to the same entity may emphasise different elements, a similar problem can also arise in our case. For example, we might have a sentence referring to Ben Affleck as an actor and another referring to him as a director. As the embedding of an entity mention should capture the semantic type that is represented in the relevant sentence context, using such sentence pairs will confuse the model. We may anticipate that such instances will be rare, however, as we only take into account co-referring entity mentions that originate from the same story. Another possible source of noise comes from mistakes that are made by the coreference resolution system. 
This effect will be analysed in Section 4.\nPre-training Entity Encoders Previous work has already explored a number of pre-training strategies for learning entity representations. First, methods such as SpanBERT (Joshi et al., 2020) focus on learning better representations of text spans. Within this class of methods, strategies that rely on InfoNCE have also been considered (Wang et al., 2020). While our method also uses In-foNCE, the training signal is fundamentally different: the aforementioned methods focus on learning span representations, using tasks such as reconstructing the correct order of tokens in shuffled text spans. Such models have not proven superior to the standard BERT model for entity typing. In our experiments, we also found that modelling text spans is not essential for entity typing, as our best configuration simply uses the embedding of the head token of an entity span (see Section 4.2). Another line of work, which includes models such as ERNIE (Zhang et al., 2019), KnowBERT (Peters et al., 2019), LUKE (Yamada et al., 2020), KE-PLER (Wang et al., 2021c) and K-Adapter (Wang et al., 2021a), improve LMs by modelling entities as separate tokens and leveraging information from knowledge graphs. The main focus of these models is to improve the amount of factual knowledge that is captured, rather than on learning the representations of (possibly) previously unseen entities.\nOur approach also has some similarities with the matching-the-blanks model for relation extraction (Baldini Soares et al., 2019). The idea of this model is to learn a label-independent relation encoder, similar to how we are learning a labelindependent entity encoder. In their case, the supervision signal comes from the idea that sentences mentioning the same pair of entities are likely to express the same relationship, hence the relation embeddings obtained from such sentences should be similar. Building on this approach, a number of authors have recently used InfoNCE to encode similar ideas (Han et al., 2021;Wan et al., 2022;Wang et al., 2022). Varkel and Globerson (2020) use a contrastive loss to pre-train a mention encoder for coreference resolution based on two heuristics: (i) if the same name appears multiple times in a document, the corresponding embeddings should be similar and (ii) the mention encoder should be able to reconstruct masked pronouns. The usefulness of contrastive learning for pre-training BERT encoders has also been observed more generally, for instance for learning sentence, phrase and word embeddings (Gao et al., 2021;Liu et al., 2021a,b;Wang et al., 2021b;Li et al., 2022b).\nLeveraging Coreference Chains To the best of our knowledge, the idea of pre-training an entity encoder based on coreference chains has not yet been considered. However, a number of authors have proposed multi-task learning frameworks in which coreference resolution and entity typing are jointly learned, along with other tasks such as relation and event extraction (Luan et al., 2018;Wadden et al., 2019). Surprisingly, perhaps, such approaches have failed to outperform simpler entity typing (and relation extraction) models (Zhong and Chen, 2021)." }, { "figure_ref": [], "heading": "Our Approach", "publication_ref": [], "table_ref": [], "text": "In Section 3.1, we first discuss the basic entity typing model that we rely on in this paper. Section 3.2 subsequently describes our proposed pre-training strategy based on coreference chains." 
}, { "figure_ref": [], "heading": "Entity Typing", "publication_ref": [ "b19", "b5", "b3" ], "table_ref": [], "text": "Let us assume that we are given a sentence in which some entity mentions are highlighted, e.g.:\n[Alice] was unsure what was wrong with [the patient in front of her].\nOur aim is to assign (possibly fine-grained) semantic types to these entity mentions. For instance, using the FIGER (Ling and Weld, 2012) taxonomy, the first mention should be assigned the types Person and Doctor, while the second mention should be assigned Person. To make such predictions, a given entity mention e in sentence s is first mapped to an embedding Enc(s, e) ∈ R n using an encoder. For the experiments in our paper, this encoder takes the form of a language model from the BERT family (Devlin et al., 2019). Specifically, we use the final-layer embedding of the head word of the given entity span as the representation of the mentioned entity. For instance, for the second mention in the aforementioned example, the patient in front of her, we use the embedding of the head word, patient, as the representation of the entity span. This is motivated by the fact that the head word is most likely to reflect the semantic type of the entity (Choi et al., 2018). We find the head word using the SpaCy dependency parser1 .\nWe pre-train the entity encoder Enc based on coreference chains, as will be explained in Section 3.2. For each entity type t, we learn a vector a t ∈ R n and bias term b t ∈ R. The probability that the mention m should be assigned the type t is then estimated as:\nP (t|s, e) = σ(a t • Enc(s, e) + b t ) (1)\nwith σ the sigmoid function. This entity type classifier is trained using binary cross-entropy on a standard labelled training set. The encoder Enc is optionally also fine-tuned during this step. When using the classifier for entity typing, we assign all labels whose predicted probability is above 0.5." }, { "figure_ref": [], "heading": "Pre-training the Entity Encoder", "publication_ref": [ "b0", "b28", "b5" ], "table_ref": [], "text": "To pre-train the entity encoder Enc, we start from a collection of stories (e.g. news stories). Using off-the-shelf coreference resolution systems, we identify mentions within each story that are likely to refer to the same entity. Let us write (s, e) to denote an entity mention e appearing in sentence s. Then we consider the following self-supervision signal: if (s 1 , e 1 ) and (s 2 , e 2 ) are co-referring mentions, then the contextualised representations of e 1 and e 2 should be close to each other in the embedding space. In particular, we use a contrastive loss to encode that the representations of the tokens appearing in e 1 and e 2 should be more similar to each other than to the tokens appearing in the mentions of other entities. Each mini-batch is constructed from a small set of stories {S 1 , ..., S k }. Let us write X i for the set of entity mentions (s, e) in story S i that belong to some coreference chain. To alleviate the impact of noisy coreference links, we adopt two strategies:\n• We only include coreference links that are predicted by two separate coreference resolution systems. This reduces the number of spurious links that are considered.\n• As negative examples, we only consider entity mentions from different stories. This prevents us from using entity mentions that refer to the same entity, but were missed by the coreference resolution system.\nLet us write T i for the set of tokens of the mentions in X i . 
For a given token t, we write Enc(t) for its contextualised representation. We write Given a mention (s, e), the model can often infer the semantic type of the entity based on the mention span itself. To encourage the model to learn to identify cues in the sentence context, we sometimes mask the entity during training, following existing work on relation extraction (Baldini Soares et al., 2019;Peng et al., 2020). Specifically, for each input (s, e) ∈ X, with 15% probability we replace the head of the entity span by the [MASK] token. Note that, unlike previous work, we only mask the head word of the phrase.\nT = T 1 ∪ ... ∪ T k and T -i = T \\ T i .\nFinally, following Baldini Soares et al. ( 2019), we also use the Masked Language Modelling objective during training, to prevent catastrophic forgetting. Our overall loss thus becomes:\nL = L entity + L MLM\nwhere L entity is the loss function defined in (2) and L MLM is the masked language modelling objective from BERT (Devlin et al., 2019)." }, { "figure_ref": [], "heading": "Experimental Analysis", "publication_ref": [ "b22", "b7", "b19", "b18", "b1", "b11", "b24", "b16", "b29", "b44", "b4", "b27", "b17", "b34", "b36" ], "table_ref": [ "tab_1" ], "text": "In this section, we evaluate the performance of our proposed strategy on (fine-grained) entity typing. 2Experimental Setup In all our experiments, we initialise the entity encoder with a pre-trained language model. We consider bert-base-uncased3 , albert-xxlarge-v14 and roberta-large5 for this purpose, as these are commonly used for entity typing. We use the Gigaword corpus6 as the collection of stories. This corpus consists of around 4 million news stories from four different sources. We use two state-of-the-art coreference resolution systems: the Explosion AI system Coreferee v1.3.17 and the AllenNLP coreference model8 . As explained in Section 3.2, we only keep coreference links that are predicted by both of these systems.\nOnce the encoder has been pre-trained, we train an entity type classifier on the standard training set for each benchmark. We report results for two different variants of this process: one where the entity encoder is fine-tuned while training the entity type classifiers and one where the encoder is frozen. We will refer to these variants as EnCore and EnCorefrozen, respectively. We train all of the models for 25 epochs with the AdamW optimizer (Loshchilov and Hutter, 2019) and save the checkpoint with the best result on the validation set. The temperature τ in the contrastive loss was set to 0.05.\nBenchmarks Our central hypothesis is that the proposed pre-training task makes it possible to learn finer-grained entity representations. As such, we focus on fine-grained entity typing as our main evaluation task. We use the OntoNotes (Gillick et al., 2014) and FIGER (Ling and Weld, 2012) benchmarks. OntoNotes is based on the news stories from the OntoNotes 5.0 corpus9 . We use the entity annotations that were introduced by We also experiment on standard entity typing, using the ACE 2005 corpus10 , which covers the following text genres: broadcast conversation, broadcast news, newsgroups, telephone conversations and weblogs. It differentiates between 7 entity types. For this benchmark, the entity spans are not provided. We thus need to identify entity mentions in addition to predicting the corresponding types. We treat the problem of identifying entity span as a sequence labelling problem. We follow the strategy from Hohenecker et al. 
( 2020), but start from our pre-trained entity encoder rather than a standard LM. We summarise this strategy in Appendix A. We use the standard training/development/test splits that were introduced by Li and Ji (2014). Following standard practice, we report the results in terms of micro-averaged F1. We take individual sentences as input. Existing work on this benchmark jointly evaluates span detection and entity typing, i.e. a prediction is only correct if both the span and the predicted type are correct. We will refer to this as the strict evaluation setting, following Bekoulis et al. (2018). We also consider the lenient setting from, where a prediction is scored as correct as soon as the type is correct and the predicted span overlaps with the gold span.\nTable 1 summarises the main characteristics of the considered datasets.\nBaselines We report results for a number of simplified variants of our main model. First, we consider a variant which uses the same strategy for training the entity type classifier as our full model, but without pre-training the entity encoder on the Gigagword corpus. This variant is referred to as the base model. Second, we investigate a setup in which the entity encoder is pre-trained on Gigaword, but only using the masked language modelling (MLM) objective. This setting, which we refer to as MLM-only, allows us to analyse to what extent improvements over the base model are due to the continued training of the language model.\nFor reference, we also compare our models with the published results of state-of-the-art models. For fine-grained entity typing, we consider the following baselines: DSAM (Hu et al., 2021) is an LSTMbased model, which we include as a competitive baseline; Box4Types (Onoe et al., 2021) uses hyperboxes to represent mentions and types, to take advantage of the hierarchical structure of the label space; PICOT (Zuo et al., 2022) uses a contrastive learning strategy based on the given type hierarchy; Relational Inductive Bias (RIB) (Li et al., 2021) uses a graph neural network to model correlations between the different labels. Entity mentions are encoded using a transformer layer on top of pretrained ELMo (Peters et al., 2018) embeddings; LITE (Li et al., 2022a) assigns entity types by finetuning a pre-trained Natural Language Inference model; SEPREM (Xu et al., 2021) improves on the standard RoBERTa model by exploiting syntax during both pre-training and fine-tuning, and then using a standard entity typing model on top of their pre-trained model; MLMET (Dai et al., 2021) extends the standard distantly supervised training data, using the BERT masked language model for generating weak labels; DenoiseFET (Pan et al., 2022) uses a denoising strategy to improve the quality of the standard distantly supervised training set, and furthermore exploits prior knowledge about the labels, which is extracted from the parameters of the decoder of the pre-trained BERT model; PKL (Li et al., 2023) improves on DenoiseFET by incorporating pre-trained label embeddings.\nFor ACE 2005, we consider the following baselines: DyGIE++ (Wadden et al., 2019) uses multitask learning to jointly train their system for coreference resolution, entity typing, relation extraction and event extraction; TableSeq (Wang and Lu, 2020) " }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b48", "b46" ], "table_ref": [ "tab_5" ], "text": "Table 2 summarises the results for fine-grained entity typing. 
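The link-filtering step described in the setup above (only keeping coreference links predicted by both Coreferee and the AllenNLP model) can be made concrete with a small sketch. Representing mentions by character offsets and requiring exact agreement on mention pairs are assumptions made for illustration; the paper does not spell out the matching criterion.

```python
def agreed_links(chains_a, chains_b):
    """Keep only the coreference links predicted by *both* systems.
    Each argument is a list of chains; a chain is a list of mention spans given
    as (start, end) character offsets. A link is an unordered pair of mentions."""
    def links(chains):
        out = set()
        for chain in chains:
            for i in range(len(chain)):
                for j in range(i + 1, len(chain)):
                    out.add(frozenset((chain[i], chain[j])))
        return out
    return links(chains_a) & links(chains_b)

# toy example with made-up offsets: only the first link is predicted by both systems
sys_a = [[(0, 5), (40, 43), (60, 83)]]
sys_b = [[(0, 5), (40, 43)], [(60, 83), (90, 94)]]
print(agreed_links(sys_a, sys_b))   # {frozenset({(0, 5), (40, 43)})}
```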
As can be seen, EnCore outperforms the base and MLM-only models by a large margin, which clearly shows the effectiveness of the proposed pre-training task. Remarkably, EnCorefrozen performs only slightly worse. The best results are obtained with roberta-large. Our model furthermore outperforms the baselines on both OntoNotes and FIGER, except that RIB achieves a slightly higher micro-averaged F1 on FIGER. It should be noted that several of the baselines introduce techniques that are orthogonal to our contribution in this paper, e.g. denoising the distantly supervised training sets (DenoiseFET), incorporating prior knowledge about the type labels (PKL) and exploiting label correlations (RIB), which would likely bring further benefits when combined with our pre-training strategy.\nTable 3 summarises the results for standard entity typing (ACE 2005). We can again see that En-Core consistently outperforms the MLM-baseline, which in turn consistently outperforms the base model. Comparing the different encoders, the best results for our full model are obtained with albert-xxlarge-v1, which is consistent with what was found in previous work (Zhong and Chen, 2021;Ye et al., 2022). Finally, we can see that our full model outperforms all baselines." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [ "b17" ], "table_ref": [ "tab_6", "tab_6" ], "text": "We now analyse the performance of our method in more detail. For this analysis, we will focus on ACE 2005 under the lenient setting and OntoNotes. Throughout this section, unless mentioned otherwise, we use bert-base-uncased for the encoder.\nEncoding Entity Spans We represent entities using the embedding of the head word. In Table 4 we compare this approach with the following alternatives:\nMASK We replace the entity mention by a single MASK token and use the final-layer encoding of this token as the embedding of the entity.\nPrompt Given a mention (s, e), we append the phrase \"The type of e is [MASK].\" The finallayer encoding of the MASK-token is then used as the mention embedding.\nMasked triple This strategy is similar to Prompt but instead of appending a sentence, we append the phrase \"<e, hasType, [MASK]>\".\nSpecial tokens: full span We add the special tokens <m> and </m> around the entire entity Table 2: Results for fine-grained entity typing, in terms of macro-F1 and micro-F1 (%). BB stands for bert-baseuncased, BBc stands for bert-base-cased, BL stands for bert-large-uncased, ALB stands for albert-xxlarge and RL stands for roberta-large DenoiseFET results are taken from (Li et al., 2023); all other baseline results are taken from the original papers.\nspan. We take the final-layer encoding of the <m> token as the embedding of the entity.\nSpecial tokens: head In this variant, we add the special tokens <m> and </m> around the head word of the entity span.\nHead word This is the method adopted in our main experiments. In this case, we simply use the embedding of the head word of the entity mention, without using special tokens.\nIn all cases, we use the entity typing model that was described in 3.1. Note that we do not consider ACE 2005 for this analysis, as the entity spans have to be predicted by the model for this dataset, which means that aforementioned alternatives cannot be used. For this analysis, we train the entity encoder on the training data of the considered benchmark, without using our coreference based pre-training strategy. 
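The span-encoding variants compared in this analysis differ only in how the input text is constructed and which token's final-layer embedding is read out afterwards. The sketch below shows one way to build the corresponding inputs; simple string replacement is used purely for illustration, and the <m> / </m> markers would have to be registered as special tokens in the tokenizer vocabulary.

```python
def build_input(sentence, span, head, strategy):
    """Build the model input for the span-encoding variants compared in Table 4.
    `span` and `head` are the surface forms of the mention and of its head word."""
    if strategy == "mask":            # replace the whole mention by [MASK]
        return sentence.replace(span, "[MASK]")
    if strategy == "prompt":          # append a typing prompt
        return f"{sentence} The type of {span} is [MASK]."
    if strategy == "masked_triple":   # append a masked triple
        return f"{sentence} <{span}, hasType, [MASK]>"
    if strategy == "special_full":    # wrap the entire entity span in marker tokens
        return sentence.replace(span, f"<m> {span} </m>")
    if strategy == "special_head":    # wrap only the head word of the span
        return sentence.replace(span, span.replace(head, f"<m> {head} </m>"))
    if strategy == "head_word":       # leave the text unchanged; read out the head token
        return sentence
    raise ValueError(f"unknown strategy: {strategy}")

s = "Alice was unsure what was wrong with the patient in front of her."
print(build_input(s, "the patient in front of her", "patient", "special_head"))
```

For the "mask", "prompt" and "masked_triple" variants the [MASK] embedding is read out, for the marker variants the embedding of <m>, and for "head_word" the embedding of the head token itself.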
The results in Table 4 show that using the embedding of the head word clearly outperforms the considered alternatives. Another interesting ob- servation is that encapsulating the head of the entity mention performs slightly better than encapsulating the entire entity span, whereas it is the latter variant that is normally used in the literature. It is also notable, and somewhat surprising, that Masked triple outperforms Prompt." }, { "figure_ref": [], "heading": "Pre-training Strategies", "publication_ref": [], "table_ref": [ "tab_7", "tab_7" ], "text": "In Table 5 we compare four strategies for pre-training the entity encoder based on coreference chains. In particular, we analyse the effect of two aspects:\n• When training our model, the negative examples for the contrastive loss (Section 3.2) are always selected from other stories. Here we analyse the impact of choosing these negative examples from the same story instead.\n• During training, in 15% of the cases, we mask the head of the entity span. Here we consider two other possibilities: (i) never masking the entity span and (ii) masking the entire span. Choosing the negative examples from the same story has a number of implications. First, it may mean that false negatives are included (i.e. coreference links that were missed by the system). Second, it means that the overall number of negative examples becomes smaller, since they have to come from a single story. However, these downsides may be offset by the fact that negative examples from the same story may be harder to discriminate from the positive examples, since the story context is the same, and using harder negatives is typically beneficial for contrastive learning. For this analysis we use EnCore-frozen. As can be seen in Table 5, choosing negative examples from the same story overall has a clearly detrimental impact. We also find that masking is important, where masking only the head of the entity span leads to the best results. This masking strategy has not yet been used in the literature, to the best of our knowledge." }, { "figure_ref": [], "heading": "Coreference Resolution", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "In Table 6 we analyse the importance of using only high-quality coreference links. In particular, we compare three configurations: (i) using all links predicted by the Explosion AI system; (ii) using all links predicted by the AllenNLP system; and (iii) using only the links that are predicted by both systems. For this analysis, we use EnCore-frozen. As can be seen, the AllenNLP system overall performs better than the Explosion AI system. However, the best results are obtained by combining both systems. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b12" ], "table_ref": [], "text": "We have proposed a strategy which uses coreference chains to pre-train an entity encoder. Our strategy relies on the natural idea that coreferring entity mentions should be represented using similar vectors. Using a contrastive loss for implementing this intuition, we found that the resulting encoders are highly suitable for (fine-grained) entity typing.\nIn our analysis, we found that restricting our strategy to high-quality coreference links was important for its success. 
We also found that focusing on the head of the entity span, rather than the span itself, was beneficial, both when it comes to representing the entity span and when it comes to masking entities during training (where only masking the head was found to be most helpful).\nOur model is pre-trained on individual sentences. This means that during testing, we cannot exploit cross-sentence context. Prior work has found such cross-sentence context to be helpful for benchmarks such as ACE2005, so it would be of interest to extend our model along these lines. Furthermore, we have not yet applied our model to ultra-fine entity typing, as this task requires us to cope with labels for which we have no, or only very few training examples. This would require combining our entity encoder with entity typing models that can exploit label embeddings, such as UNIST (Huang et al., 2022), which we have left as an avenue for future work. " }, { "figure_ref": [], "heading": "A Entity Span Detection", "publication_ref": [ "b10" ], "table_ref": [], "text": "We treat the problem of entity span detection as a sequence labelling problem, following the strategy from Hohenecker et al. (2020). Specifically, each token in the input sentence is then labelled with an appropriate tag, which could either be one of the entity types from the considered dataset or a tag which denotes that the token does not belong to any entity span. To assign these tags, we again use the encoder that was pre-trained on coreference chains. However, rather than looking only at the head word of a given entity span, we now consider the embedding of every token in the sentence. Specifically, we train a linear classifier to predict the correct tag from the contextualised representation of each token, while optionally also fine-tuning the encoder. Since most tokens do not belong to any entity span, the training data will inevitably be highly imbalanced. For this reason, during training, we ignore the majority of tokens that are outside of any entity span. Specifically, following Hohenecker et al.\n(2020), we only consider such tokens when they are immediately preceding or succeeding an entity span." }, { "figure_ref": [ "fig_0" ], "heading": "B Additional Analysis", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "Prediction confidence In Table 8, we compare the confidence of the EnCore and MLM-only models for the gold label predictions. We observe that in the first example, EnCore more confidently predicts the label for delegation as /organization than MLM-only, which places delegation in the more generic label class /other with lower confidence. In the second and third case, we observe that EnCore is more certain to label the currency terms dollars and RMB with the second-level label /other/currency than with the more general first level label /other, whereas MLM-only assigns a very low confidence to /other/currency. A similar pattern can also be observed in the last example.\nWe have observed the same trend throughout the test set: EnCore consistently makes more confident predictions than MLM-only. 
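The span-detection procedure of Appendix A above, which is needed for ACE 2005, boils down to a linear tag classifier over every token embedding, with "outside" tokens dropped from the loss unless they are immediately adjacent to an entity span. A minimal sketch follows; the hidden size, tag ids and toy tensors are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanTagger(nn.Module):
    """Linear tagger over contextualised token embeddings: one tag per token,
    either an entity type or a dedicated 'outside' tag."""
    def __init__(self, hidden_dim: int, num_tags: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_embs):            # (batch, seq_len, hidden_dim)
        return self.proj(token_embs)          # (batch, seq_len, num_tags)

def training_mask(tags, outside_id=0):
    """Keep tokens inside an entity span plus the tokens immediately preceding
    or succeeding a span; ignore all other 'outside' tokens during training."""
    inside = tags != outside_id               # (batch, seq_len) bool
    before = torch.zeros_like(inside)
    before[:, :-1] = inside[:, 1:]            # token right before a span
    after = torch.zeros_like(inside)
    after[:, 1:] = inside[:, :-1]             # token right after a span
    return inside | before | after

# usage with a pre-trained encoder's token embeddings and gold tags (toy values)
tagger = SpanTagger(hidden_dim=768, num_tags=8)          # e.g. 7 ACE types + outside
token_embs = torch.randn(2, 16, 768)
gold_tags = torch.zeros(2, 16, dtype=torch.long)
gold_tags[0, 4:7] = 3                                    # a toy entity span
keep = training_mask(gold_tags)
loss = F.cross_entropy(tagger(token_embs)[keep], gold_tags[keep])
```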
This is especially evident for the second- and third-level labels.\nBreakdown by Label A closer examination of the model outputs in Figure 2 reveals that EnCore consistently beats the MLM-only model across all entity types. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements This research was supported by EPSRC grant EP/W003309/1 and undertaken using the supercomputing facilities at Cardiff University operated by Advanced Research Computing at Cardiff (ARCCA) on behalf of the Cardiff Supercomputing Facility and the HPC Wales and Supercomputing Wales (SCW) projects. We acknowledge the support of the latter, which is part-funded by the European Regional Development Fund (ERDF) via the Welsh Government." } ]
 }
]
Entity typing is the task of assigning semantic types to the entities that are mentioned in a text. In the case of fine-grained entity typing (FET), a large set of candidate type labels is considered. Since obtaining sufficient amounts of manual annotations is then prohibitively expensive, FET models are typically trained using distant supervision. In this paper, we propose to improve on this process by pre-training an entity encoder such that embeddings of coreferring entities are more similar to each other than to the embeddings of other entities. The main problem with this strategy, which helps to explain why it has not previously been considered, is that predicted coreference links are often too noisy. We show that this problem can be addressed by using a simple trick: we only consider coreference links that are predicted by two different off-the-shelf systems. With this prudent use of coreference links, our pretraining strategy allows us to improve the stateof-the-art in benchmarks on fine-grained entity typing, as well as traditional entity extraction.
EnCore: Fine-Grained Entity Typing by Pre-Training Entity Encoders on Coreference Chains
[ { "figure_caption": "Figure 2 :2Figure 2: Comparison of the percentage of correct predictions per gold label by the MLM-only and EnCore models (with roberta-large) on the OntoNotes test set. The instances of a label that are accurately predicted are expressed as a percentage of the total number of occurrences of the corresponding gold label.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison of the percentage of correct predictions per gold label by the MLM-only and EnCore models (with roberta-large) on the FIGER test set. The instances of a label that are accurately predicted are expressed as a percentage of the total number of occurrences of the corresponding gold label.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "For a given token t, we write C t for the set of tokens that are part of the same coreference chain. The encoder is trained using InfoNCE (van denOord et al., 2018): ′′ in the denominator ranges over T -i ∪ {t}. The token pairs in the numerator correspond to positive examples, i.e. tokens whose embeddings should be similar, while the denominator ranges over both positive and negative examples. The temperature τ > 0 is a hyper-parameter, which controls how hard the separation between positive and negative examples should be.", "figure_data": "k i=1 t∈T i t ′ ∈Ctlogexp cos(Enc(t),Enc(t ′ )) τ t ′′ exp cos(Enc(t),Enc(t ′′ )) τ(2)where t", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overview of the considered benchmarks, showing the number of entity types, and the number of entity mentions in the training, development and test sets.", "figure_data": "Dataset# Types Train Dev.TestACE 2005726.5K 6.4K 5.5KOntoNotes893.4M8K2KFIGER1132M1K0.5K", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results for entity typing on ACE 2005, in terms of micro-F1 (%). BB stands for bert-base-uncased, ALB stands for albert-xxlarge and RL stands for roberta-large. Configurations with ⋄ rely on crosssentence context and are thus not directly comparable with our method.", "figure_data": "StrictLenientBB ALB RL BB ALB RLDyGIE++ ⋄88.6-----UniRe ⋄88.8 90.2----PURE ⋄90.1 90.9----PL-Marker ⋄89.8 91.1----PURE88.7 89.7----TableSeq-89.4 88.9---Base model86.8 87.1 86.9 90.3 90.8 90.6MLM-only87.1 87.8 87.5 90.7 91.2 90.9EnCore-frozen 89.9 90.5 90.1 91.8 92.3 92.0EnCore90.8 91.9 91.0 92.4 93.1 92.6StrategyOntoNotesmacro microMASK70.766.8Prompt72.168.7Masked triple72.869.4Special tokens: full span 75.270.8Special tokens: head76.171.3Head word76.972.9", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of different strategies for encoding entity spans (using bert-base-uncased).", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of different strategies for pretraining the entity encoder (using bert-base-uncased).", "figure_data": "Neg. 
samplesMaskingACE05 OntoNotesmicro macro microSame storyNone83.982.174.9Same storyEntire span84.782.975.3Different stories Entire span88.886.278.9Different stories Head91.887.380.6Coreference SystemsACE05 OntoNotesmicro macro microExplosion AI86.483.479.4AllenNLP90.786.880.1Explosion AI + AllenNLP91.887.380.6", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of different coreference resolution strategies (using bert-base-uncased).", "figure_data": "", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparison of the MLM-only and EnCore models (using roberta-large) on partitions of the OntoNotes test set.", "figure_data": "ModelOne Label Two Labels Three labelsmacro micro macro micro macro microMLM-only 79.8 75.6 53.0 50.9 39.1 38.4EnCore82.7 78.7 59.8 58.5 44.6 43.6Performance on Fine and Coarse Labels InTable 7 we compare our full model with the MLM-only variant on different partitions of the OntoNotestest set. We specifically compare EnCore andMLM-only on those examples with one-level la-bels (5.3K); two-level labels (3.0K); and three-level labels (0.6K). Examples with one-level la-bels only require the model to determine the top-level entity type (e.g /organisation). Exam-ples with two-level labels call for more precisefiner-grained differentiations (e.g. /organisationand /organisation/company). Examples withthree-level labels call for even more precision(e.g. /organisation, /organisation/companyand /organization/company/broadcast). En-Core performs better than MLM-only in every sce-nario, as can be observed, with the difference be-ing least pronounced in the case of single-levellabels. This supports the idea that our pre-trainingtechnique is particularly useful for learning finer-grained entity types. A more detailed breakdownof the results, which is provided in the appendix,shows that EnCore consistenly outperforms MLM-only on all labels, both for OntoNotes and FIGER.", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "reveals that EnCore consistently beats the MLM-only model across all entity types. The OntoNotes test set, for example, contains 1130 /person gold labels. MLM-only predicts only 67.96% of these accurately, compared to 85.49% for EnCore. As an example of a label at the second level, there are 74 /person/artist gold labels in the test set; the MLM-only model correctly predicts 21.62% of these, whereas EnCore correctly predicts 35.14%. At the third level, there are 58 /person/artist/author gold labels. The MLMonly model predicts only 13.79% of them correctly, while EnCore predicts 25.86% correctly. These patterns are consistently seen over the whole label set. This is also true for the FIGER test set, as shown in Figure3. At the beginning of 1993 , six cities such as Zhuhai , Foshan , etc. also organized a delegation to advertise in the US and Canada for students studying abroad. Last year , its foreign exchange income was up to more than 2.1 billion US dollars, and in the first half of this year exports again had new growth. In 1997 , this plant made over 4,400 tons of Mao -tai ; with sales income exceeding 500 million yuan RMB , and profit and taxes reaching 370 million RMB , both being the best levels in history. In the near future , the Russian Tumen River Region Negotiation Conference will also be held in Vladivostok. 
Comparison of the confidence of the MLM-only and EnCore models (with roberta-large) on sample cases from the OntoNotes test set. The words in bold in the input sentences are the entity spans' head word. The MLM-only and EnCore columns indicate the confidence of the MLM-only and EnCore models, respectively.", "figure_data": "SentenceGold label MLM-only EnCore(1) /organization0.260.60/other0.540.15(2) /other0.630.97/other/currency0.040.98(3) /other0.310.94/other/currency0.020.96(4) /location0.250.98/location/city0.070.73", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" } ]
Frank Mtumbuka; Steven Schockaert
[ { "authors": "Baldini Livio; Nicholas Soares; Jeffrey Fitzgerald; Tom Ling; Kwiatkowski", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Matching the blanks: Distributional similarity for relation learning", "year": "2019" }, { "authors": "Giannis Bekoulis; Johannes Deleu; Thomas Demeester; Chris Develder", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Adversarial training for multi-context joint entity and relation extraction", "year": "2018" }, { "authors": "Kurt Bollacker; Colin Evans; Praveen Paritosh; Tim Sturge; Jamie Taylor", "journal": "", "ref_id": "b2", "title": "Freebase: a collaboratively created graph database for structuring human knowledge", "year": "2008" }, { "authors": "Eunsol Choi; Omer Levy; Yejin Choi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Ultra-fine entity typing", "year": "2018" }, { "authors": "Hongliang Dai; Yangqiu Song; Haixun Wang", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Ultra-fine entity typing with weak supervision from a masked language model", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "SimCSE: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "Dan Gillick; Nevena Lazic; Kuzman Ganchev; Jesse Kirchner; David Huynh", "journal": "", "ref_id": "b7", "title": "Contextdependent fine-grained entity type tagging", "year": "2014" }, { "authors": "Jiale Han; Bo Cheng; Wei Lu", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Exploring task difficulty for few-shot relation extraction", "year": "2021" }, { "authors": "Ridong Han; Tao Peng; Chaohao Yang; Benyou Wang; Lu Liu; Xiang Wan", "journal": "", "ref_id": "b9", "title": "Is information extraction solved by chatgpt? an analysis of performance, evaluation criteria, robustness and errors", "year": "2023" }, { "authors": "Patrick Hohenecker; Frank Mtumbuka; Vid Kocijan; Thomas Lukasiewicz", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Systematic comparison of neural architectures and training approaches for open information extraction", "year": "2020" }, { "authors": "Yanfeng Hu; Xue Qiao; Luo Xing; Chen Peng", "journal": "IEEE Access", "ref_id": "b11", "title": "Diversified semantic attention model for fine-grained entity typing", "year": "2021" }, { "authors": "James Y Huang; Bangzheng Li; Jiashu Xu; Muhao Chen", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Unified semantic typing with meaningful label inference", "year": "2022" }, { "authors": "Mandar Joshi; Danqi Chen; Yinhan Liu; Daniel S Weld; Luke Zettlemoyer; Omer Levy", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b13", "title": "Span-BERT: Improving pre-training by representing and predicting spans", "year": "2020" }, { "authors": "Bangzheng Li; Wenpeng Yin; Muhao Chen", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b14", "title": "a. 
Ultra-fine entity typing with indirect supervision from natural language inference", "year": "2022" }, { "authors": "Jiacheng Li; Jingbo Shang; Julian Mcauley", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "UCTopic: Unsupervised contrastive learning for phrase representations and topic mining", "year": "2022" }, { "authors": "Jinqing Li; Xiaojun Chen; Dakui Wang; Yuwei Li", "journal": "", "ref_id": "b16", "title": "Enhancing label representations with relational inductive bias constraint for fine-grained entity typing", "year": "2021-08" }, { "authors": "Na Li; Zied Bouraoui; Steven Schockaert", "journal": "", "ref_id": "b17", "title": "Ultra-fine entity typing with prior knowledge about labels: A simple clustering based strategy", "year": "2023" }, { "authors": "Qi Li; Heng Ji", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Incremental joint extraction of entity mentions and relations", "year": "2014" }, { "authors": "Xiao Ling; Daniel S Weld", "journal": "AAAI Press", "ref_id": "b19", "title": "Fine-grained entity recognition", "year": "2012-07-22" }, { "authors": "Fangyu Liu; Ivan Vulić; Anna Korhonen; Nigel Collier", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "a. Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders", "year": "2021" }, { "authors": "Qianchu Liu; Fangyu Liu; Nigel Collier; Anna Korhonen; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "MirrorWiC: On eliciting word-in-context representations from pretrained language models", "year": "2021" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b22", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Yi Luan; Luheng He; Mari Ostendorf; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction", "year": "2018" }, { "authors": "Yasumasa Onoe; Michael Boratko; Andrew Mccallum; Greg Durrett", "journal": "", "ref_id": "b24", "title": "Modeling fine-grained entity types with box embeddings", "year": "2021" }, { "authors": "Yasumasa Onoe; Greg Durrett", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Learning to denoise distantly-labeled data for entity typing", "year": "2019" }, { "authors": "Yasumasa Onoe; Greg Durrett", "journal": "", "ref_id": "b26", "title": "Interpretable entity representations through large-scale typing", "year": "2020" }, { "authors": "Weiran Pan; Wei Wei; Feida Zhu", "journal": "", "ref_id": "b27", "title": "Automatic noisy label correction for fine-grained entity typing", "year": "2022-07-29" }, { "authors": "Hao Peng; Tianyu Gao; Xu Han; Yankai Lin; Peng Li; Zhiyuan Liu; Maosong Sun; Jie Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Learning from Context or Names? 
An Empirical Study on Neural Relation Extraction", "year": "2020" }, { "authors": "Matthew E Peters; Mark Neumann; Mohit Iyyer; Matt Gardner; Christopher Clark; Kenton Lee; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Deep contextualized word representations", "year": "2018" }, { "authors": "Matthew E Peters; Mark Neumann; Robert Logan; Roy Schwartz; Vidur Joshi; Sameer Singh; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Knowledge enhanced contextual word representations", "year": "2019" }, { "authors": "Xiang Ren; Wenqi He; Meng Qu; Clare R Voss; Ji Heng; Jiawei Han", "journal": "ACM", "ref_id": "b31", "title": "Label noise reduction in entity typing by heterogeneous partial-label embedding", "year": "2016-08-13" }, { "authors": "Aäron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b32", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Yuval Varkel; Amir Globerson", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Pre-training mention representations in coreference models", "year": "2020" }, { "authors": "David Wadden; Ulme Wennberg; Yi Luan; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Entity, relation, and event extraction with contextualized span representations", "year": "2019" }, { "authors": "Zhen Wan; Fei Cheng; Qianying Liu; Zhuoyuan Mao; Haiyue Song; Sadao Kurohashi", "journal": "", "ref_id": "b35", "title": "Relation extraction with weighted contrastive pre-training on distant supervision", "year": "2022" }, { "authors": "Jue Wang; Wei Lu", "journal": "", "ref_id": "b36", "title": "Two are better than one: Joint entity and relation extraction with tablesequence encoders", "year": "2020" }, { "authors": "Ruize Wang; Duyu Tang; Nan Duan; Zhongyu Wei; Xuanjing Huang; Jianshu Ji; Guihong Cao; Daxin Jiang; Ming Zhou; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters", "year": "2021" }, { "authors": "Shufan Wang; Laure Thompson; Mohit Iyyer", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Phrase-BERT: Improved phrase embeddings from BERT with an application to corpus exploration", "year": "2021" }, { "authors": "Shusen Wang; Bosen Zhang; Yajing Xu; Yanan Wu; Bo Xiao", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "RCL: Relation contrastive learning for zero-shot relation extraction", "year": "2022" }, { "authors": "Xiao Wang; Weikang Zhou; Can Zu; Han Xia; Tianze Chen; Yuansen Zhang; Rui Zheng; Junjie Ye; Qi Zhang; Tao Gui; Jihua Kang; Jingsheng Yang; Siyuan Li; Chunsai Du", "journal": "", "ref_id": "b40", "title": "InstructUIE: Multi-task instruction tuning for unified information extraction", "year": "2023" }, { "authors": "Xiaozhi Wang; Tianyu Gao; Zhaocheng Zhu; Zhengyan Zhang; Zhiyuan Liu; Juanzi Li; Jian Tang", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b41", "title": "KEPLER: A unified model for knowledge embedding and pre-trained language representation", "year": "2021" }, { "authors": "Yijun Wang; Changzhi Sun; Yuanbin Wu; Junchi Yan; Peng Gao; Guotong Xie", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Pre-training entity relation encoder with 
intra-span and inter-span information", "year": "2020" }, { "authors": "Yijun Wang; Changzhi Sun; Yuanbin Wu; Hao Zhou; Lei Li; Junchi Yan", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "UniRE: A unified label space for entity relation extraction", "year": "2021" }, { "authors": "Zenan Xu; Daya Guo; Duyu Tang; Qinliang Su; Linjun Shou; Ming Gong; Wanjun Zhong; Xiaojun Quan; Daxin Jiang; Nan Duan", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Syntax-enhanced pre-trained model", "year": "2021" }, { "authors": "Ikuya Yamada; Akari Asai; Hiroyuki Shindo; Hideaki Takeda; Yuji Matsumoto", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "LUKE: Deep contextualized entity representations with entityaware self-attention", "year": "2020" }, { "authors": "Deming Ye; Yankai Lin; Peng Li; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "Packed levitated marker for entity and relation extraction", "year": "2022" }, { "authors": "Zhengyan Zhang; Xu Han; Zhiyuan Liu; Xin Jiang; Maosong Sun; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "ERNIE: Enhanced language representation with informative entities", "year": "2019" }, { "authors": "Zexuan Zhong; Danqi Chen", "journal": "", "ref_id": "b48", "title": "A frustratingly easy approach for entity and relation extraction", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 105.85, 208.65, 184.02, 10.75 ], "formula_id": "formula_0", "formula_text": "P (t|s, e) = σ(a t • Enc(s, e) + b t ) (1)" }, { "formula_coordinates": [ 4, 306.14, 101.47, 218.27, 24.32 ], "formula_id": "formula_1", "formula_text": "T = T 1 ∪ ... ∪ T k and T -i = T \\ T i ." }, { "formula_coordinates": [ 4, 370.95, 566.43, 88.15, 10.82 ], "formula_id": "formula_2", "formula_text": "L = L entity + L MLM" } ]
2023-08-10
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b44", "b10", "b31", "b9", "b1", "b33", "b9", "b1" ], "table_ref": [], "text": "Time series forecasting predicts future values based on past observations. While extensively studied, most research focuses on regularly sampled and fully observed multivariate time series (MTS) (Lim and Zohren 2021;Zeng et al. 2022;De Gooijer and Hyndman 2006). Limited attention is given to irregularly sampled time series with missing values (IMTS) which is commonly seen in many real-world applications. IMTS has independently observed channels at irregular intervals, resulting in sparse data alignment. The focus of this work is on forecasting IMTS. Additionally, there is another type called irregular multivariate time series which is fully observed but with irregular time intervals (Figure 1 illustrates the differences) which is not the interest of this paper.\nOrdinary Differential Equations (ODE) model continuous time series predicting system evolution over time based on the rate of change of state variables as shown in Eq. 1. In all cases the observation range is from time t 0 to t 6 and the forecasting range is from time t 7 to t 10 .\nd dt x(t) = f (t, x(t))(1)\nODE-based models (Schirmer et al. 2022;De Brouwer et al. 2019;Biloš et al. 2021;Scholz et al. 2023) are able to forecast at arbitrary time points. However, ODE models can be slow because of their auto-regressive nature and computationally expensive numerical integration process. Also, some ODE models cannot directly handle missing values in the observation space, hence, they often rely on missing value indicators (De Brouwer et al. 2019;Biloš et al. 2021) which are given as additional channels in the data.\nIn this work, we propose a novel model called GraFITi: graphs for forecasting IMTS. GraFITi converts IMTS data into a Sparsity Structure Graph and formulates forecasting as edge weight prediction in the graph. This approach represents channels and timepoints as disjoint nodes connected by edges in a bipartite graph. GraFITi uses multiple graph neu-ral network (GNN) layers with attention and feed-forward mechanisms to learn node and edge interactions. Our Sparsity Structure Graph, by design, provides a more dynamic and adaptive approach to process IMTS data, and improves the performance of the forecasting task.\nWe evaluated GraFITi for forecasting IMTS using 3 realworld and 1 synthetic dataset. Comparing it with state-ofthe-art methods for IMTS and selected baselines for MTS, GraFITi provides superior forecasts.\nOur contributions are summarized as follows:\n• We introduce a novel representation of irregularly sampled multivariate time series with missing values (IMTS) as sparse bipartite graphs, the Sparsity Structure Graph, that efficiently can handle missing values in the observation space of the time series (Section 4). • We propose a novel model based on this representation, GraFITi, that can leverage any graph neural network to perform time series forecasting for IMTS (section 5). • We provide extensive experimental evaluation on 3 real world and 1 synthetic dataset that shows that GraFITi improves the forecasting accuracy of the best existing models by up to 17% and the run time improvement up to 5 times (section 6).\nWe provide the implementation at https://anonymous.4open.science/r/GraFITi-8F7B." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "This work focus on the forecasting of irregularly sampled multivariate time series data with missing values using graphs. In this section, we discuss the research done in: forecasting models for IMTS, Graphs for forecasting MTS, and models for edge weight prediction in graphs." }, { "figure_ref": [], "heading": "Forecasting", "publication_ref": [ "b22", "b24", "b29", "b34", "b13", "b37", "b4", "b29", "b34", "b37", "b35", "b41", "b4", "b29", "b9", "b31", "b33", "b1", "b40", "b2", "b6", "b30", "b8", "b26" ], "table_ref": [], "text": "of IMTS Research on IMTS has mainly focused on classification (Li and Marlin 2015;Lipton, Kale, and Wetzel 2016;Rubanova, Chen, and Duvenaud 2019;Shukla and Marlin 2021;Horn et al. 2020;Tashiro et al. 2021) and interpolation (Che et al. 2018;Rubanova, Chen, and Duvenaud 2019;Shukla and Marlin 2021;Tashiro et al. 2021;Shukla and Marlin 2022;Yalavarthi, Burchert, and Schmidt-Thieme 2022), with limited attention to forecasting tasks. Existing models for these tasks mostly rely on Neural ODEs (Che et al. 2018). In Latent-ODE (Rubanova, Chen, and Duvenaud 2019), an ODE was combined with a Recurrent Neural Network (RNN) for updating the state at the point of new observation. The GRU-ODE-Bayes model (De Brouwer et al. 2019) improved upon this approach by incorporating GRUs, ODEs, and Bayesian inference for parameter estimation. The Continuous Recurrent Unit (CRU) (Schirmer et al. 2022) based model uses a state-space model with stochastic differential equations and kalman filtering. The recent LinODENet model (Scholz et al. 2023) enhanced CRU by using linear ODEs and ensure self-consistency in the forecasts. Another branch of study involves Neural Flows (Biloš et al. 2021), which use neural networks to model ODE solution curves, rendering the ODE integrator unnecessary. Among various flow architectures, GRU flows have shown good performance.\nUsing graphs for MTS In addition to CNNs, RNNs, and Transformers, graph-based methods have been studied for IMTS forecasting. Early GNN-based approaches, such as (Wu et al. 2020), required a pre-defined adjacency matrix to establish relationships between the time series channels. More recent models like the Spectral Temporal Graph Neural Network (Cao et al. 2020) (STGNN) and the Time-Aware Zigzag Network (Chen, Segovia, and Gel 2021) improved on this by using GNNs to capture dependencies between variables in the time series. On the other hand, Satorras, Rangapuram, and Januschowski (2022) proposed a bipartite setup with induced nodes to reduce graph complexity, built solely from the channels. Existing graph-based time series forecasting models focus on learning correlations or similarities between channels, without fully exploiting the graph structure. Recently, GNNs were used for imputation and reconstruction of MTS with missing values, treating MTS as sequences of graphs where nodes represent sensors and edges denote correlation (Cini, Marisca, and Alippi 2022;Marisca, Cini, and Alippi 2022). Similar to previous studies, they learn similarity or correlation among channels." }, { "figure_ref": [], "heading": "Graph Neural Networks for edge weight prediction", "publication_ref": [ "b18", "b39", "b11", "b12", "b45", "b28", "b42", "b15", "b49", "b43", "b21" ], "table_ref": [], "text": "Graph Neural Networks (GNNs) are designed to process graph-based data. 
While most GNN literature such as Graph Convolutional Networks, Graph Attention Networks focuses on node classification (Kipf and Welling 2017;Velickovic et al. 2017), a few studies have addressed edge weight prediction. Existing methods (De Sá and Prudêncio 2011;Fu et al. 2018) in this domain rely on latent features and graph heuristics, such as node similarity (Zhao et al. 2015), proximity measures (Murata and Moriyasu 2007), and local rankings (Yang and Wang 2020). Recently, deep learning-based approaches (Hou and Holder 2017;Zulaika et al. 2022;You et al. 2020) were proposed. Another branch of research deals with edge weight prediction in weighted signed graphs (Kumar et al. 2016) tailored to social networks. However, all proposed methods typically operate in a transductive setup with a single graph split into training and testing data, which may not be suitable for cases involving multiple graphs like ours, where training and evaluation need to be done on separate graph partitions." }, { "figure_ref": [], "heading": "The Time Series Forecasting Problem", "publication_ref": [], "table_ref": [], "text": "An irregularly sampled multivariate times series with missing values, is a finite sequence of pairs S = (t n , x n ) n=1:N where t n ∈ R is the n-th observation timepoint and x n ∈ (R ∪ {NaN}) C is the n-th observation event. Components with x n,c = NaN represent observed values by channel c at event time t n , and x n,c = NaN represents a missing value. C is the total number of channels.\nA time series query is a pair (Q, S) of a time series S and a sequence Q = (q k , c k ) k=1:K such that the value of channel c k ∈ {1, . . . , C} is to be predicted at time q k ∈ R. We call a query a forecasting query, if all its query timepoints are after the last timepoint of the time series S, an imputation query if all of them are before the last timepoint of S and a mixed query otherwise. In this paper, we are interested in forecasting only.\nA vector y ∈ R K we call an answer to the forecasting query: y k is understood as the predicated value of time series S at time q k in channel c k . The difference between two answers y, y ′ to the same query can be measured by any loss function, for example by a simple squared error\nℓ(y, y ′ ) := 1 K K k=1 (y k -y ′ k ) 2\nThe time series forecasting problem is as follows: given a dataset of pairs D := (Q i , S i , y i ) i=1:M of forecasting queries and ground truth answers from an unknown distribution p data and a loss function ℓ on forecasting answers, find a forecasting model ŷ that maps queries (Q, S) to answers ŷ(Q, S) such that the expected loss between ground truth answers and forecasted answers is minimal:\nL(ŷ; p data ) := E (Q,S,y)∼p data ℓ(y, ŷ(Q, S))" }, { "figure_ref": [ "fig_1" ], "heading": "Sparsity Structure Graph Representation", "publication_ref": [ "b34", "b39", "b39", "b46" ], "table_ref": [], "text": "We describe the proposed Sparsity Structure Graph representation and convert the forecasting problem as edge weight prediction problem. Using the proposed representation:\n• We explicitly obtain the relationship between the channels and times via observation values allowing the inductive bias of the data to pass into the model. • We elegantly handle the missing values in IMTS in the observation space by connecting edges only for the observed values.\nMissing values represented by NaN-values are unsuited for standard arithmetical operations. 
Therefore, they are often encoded by dedicated binary variables called missing value indicators or masks: x n ∈ (R × {0, 1}) C . Here, (x n,c,1 , 1) encodes an observed value and (0, 0) encodes a missing value. Usually, both components are seen as different scalar variables: the real value x n,c,1 and its binary missing value indicator / mask x n,c,2 , the relation between both is dropped and observations simply modeled as\nx n ∈ R 2C .\nWe propose a novel representation of a time series S using a bipartite graph G = (V, E). The graph has nodes for channels and timepoints, denoted as V C and V T respectively\n(V = V C ∪ V T ). Edges E ⊆ V C × V T in\nthe graph connect each channel node to its corresponding timepoint node with an observation. Edge features F edge are the observation values and node features F node are the channel IDs and timepoints. Nodes V C := {1, . . . , C} represent channels and nodes V T := {C + 1, . . . , C + N } represent unique timepoints:\nV := {1, . . . , C + N } = VC ∪ VT E := {i, j} | xi-C,j = NaN, i ∈ VT , j ∈ VC F node v := v : v ∈ VC tj : v ∈ VT , j = v -C F edge e := xi-C,j for e = {i, j} ∈ E with i ∈ VT , j ∈ VC (2)\nFor an IMTS, missing values make the bipartite graph sparse, meaning |E| ≪ C • N . However, for a fully observed time series, where there are no missing values, i.e. |E| = C • N , the graph is a complete bipartite graph. We extend this representation to time series queries (S, Q) by adding additional edges between queried channels and timepoints, and distinguish observed and queried edges by an additional binary edge feature called target indicator. Note that the target indicator used to differentiate the observed edge and target edge is different from the missing value indicator which is used to represent the missing observations in the observation space. Given a query Q = (q k , c k ) k=1:K , let (t ′ 1 , . . . , t ′ K ′ ) be an enumeration of the unique queried timepoints q k . We introduce additional nodes V Q := {C + N + 1, . . . , C + N + K ′ } so that the augmented graph, together with the node and edge features is given as\nV := V C ∪ V T ∪ V Q = {1, . . . , C + N + K ′ } E := {i, j} | x i-C,j = NaN, i ∈ V T , j ∈ V C ∪ {i, j} | i ∈ V Q , j ∈ V C , (t ′ i-N -C , j) ∈ Q F node v :=    v : v ∈ V C t j : v ∈ V T , j = v -C t ′ j : v ∈ V Q , j = v -C -N F edge e := (x i,j,1 , 1) : e = {i, j} ∈ E, i ∈ V T , j ∈ V C (0, 0) : e = {i, j} ∈ E, i ∈ V Q , j ∈ V C (3)\nwhere\n(t ′ i-N -C , j) ∈ Q is supposed to mean that (t ′ i-N -C , j\n) appears in the sequence Q. To denote this graph representation, we write briefly\nts2graph(X, Q) := (V, E, F node , F edge ) (4)\nThe conversion of an IMTS to a Sparsity Structure Graph is shown in Figure 2.\nTo make the graph representation (V, E, F node , F edge ) of a time series query processable by a graph neural network, node and edge features have to be properly embedded, otherwise, both, the nominal channel ID and the timepoint are hard to compute on. We propose an Initial Embedding layer that encodes channel IDs via a onehot encoding and time points via a learned sinusoidal encoding (Shukla and Marlin 2021):\nh node,0 v := FF(onehot(F node v )) : v ∈ V C sin(FF(F node v )) : v ∈ V T ∪ V Q (5) h edge,0 e := FF(F edge e ) for e ∈ E(6)\nwhere onehot denotes the binary indicator vector and FF denotes a separate fully connected layer in each case.\nThe final graph neural network layer (h node,L , h edge,L ) has embedding dimension 1. 
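Following the definitions in Eqs. (2)–(4), the graph construction can be sketched directly: channel nodes come first, then one node per observed time point, then one node per unique queried time point; every observed (non-NaN) value becomes an edge with features (value, 1) and every queried (time, channel) pair becomes an edge with features (0, 0), the second feature distinguishing observed (1) from target (0) edges. The sketch returns raw node features (channel ids and timestamps); the one-hot and learned sinusoidal encodings of Eqs. (5)–(6) would be applied on top. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def ts2graph(values, times, query_times, query_channels):
    """values: (N, C) with np.nan for missing entries; times: (N,);
    query_times / query_channels: (K,) each. Returns raw node features, an edge
    list of (time-or-query node, channel node) pairs, and edge features."""
    N, C = values.shape
    uniq_q = np.unique(query_times)                      # K' unique queried time points
    node_feat = np.concatenate([np.arange(C), times, uniq_q])

    edges, edge_feat = [], []
    for n in range(N):                                   # observation edges
        for c in range(C):
            if not np.isnan(values[n, c]):
                edges.append((C + n, c))
                edge_feat.append((values[n, c], 1.0))    # (observed value, indicator 1)
    for q, c in zip(query_times, query_channels):        # query (target) edges
        k = int(np.searchsorted(uniq_q, q))              # index of the unique query time
        edges.append((C + N + k, c))
        edge_feat.append((0.0, 0.0))                     # value unknown, indicator 0
    return node_feat, np.array(edges), np.array(edge_feat)
```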
The scalar values of the query edges are taken as the predicted answers to the encoded forecasting query:\nŷ := graph2ts(h node,L , h edge,L , V, E) = (h edge,L e k ) k=1:K where e k = {C + N + k ′ , c k } with t ′ k ′ = q k (7)\n5 Forecasting with GraFITi\nGraFITi first encodes the time series query to graph using Eq. 4 and compute initial embeddings for the nodes (h node,0 ) and edges (h edge,0 ) using Eqs. 5 and 6 respectively. Now, we can leverage the power of graph neural networks for further processing the encoded graph. Node and edge features are updated layer wise, from layer l to l + 1 using a graph neural network:\n(h node,l+1 , h edge,l+1 ) := gnn (l) (h node,l , h edge,l , V, E) (8)\nThere have been a variety of gnn architectures such as Graph Convolutional Networks (Kipf and Welling 2017), Graph Attention Networks (Velickovic et al. 2017), proposed in the literature. In this work, we propose a model adapting the Graph Attention Network (Velickovic et al. 2017) to our graph setting and incorporate essential components for handling sparsity structure graphs. While a Graph Attention Network computes attention weights by adding queries and keys, we found no advantage in using this approach (see supplementary material). Thus, we utilize standard attention mechanism, in our attention block, as it has been widely used in the literature (Zhou et al. 2021). Additionally, we also use edge embeddings in our setup to update node embeddings in a principled manner." }, { "figure_ref": [ "fig_3" ], "heading": "Graph Neural Network (gnn)", "publication_ref": [ "b38", "b38" ], "table_ref": [], "text": "First, we define Multi-head Attention block (MAB) and Neighborhood functions that are used in our gnn.\nA Multi-head attention block (MAB) (Vaswani et al. 2017) is represented as:\nMAB(Q, K, V) := α(H + FF(H))\nwhere\nH := α(Q + MHA(Q, K, V))(9)\nwhere Q, K and, V are called queries, keys, and values respectively, MHA is multi-head attention (Vaswani et al. 2017), α is a non-linear activation.\nAlgorithm 1: Graph Neural Network (gnn (l) )\nRequire: h node,l , h edge,l , V, E for u ∈ V do Hu ← [h node,l v h edge,l e ] v∈N (u) //e = {u, v} h node,l+1 u ← MAB (l) (h node,l u , Hu, Hu) for e = {u, v} ∈ E do h edge,l+1 e ← α h edge,l e + FF (l) h node,l u h node,l v h edge,l e return h node,l+1 , h edge,l+1\nThe Neighborhood of a node u is defined as the set of all the nodes connected to u through edges in E:\nN (u) := {v | {u, v} ∈ E} (10)\nGraFITi consists of L gnnlayers. In each layer, node embeddings are updated using neighbor node embeddings and edge embeddings connecting them. For edge embeddings, we use the embeddings of the adjacent nodes and the current edge embedding. The overall architecture of GraFITi is shown in Figure 3." }, { "figure_ref": [], "heading": "Update node embeddings", "publication_ref": [], "table_ref": [], "text": "To update embedding of a node u ∈ V , first, we create a sequence of features H u concatenating its neighbor node embedding h node,l v and edge embedding h edge,l e , e = {u, v} where v ∈ N (u). We then pass h node,l u as queries and H u as keys and values to MAB.\nh node,l+1 u := MAB (l) h node,l u , H u , H u(11)\nH u := [h node,l v h edge,l e ] v∈N (u) , e = {u, v}(12)\nUpdating edge embeddings: To compute edge embedding h edge,l+1\ne , e = {u, v} we concatenate h node,l u , h node,l v and h edge,l e , and pass it through a dense layer (FF) followed by a residual connection and nonlinear activation. where e = {u, v}. 
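A compact PyTorch sketch of the attention block of Eq. (9) and of one gnn layer following Algorithm 1 and Eqs. (11)–(12) is given below. The choice of ReLU for the non-linearity α, the head count, and the projection of the concatenated [neighbour-node || edge] features down to the model dimension are assumptions; the per-node Python loop is written for clarity rather than efficiency, and edges are assumed to be given as (time-or-query node, channel node) index pairs, matching the construction in Section 4.

```python
import torch
import torch.nn as nn

class MAB(nn.Module):
    """Multi-head attention block of Eq. (9): H = a(Q + MHA(Q, K, V)); MAB = a(H + FF(H))."""
    def __init__(self, dim, num_heads=2):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ff = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, q, k, v):
        h = self.act(q + self.mha(q, k, v)[0])
        return self.act(h + self.ff(h))

class GraFITiLayer(nn.Module):
    """One layer: every node attends over the [neighbour node || edge] features of
    its incident edges; every edge gets a residual dense update from
    [channel node || time node || edge]."""
    def __init__(self, dim, num_heads=2):
        super().__init__()
        self.mab = MAB(dim, num_heads)
        self.kv_proj = nn.Linear(2 * dim, dim)
        self.edge_ff = nn.Linear(3 * dim, dim)
        self.act = nn.ReLU()

    def forward(self, h_node, h_edge, edges):
        # h_node: (num_nodes, dim); h_edge: (num_edges, dim); edges: LongTensor (num_edges, 2)
        new_node = h_node.clone()
        for u in range(h_node.size(0)):
            inc = (edges == u).any(dim=1).nonzero().flatten()   # edges incident to node u
            if len(inc) == 0:
                continue
            other = edges[inc].sum(dim=1) - u                   # node at the other end of each edge
            kv = self.kv_proj(torch.cat([h_node[other], h_edge[inc]], -1)).unsqueeze(0)
            q = h_node[u].view(1, 1, -1)
            new_node[u] = self.mab(q, kv, kv).view(-1)
        # edge update: concatenate channel-node, time-node and edge embeddings (in that order)
        cat = torch.cat([h_node[edges[:, 1]], h_node[edges[:, 0]], h_edge], dim=-1)
        new_edge = self.act(h_edge + self.edge_ff(cat))
        return new_node, new_edge
```

Stacking L such layers and reading off the scalar embedding of every query edge from a final layer with output dimension 1 corresponds to the forward pass described for GraFITi; a batched implementation would replace the per-node loop with masked attention over all nodes at once.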
Note that, although our edges are undirected, we compute the edge embedding by concatenating the embeddings in a specific order i.e., the channel embedding, time embedding and edge embedding. We show the process of updating nodes and edges in layer l using a gnnin Algorithm 1.\nAnswering the queries: As mentioned in Section 4, our last gnn (L) layer has embedding dimension 1. Hence, after processing the graph features through L many gnn layers, we use Eq. 7 to decode the graph and provide the predicted answers to the time series query. A forward pass of GraFITi is presented in Algorithm 2.\nComputational Complexity: The computational complexity of GraFITi primarily comes from using MAB in Eq. 11. For a single channel node u ∈ {1, ..., C}, the maximum complexity for computing its embedding is N (u) since only neighborhood connections are used for the update, and N (u) ⊆ {C + 1, ..., C + N + K ′ }. Thus, computing the \n| u ∈ V } //Eq. 5 h edge,0 ← {h edge,0 u,v\n| {u, v} ∈ E} //Eq: 6 //Graph Neural Network for l ∈ {1, . . . , L} do h node,l+1 , h edge,l+1 ← gnn (l) (h node,l , h edge,l , V, E) //Alg. 1 ŷ ← graph2ts(h node,L , h edge,L , V, E) //Eq: 7 return ŷ embeddings of all channel nodes is O(|E|). Similarly, the computational complexity of MAB for computing the embeddings of all nodes in\nV T ∪ V Q is also O(|E|). A feed forward layer FF : R Y → R Z will have a computational complexity of O(Y Z).\nDelineating from GRAPE (You et " }, { "figure_ref": [], "heading": "Competing algorithms", "publication_ref": [ "b9", "b31", "b33", "b34", "b46", "b47", "b44", "b9", "b5", "b4", "b4", "b0", "b13" ], "table_ref": [], "text": "Here, we provide brief details of the models that are compared with the proposed GraFITi for evaluation.\nWe select 4 IMTS forecasting models for comparison, including GRU-ODE-Bayes (De Brouwer et al. 2019), Neural Flows (Biloš et al. 2021), CRU (Schirmer et al. 2022) and LinODENet (Scholz et al. 2023). Additionally, we use the well established IMTS interpolation model mTAN (Shukla and Marlin 2021). It is interesting to verify the performance of well established MTS forecasting models for IMTS setup. We do this by adding missing value indicators as the separate channels to the series and process the time series along with the missing value indicators. Hence we compare with Informer+, Fedformer+, DLinear+ and NLinear+ which are variants of Informer (Zhou et al. 2021), FedFormer (Zhou et al. 2022), DLinear and NLinear (Zeng et al. 2022) respectively. We also compare with the published results from (De Brouwer et al. 2019) for the NeuralODE-VAE (Chen et al. 2018), Sequential VAE (Krishnan, Shalit, andSontag 2015, 2017), GRU-Simple (Che et al. 2018), GRU-D (Che et al. 2018) and T-LSTM (Baytas et al. 2017). 2019), applied 5-fold cross-validation and selected hyperparameters using a holdout validation set (20%). For evaluation, we used 10% unseen data. All models were trained on Mean Squared Error, which is also the evaluation metric.\nHyperparamter search We searched the following hyperparameters for GraFITi: L ∈ {1, 2, 3, 4}, #heads in MAB from {1, 2, 4}, and hidden nodes in dense layers from {16, 32, 64, 128, 256}. We followed the procedure of Horn et al. (2020) for selecting the hyperparameters. Specifically, we randomly sampled sets of 5 different hyperparameters and choose the one that has the best performance on validation dataset. We used the Adam optimizer with learning rate of 0.001, halving it when validation loss did not improve for 10 epochs. 
All models were trained for up to 200 epochs, using early stopping with a patience to 30 epochs. Hyperparameters for the baseline models are presented in the supplementary material. All the models were experimented using the PyTorch library on a GeForce RTX-3090 GPU." }, { "figure_ref": [], "heading": "Experimental results", "publication_ref": [ "b33", "b1", "b9", "b33", "b1", "b9" ], "table_ref": [ "tab_1" ], "text": "First, we set the observation and prediction range of the IMTS following (Scholz et al. 2023;Biloš et al. 2021;De Brouwer et al. 2019). For the USHCN dataset, the model observes for the first 3 years and forecasts the next 3 time steps. For the medical datasets, the model observes for the first 36 hours in the series and predicts the next 3 time steps. The results, including the mean and standard deviation, are presented in Table 2. The best result is highlighted in bold and the next best in italics. Additionally, we also provide the published results from (Scholz et al. 2023;Biloš et al. 2021;De Brouwer et al. 2019) in brackets for comparison. The proposed GraFITi model is shown to be superior compared to all baseline models across all the datasets. Specifically, in the MIMIC-III and MIMIC-IV datasets, GraFITi provides around 11.2% and 17.2% improvement in forecasting accuracy compared to the next best IMTS forecasting model LinODEnet. The results on the USHCN dataset have high variance, therefore, it is challenging to compare the models on this dataset. However, we experimented on it for completeness. Again, we achieve the best result with 9.2% improvement compared to the next best model. We note that, the MTS forecasting models that are adapted for the IMTS task, perform worse than any of the IMTS forecasting models demonstrating the limitation of MTS models applied to IMTS tasks." }, { "figure_ref": [ "fig_5" ], "heading": "Efficiency comparison", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We compare the efficiency of leading IMTS forecasting models: GraFITi, LinODEnet, CRU, Neural Flow, and GRU-ODE-Bayes. We evaluate them in terms of both execution time (batch size: 64) and MSE. The results, presented in Figure 4, show that Varying observation and forecast ranges This experiment is conducted with two different observation ranges (24 and 36 hours) and two different prediction ranges for each observation range. Specifically, for the observation range of 24 hours, the prediction ranges are 12 and 24 hours, and for the observation range of 36 hours, the prediction ranges are 6 and 12 hours. This approach allows for a more comprehensive evaluation of the model's performance across various scenarios of observation and prediction ranges. The results are presented in Table 3. Again for varying observation and forecast ranges, GraFITi is the top performer, followed by LinODENet. Significant gains in forecasting accuracy are observed in the MIMIC-III and MIMIC-IV datasets. On average, GraFITi improves the accuracy of LinODEnet, the next best IMTS forecasting model, by 8.5% in MIMIC-III, 15.5% in MIMIC-IV, and 2.6% in the Physionet'12 dataset. " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "The GraFITi model is a potential tool for forecasting on IMTS. It outperforms existing state-of-the-art models, even in highly sparse datasets (up to 98% sparsity in MIMIC-IV). However, we note a limitation in the compatibility of the model with certain data configurations. 
In particular, the GraFITi faces a challenge when applied to Asynchronous Time Series datasets. In such datasets, channels are observed asynchronously at various time points, resulting in disconnected sparse graphs. This disconnection hinders the flow of information and can be problematic when channels have a strong correlation towards the forecasts as model may not be able to capture these correlations. It can be observed from Table 4 where GraFITi is compared with the next best baseline model LinODENet for varying sparsity levels using MIMIC-III dataset. The performance of GraFITi deteriorates with increase in sparsity levels and gets worst when the series become asynchronous.\nMoreover, the existing model cannot handle meta data associated with the IMTS. Although, these meta data points could be introduced as additional channel nodes, this would again disconnect the graph due to lack of edges connecting the time nodes to meta data nodes. One possible solution to both the challenges is to interconnect all the channel nodes including meta data if present, and apply a distinct multihead attention on them. Therefore, in future, we aim to enhance the capability of the proposed model to handle Asynchronous Time Series datasets and meta data in the series." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a Graph based model called GraFITi for the forecasting of irregularly sampled time series with missing values (IMTS). First we represent the time series as a Sparsity Structure Graph with channels and observation times as nodes and observation measurements as edges; and re-represent the task of time series forecasting as an edge weight prediction problem in a graph. An attention based architecture is proposed for learning the interactions between the nodes and edges in the graph. We experimented on 4 datasets including 3 real world and 1 synthetic dataset for various observation and prediction ranges. The extensive experimental evaluation demonstrates that the proposed GraFITi provides superior forecasts compared to the stateof-the-art IMTS forecasting models." }, { "figure_ref": [], "heading": "A Ablation studies A.1 Importance of target edge", "publication_ref": [], "table_ref": [ "tab_5", "tab_6" ], "text": "In the current graph representation, target edges are connected in the graph providing rich embedding to learn the edge weight. However, we would like to see the performance of the model without query edges in the graph. For this, we compared the experimental results of GraFITi and GraFITi\\T where GraFITi\\T is the same architecture without query edges in the graph. The predictions are made after L -1 th layer by concatenating the channel embedding and the sinusoidal embedding of the query time node and passing it through the dense layer. The results for the real IMTS datasets are reported in Table 5. We see that the performance of the GraFITi deteriorated significantly without query edges in the graph. GraFITi exploit the sparse structure in the graph for predicting the weight of the query edge. The attention mechanism help the target edge to gather the useful information from the incident nodes.\nA.2 MAB vs GAT as Graph Neural Network module gnn (Eq 8) In Table 6, we compare the performance of MAB and GAT as a gnn module in the proposed GraFITi. We use MIMIC-III, MIMIC-IV and Physionet'12 for the comparison. We notice that the performance of MAB and GAT are similar. 
Hence, we use MAB as the gnn module in the proposed GraFITi. Please note that the objective of this work is not to find the best gnn module but to show IMTS forecasting using graph neural networks." }, { "figure_ref": [], "heading": "B Hyperparameter search", "publication_ref": [ "b44" ], "table_ref": [], "text": "We search the following hyperparameters for IMTS forecasting models as mentioned in the respective works: GRU-ODE-Bayes: We set the number of hidden layers to 3 and selected solver from {euler, dopri} Neural Flows We searched for the flow layers from {1, 4} and set the hidden layers to 2 LinODENet We searched for hidden size from {64, 128}, latent size from {128, 192}. We set the encoder with 5-block ResNet with 2 ReLU pre-activated layers each, StackedFilter of 3 KalmanCells, with linear one in the beginning.\nCRU We searched for latent state dimension from {10, 20, 30}, number of basis matrices from {10, 20} and bandwidth from {3, 10}.\nmTAN We searched the #attention heads from {1, 2, 4}, #reference time points from {8, 16, 32, 64, 128}, latent dimensions form {20, 30, 40, 50}, generator layers from {25, 50, 100, 150}, and reconstruction layers from {32, 64, 128, 256}.\nFor the all the MTS forecasting models, we used the default hyperparameters provided in (Zeng et al. 2022)." } ]
Forecasting irregularly sampled time series with missing values is a crucial task for numerous real-world applications such as healthcare, astronomy, and climate sciences. State-of-the-art approaches to this problem rely on Ordinary Differential Equations (ODEs), which are known to be slow and often require additional features to handle missing values. To address this issue, we propose a novel model using Graphs for Forecasting Irregularly Sampled Time Series with missing values, which we call GraFITi. GraFITi first converts the time series to a Sparsity Structure Graph, which is a sparse bipartite graph, and then reformulates the forecasting problem as the edge weight prediction task in the graph. It uses the power of Graph Neural Networks to learn the graph and predict the target edge weights. GraFITi has been tested on 3 real-world and 1 synthetic irregularly sampled time series datasets with missing values and compared with various state-of-the-art models. The experimental results demonstrate that GraFITi improves the forecasting accuracy by up to 17% and reduces the run time by up to 5 times compared to the state-of-the-art forecasting models.
GraFITi: Graphs for Forecasting Irregularly Sampled Time Series
[ { "figure_caption": "Figure 1: (a) multivariate time series forecasting, (b) irregular multivariate time series forecasting, (c) forecasting irregularly sampled multivariate time series with missing values.In all cases the observation range is from time t 0 to t 6 and the forecasting range is from time t 7 to t 10 .", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Representation of IMTS as Sparsity Structure Graph. (b) is the Sparsity Structure Graph representation of (a) where times and channels are the nodes and observation measurements are the edges with observations values. Target edges are provided with (0, 0).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overall architecture of GraFITi.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "For MIMIC-III, MIMIC-IV and USHCN, we followed the pre-processing steps provided by Scholz et al. (2023); Biloš et al. (2021); De Brouwer et al. (2019). Hence, observations in MIMIC-III and MIMIC-IV are rounded for 30 mins and 1 hour respectively. Whereas for the Physionet'12, we follow the protocol of Che et al. (2018); Cao et al. (2018); Tashiro et al. (2021) and processed the dataset to have hourly observations.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of IMTS forecasting models: GraFITi, LinODEnet, CRU, Neural Flows and GRU-ODE-Bayes in terms of efficiency: evaluation time against error metric.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Statistics of the datasets used in the experiments. Basic statistics of the datasets is provided in Table1. Physionet'12(Silva et al. 2012) consists of ICU patient records observed for 48 hours. MIMIC-III(Johnson et al. 2016) is also a medical dataset that consists measurements of the ICU patients observed for 48 hours. MIMIC-IV(Johnson et al. 2021) is built upon the MIMIC-III database. USHCN(Menne, Williams Jr, and Vose 2015) is a climate dataset that consists of the measurements of daily temperatures, precipitation and snow observed over 150 years from 1218 meteorological stations in the USA.", "figure_data": "6.1 Dataset description4 datasets including 3 real world medical and 1 syntheticclimate IMTS datasets are used for evaluating the proposedmodel.al. 2020) You et al.(2020) introduced GRAPE, a graph-based model for imput-ing and classifying vector datasets with missing values. Thisapproach employs a bipartite graph, with nodes divided intoseparate sets for sample IDs and sample features. The edgesof this graph represent the feature values associated with thesamples. Notably, GRAPE learns in a transductive manner,encompassing all the data samples, including those from thetest set, within in the graph. In contrast, GraFITi uses in-ductive approach. Here, each instance is a Sparsity StructureGraph, tailored for time series data. 
In this structure, nodesare divided into distinct sets for channels and timepoints,while the edges are the time series observations.6 ExperimentsSparsity means the percentage of missing observations in thetime seriesName#Sample#Chann.Max.len.Max.Obs.SparsityUSHCN1,100529032077.9%MIMIC-III21,000969671094.2%MIMIC-IV18,000102710134097.8%Physionet'1212,000374852085.7%", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results for forecasting next three time steps. Evaluation metric MSE, Lower is better. Best results are in bold and the next best are in italics. Published results are presented in open brackets, Physionet'12 dataset was not used by the baseline models hence do not have published results. We show % improvement with ↑. 'ME' indicates Memory Error.", "figure_data": "USHCNMIMIC-IIIMIMIC-IVPhysionet'12DLinear+0.347 ± 0.0650.691 ± 0.0160.577 ± 0.0010.380 ± 0.001NLinear+0.452 ± 0.1010.726 ± 0.0190.620 ± 0.0020.382 ± 0.001Informer+0.320 ± 0.0470.512 ± 0.0640.420 ± 0.0070.347 ± 0.001FedFormer+2.990 ± 0.4761.100 ± 0.0592.135 ± 0.3040.455 ± 0.004NeuralODE-VAE-(0.960 ± 0.110)-(0.890 ± 0.010)--Sequential VAE-(0.830 ± 0.070)-(0.920 ± 0.090)--GRU-Simple-(0.750 ± 0.120)-(0.820 ± 0.050)--GRU-D-(0.530 ± 0.060)-(0.790 ± 0.060)--T-LSTM-(0.590 ± 0.110)-(0.620 ± 0.050)--mTAN0.300 ± 0.0380.540 ± 0.036ME0.315 ± 0.002GRU-ODE-Bayes0.401 ± 0.089 (0.430 ± 0.070)0.476 ± 0.043 (0.480 ± 0.010)0.360 ± 0.001 (0.379 ± 0.005)0.329 ± 0.004Neural Flow0.414 ± 0.1020.477 ± 0.041 (0.490 ± 0.004)0.354 ± 0.001 (0.364 ± 0.008)0.326 ± 0.004CRU0.290 ± 0.0600.592 ± 0.049ME0.379 ± 0.003LinODEnet0.300 ± 0.060 (0.290 ± 0.060)0.446 ± 0.033 (0.450 ± 0.020)0.272 ± 0.002 (0.274 ± 0.002)0.299 ± 0.001GraFITi (ours)0.272 ± 0.047 ↑ 9.3%0.396 ± 0.030 ↑ 11.2%0.225 ± 0.001 ↑ 17.2%0.286 ± 0.001 ↑ 4.3%6.3 Experimental setup", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experimental results on varying observation and forecasting ranges for the medical datasets. Evaluation measure is MSE. Lower is better. Best results are in bold and the second best are in italics. ME indicates memory error.", "figure_data": "Obs. / Pred.GraFITi (ours)LinODEnetCRUNeural FlowGRU-ODE-Bayes↑ %24/120.438 ± 0.0090 .477 ± 0 .0210.575 ± 0.0200.588 ± 0.0140.591 ± 0.018↑ 8.2%MIMIC-III24/24 36/60.491 ± 0.014 0.457 ± 0.0500 .531 ± 0 .022 0 .492 ± 0 .0190.619 ± 0.028 0.647 ± 0.0510.651 ± 0.017 0.573 ± 0.0430.653 ± 0.023 0.580 ± 0.049↑ 7.5% ↑ 7.1%36/120.490 ± 0.0270 .554 ± 0 .0420.680 ± 0.0430.620 ± 0.0350.632 ± 0.044↑ 10.8%24/120.285 ± 0.0010 .335 ± 0 .002ME0.465 ± 0.0030.366 ± 0.154↑ 14.9%MIMIC-IV24/24 36/60.285 ± 0.002 0.260 ± 0.0020 .336 ± 0 .002 0 .309 ± 0 .002ME ME0.465 ± 0.003 0.405 ± 0.0010.439 ± 0.003 0.393 ± 0.002↑ 15.1% ↑ 15.9%36/120.261 ± 0.0050 .309 ± 0 .002ME0.395 ± 0.0010.393 ± 0.002↑ 15.5%24/120.365 ± 0.0010 .373 ± 0 .0010.435 ± 0.0010.431 ± 0.0010.432 ± 0.003↑ 2.1%Physionet'1224/24 36/60.401 ± 0.001 0.319 ± 0.0010 .411 ± 0 .001 0 .329 ± 0 .0010.467 ± 0.002 0.396 ± 0.0030.506 ± 0.002 0.365 ± 0.0010.505 ± 0.001 0.363 ± 0.004↑ 2.4% ↑ 3.0%36/120.347 ± 0.0010 .357 ± 0 .0010.417 ± 0.0010.398 ± 0.0010.401 ± 0.003↑ 2.8%for datasets with longer time series like MIMIC-IV andUSHCN, GraFITi significantly outpaces ODE and flow-based models. Specifically, GraFITi is over 5 times fasterthan the fastest ODE model, LinODEnet, in both MIMIC-IVand USHCN. Even for shorter time series datasets like Phy-sionet'12 and MIMIC-III, GraFITi remains twice as fast asLinODEnet. 
Moreover, on average, GraFITi surpasses GRU-ODE-Bayes in speed by an order of magnitude.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance of GraFITi with varying sparsity levels using MIMIC-III dataset. The 'IMTS' dataset refers to the actual dataset, while 'AsTS' is a synthetic asynchronous time series dataset created by restricting the number of observed channels at each time point to 1. The 'AsTS + x%' dataset is created by modifying 'AsTS' dataset by retrieving x% of the missing observations. Goal is to observe 36 hours of data and then forecast the next 3 time steps", "figure_data": "ModelIMTSAsTSAsTS+10%AsTS+50%AsTS+90%GraFITi0.3960.9310.8450.5470.413LinODENet0.4460.8940.8150.5810.452", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of GraFITi\\T (GraFITi without query edges in the graph), evaluation metric MSE.", "figure_data": "ModelMIMIC-IIIMIMIC-IVPhysionet'12GraFITi0.396 ± 0.0300.225 ± 0.0010.286 ± 0.001GraFITi\\T0.433 ± 0.0190.269 ± 0.0010.288 ± 0.001", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparing performance of MAB (GraFITi-MAB) and GAT (GraFITi-GAT) as gnn module in GraFITi.", "figure_data": "MIMIC-IIIMIMIC-IVPhysionet'12GraFITi-MAB0.396 ± 0.0300.225 ± 0.0010.286 ± 0.001GraFITi-GAT0.388 ± 0.0200.225 ± 0.0010.288 ± 0.001", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Vijaya Krishna Yalavarthi; Kiran Madhusudhanan; Randolf Scholz; Nourhan Ahmed; Johannes Burchert; Shayan Javed; Stefan Born; Lars Schmidt-Thieme
[ { "authors": "I M Baytas; C Xiao; X Zhang; F Wang; A K Jain; J Zhou", "journal": "", "ref_id": "b0", "title": "Patient subtyping via time-aware LSTM networks", "year": "2017" }, { "authors": "M Biloš; J Sommer; S S Rangapuram; T Januschowski; S Günnemann", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Neural Flows: Efficient Alternative to Neural ODEs", "year": "2021" }, { "authors": "D Cao; Y Wang; J Duan; C Zhang; X Zhu; C Huang; Y Tong; B Xu; J Bai; J Tong", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b2", "title": "Spectral temporal graph neural network for multivariate time-series forecasting", "year": "2020" }, { "authors": "W Cao; D Wang; J Li; H Zhou; L Li; Y Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "Brits: Bidirectional recurrent imputation for time series", "year": "2018" }, { "authors": "Z Che; S Purushotham; K Cho; D Sontag; Y Liu", "journal": "Scientific Reports", "ref_id": "b4", "title": "Recurrent neural networks for multivariate time series with missing values", "year": "2018" }, { "authors": "R T Chen; Y Rubanova; J Bettencourt; D K Duvenaud", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Neural ordinary differential equations", "year": "2018" }, { "authors": "Y Chen; I Segovia; Y R Gel", "journal": "", "ref_id": "b6", "title": "Z-GCNETs: time zigzags at graph convolutional networks for time series forecasting", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b7", "title": "", "year": "" }, { "authors": "A Cini; I Marisca; C Alippi", "journal": "", "ref_id": "b8", "title": "Filling the G ap s: Multivariate Time Series Imputation by Graph Neural Networks", "year": "2022" }, { "authors": "E De Brouwer; J Simm; A Arany; Y Moreau", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "GRU-ODE-Bayes: Continuous modeling of sporadicallyobserved time series", "year": "2019" }, { "authors": "J G De Gooijer; R J Hyndman", "journal": "International Journal of Forecasting", "ref_id": "b10", "title": "25 years of time series forecasting", "year": "2006" }, { "authors": "H R De Sá; R B Prudêncio", "journal": "IEEE", "ref_id": "b11", "title": "Supervised link prediction in weighted networks", "year": "2011" }, { "authors": "C Fu; M Zhao; L Fan; X Chen; J Chen; Z Wu; Y Xia; Xuan ; Q ", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b12", "title": "Link weight prediction using supervised learning methods and its application to yelp layered network", "year": "2018" }, { "authors": "M Horn; M Moor; C Bock; B Rieck; K Borgwardt", "journal": "", "ref_id": "b13", "title": "Set functions for time series", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "Y Hou; L B Holder", "journal": "IEEE", "ref_id": "b15", "title": "Deep learning approach to link weight prediction", "year": "2017" }, { "authors": "A Johnson; L Bulgarelli; T Pollard; S Horng; L ; Celi; R Mark", "journal": "PhysioNet", "ref_id": "b16", "title": "MIMIC-IV (version 1.0)", "year": "2021" }, { "authors": "A E Johnson; T J Pollard; L Shen; L.-W H Lehman; M Feng; M Ghassemi; B Moody; P Szolovits; L Anthony Celi; R G Mark", "journal": "Scientific Data", "ref_id": "b17", "title": "MIMIC-III, a freely accessible critical care database", "year": "2016" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b18", 
"title": "Semi-Supervised Classification with Graph Convolutional Networks", "year": "2017" }, { "authors": "R Krishnan; U Shalit; D Sontag", "journal": "", "ref_id": "b19", "title": "Structured inference networks for nonlinear state space models", "year": "2017" }, { "authors": "R G Krishnan; U Shalit; D Sontag", "journal": "", "ref_id": "b20", "title": "Deep kalman filters", "year": "2015" }, { "authors": "S Kumar; F Spezzano; V Subrahmanian; C Faloutsos", "journal": "IEEE", "ref_id": "b21", "title": "Edge weight prediction in weighted signed networks", "year": "2016" }, { "authors": "S C Li; -X Marlin; B M ", "journal": "", "ref_id": "b22", "title": "Classification of Sparse and Irregularly Sampled Time Series with Mixtures of Expected Gaussian Kernels and Random Features", "year": "2015" }, { "authors": "B Lim; S Zohren", "journal": "Philosophical Transactions of the Royal Society A", "ref_id": "b23", "title": "Time-series forecasting with deep learning: a survey", "year": "2021" }, { "authors": "Z C Lipton; D Kale; R Wetzel", "journal": "", "ref_id": "b24", "title": "Directly modeling missing data in sequences with rnns: Improved classification of clinical time series", "year": "2016" }, { "authors": " Pmlr", "journal": "", "ref_id": "b25", "title": "", "year": "" }, { "authors": "I Marisca; A Cini; C Alippi", "journal": "", "ref_id": "b26", "title": "Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations", "year": "2022" }, { "authors": "M J Menne; C Williams; R S Vose", "journal": "Oak Ridge National Laboratory", "ref_id": "b27", "title": "United States historical climatology network daily temperature, precipitation, and snow data", "year": "2015" }, { "authors": "T Murata; S Moriyasu", "journal": "IEEE", "ref_id": "b28", "title": "Link prediction of social networks based on weighted proximity measures", "year": "2007" }, { "authors": "Y Rubanova; R T Chen; D K Duvenaud", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Latent ordinary differential equations for irregularly-sampled time series", "year": "2019" }, { "authors": "V G Satorras; S S Rangapuram; T Januschowski", "journal": "", "ref_id": "b30", "title": "Multivariate time series forecasting with latent graph inference", "year": "2022" }, { "authors": "M Schirmer; M Eltayeb; S Lessmann; M Rudolph", "journal": "", "ref_id": "b31", "title": "Modeling Irregular Time Series with Continuous Recurrent Units", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b32", "title": "", "year": "" }, { "authors": "R Scholz; S Born; N Duong-Trung; M N Cruz-Bournazou; L Schmidt-Thieme", "journal": "", "ref_id": "b33", "title": "Latent Linear ODEs with Neural Kalman Filtering for Irregular Time Series Forecasting", "year": "2023" }, { "authors": "S N Shukla; Marlin ; B ", "journal": "", "ref_id": "b34", "title": "Multi-Time Attention Networks for Irregularly Sampled Time Series", "year": "2021" }, { "authors": "S N Shukla; Marlin ; B ", "journal": "", "ref_id": "b35", "title": "Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series", "year": "2022" }, { "authors": "I Silva; G Moody; D J Scott; L A Celi; R G Mark", "journal": "Computing in Cardiology", "ref_id": "b36", "title": "Predicting in-hospital mortality of icu patients: The physionet/computing in cardiology challenge 2012", "year": "2012" }, { "authors": "Y Tashiro; J Song; Y Song; S Ermon", "journal": "Advances in Neural Information Processing Systems", "ref_id": 
"b37", "title": "CSDI: Conditional score-based diffusion models for probabilistic time series imputation", "year": "2021" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Attention is all you need", "year": "2017" }, { "authors": "P Velickovic; G Cucurull; A Casanova; A Romero; P Lio; Y Bengio", "journal": "stat", "ref_id": "b39", "title": "Graph attention networks", "year": "1050" }, { "authors": "Z Wu; S Pan; G Long; J Jiang; X Chang; C Zhang", "journal": "", "ref_id": "b40", "title": "Connecting the dots: Multivariate time series forecasting with graph neural networks", "year": "2020" }, { "authors": "V K Yalavarthi; J Burchert; L Schmidt-Thieme", "journal": "", "ref_id": "b41", "title": "Tripletformer for Probabilistic Interpolation of Asynchronous Time Series", "year": "2022" }, { "authors": "X Yang; B Wang", "journal": "Applied Soft Computing", "ref_id": "b42", "title": "Local ranking and global fusion for personalized recommendation", "year": "2020" }, { "authors": "J You; X Ma; Y Ding; M J Kochenderfer; J Leskovec", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "Handling missing data with graph representation learning", "year": "2020" }, { "authors": "A Zeng; M Chen; L Zhang; Q Xu", "journal": "", "ref_id": "b44", "title": "Are transformers effective for time series forecasting?", "year": "2022" }, { "authors": "J Zhao; L Miao; J Yang; H Fang; Q.-M Zhang; M Nie; P Holme; T Zhou", "journal": "Scientific reports", "ref_id": "b45", "title": "Prediction of links and weights in networks by reliable routes", "year": "2015" }, { "authors": "H Zhou; S Zhang; J Peng; S Zhang; J Li; H Xiong; W Zhang", "journal": "", "ref_id": "b46", "title": "Informer: Beyond efficient transformer for long sequence time-series forecasting", "year": "2021" }, { "authors": "T Zhou; Z Ma; Q Wen; X Wang; L Sun; Jin ; R ", "journal": "", "ref_id": "b47", "title": "FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b48", "title": "", "year": "" }, { "authors": "U Zulaika; R Sanchez-Corcuera; A Almeida; D Ipina", "journal": "Applied Soft Computing", "ref_id": "b49", "title": "LWP-WL: Link weight prediction based on CNNs and the Weisfeiler-Lehman algorithm", "year": "2022" } ]
[ { "formula_coordinates": [ 1, 134.16, 659.25, 158.48, 23.52 ], "formula_id": "formula_0", "formula_text": "d dt x(t) = f (t, x(t))(1)" }, { "formula_coordinates": [ 3, 114.96, 372.62, 116.05, 31.13 ], "formula_id": "formula_1", "formula_text": "ℓ(y, y ′ ) := 1 K K k=1 (y k -y ′ k ) 2" }, { "formula_coordinates": [ 3, 87.6, 491.37, 167.19, 12.57 ], "formula_id": "formula_2", "formula_text": "L(ŷ; p data ) := E (Q,S,y)∼p data ℓ(y, ŷ(Q, S))" }, { "formula_coordinates": [ 3, 510.48, 89.21, 44.37, 12.13 ], "formula_id": "formula_3", "formula_text": "x n ∈ R 2C ." }, { "formula_coordinates": [ 3, 319.56, 133.29, 175.74, 11.86 ], "formula_id": "formula_4", "formula_text": "(V = V C ∪ V T ). Edges E ⊆ V C × V T in" }, { "formula_coordinates": [ 3, 321.96, 216.71, 236.12, 82.34 ], "formula_id": "formula_5", "formula_text": "V := {1, . . . , C + N } = VC ∪ VT E := {i, j} | xi-C,j = NaN, i ∈ VT , j ∈ VC F node v := v : v ∈ VC tj : v ∈ VT , j = v -C F edge e := xi-C,j for e = {i, j} ∈ E with i ∈ VT , j ∈ VC (2)" }, { "formula_coordinates": [ 3, 319.56, 486.67, 238.52, 129.19 ], "formula_id": "formula_6", "formula_text": "V := V C ∪ V T ∪ V Q = {1, . . . , C + N + K ′ } E := {i, j} | x i-C,j = NaN, i ∈ V T , j ∈ V C ∪ {i, j} | i ∈ V Q , j ∈ V C , (t ′ i-N -C , j) ∈ Q F node v :=    v : v ∈ V C t j : v ∈ V T , j = v -C t ′ j : v ∈ V Q , j = v -C -N F edge e := (x i,j,1 , 1) : e = {i, j} ∈ E, i ∈ V T , j ∈ V C (0, 0) : e = {i, j} ∈ E, i ∈ V Q , j ∈ V C (3)" }, { "formula_coordinates": [ 3, 319.56, 622.99, 238.45, 25.56 ], "formula_id": "formula_7", "formula_text": "(t ′ i-N -C , j) ∈ Q is supposed to mean that (t ′ i-N -C , j" }, { "formula_coordinates": [ 3, 360.36, 664.76, 197.72, 11.28 ], "formula_id": "formula_8", "formula_text": "ts2graph(X, Q) := (V, E, F node , F edge ) (4)" }, { "formula_coordinates": [ 4, 68.64, 150.92, 224, 42.46 ], "formula_id": "formula_9", "formula_text": "h node,0 v := FF(onehot(F node v )) : v ∈ V C sin(FF(F node v )) : v ∈ V T ∪ V Q (5) h edge,0 e := FF(F edge e ) for e ∈ E(6)" }, { "formula_coordinates": [ 4, 60.72, 270.68, 231.92, 28.42 ], "formula_id": "formula_10", "formula_text": "ŷ := graph2ts(h node,L , h edge,L , V, E) = (h edge,L e k ) k=1:K where e k = {C + N + k ′ , c k } with t ′ k ′ = q k (7)" }, { "formula_coordinates": [ 4, 85.08, 641.13, 141.26, 10.32 ], "formula_id": "formula_11", "formula_text": "MAB(Q, K, V) := α(H + FF(H))" }, { "formula_coordinates": [ 4, 139.2, 655.05, 153.44, 10.32 ], "formula_id": "formula_12", "formula_text": "H := α(Q + MHA(Q, K, V))(9)" }, { "formula_coordinates": [ 4, 319.56, 74.08, 236.2, 82.33 ], "formula_id": "formula_13", "formula_text": "Require: h node,l , h edge,l , V, E for u ∈ V do Hu ← [h node,l v h edge,l e ] v∈N (u) //e = {u, v} h node,l+1 u ← MAB (l) (h node,l u , Hu, Hu) for e = {u, v} ∈ E do h edge,l+1 e ← α h edge,l e + FF (l) h node,l u h node,l v h edge,l e return h node,l+1 , h edge,l+1" }, { "formula_coordinates": [ 4, 385.08, 211.65, 173.12, 10.32 ], "formula_id": "formula_14", "formula_text": "N (u) := {v | {u, v} ∈ E} (10)" }, { "formula_coordinates": [ 4, 328.44, 365.93, 229.76, 13.93 ], "formula_id": "formula_15", "formula_text": "h node,l+1 u := MAB (l) h node,l u , H u , H u(11)" }, { "formula_coordinates": [ 4, 349.8, 383.25, 208.4, 14.5 ], "formula_id": "formula_16", "formula_text": "H u := [h node,l v h edge,l e ] v∈N (u) , e = {u, v}(12)" }, { "formula_coordinates": [ 5, 63, 245.4, 229.48, 20.66 ], "formula_id": "formula_17", "formula_text": "| u ∈ V } //Eq. 
5 h edge,0 ← {h edge,0 u,v" }, { "formula_coordinates": [ 5, 54, 366.45, 238.81, 34.32 ], "formula_id": "formula_18", "formula_text": "V T ∪ V Q is also O(|E|). A feed forward layer FF : R Y → R Z will have a computational complexity of O(Y Z)." } ]
10.18653/v1/W17-4755
2023-05-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b51", "b4", "b3", "b21", "b57", "b5", "b30", "b55", "b34", "b20", "b58", "b27", "b7", "b49", "b38", "b43", "b41" ], "table_ref": [], "text": "Making connections between vision and language seems easy for humans, but extremely challenging for machines, despite a large body of research on image and video captioning (You et al., 2016;Aneja et al., 2018;Anderson et al., 2018;Gao et al., 2017;Zhou et al., 2018;Wang et al., 2018), visual question answering (Antol et al., 2015;Lu et al., 2016;Zhong et al., 2020), image synthesis (Reed et al., 2016;Dong et al., 2017;Zhou et al., 2019) or video generation (Li et al., 2018;Balaji et al., 2019;Wu et al., 2022;Singer et al., 2022;Villegas et al., 2022). While major improvements were made using Transformers (Vaswani et al., 2017), there is still a long way to go. Also, these tasks were widely tackled independently of each other, with no significant push for a more unified approach.\nFor tasks involving vision or language, information is usually processed by an encoder (e.g. Transformers, CNNs or LSTMs) that builds a numerical representation. While this approach is ubiq-uitous across both vision and NLP, it is fundamentally limited by its implicit, mostly unexplainable, and highly volatile nature. We strongly believe that such a representation can be replaced (or augmented) by a better, explicit, and more robust one.\nIn this work we introduce the Graph of Events in Space and Time (GEST) for representing visual or textual stories, as groups of events related in space and time at any level of semantics. GEST provides a common and meaningful representation, through which we can compute similarities or differences between texts or videos, and we could also generate texts or videos in an explainable, analytical way." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b29", "b18", "b53", "b31", "b16", "b20", "b10", "b46", "b8", "b56", "b50", "b31", "b16", "b40", "b6", "b39", "b13", "b14", "b52", "b47", "b15", "b0", "b17", "b36", "b32", "b9", "b28", "b2", "b19", "b54", "b37", "b12", "b22" ], "table_ref": [], "text": "Graphs that model text: Graphs were traditionally used in natural language processing (NLP) in many forms: syntactic trees (e.g. dependency or constituency parsing trees) (Lin, 1998;Culotta and Sorensen, 2004), semantic trees (in the form of Combinatory Categorial Grammar) (Zettlemoyer and Collins, 2012), Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) trees, Discourse Graphs (Christensen et al., 2013), knowledge graphs (Hao et al., 2017;Bauer et al., 2018;Wang et al., 2019) and Abstract Meaning Representation (AMR) graphs (Banarescu et al., 2013). Recently, Graph Neural Networks (GNNs) (Zhou et al., 2020;Wu et al., 2020) were employed to parse and encode such structures. RST trees (Mann and Thompson, 1988) and Discourse Graphs (Christensen et al., 2013) were developed as theories of text organization using relations between claims as the central component and emphasizing relations between these claims. Then, knowledge graphs are used for encoding true fact about the world, allowing for efficient interogation for Question Answering systems. Conversely, AMR graphs are semantic and represent links between concepts from the natural text. 
Crucially, two syntactically different sentences can share the same AMR graph if they are semantically similar.\nGraphs that model videos: Graphs were also used as a way to model videos (Sridhar et al., 2010;Aoun et al., 2011;Singh and Mohan, 2017). While previous approaches (Brendel and Todorovic, 2011;Chen and Grauman, 2016;Yuan et al., 2017;Wang and Gupta, 2018;Cherian et al., 2022) consider the nodes in the graph as video regions, we opt for a fundamentally different approach, modeling events as graph nodes. Aditya et al. (2018) define Scene Description Graphs (SDGs), graph-based intermediate representation built specifically for describing images. SDGs are based on objects, actions and semantic (based on KM-Ontology (Clark et al., 2004) ), ontological and spatial relations. With GEST we explicitly add the temporal aspect as we are interested in representing videos instead of images. Furthermore, our formulation is uniform (everything is an event), leads to a more compact representation, allows for more complex (e.g. semantic, logical) relations between events, while also being capable of representing such events at different scales (see Figure 2).\nText generation metrics: Text generation metrics were studied in the field of NLP for comparing two or more texts or documents (Sai et al., 2022). Common metrics include BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE (Lin, 2004) and SPICE (Anderson et al., 2016). While BLEU and ROUGE compute the similarity as the exact n-gram overlap, METEOR uses a more relaxed matching criteria, based on stemmed matching, followed by synonymy and paraphrase matching. SPICE builds a semantic scene-graph that is used to extract information about objects, attributes and their relationships. More recently, BERT (Devlin et al., 2019) was integrated into text metrics. BERTScore (Zhang et al., 2019) uses a BERT backbone to obtain embeddings for each token in the hypothesis and reference, which are matched using a greedy approach, The state-of-the-art BLEURT (Sellam et al., 2020) is pre-trained on a large number of synthetic samples then finetuned on WMT (Bojar et al., 2017) and WebNLG (Gardent et al., 2017). Synthetic data is generated by altering Wikipedia sentences via back-translation, random words dropping or mask-filling with BERT. Most pretraining signals are employed in the form of BLEU, ROUGE and BERTScore, back-translation likelihood or textual entailment.\nAll mentioned text generation metrics employ clear rules, but they lack explainability, due to the space in which computations are formed. The ngram space of BLEU, METEOR or ROUGE is simple, but totally counter-intuitive for humans. In the case of BERTScore and BLEURT the projected space is even more blurry and void of any intuitive understanding. Instead of projecting texts into an n-gram or Transformer space, we propose a new representation space, namely the space of events in space-time. Comparing events and their relations expressed in two texts is much more natural. The fact that the GEST space is explicit and grounded in the real world, is the very reason for which we obtain explainability and interpretability." }, { "figure_ref": [ "fig_0" ], "heading": "Graph of Events in Space and Time", "publication_ref": [], "table_ref": [], "text": "Fundamentally, a GEST is a means of representing stories. We focus on modeling stories as they are the main way of expressing ideas, sentiments, facts, perceptions, real-world or fantasy happenings. 
Stories are an essential component in theater, cinema in the form of storyboards and are also an integral part in relating, communicating and teach-ing historical events. Stories are universal: a life is a story, a dream is a story, a single event is a story. Atomic events create intricate stories in the same way that small parts form an object in a picture, or how words form a sentence. Therefore, in modeling stories, we distinguish interactions in space and time as the central component. In general, changes in space and time lead to the notion of events and interactions. Similarly to how changes in an image (image gradients) might represent edges, spacetime changes (at different levels of abstraction) represent events. Accordingly, events in space and time could be detectable, repeatable and discriminative. Interactions between events in space and time change the current state of the world, can trigger or cause other events and in turn cause other changes. Therefore, we use these events and their interactions in space and time as the fundamental component of GEST. Fundamentally, an edge connects two events in space and time. This connection can be, but is not limited to temporal (e.g. after, meanwhile), logical (e.g. and, or) or spatial (e.g. on top of). Since a node in GEST can also represent physical objects (e.g, \"The house exists for this period of time\") the graph connections can represent any potential relation between two objects or two events: the event \"house\" was involved in event: \"holding a meeting at that house\". Therefore, an edge can also represent an event by itself . For each event we encode mainly the type of action, the involved entities, the location and the timeframe in which an event takes place. Crucially, in GEST both explicit (e.g. actions) and implicit (e.g. existence) events are represented using the same rules. A GEST example can be found in Fig. 2 , while more examples are in Appendix, Sec. A.\nGEST can represent events at different scales, in greater detail by expanding an event node into another graph, or in a lesser detail by abstracting a graph into a single event node. In Fig. 2 we exemplify the power of such an approach. On the left of Fig. 2 we show the GEST associated to the following story: \"John says that Daniel bought a watch\". In the right half we expand the event \"Daniel bought a watch\" to a more detailed story (GEST). All other event nodes can be expanded into their own GEST stories (e.g. the paying action can be further expanded by detailing the procedure: currency, amount, method and so on). In principle, any GEST could become an event into a higherlevel GEST and vice-versa, any event could be Figure 2: GEST that illustrates the concept of multiple viewpoints and graph-node equivalence. Note that for brevity, we omit some details in the nodes (e.g. timeframe) and also add details to emphasize some points (e.g. the same entity edges).\nexpanded into a more detailed GEST.\nGEST represents concisely what happens in the real world. So, when vision and language represent the same world, they could also be represented by the same GEST. GEST is suitable for many tasks, including video-to-text or text-to-video generation. GEST is an alternative to the standard way of solving these tasks. Instead of generating natural language descriptions directly from an obfuscated and implicit representation given by a video encoder, GEST breaks video captioning in two problems: generate GEST from video, followed by generating text from GEST. 
Conversely, generating a video starting from a text prompt can be split into building GEST from text, followed by independently creating the video (Fig. 1). In this paper we demonstrate both directions and the advantages of the approach. We also argue that the main advantage of the highly-explicit GEST representation is to give total knowledge and control over the content of the text or video. Additional details and formal definition of GEST is given in Appendix, Sec. B." }, { "figure_ref": [], "heading": "Building ground truth GEST from text:", "publication_ref": [], "table_ref": [], "text": "Ground truth GEST from text is needed for training and evaluation. We note that building GEST representation from text is not a trivial task, and we aim to automate this process. Nevertheless, to obtain correct GEST from text human intervention is still needed. From each sentence, we want to extract information such as the type of actions, the entities involved, locations and the times of actions, as well as their relations. All this is extracted by parsing the dependency tree (automatically extracted1 ) of each individual sentence using a set of handcrafted rules (followed if needed by human correction). Context (e.g. location inference) and event ordering is also injected into the graph to obtain the complete GEST of a story." }, { "figure_ref": [], "heading": "bAbI corpus", "publication_ref": [ "b48", "b26" ], "table_ref": [], "text": "The bAbI corpus (Weston et al., 2015) introduces a set of 20 question answering tasks that are designed to act as proxy tasks for reading comprehension. As the grammar of bAbI is rather simple, we devised a set of handcrafted rules to automatically parse the dependency tree of each sentence in order to extract the relevant information. For bAbI, the text-tograph automatic module works flawlessly, always detecting and extracting the correct information from each sentence. In this work we focus on bAbI tasks numbered 1, 2,3,5,6,7,8,9,10,11,12,13,14. We leave the other tasks for future work, as they are devised with other goals in mind (e.g. tasks numbered 16 and 18 are devised for basic induction or size reasoning) This leads to a total of 26.364 graphs, with 21.588 train, 2.388 validation and 2.388 test graphs." }, { "figure_ref": [], "heading": "Videos-to-Paragraphs dataset", "publication_ref": [ "b11", "b11" ], "table_ref": [], "text": "The Videos-to-Paragraphs dataset (Bogolin et al., 2020) contains videos with two stages of text representations. The 1st stage contains contains simple sentences that describe simple actions, while the 2nd stage contains semantically rich descriptions. This duality is especially suited for GEST as the 1st stage is simple enough that we can immediately extract events as simple actions. The 2nd stage is semantically richer and compacter. This represents a crucial step-up from the bAbI corpus where only the simpler linguistic stage is present. Following (Bogolin et al., 2020) from the 1st stage as SVOs (Subject, Verb, Object) and to the 2nd stage texts as stories. In Videosto-Paragraphs we identify thre types of temporal relations between events (SVOs): \"next\", \"same time\" and \"meanwhile\", using soft margins to extract them. Using both 1st and 2nd stage texts annotations, we build (with minimal manual intervention) ground truth GEST representations for the entire dataset, a total of 1048 samples (with a 85-5-10 training, dev, test split) consisting of GESTs and the two stages of text descriptions." 
}, { "figure_ref": [], "heading": "GEST as a metric for comparing stories", "publication_ref": [ "b25", "b45" ], "table_ref": [], "text": "We first want to study and evaluate the power of GEST to capture content from stories in natural language. Ideally, different texts that illustrate the same underlying story should have the same GEST.\nWe evaluate this property by first defining a similarity metric between two GESTs and compare its performance (in separating texts that represent the same story vs. different stories) to other metrics from the literature that work directly on the original text in natural language. (SM) (Leordeanu and Hebert, 2005) and a modern deep learning based approach, Neural Graph Matching (NGM) (Wang et al., 2021). SM is a fast, robust and accurate method that uses the principal eigenvector of an affinity matrix 2 , while NGM employs multiple neural networks that learn and transform the affinity matrix into the association graph, which is further embedded and used as input for a vertex classifier." }, { "figure_ref": [], "heading": "Graph matching similarity metric", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "Results in Tab. 1 attest the power of our proposed representation: graph matching in the GEST space outperforms all classic text generation metrics (i.e. BLEU@4, METEOR and ROUGE) and even modern metrics based on pre-trained Transformers such as BERTScore. Nevertheless, the specifically and heavily trained BLEURT metric outperforms all considered metrics on this dataset. Note that the other metrics all lack access to the sheer amount of data that BLEURT metric was trained on (around 1.8 million samples). We reckon that given such data, a trained GRAPH-BLEURT metric could outperform the original BLEURT. The initial test show the representational power of GEST, but they do not test yet the capability of this representation to be combined with a heavily trained one. That would be another, complementary way, to prove the effectiveness of GEST. We test this capability by showing that GEST can boost a state-of-the-art, strongly trained metric, even when we combine the two in the simplest, linear way. Starting from the original text of the story, we learn 2 more details on building the matrix in Appendix, Sec. D.2 to transform the story automatically into GEST, and then obtain a GEST similarity score between stories by comparing, using graph matching, the corresponding generated GESTs. A second, BLEURT score between the stories is obtained as before. We then learn, on the training set, how to linearly combine the two scores, to best separate the texts of the same story vs. texts of different stories. We apply the same procedure to all classic metrics, in order to evaluate the benefit brought by GEST relative to other methods. We learn to transform a graph from a story in natural text, by using a sequenceto-sequence framework, with the story as input and the serialized graph as output. For further details on the training process see Appendix, Sec. D.2.\nIn Tab. 2 we show the results of BLEURT (top), those of other metrics combined with BLEURT using the same linear regression approach (middle) and the results of GEST (bottom), using the two graph matching methods (SM and NGM). It is important to note that in combination with other metrics BLEURT does not always improve, but when combined with GEST it always improves and by the largest margin. 
In the Appendix Sec. G, we show cases when BLEURT fails to predict when two different textual descriptions stem from the same video. In the first case this is due to the different writing style of the two annotators, while in the second case BLEURT assigns a high similarity score in spite of the fact that different actors perform somewhat similar actions. In both cases, the graph matching algorithm manages to correctly predict if the two pairs depict the same video. These tests prove the power of GEST: its new space and associated graph matching metric can be effectively used, with minimal training cost, to boost the performance of existing state-of-the-art." }, { "figure_ref": [], "heading": "GEST for text generation", "publication_ref": [], "table_ref": [], "text": "GEST describes the world in terms of events and how they relate in space and time and could provide a common ground between the real space and time and \"what we say\" about it in natural language. Atomic events in a linguistic story (e.g. SVOs) are also well formed events in real space and time, thus they provide a direct link between both worlds. Then relations between events define the spacetime structure at semantic level, inevitably becoming a central component in natural language generation. In the following set of experiments we want to better understand and evaluate the importance Method B@1↑ B@2↑ B@3↑ B@4↑ M↑ Table 3: Results for the task of the text generation on the test set of Videos-to-Paragraphs dataset, presented using common text generation metrics: BLEU@N (B@N), METEOR(M), ROUGE(R), CIDEr(C), BERTScore(BS) and BLEURT(BT). In the S2T (SVOs to text) experiments we trained models that take as input the SVO sequence, while in the G2T (Graph to text) experiments we give the serialized graph as input. 2 marks experiments in which we use an additional training stage with data from bAbI corpus. We highlight with bold the best value for each metric. For brevity all values are scaled by a factor of 100.\nof these relations, which are an essential component of GEST. We will evaluate the importantce of these connections between events, by comparing language that is generated from events only (task S2T -SVOs-to-Text) to language that is generated from events and their relations, that is full GESTs (G2T -GEST-to-Text) -for both using the sequence-to-sequence net.\nWe perform the tests on the Video-to-Paragraphs dataset, where the relations between events are mainly temporal in nature. Thus, to better highlight the differences between the textual SVOs and GEST representations we decide to break the implicit temporal relations given by SVOs ordering, by randomizing (with the same seed) both representations. In the case of SVOs, the order is randomized while for the graphs the order of the edges in the representation is randomized (based on the SVOs permutation). In this setup we can clearly evaluate the impact of the temporal information encoded in the graph structure." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "Results in Tab. 3 validate that GEST is suited for text generation and provides a better representation than plain textual descriptions of the atomic events. Conceptually, the graph representation should always be better as it explicitly encodes complex temporal relations that are not present in SVOs. 
Nevertheless this does not directly guarantee a better off the shelf performance for text generation as the available training data in our tests is very limited. Our tests show that these limitation is overcome by the power of the representation. In the first two rows of Tab. 3, both SVOs to text (S2T) and graph to text (G2T) models are trained starting from a general pre-trained encoder-decoder model with no previous knowledge of our proposed representation. Even in this very limited setup (under 900 training samples) the graph representation proves to be superior. Adding more pretraining data, using the bAbI corpus only extends the performance gap between the two approaches (last section of Tab. 3). In the case of bABi we only have access to a single textual representation for each graph, which is akin to the SVOs in the Videos-to-Paragraphs dataset. For this reason, the S2T task on bAbI can be simply solved by using the identity function, while the G2T task can be solved by describing each node. However they provide valuable aditional pretraining data, especially for G2T as it helps the model to better understand and order events in time. The ability to understand and order events in time enables a better transition from simple sentences to longer, more complex natural language" }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper we introduce GEST, which could set the groundwork for a novel and universal representation for events and stories. We discuss and motivate its necessity and versatility, while also empirically validating its practical value, in comparing and generating stories. Even with very limited data, our experiments show that GEST is more than fitted for recreating the underlying story, within a space that allows for very reliable and human correlated comparisons. This explicit and structured nature of the GEST space lends itself beautifully to various other uses (e.g. video generation).\nGEST aims to bring together vision and language, as a central component in an explainable space. Such explicit models are largely missing in the literature, but as we believe that our work demonstrates, they could be useful to better understand language and also control its relation to the visual world.\nMaybe the most important limitation of our work is, for the moment, data availability. This lack of quality data affects both learning tasks (i.e. graphto-text and text-to-graph), as access to more graphtext pairs will greatly improve performance. We found that this is especially relevant for the textto-graph task, as we conjecture that is represents the harder task. Because we use models that are pre-trained with natural language as input and output, the graph representation has to be learned and understood by the encoder (for the graph-to-text task) and the decoder (for the text-to-graph task). Especially with a limited number of samples, we believe understanding the new graph representation is easier than generating it. Moreover, for the textto-graph task we ask the decoder to generate a very structured output, defined by a precise grammar.\nOur experiments highlighted the power of GEST when applied on real-world events with temporal relations between events. Crucially, this represents only a small subset of what GEST can model. Due to lack of data, we are unable in this work to show the full potential of GEST, namely to represent more abstract events. 
For example, a revolution is still an event, but at the same time a complex story comprised of multiple events.\nFigure 3: Automatically generated GEST associated to the following bAbI story: \"John is in the playground. Bob is in the office. John picked up the football. Bob went to the kitchen.\". Notice that the third sentence does not contain any explicit location. Nevertheless, this location is inferred from the context of the story and is added in the corresponding node. Also note that for brevity, we omit some details in the nodes (e.g. timeframe and location for some nodes) and also add details to emphasize some points (e.g. the \"same entity\" edges).\nA More Examples of GEST Figure 3 presents an example of automatically generated GEST based on a story from the bAbI corpus. Furthermore, this example is used to showcase the spatio-temporal inference (in this particular case only spatial inference) aspect present in converting textual stories into GEST: the node corresponding to the sentence \"John picked up the football\" contains no explicit spatial information (as the sentence contains no such information), but the location is inferred from the previous action of entity John." }, { "figure_ref": [], "heading": "B Definition of GEST", "publication_ref": [], "table_ref": [], "text": "Formally, a Graph of Events in Space and Time (GEST) is a graph defined by the following components:\nV = set of event nodes E = set of edges, E ⊆ V X V where V i = (action, entities, location, timef rame, properties)\nE i = temporal (we use Allen (1981) time interval algebra with minor modifications for ease of use), spatial (e.g. on top, behind, left of), logical (e.g. and, or, cause/effect, double implication) or semantic relations.\nFinally, each element of V i is defined as follows: action = the main action; string entities = list of entities that are involved in the action; [string] location = list of locations in which the action takes place; [string] timef rame = list of timeframes in which the action takes place; [string] properties = additional properties; dict <property:value> All elements of V i , with the exception of the properties field, can also refer to other nodes. In particular, in the case of entities we use references to the \"exists\" node for each actor or object involved in an action (this is emphasized by the added dotted lines in the graphical representation). For complex cases, such as the one presented in Figure 2, references to other actions can be used in entitites to model complex interactions (including multiple viewpoints).\nFurthermore, our GEST framework is a step-up from the classic Subject-Verb-Object (SVO) approach. In our case, the Subject becomes an event (even if we are talking about events of type \"exists\", they are still events) and also the Object becomes an event. An event is composed by objects, and any event requires interaction between objects and the world. As in our formulation objects are events, any interaction (and so any edge) becomes in itself an event. This allows a hierarchical and recursive representation in GEST. Classic models represent object to object interactions, that GEST can easily represent as well. Moreover, we can go to the next level, modeling hyper-events, collapsing such interactions to a single node, generating an infinite recursive process in which nodes expand and collapse into events." 
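To make the definition above concrete, one possible in-memory encoding is sketched below. The field names mirror the definition of V i and E i; the class names, the integer node references and the toy example (based on the bAbI story of Figure 3) are our own illustrative choices, not the implementation used in the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Union

NodeRef = int   # illustrative: nodes reference each other by integer id

@dataclass
class EventNode:
    action: str                                                        # the main action (e.g. "exists", "pick up")
    entities: List[Union[str, NodeRef]] = field(default_factory=list)  # involved entities, possibly node references
    location: List[Union[str, NodeRef]] = field(default_factory=list)
    timeframe: List[str] = field(default_factory=list)
    properties: Dict[str, str] = field(default_factory=dict)           # additional <property: value> pairs

@dataclass
class GEST:
    nodes: Dict[NodeRef, EventNode] = field(default_factory=dict)
    # (source, target, relation); relations may be temporal ("next", "meanwhile"),
    # spatial ("on top of"), logical ("cause") or semantic, and are themselves events.
    edges: List[Tuple[NodeRef, NodeRef, str]] = field(default_factory=list)

# Toy encoding of "John is in the playground. John picked up the football." (cf. Figure 3):
g = GEST()
g.nodes[0] = EventNode(action="exists", entities=["John"])
g.nodes[1] = EventNode(action="be in", entities=[0], location=["playground"])
g.nodes[2] = EventNode(action="pick up", entities=[0, "football"], location=["playground"])
g.edges += [(0, 1, "same entity"), (0, 2, "same entity"), (1, 2, "next")]
```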
}, { "figure_ref": [], "heading": "C Building ground truth GEST from text", "publication_ref": [], "table_ref": [], "text": "In the case of bAbI dataset, inferring the location or timeframe in which an action takes places (for actions that do not explicitly provide this information) is simply done by memorizing the last place or time in which a given entity (mainly actor) was found. This simple process works for this particular dataset as all changes in time or space are explicitly present in text: each movement of any entity in space is always marked with a sentence (e.g. John travelled to the kitchen, Mary went to the office), while if explicit timeframes are mentioned in a sentence, they are mentioned in all sentences of a given story (this happens for task number 14, the only task in which timeframes are mentioned). For task number 14 we apply an additional sorting step before parsing, to ensure that sentences are in chronological order (i.e. we consider the following chronological order \"yesterday\", \"this morning\", \"this afternoon\", \"this evening\" and break ties using the original order in the story).\nWhile for the bAbI corpus all entities (e.g. actors, objects) are unique for each story (e.g. a single actor with the name John in each story), in the case of Videos-to-Pargraphs this is not always the case. In this case have to manually intervene and set the proper references (build and link the proper number of nodes), as different entities are referred with the same name in the SVOs (e.g. \"man\", \"desk\"). To find and accurately annotate these cases we manually go through each pair of SVOs and story and semantically check their validity. To ease this process we define a set of personal objects (e.g. phone, cup, backpack), entities that are intimately linked with their owner. Unless other specified, all personal objects are unique (in the sense that each owner has its own unique personal object) and, for example, two phones linked with different actors (e.g. by the action of \"speaking at the phone\") will be represented using two different nodes. We give special attention to cases in objects (personal or not) are passed around different actors. For example, for the following set of SVOs \"John picks up his backpack. John gives the backpack to Mary\" we will define a single node representing the backpack and internally keep track of its owner. So if a future SVO (in the same story) tells that \"Mary handed the backpack to Michael\" we will not define a new node and use the reference to the previously defined backpack (as the original backpack moved from John to Mary to Michael). We are now left with cases in which the same word, or set of words, refer different entities. In most cases the telling sign is the presences in the story (not in the SVOs!) of the word \"another\". For example if the SVOs are \"A man talks at the phone. A man enters the room.\" and the story is \"A man talks at the phone. After that, another man enters the room\", we will build and use two different nodes (one for each \"man') to properly represent the story. For these cases we manually annotate if and where the same word refers to an already existing entity or we have to build a new one. Nevertheless this happens for around 5% of the entries (i.e. 49 out of 1048)." 
}, { "figure_ref": [], "heading": "D Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1 Text Generation Metrics", "publication_ref": [ "b32", "b28", "b42", "b2", "b54", "b37" ], "table_ref": [], "text": "For fair comparisons across the paper we employ a board set of text generation metrics: BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE (Lin, 2004) 3 , CIDEr (Vedantam et al., 2015), SPICE (Anderson et al., 2016), BERTScore (Zhang et al., 2019) 4 and BLEURT (Sellam et al., 2020) 5 . Computing the metrics was done using coco-captions6 for BLEU, METEOR, ROUGE, CIDEr and SPICE, and official code released by the authors for BERTScore7 and BLEURT 8 ." }, { "figure_ref": [], "heading": "D.2 GEST as a metric", "publication_ref": [ "b33" ], "table_ref": [], "text": "For this set of experiments we use the Videos-to-Paragraphs dataset, as it contains multiple annotations (at all levels, graph, SVOs and story) for each video. We consider triplets that stem from the same video as positive examples (they provide different representation for the same underlying story presented in video format), while any other pairs of triplets as negative examples. For our experiments we consider all positive pairs from the test set (a total of 67 in our case) and 174 negative pairs sampled randomly from the same test set.\nFor both SM and NGM algorithms, the affinity matrix is built using both node and edge level similarity functions that exploit pre-trained word embeddings. We use pre-trained GloVe (Pennington et al., 2014) embeddings of size 300, to measure the similarity at each level (e.g. action, entities) for nodes. In order to compare two edges, we integrate node-level similarity (from the nodes that are connected to the particular edges) with the edge-level similarity (i.e. the similarity between the edges type). Essentially, two nodes are as similar as are their actions and entities, while the similarity of two edges is given by multiplying the edge type experiments mark models that take as input the SVO sequence (in random order), while for G2T we give the serialized graph as input (in the same random order). Note that, while both trained models output the same number of activities (namely five) the G2T model is capable of better reconstructing the original order, although not perfectly.\n(e.g. next, meanwhile) similarity with the similarity of the corresponding nodes.\nFor a fair comparison, the graph similarity metrics is normalized as follows:\nf (G 1 , G 2 ) = f (G 1 , G 2 ) f (G 1 , G 1 ) * f (G 2 , G 2 )\nAs we are interested in how similar two human annotated stories/graphs are, we need to ensure that all used metrics are symmetric. Out of the considered metrics, only BERTScore and the graph similarity are symmetric by their very own construction. For all other metrics, we force symmetry by computing the metric twice for each pair (s 1 , s 2 ) (i.e. metric(s 1 , s 2 ) and metric(s 2 , s 1 )) and average the results.\nFor both datasets (i.e. bAbI and Videos-to-Paragraphs) we use the official train, dev and test splits and follow the standard training procedure (i.e. training exclusively on the train set until the performance on the dev split stabilizes and reporting the results on the test set).\nFor learning to building graphs from raw text, we finetune a GPT3 model (text-curie-001 9 ) on Videos-To-Paragraphs dataset. 
The model takes the textual description (story) as input, and the output is the (serialized) graph. Finally, we apply a syntactic correction step to ensure that the generated strings are well formed and therefore can be converted into a GEST. 9 https://platform.openai.com/docs/models/gpt-3 last accessed on 8th of May 2023" }, { "figure_ref": [], "heading": "D.3 GEST for text generation", "publication_ref": [ "b26" ], "table_ref": [], "text": "For text generation we invert the inputs and outputs from the graph-learning case: the input now is represented by the (serialized) graph, while the output is the ground truth story. As for the model, we start from a pre-trained BART Base (Lewis et al., 2019) (140M parameters), minimizing the cross-entropy loss using the Adam optimizer (Kingma and Ba, 2014), for a maximum of 100 epochs. We set the base learning rate at 1e-5 and use a warm-up phase that lasts for 10% of the entire training process. Training is done on six Nvidia Quadro RTX 5000 graphics cards, using an effective batch size of 24 (4 x 6). In this case, for the Videos-to-Paragraphs dataset an epoch took around 45 seconds, while 11 minutes per epoch were needed for bAbI." }, { "figure_ref": [ "fig_1" ], "heading": "E GEST for text generation", "publication_ref": [], "table_ref": [], "text": "In Figure 4 we present a qualitative example of story generation. This example highlights several important points. First, as previously mentioned, the story part is not just a description of the atomic actions. The ground truth label for this example contains only one explicit atomic action present in the SVOs, the writing in a notebook. The other part of the action, namely leaving the room, is not explicitly present in the SVOs. Nevertheless, preliminary actions such as packing, standing up and walking are present, and with enough examples a model can (and does!) learn that such actions usually entail leaving. However, in this particular case, the models are not able to summarize the actions into a single action (i.e. leaving). The S2T model has some problems reconstructing the original order of actions and also misjudges some entities, inverting the bag and the notebook in the picking up and putting away actions. On the other hand, the G2T model generates a more coherent order of actions that better matches the original order. While better, the implicit order generated by the G2T model is still not perfect: it \"misses\" an action (the second action in the natural order and next to last in the randomized one). Failing to describe an action does not necessarily represent an error, as it can be part of the summarization process." }, { "figure_ref": [ "fig_4" ], "heading": "F GEST serialization", "publication_ref": [ "b35" ], "table_ref": [], "text": "In order to be used either as input or output in a training pipeline, the graph representation needs to be transformed into the format used by the employed encoder-decoder models. For our case, this is natural language text. Besides the need to be as close as possible to natural language, the serialized graph should contain all the information from the original graph and ensure that the original is recoverable from the serialized version. Following (Ribeiro et al., 2020) and after extensive experimentation with different serialization methods, we settle on two methodologies, one for each task (i.e. graph-to-text and text-to-graph).
For the text-to-graph task we opt for a process that generates strings that are easier to fix (so they are sound) and simpler to generate. In the case of graph-to-text generation, we prefer a richer representation that reduces the number of references, thus easing the learning process (searching for a particular reference in text, while easy to solve programmatically, is a hard task for Transformer-based encoder-decoder models). Both methodologies (V1 is used for text-to-graph, while V2 is used for graph-to-text) are depicted on an illustrative example in Figure 7." }, { "figure_ref": [ "fig_2", "fig_3", "fig_2", "fig_2" ], "heading": "G GEST Graph Matching vs BLEURT", "publication_ref": [], "table_ref": [], "text": "In Figures 5 and 6 we present two examples of pairs from the Videos-to-Paragraphs dataset. In Figure 5, both entries (graph + text) stem from the same video, information that is correctly captured by the graph matching approach. The BLEURT metric fails to capture this information due, most probably, to the different writing styles of the two texts. While the first one is descriptive, the second one is richer, more complex (e.g. \"same empty hall\") and significantly shorter. For the example in Figure 6, BLEURT incorrectly marks the texts as stemming from the same video. This highlights a limitation of the metric, as it fails to understand that while the actions might look similar, they are still very different, especially when considering that they are performed by different actors. Our approach is not focused exclusively on actions, as it also takes into account the entities involved in a certain action. The action of writing with a pen in a notebook, while semantically similar, is still different from writing on a blackboard. Breaking up the action and the entities involved allows for a finer semantic level of detail, as we will be comparing actions with actions and entities with entities.
Note that we take a very simple approach to building the affinity matrix, which, coupled with graph matching algorithms, does not always yield optimal results. Furthermore, the metric (affinity matrix and graph matching algorithm) is not trained or optimized for this task or dataset. Even with this very basic approach and limited data, we obtain state-of-the-art results for text similarity, proving the power of GEST. We leave optimizing (e.g. by error analysis) the affinity matrix and learning the metric for further research." } ]
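To illustrate how the normalized, symmetrized GEST similarity from Appendix D.2 can be assembled around an off-the-shelf matching algorithm, here is a rough sketch; `graph_similarity` stands in for the SM or NGM matching score built on GloVe-based affinities, and all function and field names are ours rather than the authors'.

```python
import numpy as np

def node_similarity(n1, n2, glove):
    """Cosine similarity between averaged GloVe vectors of the action and entity words (illustrative)."""
    def embed(node):
        words = node["action"].split() + [w for e in node["entities"] for w in str(e).split()]
        vecs = [glove[w] for w in words if w in glove]
        return np.mean(vecs, axis=0) if vecs else np.zeros(300)
    v1, v2 = embed(n1), embed(n2)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(v1 @ v2 / denom) if denom > 0 else 0.0

def normalized_similarity(g1, g2, graph_similarity):
    """Normalize a raw matching score f(G1, G2) by the self-similarities, as in Appendix D.2."""
    return graph_similarity(g1, g2) / (graph_similarity(g1, g1) * graph_similarity(g2, g2))

def symmetric_metric(s1, s2, metric):
    """Force symmetry for non-symmetric text metrics by averaging both directions."""
    return 0.5 * (metric(s1, s2) + metric(s2, s1))
```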
One of the essential human skills is the ability to seamlessly build an inner representation of the world. By exploiting this representation, humans are capable of easily finding consensus between visual, auditory and linguistic perspectives. In this work, we set out to understand and emulate this ability through an explicit representation for both vision and language: Graphs of Events in Space and Time (GEST). GEST allows us to measure the similarity between texts and videos in a semantic and fully explainable way, through graph matching. It also allows us to generate text and videos from a common representation that provides well-understood content. In this work we show that the graph matching similarity metrics based on GEST outperform classical text generation metrics and can also boost the performance of state-of-the-art, heavily trained metrics.
GEST: the Graph of Events in Space and Time as a Common Representation between Vision and Language
[ { "figure_caption": "Figure 1 :1Figure 1: Functional overview of the proposed framework, centered around GEST. GEST represent the central component, allowing for seamless transitions between different forms. For example the transition from text to video is done via steps A and C, while the transformation from video to text can be done via steps D and B. In this work we focus on modules A and B .", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Qualitive example for the story generation task on the test set of Videos-to-Paragraphs dataset. Both natural (as found in the dataset) and randomized SVOs are provided in the left and in the right column. S2Texperiments mark models that take as input the SVO sequence (in random order), while for G2T we give the serialized graph as input (in the same random order). Note that, while both trained models output the same number of activities (namely five) the G2T model is capable of better reconstructing the original order, although not perfectly.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Pair of GEST and text that stem from the same video. Dotted lines mark matched nodes. GEST graph matching score: 0.3493, prediction: 1. BLEURT similarity: 0.3755, prediction: 0.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Pair of GEST and text that stem from the different videos. Dotted lines mark matched nodes. GEST graph matching score: 0.0822, prediction: 0. BLEURT similarity: 0.5625, prediction: 1.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Methodologies for transforming a GEST into a string.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ", we will refer to sentences Results comparing GEST representation power with common text generation metrics applied on stories from Videos-to-Paragraphs test set. Both text generation metrics and graph similarity function are applied on the ground truth (stories and graphs). We show in bold the best value for each metric, and with underline 2nd best. BS stands for BERTScore, G for GEST, corr for correlation, Acc for Accuracy, F for Fisher score and AUC for the area under the precisionrecall curve. For brevity all (except F) are scaled by 100.", "figure_data": "MethodCorr AccFAUCBLEU@4 24.45 75.52 0.2816 52.65METEOR 58.48 84.23 1.1209 73.90ROUGE51.11 83.40 0.7164 68.92SPICE59.42 84.65 1.0374 74.43BS57.39 85.89 1.0660 77.93G SM61.70 84.65 1.2009 75.47G NGM60.93 86.31 0.9770 76.75BLEURT 70.93 90.04 2.0280 88.02", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results comparing the power of BLEURT coupled with common text generation metrics and GEST, applied on stories from Videos-to-Paragraphs test set. Text generation metrics are computed on the ground truth stories, while the GEST similarity (G) with graph matching is computed on GEST learned from stories. Notations are the same as in Tab. 
1.", "figure_data": "MethodCorr AccFAUCBLEURT70.93 90.04 2.0280 88.02+BLEU@4 70.93 90.04 2.0274 88.04+METEOR 71.20 89.63 2.0659 87.62+ROUGE70.76 90.04 1.9973 87.71+SPICE71.94 88.80 2.0808 87.71+BS71.11 89.63 2.0089 87.25+G SM72.89 90.87 2.2086 89.80+G NGM71.91 90.46 2.0537 88.58", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Mihai Masala; Nicolae Cudlenco; Traian Rebedea; Marius Leordeanu
[ { "authors": "Somak Aditya; Yezhou Yang; Chitta Baral; Yiannis Aloimonos; Cornelia Fermüller", "journal": "Computer Vision and Image Understanding", "ref_id": "b0", "title": "Image understanding using vision and reasoning through scene description graph", "year": "2018" }, { "authors": " James F Allen", "journal": "", "ref_id": "b1", "title": "An interval-based representation of temporal knowledge", "year": "1981" }, { "authors": "Peter Anderson; Basura Fernando; Mark Johnson; Stephen Gould", "journal": "Springer", "ref_id": "b2", "title": "Spice: Semantic propositional image caption evaluation", "year": "2016-10-11" }, { "authors": "Peter Anderson; Xiaodong He; Chris Buehler; Damien Teney; Mark Johnson; Stephen Gould; Lei Zhang", "journal": "", "ref_id": "b3", "title": "Bottom-up and top-down attention for image captioning and visual question answering", "year": "2018" }, { "authors": "Jyoti Aneja; Aditya Deshpande; Alexander G Schwing", "journal": "", "ref_id": "b4", "title": "Convolutional image captioning", "year": "2018" }, { "authors": "Stanislaw Antol; Aishwarya Agrawal; Jiasen Lu; Margaret Mitchell; Dhruv Batra; C Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b5", "title": "Vqa: Visual question answering", "year": "2015" }, { "authors": "Najib Ben Aoun; Haytham Elghazel; Chokri Ben Amar", "journal": "IEEE", "ref_id": "b6", "title": "Graph modeling based video event detection", "year": "2011" }, { "authors": "Yogesh Balaji; Martin Renqiang Min; Bing Bai; Rama Chellappa; Hans Peter Graf", "journal": "", "ref_id": "b7", "title": "Conditional gan with discriminative filter generation for text-tovideo synthesis", "year": "2019" }, { "authors": "Laura Banarescu; Claire Bonial; Shu Cai; Madalina Georgescu; Kira Griffitt; Ulf Hermjakob; Kevin Knight; Philipp Koehn; Martha Palmer; Nathan Schneider", "journal": "", "ref_id": "b8", "title": "Abstract meaning representation for sembanking", "year": "2013" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Lisa Bauer; Yicheng Wang; Mohit Bansal", "journal": "", "ref_id": "b10", "title": "Commonsense for generative multi-hop question answering tasks", "year": "2018" }, { "authors": "Simion-Vlad Bogolin; Ioana Croitoru; Marius Leordeanu", "journal": "", "ref_id": "b11", "title": "A hierarchical approach to visionbased language generation: from simple sentences to complex natural language", "year": "2020" }, { "authors": "Ondřej Bojar; Yvette Graham; Amir Kamran", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Results of the WMT17 metrics shared task", "year": "2017" }, { "authors": "William Brendel; Sinisa Todorovic", "journal": "IEEE", "ref_id": "b13", "title": "Learning spatiotemporal graphs of human activities", "year": "2011" }, { "authors": "Chao Yeh; Chen ; Kristen Grauman", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b14", "title": "Efficient activity detection in untrimmed video with maxsubgraph search", "year": "2016" }, { "authors": "Anoop Cherian; Chiori Hori; Tim K Marks; Jonathan Le Roux", "journal": "", "ref_id": "b15", "title": "2.5+ 1) d spatio-temporal scene graphs for video question answering", "year": "2022" }, { "authors": "Janara Christensen; Stephen Soderland; Oren Etzioni", "journal": "", "ref_id": "b16", "title": "Towards 
coherent multi-document summarization", "year": "2013" }, { "authors": "Peter Clark; Bruce Porter; Boeing Phantom Works", "journal": "", "ref_id": "b17", "title": "Km-the knowledge machine 2.0: Users manual", "year": "2004" }, { "authors": "Aron Culotta; Jeffrey Sorensen", "journal": "", "ref_id": "b18", "title": "Dependency tree kernels for relation extraction", "year": "2004" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Hao Dong; Simiao Yu; Chao Wu; Yike Guo", "journal": "", "ref_id": "b20", "title": "Semantic image synthesis via adversarial learning", "year": "2017" }, { "authors": "Lianli Gao; Zhao Guo; Hanwang Zhang; Xing Xu; Heng Tao Shen", "journal": "IEEE Transactions on Multimedia", "ref_id": "b21", "title": "Video captioning with attention-based lstm and semantic consistency", "year": "2017" }, { "authors": "Claire Gardent; Anastasia Shimorina; Shashi Narayan; Laura Perez-Beltrachini", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "The WebNLG challenge: Generating text from RDF data", "year": "2017" }, { "authors": "Yanchao Hao; Yuanzhe Zhang; Kang Liu; Shizhu He; Zhanyi Liu; Hua Wu; Jun Zhao", "journal": "", "ref_id": "b23", "title": "An endto-end model for question answering over knowledge base with cross-attention combining global knowledge", "year": "2017" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b24", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Marius Leordeanu; Martial Hebert", "journal": "", "ref_id": "b25", "title": "A spectral technique for correspondence problems using pairwise constraints", "year": "2005" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b26", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2019" }, { "authors": "Yitong Li; Martin Min; Dinghan Shen; David Carlson; Lawrence Carin", "journal": "", "ref_id": "b27", "title": "Video generation from text", "year": "2018" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Dekang Lin", "journal": "Natural Language Engineering", "ref_id": "b29", "title": "A dependency-based method for evaluating broad-coverage parsers", "year": "1998" }, { "authors": "Jiasen Lu; Jianwei Yang; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b30", "title": "Hierarchical question-image co-attention for visual question answering", "year": "2016" }, { "authors": "C William; Sandra A Mann; Thompson", "journal": "Text-interdisciplinary Journal for the Study of Discourse", "ref_id": "b31", "title": "Rhetorical structure theory: Toward a functional theory of text organization", "year": "1988" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b33", 
"title": "Glove: Global vectors for word representation", "year": "2014" }, { "authors": "Scott Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran; Bernt Schiele; Honglak Lee", "journal": "PMLR", "ref_id": "b34", "title": "Generative adversarial text to image synthesis", "year": "2016" }, { "authors": "Martin Leonardo Fr Ribeiro; Hinrich Schmitt; Iryna Schütze; Gurevych", "journal": "", "ref_id": "b35", "title": "Investigating pretrained language models for graph-to-text generation", "year": "2020" }, { "authors": "Akash Ananya B Sai; Mitesh M Kumar Mohankumar; Khapra", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b36", "title": "A survey of evaluation metrics used for nlg systems", "year": "2022" }, { "authors": "Thibault Sellam; Dipanjan Das; Ankur Parikh", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "BLEURT: Learning robust metrics for text generation", "year": "2020" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni", "journal": "", "ref_id": "b38", "title": "Make-avideo: Text-to-video generation without text-video data", "year": "2022" }, { "authors": "Dinesh Singh; C Krishna; Mohan ", "journal": "Pattern Recognition", "ref_id": "b39", "title": "Graph formulation of video activities for abnormal activity recognition", "year": "2017" }, { "authors": "Muralikrishna Sridhar; Anthony G Cohn; David C Hogg", "journal": "STAIRS", "ref_id": "b40", "title": "Relational graph mining for learning events from video", "year": "2010" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b41", "title": "Attention is all you need", "year": "2017" }, { "authors": "Ramakrishna Vedantam; Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b42", "title": "Cider: Consensus-based image description evaluation", "year": "2015" }, { "authors": "Ruben Villegas; Mohammad Babaeizadeh; Pieter-Jan Kindermans; Hernan Moraldo; Han Zhang; Mohammad Taghi Saffar; Santiago Castro; Julius Kunze; Dumitru Erhan", "journal": "", "ref_id": "b43", "title": "Phenaki: Variable length video generation from open domain textual description", "year": "2022" }, { "authors": "Bairui Wang; Lin Ma; Wei Zhang; Wei Liu", "journal": "", "ref_id": "b44", "title": "Reconstruction network for video captioning", "year": "2018" }, { "authors": "Runzhong Wang; Junchi Yan; Xiaokang Yang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b45", "title": "Neural graph matching network: Learning lawler's quadratic assignment problem with extension to hypergraph and multiple-graph matching", "year": "2021" }, { "authors": "Xiang Wang; Dingxian Wang; Canran Xu; Xiangnan He; Yixin Cao; Tat-Seng Chua", "journal": "", "ref_id": "b46", "title": "Explainable reasoning over knowledge graphs for recommendation", "year": "2019" }, { "authors": "Xiaolong Wang; Abhinav Gupta", "journal": "", "ref_id": "b47", "title": "Videos as space-time region graphs", "year": "2018" }, { "authors": "Jason Weston; Antoine Bordes; Sumit Chopra; Alexander M Rush; Bart Van Merriënboer; Armand Joulin; Tomas Mikolov", "journal": "", "ref_id": "b48", "title": "Towards ai-complete question answering: A set of prerequisite toy tasks", "year": "2015" }, { "authors": "Chenfei Wu; Jian Liang; Lei Ji; Fan Yang; Yuejian Fang; Daxin Jiang; Nan Duan", "journal": "Springer", "ref_id": "b49", "title": 
"Nüwa: Visual synthesis pre-training for neural visual world creation", "year": "2022-10-23" }, { "authors": "Zonghan Wu; Shirui Pan; Fengwen Chen; Guodong Long; Chengqi Zhang; S Yu; Philip ", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b50", "title": "A comprehensive survey on graph neural networks", "year": "2020" }, { "authors": "Quanzeng You; Hailin Jin; Zhaowen Wang; Chen Fang; Jiebo Luo", "journal": "", "ref_id": "b51", "title": "Image captioning with semantic attention", "year": "2016" }, { "authors": "Yuan Yuan; Xiaodan Liang; Xiaolong Wang; Dit-Yan Yeung; Abhinav Gupta", "journal": "", "ref_id": "b52", "title": "Temporal dynamic graph lstm for action-driven video object detection", "year": "2017" }, { "authors": "S Luke; Michael Zettlemoyer; Collins", "journal": "", "ref_id": "b53", "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", "year": "2012" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b54", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "Huasong Zhong; Jingyuan Chen; Chen Shen; Hanwang Zhang; Jianqiang Huang; Xian-Sheng Hua", "journal": "IEEE Transactions on Multimedia", "ref_id": "b55", "title": "Self-adaptive neural module transformer for visual question answering", "year": "2020" }, { "authors": "Jie Zhou; Ganqu Cui; Shengding Hu; Zhengyan Zhang; Cheng Yang; Zhiyuan Liu; Lifeng Wang; Changcheng Li; Maosong Sun", "journal": "AI Open", "ref_id": "b56", "title": "Graph neural networks: A review of methods and applications", "year": "2020" }, { "authors": "Luowei Zhou; Yingbo Zhou; Jason J Corso; Richard Socher; Caiming Xiong", "journal": "", "ref_id": "b57", "title": "End-to-end dense video captioning with masked transformer", "year": "2018" }, { "authors": "Xingran Zhou; Siyu Huang; Bin Li; Yingming Li; Jiachen Li; Zhongfei Zhang", "journal": "", "ref_id": "b58", "title": "Text guided person image synthesis", "year": "2019" } ]
[ { "formula_coordinates": [ 12, 94.05, 381.7, 173.06, 26.23 ], "formula_id": "formula_0", "formula_text": "f (G 1 , G 2 ) = f (G 1 , G 2 ) f (G 1 , G 1 ) * f (G 2 , G 2 )" } ]
2023-05-24
[ { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b32", "b22", "b12", "b23", "b4", "b54", "b41", "b8", "b33", "b6", "b36", "b30", "b20", "b31", "b26", "b2", "b39", "b34" ], "table_ref": [], "text": "The widespread use of social media platforms has revolutionized the manner in which people capture and share their everyday moments. While uploading and sharing media has become effortless, the …… As Sarah pulled up in her trusty red car, Jenny couldn't help but feel a rush of excitement for their road trip adventure. Jenny posed for a photo while wearing her brown hat, green jacket, and holding her cell phone in her left hand. They hit the open road, taking in the stunning scenery along the way. Stopping at a rocky shoreline overlooking a large body of water, Jenny marveled at the natural beauty of the mountain range extending behind her. She took a deep breath of fresh air and felt a sense of peace and tranquility. Feeling adventurous, Jenny made her way to the top of a rock formation, proudly taking in the stunning view of the surrounding landscape. The sky was clear and blue, and the sun shone down upon her with warmth and light. task of crafting a compelling and coherent story from a collection of images or videos remains a challenge. This real-world scenario underscores the necessity for an AI-based automated album storytelling system. Such a system takes into account various factors including visual content, story context, and sentiment, to construct an engaging and coherent narrative that effectively conveys the user's experiences and emotions. This not only simplifies the process of sharing experiences with friends and followers, but also holds the potential to enrich memory recall and forge deeper emotional connections with the album, enabling profound reflections in the future.\n……\nThe task of automated album storytelling consists of several challenging research questions, including image understanding, consistent storytelling, and efficient evaluation. Image understanding necessitates the accurate recognition and comprehension of visual relationships and contextual elements within photos and videos. Consistent storytelling, on the other hand, requires the generation of coherent stories for each image that adhere to the common theme of album. Additionally, an efficient automatic evaluation system is needed to evaluate and improve the generation quality.\nWith the flourish of Vision-language pre-training (VLP) [33,23,13,24,5,55,42,9] and LLMs [34,7,37], we initially try an intuitive and simple solution to tackle the task of album storytelling. As shown in Figure 2 (A), given the images within one album, a caption model is first utilized to generate captions for each image within an album, then an LLM (e.g., ChatGPT [31]) is used to expand all the generated captions into an engaging story. However, we observe that this approach often struggles to produce coherent and credible narratives. Through extensive analysis, we identified that the root cause of this challenge lies in the inherent \"diversity\" characteristic of the captioning model, where multiple sensible candidate captions can exist for a single image. Without explicit knowledge of the intended final story (i.e.,\"story-agnostic\"), the caption model lacks direction on which aspects to focus on when describing each image. 
Consequently, although the independently generated captions may appear satisfactory for individual images, they often fail to contribute to a cohesive and consistent overall story. This issue further leads to the subsequent LLM generating a considerable amount of hallucination when attempting to stitch together such unrelated/inconsistent captions. Motivated by the above analysis, we present two simple yet effective designs as shown in Figure 2 (B). Firstly, we introduce a new story-aware caption model, which incorporates both the input image and a preliminary story to generate captions that align with the story. In contrast to the conventional story-agnostic design, this design significantly reduces the generation ambiguity, prioritizes exacting visual information that relates to the final story and consequently enhances overall consistency. Since existing image captioning datasets lack story annotation, we propose a synthesized dataset based on the Stanford image-paragraph [21] dataset, enabling us to train the model to generate accurate and detailed image descriptions based on the image and its corresponding story. Secondly, we propose iterative co-evolution, where the story-aware captioning and LLM-based story generation processes are iteratively refined. With each iteration, the improved story can guide the captioning model to generate better captions. In turn, these enhanced captions can contribute to more coherent and accurate story generation with fewer factual errors. We name our overall framework VIVID -Visual Iterative Verbalization with factualness-Improved Descriptions.\nTo evaluate the effectiveness of our proposed approach, we further introduce a new benchmark dataset comprising images extracted from popular vlogs. Since human storytelling about images can be diverse, it is not appropriate to rely solely on metrics such as BLEU [32], ROUGE [27], METEOR [3], CIDEr [40] to evaluate the quality of generated stories. Instead, we propose a new evaluation metric based on the earth mover's distance (EMD) [35], which measures the overall dissimilarity between the distribution of images and the distribution of stories. A lower EMD distance signifies a stronger alignment between the stories and images with the album. Additionally, we develop LLM based evaluation metrics to provide a fair and comprehensive assessment of the generated story quality. Experimental results demonstrate that our proposed approach achieves a lower EMD distance compared to baseline methods, indicating that our generated stories are more aligned with the images. And the LLM based metrics demonstrate our stories coverage more detail and maintain high coherence.\nTo summarize, our contributions are three-fold:\n• We propose the album storytelling task along with an intuitive solution. To the best of our knowledge, this is the first attempt at introduce LLMs into albums from social medias and generate lengthy and coherent stories. • We introduces a new album storytelling pipeline with two simple and effective designs, i.e., \"story-aware captioning\" and \"interactive co-evolution of captioning and story generation\". • We propose a new benchmark dataset and design a set of systematic evaluation metrics to comprehensively measure the results. The results demonstrate the effectiveness of our proposed approach in generating more engaging and credible stories, while retaining the coherence and vividness." 
}, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b15", "b24", "b15", "b43", "b40", "b7", "b40", "b18", "b32", "b51", "b22", "b45", "b23", "b53", "b42", "b12", "b22", "b25", "b33", "b6", "b36", "b0", "b16", "b28", "b19", "b46", "b10", "b35", "b37", "b52", "b50", "b19", "b46", "b48", "b56", "b13", "b49", "b44", "b55", "b1", "b9", "b14", "b38", "b19", "b46", "b50", "b47" ], "table_ref": [], "text": "Image or video storytelling. Early works on image and video storytelling include [16,25]. These works extend the captioning for single image to sequential vision-to-language [16], or generating stories for image collections [44]. However, due to the technological limitations at the time, these methods could only generate short and simple stories, unlike the detailed and vivid stories generated by large language models.\nImage caption and vision-language pre-training. Image caption aims to understanding and describing the content of an image in words [41], which has been extensively studied in recent years and are typically implemented with an encoder-decoder framework [8,41,19]. With the advance of Vision-language pre-training (VLP) [33,52,23,46,24,54,43], there are several VLP based caption models [13,23,26], which can generate more precise captions thanks to their ability to leverage large amounts of data and multi-task training.\nLLMs prompting methods. Large language models (LLMs), such as GPT [34] series, BERT [7] series and LLaMa series [37], have been proven to be capable of generating detailed, vivid, and imagery text. However, research [1,17,29] shows that the LLMs are prone to failure when handling some complex tasks [20,47]. Some recent studies [11,36,38,53] attempt to enhance LLMs' capabilities in addressing complex problems such as reasoning and planning by proposing carefully designed prompts, and they start to explore the application of these methods in the multimodal domain [51].\nVision + LLMs. How to apply the capabilities of LLMs to the vision domain has recently received significant attention, which is typically implemented by adding a vision module to project visual inputs into representations [20,47,49,57]. These representations can be either discrete text words [14,50,45,56] or continuous feature space [2,10,15,39]. Recent vision + LLMs studies attempt to explore the multimodal chain-of-thought (CoT) ability [20,47], and to solve the task of image understanding [51], generation and editing [48]." }, { "figure_ref": [ "fig_1" ], "heading": "Framework", "publication_ref": [], "table_ref": [], "text": "Given an album I consisting of N photos I = {I i } N i=1 , VIVID generates a story with the following steps: The overall framework is illustrated in Figure 2. Part (A), (B), and (C) correspond to the above steps. Details are elaborated in the following sections." }, { "figure_ref": [], "heading": "Initial Story Generation", "publication_ref": [ "b9", "b19", "b46", "b56" ], "table_ref": [], "text": "The recent advances of LLMs makes it possible to generate long, coherent stories grounded on any textual input. Therefore, we initialize a story by first transforming visual information into text using an advanced caption model, and then feed the image captions into the LLMs. 
Specifically, we first input the images I i into a vision-language pre-training caption model c(•) to obtain individual captions, C\ni = c(I i ).(0)\nThen reformat the captions with a textual prompt p 0 (•), and feed it into the LLM ℓ(•) to obtain the initial story,\nS (0) = ℓ(p 0 (C (0) 1 , C (0) 2 , • • • , C(0)\nN )).\nThe outline of prompt p 0 (•) is defined as following:\nGiven a set of photo captions from a vlog. Please create a vivid story that incorporates the key elements from each photo. Remember to use your imagination and creativity to turn the photo descriptions into a fun and engaging story. Tips: The results should be of strict corresponding pairs between the captions and their respective stories, as the dictionary format of {\"Caption 1\": \"Story 1\", \"Caption 2\": \"Story 2\", ... } Our early exploration shows there are two key points to enhance the stability of the prompts. Firstly, introducing the background activates the relevant knowledge of LLMs. By informing LLMs that the images are from an album/vlog, they imagine the details from the photos and generate stories aligned with album storytelling. Secondly, adding strong constraints is crucial.\nWhile LLMs can easily generate vivid stories due to extensive training text, accurately reflecting each image's content and maintaining a consistent theme can be challenging. In practice, LLMs often encounter two failure scenarios. They may not contain enough visual information, resulting in a significant loss of caption content, or they may generate fabricated information, telling unrelated stories from other albums. This is attributed to the limited reasoning ability of LLMs.\nPrevious studies usually utilize the CoT method [10,20,47,57] to tackle similar challenges. This technique involves providing LLMs with both the input and the previous output, enabling them to generate results incrementally. However, this approach involves additional operational steps and significant token costs. In contrast, our proposed solution introduces strict constraints by forcing generating structured caption-story pairs. This ensures that each generated story aligns with its corresponding original caption, allowing LLMs to faithfully capture the essence of each caption and describe the narrative scenario associated with the shared theme of the album.\nThe story is then segmented into text chunks {S (0) i } N i=1 that corresponding to each input images. Based on the constraints, we propose the following prompt p 1 (•) to segment the previous LLMs' output and build structured chunks." }, { "figure_ref": [], "heading": "Refining the Story with Story-Aware Caption Model", "publication_ref": [], "table_ref": [], "text": "In this step, we build a story-aware caption model f (•) to generate refined captions,\nC (t+1) i = f (S (t) i ).\nTo train such a model, we first construct a story-aware caption dataset, and then use it to finetune a pre-trained caption model." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Detailed caption: A large building with bars on the windows in front of it. There is people walking in front of the building.", "publication_ref": [ "b20", "b22" ], "table_ref": [], "text": "There is a street in front of the building with many cars on it. Prompt 1: Please write a descriptive paragraph about the scene depicted in the following image. Your writing should capture the details of the scene and the emotions it evokes, while also engaging the reader with vivid sensory language. GPT story: The scene is bustling with activity. 
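Before detailing the dataset, the caption-story co-evolution loop described so far can be sketched as follows; the captioner, story-aware refiner and LLM call are treated as black-box callables, the prompts are abbreviated, the LLM is assumed to return parseable JSON, and a fixed number of refinement rounds is used for brevity (the paper stops based on an edit-distance criterion), so this is an assumption-laden sketch rather than the authors' implementation.

```python
import json

def generate_story(images, caption_model, story_aware_captioner, call_llm, num_rounds=3):
    """Iterative co-evolution of captions and story (a sketch; the helper callables are assumed)."""
    # Step 1: story-agnostic captions from a pre-trained captioner (e.g. BLIP).
    captions = [caption_model(img) for img in images]

    # Step 2: initial story from the LLM, requested as structured caption->story pairs (prompt p_0, abbreviated).
    prompt = ("Given a set of photo captions from a vlog, create a vivid story ... "
              "Return a dictionary of {caption: story} pairs.\n" + json.dumps(captions))
    story_chunks = list(json.loads(call_llm(prompt)).values())

    # Steps 3-4: refine captions with the story-aware model, then ask the LLM to revise the story.
    for _ in range(num_rounds):
        refined_captions = [story_aware_captioner(img, chunk)
                            for img, chunk in zip(images, story_chunks)]
        revise_prompt = ("Use the 'Refined Caption' to modify the 'Initial Story' ...\n" +
                         json.dumps([{"Refined Caption": c, "Initial Story": s}
                                     for c, s in zip(refined_captions, story_chunks)]))
        story_chunks = list(json.loads(call_llm(revise_prompt)).values())

    # Final pass: merge the per-image chunks into one coherent narrative (prompt p_u, abbreviated).
    return call_llm("Connect the following stories into a cohesive narrative:\n" + "\n".join(story_chunks))
```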
The large building looms in the foreground, its barred windows a stark reminder of the security it holds within. The street in front of it is alive with cars, their engines humming in the background. Pedestrians move quickly, their faces a mix of determination and distraction. The sun casts a warm glow on the scene. The air is filled with the smell of exhaust and the sound of honking horns. Prompt 2: Generate a passage by replacing the adjectives in the following paragraph with opposite or contradictory words to create factual errors. Noisy story: The scene is dead with inactivity. The small building cowers in the background, its clear windows a stark reminder of the openness it holds within. The street in front of it is dead with cars, their engines soundless in the background. Pedestrians move slowly, their faces a mix of apathy and distraction. The sun casts a cold glow on the scene. The air is filled with the scent of flowers and the soothing sound of chirping birds. Story-aware caption dataset. The initial stories generated in Section 3.1 suffer from the issue of generating hallucinated information, as the captions produced in a \"story-agnostic\" manner lack essential details due to the absence of explicit knowledge about the intended story. To address this challenge, we propose a solution that establishes a strong connection between the story and the image by identifying the crucial elements of the story that correspond to the actual attributes of the image. However, there is a lack of an appropriate dataset for training such a refinement function based on the image. Therefore, we develop a novel story-aware caption dataset based on the Stanford image-paragraph [21] dataset.\nThe Stanford image-paragraph dataset differs from traditional caption datasets in that its description paragraphs are longer and describe more detailed information about the image. However, it does not have a corresponding noisy story for our task. Therefore, we used LLMs to generate a noisy story, as shown in Figure 3. We first craft a prompt that transforms the detailed caption into a vibrant paragraph, brimming with emotion and imagination, while still capturing the essence of the scene. Then, we utilize the LLM to replace the adjectives in the story with their antonyms, generating a passage that contains factual errors while maintaining the key elements unchanged.\nIn the end, our dataset consists of paired images, noisy stories, and correct detailed descriptions. Training on this dataset can enable the model to map the text input to the corresponding image details, and then obtain the correct descriptions of these objects based on these image details. Story-aware caption model. We build a story-aware caption model based on BLIP [23], which is composed of a image encoder g i (•), a text encoder g t (•) and a refine decoder d r (•), as shown in Figure 4.\nDuring training, the image encoder g i (•) first transforms a image into a sequence of embedding vectors. Then, the text encoder g t (•) takes the noisy story as input and generates a sequence of composite embedding vectors, where the cross-attentions are computed between the story embeddings and the image embeddings. Finally, the text decoder d r (•) reconstructs the detailed caption with the composite embedding vectors inputting into the cross attentions. The model is optimized in an end-to-end way with Language Modeling loss (LM),\nL(U) = N i=1 log P (u k | u 1 , u 2 , . . . , u k-1 , Θ) ,\nwhere U = {u 1 , u 2 , . . . 
, u N } denotes the tokens in the caption, and P (u k | u 1 , u 2 , . . . , u k-1 , Θ ) is conditional probability of i-th token given the previous tokens and model parameters Θ.\nAfter training, the model is capable of identifying key elements in the initial story S (t) i and connecting them to corresponding regions in the image I i . This information is then used to generate a more accurate and detailed description C (t+1) i\n. Using these refined captions, we propose the following prompt p r (•) to generate more aligned stories S (t+1) i\n. In practice, we find revising the stories, rather than generating them from scratch, better preserves coherence and vividness. Below is the key part of p r (•). The complete prompt can be found in Appendix A.\nGiven a list of json dictionaries, please use the detailed information from \"Refined Caption\" to modify the \"Initial Story\" and create a new \"Refined Story\" that better align to the real-world scenario in the photos." }, { "figure_ref": [], "heading": "Iterative Refinement of Story and Image Description", "publication_ref": [ "b21" ], "table_ref": [], "text": "With the story-aware caption model f (•), we can iteratively refine the story until satisfied. To determine the stopping point, we introduce the concept of edit distance [22]. The iteration process ends when the ratio of the edit distance to the length of the text falls below 0.2. However, during the iterative refinement, the stories may become overly focused on individual images, potentially losing their global perception. To address this, we propose the following prompt p u (•) to generate a coherent and comprehensive ultimate story.\nGiven a series of stories describing individual pictures from the same album, create a cohesive narrative that seamlessly connects each story together. Use appropriate transitions and scene changes to make the plot flow smoothly, while ensuring that the plot twists are logical and make sense within the context of the story." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Evaluation Dataset", "publication_ref": [], "table_ref": [], "text": "We choose to extract keyframes from popular YouTube vlogs as our primary source of images for several reasons. Firstly, video content generally exhibits higher image quality compared to individual photos found in albums. Secondly, utilizing vlogs allows us to ensure thematic consistency within a set of images, which is crucial for effective storytelling. Furthermore, YouTube offers an extensive collection of videos, and by selecting popular vlogs as our data source, we can access a wide variety of visually appealing images that are likely to resonate with a broad audience. Our dataset comprises 30 popular YouTube videos categorized into five distinct categories: \"birthday\", \"camping\", \"christmas\", \"travel\", and \"wedding\". Each video is further divided into image collections, with each collection containing 10 key frames. The design of the dataset emphasizes the inclusion of images that possess sufficient information to support storytelling while encompassing diverse themes and styles.\nSpecifically, these key-frames are obtained through two steps. In the first step, meaningful key-frames are extracted and stored using FFmpeg2 , which have high quality and often represent multiple frames within a period of time. In the second step, hand-crafted selection is performed to find a set of images that can represent the vlog. 
The selection criteria include: a) image quality: clarity of the image; b) information content: the number of objects and elements in each image; and c) storytelling: whether these images can be strung together into a complete story, and whether they contain complete contextual relationships. The selected images range from outdoor to indoor and from single to multiple scenes, and contain some dark or blurry scenes to challenge the stability of the proposed systems." }, { "figure_ref": [], "heading": "Automatic Evaluation Metrics", "publication_ref": [ "b15", "b24", "b34", "b32", "b11", "b5", "b17", "b3", "b29" ], "table_ref": [], "text": "The earlier VIST [16] and VideoST [25] datasets evaluate storytelling results by comparing generated stories with hand-crafted ground truth. However, stories are usually too flexible to be grounded to a single ground-truth story, and such metrics cannot measure the vividness and coherence of the stories either.
Our systematic evaluation framework mainly evaluates two aspects of the story:
1. The alignment between the stories and images; 2. The quality of the stories.
EMD. We propose to adopt the earth mover's distance (EMD) [35] to measure the distance between the distribution of the album images and the distribution of the generated stories. Specifically, the EMD between two distributions P and Q is
$\mathrm{EMD}(P, Q) = \min_{\gamma \in \Gamma(P, Q)} \mathbb{E}_{(x, y) \sim \gamma}\,[d(x, y)],$
where Γ(P, Q) is the set of all possible joint distributions of P and Q, and d(x, y) is the cost of moving unit mass from x to y. To compute the EMD between the images and the story, we first encode the images $\{I_i\}$ and the sentences in the story $S^{(U)} = \{T_j\}$ with the image encoder $e_i(\cdot)$ and text encoder $e_t(\cdot)$ of CLIP [33] to transform the images and sentences into the same latent space, and then compute the normalized inner product as the cost function,
$d(I_i, T_j) = \frac{e_i(I_i) \cdot e_t(T_j)}{\|e_i(I_i)\| \cdot \|e_t(T_j)\|}.$
We adopt P and Q as the uniform distributions on $\{I_i\}$ and $\{T_j\}$. A lower EMD distance indicates that the generated stories are more aligned with the album images.
LLM based evaluation metrics. In previous research, human evaluation was often used to measure the quality of text. However, studies [12,6,18] show that these results are not sufficiently accurate and reliable due to subjective preferences of human evaluators. On the contrary, LLMs possess extensive knowledge bases and provide more stable results, demonstrating great potential for evaluating NLP systems and algorithms [4]. Therefore, in this article, we propose an additional evaluation metric based on LLMs, which includes the following aspects:
• Detail, which counts how many details are described in the stories. We wish the story to contain enough visual information to be aligned with the images.
• Coverage, which measures, on average, how much of the information from both the short captions $\{C_i\}$ and the detailed descriptions $\{C_i^{(U)}\}$ is covered by the stories.
• Coherence, which evaluates the smoothness of the stories. A good story should be logically connected, consistent, and easy to understand.
The above metrics are implemented with GPT-4 [30], the most powerful LLM so far. The complete prompts can be found in Appendix A." }, { "figure_ref": [], "heading": "Result", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b30", "b27" ], "table_ref": [], "text": "In the experiment, we utilized GPT-3.5 [31] as our LLM.
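As an aside before the training details, the EMD metric defined in Section 4.2 can be computed from pre-extracted CLIP embeddings as in the sketch below; we use the POT library as one possible optimal-transport solver, the function names are ours, and the remark on the cost direction is our own observation rather than a claim about the paper.

```python
import numpy as np
import ot  # Python Optimal Transport: one possible way to solve the EMD linear program

def story_image_emd(image_embs, sent_embs):
    """EMD between uniform distributions over CLIP image embeddings and story-sentence embeddings.

    image_embs: (N, d) array from the CLIP image encoder; sent_embs: (M, d) array from the text encoder.
    Note: following the formula above, the cost is the cosine similarity itself; a common variant
    would use (1 - cosine similarity) so that lower cost corresponds to better alignment.
    """
    I = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    T = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
    cost = (I @ T.T).astype(np.float64)   # d(I_i, T_j): normalized inner product
    a = np.full(len(I), 1.0 / len(I))     # uniform distribution over images
    b = np.full(len(T), 1.0 / len(T))     # uniform distribution over sentences
    return ot.emd2(a, b, cost)            # optimal-transport cost = EMD
```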
For the training of story-aware caption model, we adjusted the input dimensions to 480 × 480 and used a batch size of 12. The model was trained for 15 epochs using a learning rate of 2 × 10 -5 , which gradually decayed to 0 following a cosine learning rate schedule. The optimizer used was AdamW [28] with a weight decay of 0.05.\nThe training process was conducted on 8 Nvidia v100 32GB GPUs." }, { "figure_ref": [], "heading": "Results of EMD distance", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We compare the performance of our proposed approach with the baseline and our proposed approach that do not use key element grounding or iterative refinement as Table 1. The results shows that our proposed approach achieves a lower EMD distance compared to the baseline methods, indicating that our generated stories are more aligned with the album images. Moreover, our iterative approach further reduces the EMD distance, demonstrating the effectiveness of our mutually-guided approach in refining and enhancing the generated stories and image descriptions." }, { "figure_ref": [], "heading": "Results of LLMs based Evaluation", "publication_ref": [], "table_ref": [], "text": "To evaluate the performance of the stories themselves, we first focused on their information content. We counted the length of sentences and the number of details included in them and found that both of these metrics increased. This indicates that with continued refinement, our framework is able to recognize more and more details from the images, resulting in more vivid and engaging stories.\nThe coverage metric also increases with each step, indicating that our method is consistent with both simple global captions and detailed captions, and can align well with the real image information.\nThe coherence metric showed a decrease in the second step but returned to a comparable level in the third step, still higher than that of the captions. This suggests that the captions suffer from serious inconsistency issues, whereas our LLM framework generates more coherent stories using imagination, albeit with increased misalignment, as evident from the deterioration of EMD and coverage. While the second step improved the fidelity, it focused on each image and reduced coherence. Therefore, in the third refinement step, we not only improved the alignment but also enhanced the connection between independent stories, resulting in high-fidelity and highly readable stories.\na woman is sitting on a rock overlooking a body of water. she is wearing a gray hoodie and black pants. there are a couple of rocks along the shore. there is a view of a mountain in the distance.\na woman standing in front of a car on a road\nImage Caption Initiate Story\nJenny had always loved road trips. So, when her best friend Sarah invited her to join her on a trip across the country, she didn't hesitate to say yes. The duo hit the road in Sarah's trusty old car, taking in the beautiful scenery along the way. As they drove down the winding roads, Jenny couldn't help but feel a sense of freedom and excitement bubbling inside her.\nRefined Caption a woman is posing for a photo. she is wearing a beanie on her head. she has on a green jacket and a brown jacket. she is smiling at the camera. there is a red car in the background." }, { "figure_ref": [], "heading": "Refined Story", "publication_ref": [], "table_ref": [], "text": "Jenny poses for a photo in front of Sarah's trusty red car, while wearing her brown hat, green jacket and cell phone in her left hand. 
They stop on a sunny road surrounded by rocks. In the background, a colorful mural decorates the wall of a building. Jenny feels excited to continue their road trip adventure with Sarah.\na person sitting on a rock by the water Later that day, they decided to take a scenic hike along a rocky coastline. As they walked, they came across a quiet little cove, surrounded by towering cliffs. There, they saw a woman sitting on a rock, gazing out at the vast expanse of the Pacific Ocean. They joined her, sitting in silence for a while, taking in the beauty of the world around them.\nA woman wearing a gray hoodie and black pants sits quietly on a rocky shoreline, overlooking a large body of water. A mountain range extends behind her, with scattered rocks along the shore. Jenny and Sarah take in the view and appreciate the beauty of the natural world.\na person is standing on top of a rock formation. the person is looking up at the sky. the person is looking up at the sky. the sky is clear and blue. there are some rocks on the top of the mountain.\na person standing on top of a rocky hill\nThe next day, they decided to go on a challenging mountain hike. It was a steep climb, but the girls were determined to reach the summit. Finally, they conquered the rocky hill and stood at the top, gazing out at the breathtaking view. They felt a sense of accomplishment and pride wash over them.\nA person stands triumphantly on top of a rock formation, wearing a green jacket and black pants. The sky is clear and blue, and other rocks surround them at the summit. They look out at the stunning view with pride and accomplishment.\na woman is standing outside in front of a bus stop. she is looking at the camera and is smiling. she is wearing a green and gray earrings on her ears. there is a building in the background.\na woman with blonde hair standing in front of a building Their last stop was New York City, where they explored the busy streets and iconic landmarks. One day, they found themselves standing in front of the stunning Empire State Building. They couldn't resist taking a selfie to capture the moment forever.\nA woman stands outside of a building wearing a green and gray earrings on her ears, looking at the camera and smiling. She is standing in front of a bus stop, and a building is visible in the background. The atmosphere is busy and bright.\na woman is eating a meal in a restaurant. she is about to eat some food. she is wearing a light blue short sleeve shirt. she has a bunch of food on her plate.\na woman sitting at a table with a plate of food\nFinally, as the trip came to an end, they treated themselves to a fancy dinner at a cozy little restaurant. They sat at a table, savoring the delicious food and reminiscing about their epic adventure. They couldn't believe how much they had seen and experienced in just a few short weeks.\nA woman is sitting inside a restaurant, wearing a light blue short sleeve shirt and eating a delicious meal. The food looks good and plentiful, with a bunch of food on her plate. She looks happy and content while savoring the taste. Jenny and Sarah sit across from her, reminiscing about their epic adventure." 
}, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "As Sarah pulled up in her trusty red car, Jenny couldn't help but feel a rush of excitement for their road trip adventure.\nStanding in front of the colorful mural decorating the wall of a nearby building, Jenny posed for a photo while wearing her brown hat, green jacket, and holding her cell phone in her left hand. They hit the open road, taking in the stunning scenery along the way.\nStopping at a rocky shoreline overlooking a large body of water, Jenny marveled at the natural beauty of the mountain range extending behind her. She took a deep breath of fresh air and felt a sense of peace and tranquility.\nFeeling adventurous, Jenny made her way to the top of a rock formation, proudly taking in the stunning view of the surrounding landscape. The sky was clear and blue, and the sun shone down upon her with warmth and light.\nAs she walked around town, Jenny discovered a charming building with a bus stop out front. She stood outside, admiring the green and gray earrings she had picked up at a nearby shop. The hustle and bustle of the city buzzed around her, and she couldn't help but feel a sense of excitement for the new experiences waiting to be discovered.\nAfter a long day of exploration, Jenny sat down at a cozy restaurant, taking in the ambiance and delicious smells wafting from the kitchen. She ordered a hearty meal and savored every bite, feeling grateful for the incredible journey she had embarked upon. As she and Sarah reminisced about their epic adventure, Jenny couldn't help but feel a sense of fulfillment and joy.\nFigure 5: This figure provides a visual representation of our VIVID. The red sections highlight instances of unclear references or factual errors, while the green sections indicate the details that we have rectified or included. Our refined captions and stories significantly mitigate the inconsistencies between the images and texts, and the ultimate stories exhibit more coherent and cohesive." }, { "figure_ref": [], "heading": "Visualization", "publication_ref": [], "table_ref": [], "text": "We visualize the generated stories of each step from an album in Figure 5. The figure shows that our VIVID can generate vivid stories with improved factualness during iteration." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b22", "b20", "b31", "b26", "b2", "b39" ], "table_ref": [ "tab_2" ], "text": "Detailed caption evaluation. To evaluate the quality of the refined captions, we compared our proposed story-aware caption model with two baselines: directly deploying the pre-trained BLIP model [23] and fine-tuning BLIP on the Stanford image-paragraph [21] dataset. As shown in Table 2, our method outperforms both baselines on 4 common metrics: BLEU [32], ROUGE [27], METEOR [3] and CIDEr [40], which means our method can effectively identifying visual details. Limitation and future work. In Appendix B, we present the complete results for the entire dataset. While most of the results are satisfactory, there are instances where the generated stories have inconsistencies in context, misleading details, or a lack of vividness.\nWe recognize that fully addressing these errors through network model upgrades is challenging due to boundary effects. Therefore, we propose involving humans in the process to provide valuable assistance. 
In our future work, we plan to incorporate human interaction through chat to improve system performance, ultimately enhancing the practicality of our approach." }, { "figure_ref": [], "heading": "A Details for Prompt A.1 Detailed Prompt for Story Generation", "publication_ref": [], "table_ref": [], "text": "In this section, we provide the detailed prompts for story generation. Our results can be reproduced with these prompts.\nInitial Story Generation. We implement a multi-step dialogue function that sends the chat history and the current prompt to the LLM, extracts the reply, and appends it to the dialogue history.\nThe first prompt generates stories from the captions while retaining the correspondence between each caption and its story paragraph:\nNow your answer is a set of photo captions from a vlog. Please create a vivid story that incorporates the key elements from each photo and denote the corresponding origin caption before each paragraph as "caption" "\n" "generated story". Remember to use your imagination and creativity to turn the photo descriptions into a fun and engaging story.\nThen, the second prompt segments the generated story into text chunks for the next step:\nRefining the Story with the Story-Aware Caption Model. We use the following prompt to revise the initial stories with the refined captions generated by the story-aware caption model:\nIterative Refinement of Story and Image Description. We use the following prompt to generate a coherent and comprehensive ultimate story:\nGiven a series of stories describing individual pictures, with each story building upon the one before it, create a cohesive narrative that seamlessly connects each story together. Use appropriate transitions and scene changes to make the plot flow smoothly, while ensuring that the plot twists are logical and make sense within the context of the story. Input: <xxx> Tips: You should keep the number of stories. I give you 10 stories, you should return 10 stories." }, { "figure_ref": [], "heading": "A.2 Detailed Prompt for LLM-based Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "In this section, we provide the detailed prompts for the evaluation. The quality of album storytelling can be evaluated with these metrics.\nDetail. We count how many details are described in the stories with the following prompt:\nPlease evaluate the input story and count the total number of details it contains. Please output the result in the format "Total number of details: xx". Story: <xxx>\nCoverage. We measure how much of the information from both the short captions $\{C_i\}$ and the detailed descriptions $\{C_i^{(U)}\}$ is covered by the stories, averaged over the two caption sets, with the following prompt:\nPlease use a score from 0-1 to measure how well the following story coverages the information from two different sets of captions. Note that a score closer to 1 indicates more information are covered in the story, while a score closer to 0 indicates poorer coverage. Please output the result in the format: "Score of story coverage for Caption Group 1: xx. Score of story coverage for Caption Group 2: xx. Average score: xx." Caption group 1: <xxx> Caption group 2: <xxx> Story: <xxx>\nCoherence. We evaluate the coherence of the stories with the following prompt:\nPlease rank the following stories on a scale of 0 to 1 based on their coherence. A score of 1 indicates that the stories are seamlessly connected and free of coherence issues, while a score of 0 indicates that there are significant coherence problems between the stories. Please consider the fact that these stories were generated independently for each image and then concatenated together. Your task is to evaluate whether there are any coherence issues between the stories when they are read together. Please output the result in the format "Coherence Score: xx".
Story: <xxx>" }, { "figure_ref": [], "heading": "B Case Study B.1 Limited Cases", "publication_ref": [], "table_ref": [], "text": "In this section, we present representative failure cases, which can be summarised into three types:\n• Inconsistencies, which commonly arise when the LLM has difficulty comprehending the temporal sequence of storylines or establishing personal relationships.\n• Misleading details, which arise when the discrete texts cannot capture all the features present in the images, or when the story-aware caption model extracts inaccurate details, resulting in erroneous stories generated by the LLM.\n• Lack of vividness, which stems from the LLM being excessively constrained by intricate details, thereby losing its creative capacity, or from the scenes being too mundane to inspire imagination.\nThese issues are hard to solve by updating the neural networks alone. In contrast, incorporating human interaction via chat has the potential to enhance system performance, ultimately augmenting the practicality and efficacy of our approach." }, { "figure_ref": [], "heading": "Image Ultimate Story", "publication_ref": [], "table_ref": [], "text": "Finally, Mary returned home to find a black plate on a white sheet with a bunch of pink roses in the center. Someone had placed the flowers in a vase and had bundled them together in a beautiful arrangement. This was a token of love from her partner, making her feel special and cherished.\nAs the clock struck midnight, Mary's friends surprised her with a heart-shaped cake with 'Happy Birthday' written on it. Mary sat at the table holding a sparkler, and the cake was white with red roses on it, delivered in a box. The box was on a silver table, and there was writing on the cake in red and green.\nAs the clock struck midnight, Mary's friends surprised her with a heart-shaped cake with 'Happy Birthday' written on it. She sat at the table holding a sparkler, cherishing the moment with her friends. She wore a white tank top, and the cake was on a white plate. A water bottle was also present on the table, and water drenched the table as they celebrated.\nIssue 1: Inconsistency" }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "The issue present in this story is temporal inconsistency. The sequence of images presented within the story follows the order 2, 3, 9, yet due to the resemblance of their scenes, such as gifts, cakes, and candles, the LLM erroneously amalgamates them into a single scene, thereby disrupting the factual order of the captured moments." }, { "figure_ref": [], "heading": "Image Ultimate Story", "publication_ref": [], "table_ref": [], "text": "My aunt and grandmother were busy preparing dinner for us in the kitchen. The warm, inviting scent of roasted turkey filled the air, and the sound of knife chopping vegetables echoed through the room. They chattered happily, swapping recipes and discussing their plans for the upcoming holiday season, creating a joyous atmosphere in the kitchen. The woman and girl were cooking together, preparing a delicious meal, and adding their own touch to the festive occasion. The Christmas tree in the kitchen enhanced the seasonal atmosphere and provided a backdrop for our cooking session.\nLastly, my uncle and aunt were sitting at the table, sipping a cup of tea and enjoying the last moments of the night. They reminisced about their own adventures, discussing everything from travel to life's little surprises, adding to the warmth of the moment.
My aunt wearing a bright red shirt and a black jacket was waving at the camera, while my uncle in a black jacket smiled at his wife, creating a cozy and intimate atmosphere in the kitchen. " }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "The issue present in this story is personae inconsistent. Due to LLM's lack of knowledge regarding the individuals depicted in the photographs, it tends to conjure up an identity. For instance, the woman in the picture simultaneously assumes the roles of a grandmother, an aunt, and a parent. " }, { "figure_ref": [], "heading": "Image Ultimate Story", "publication_ref": [], "table_ref": [], "text": "On a bus ride through the city, a woman was sitting by the window, wearing earbuds and talking on her cellphone. Behind her, the colorful blur of lights and buildings could be seen. She was wearing a long furry coat with a high collar, and her hair was in a messy bun. The interior of the bus was well-lit, with many seats and handrails. The woman's face was relaxed and content, with a soft smile on her lips.\nJenny skateboards triumphantly down a set of stairs, with Sarah cheering her on. San Francisco's lively atmosphere surrounds them, with many skateboarders and fans watching their skateboarding. The black skateboard and ramp shown in the photo hint at the lively skateboarding scene in the city.\nAs the sun began to set, they returned to their campsite and built a roaring campfire next to a red and white tent. They gathered around, laughing and talking as they roasted marshmallows over the flames, sharing stories and enjoying time together. Rocks surrounded the campers, adding to the rustic atmosphere. The stars shone down from above, casting a soft glow over the campsite." }, { "figure_ref": [], "heading": "Issue 2: Misleading Detail Analysis", "publication_ref": [], "table_ref": [], "text": "The issue present in above stories is misleading detail. The issue lies in the inability of discrete texts to capture all the features present in the images, or in the inaccuracies in the details extracted by the story-aware caption model, resulting in erroneous stories generated by LLM. " }, { "figure_ref": [], "heading": "Image Ultimate Story", "publication_ref": [], "table_ref": [], "text": "A woman and a child sit on a couch, playing with a game. There is a purple board on the couch and white pillows add to the coziness of the setting. They laugh and joke together, engaged in a spirit of playfulness and warmth that fills the room.\nA woman in a black shirt and blue skirt sits at a table in a hotel room, enjoying a delicious meal. A glass of water is in front of her, along with a plate of food. The menu sits in front of her, while she looks around the room, taking in its luxurious ambiance. Outside, the spectacular view of the city can be seen through the window.\nIn the backyard at a party, a large group of people gathered around a " }, { "figure_ref": [], "heading": "B.2 Visualization of Scenes", "publication_ref": [], "table_ref": [], "text": "In this section, we present visualization samples corresponding to each scene depicted in the accompanying figures. 
Consistent with the main paper's settings, the red sections highlight instances of unclear references or factual errors, while the green sections indicate the details that we have rectified or included.\na woman in a white top and shorts posing for a picture\nImage Caption Initiate Story\nOne day they decided it would be fun to take some photos and commemorate their time together. One of the women touched up her makeup and put on a white top and shorts before posing for a snapshot.\na couple of women sitting on top of a lush green field\nIn the warm summer months, the same two friends travelled to a lush green field where they lazed around, chatting and enjoying the sunshine. The filed was dotted with colorful wildflowers, and the soft sounds of nature could be heard all around them.\na couple of women sitting on top of a blanket Up on the hill overlooking the frozen lake, two women sat on a cozy blanket. They had been chatting and enjoying the peaceful winter scenery when they pulled out a blanket and spontaneously decided to sit down and take in the beauty around them.\na woman sitting on a blanket cutting a cake One of the women, who happened to be celebrating a birthday, brought out a cake and began cutting it up to share with her friend. The cake was a sweet treat that contrasted with the chilly atmosphere, and the two friends enjoyed it while still wrapped up in their blankets, taking in the view.\na body of water surrounded by snow and trees\nIt was a cold and snowy winter day.\nThe landscape was covered in white and the only body of water visible was a frozen lake nestled among the trees. It was a breathtaking sight to behold, with the snow-capped trees surrounding the clear blue water." }, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "Standing in front of the mirror in her bedroom, the young woman posed for the camera, ready for a day of adventure. She wore a white t-shirt and matching shorts, with long black hair flowing down her back. Behind her, a white bra was visible, completing the simple yet stylish outfit.\nIn the sun-drenched summer afternoon, the two women sat on a cozy blanket, eating some delicious treats and chatting. The grassy field around them was soft and lush, dotted with wildflowers of various colors. The sun was shining down on them, providing warmth and comfort.\nSitting on a cozy blanket on the hillside, the two women enjoyed each other's company on a beautiful winter day. The hill behind them rose up, snowcovered and still. The woman in the green dress held a wine glass in her hand, while her companion in the white dress leaned in to enjoy their conversation. The sunny blue sky with fluffy white clouds drifting lazily by completed the serene winter scene.\nIn the picturesque winter surroundings, the woman in the white sweater sat on a cozy blanket, ready to indulge in a birthday cake. The pink candle in the middle of the cake flickered in the breeze, while the woman held a knife in her hand. In the distance, a hill rose up, covered in snow, completing the picture-perfect winter scene.\nIn the winter wonderland, the frozen lake nestled among the snow-covered trees was a breathtaking sight to behold. The blue water stood out against the white landscape, surrounded by snow and tall trees. The sky overhead was cloudless and deep blue. In the distance, a snowcovered mountain rose up to complete the picture-perfect winter scene. 
The other woman also wanted to have her picture taken and brought out a pink scarf to add some color to her outfit. She stood in front of a full-length mirror and struck a pose while her friend snapped a pic.\na woman blowing out a candle on a birthday cake After enjoying the cake, the woman blew out a single candle, making a wish for the coming year. Her friend joined in, and they both felt grateful for the wonderful experiences they had shared together." }, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "The black skis with yellow numbers were lodged in the powdery snow, with a few remnants of snow scattered on their surface. Nearby, two ski poles rested upright in the snow, having just been used by a group of friends who had spent the day carving up the slopes. The mountain air was crisp and cold, and the scene was beautiful in its simplicity.\nThe couple snuggled up together, smiling at the camera and taking a selfie in the snow. They were enjoying a ski trip with friends, surrounded by evergreen trees in the background. The man wore a blue jean jacket and a white scarf around his neck, while the woman wore earrings and smiled happily next to him. It was a beautiful moment that they would treasure forever.\nIn the bright and colorful toy store, the young woman searched for the perfect gift and found a cute Hello Kitty stuffed animal on the shelf. She held it up for the camera, confident that her nephew would adore the gift she had chosen for him. Around her, shelves of other toys were visible, inviting her to explore and find even more treasures.\nIn the store, the young woman looked for something special and spied a beautiful pink scarf. She wrapped it around her neck, revealing a stylish tan jacket. The white scarf in her hand completed the fashionable look, and she held it up with a smile on her face. Next to her, her friend snapped a photo to capture the moment.\nAfter sharing the sweet treat with her friend, the woman in the white tank top blew out the candle on the birthday cake, making a wish for the coming year. The happy birthday card on the table in front of her was a reminder of the special occasion, and she felt grateful for the wonderful experiences she had shared with her friend in the peaceful winter surroundings. For lunch, they decided to grill some corn and potatoes. They seasoned them to perfection and cooked them slowly, enjoying the scent of the smoky grill. As they sat down to eat, they marveled at how delicious it all tasted, especially when enjoyed in the great outdoors.\na woman in a white dress is holding a green and yellow hammock Before they left the campsite, the woman in the white dress set up a hammock between two nearby trees. She lounged in it, swaying back and forth, enjoying the peace and solitude of the great outdoors.\nIt was a moment of pure bliss, one that they would remember for years to come.\na full moon is seen through the branches of a pine tree\nAs they drifted off to sleep, they noticed the full moon shining bright through the branches of a nearby pine tree. It was a beautiful sight, and they felt grateful to be able to experience it up close and personal." }, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "The car trunk was filled to the brim with clothes, camping gear, and supplies for their fun-filled camping adventure. 
A cooler filled with drinks was also squeezed in.\nAs they set off on their journey, they were excited for the upcoming adventure ahead.\nTwo women worked together to set up a large tent in the campsite.\nWith the tarp and poles in place, they smiled proudly at their work as they talked and laughed. They looked forward to their stay in the great outdoors, surrounded by trees and a wooden fence in the background.\nFor lunch, the campers decided to grill potatoes and corn on the fire. Seasoned to perfection, the vegetables cooked slowly on the grill, creating a smoky and crunchy texture. With the flames in the background, and vegetables glowing in the foreground, they felt grateful to be able to enjoy such delicious food in the great outdoors.\nBefore departing the campsite, one of the women set up a colorful hammock between two tall trees in the wooded area. She lounged in the hammock, enjoying the gentle sway and the cool breeze, with the sun shining through the trees. It was a moment of pure relaxation and happiness, and they felt grateful for the experience.\nAs they lay down in their tent, they noticed the bright full moon shining through the branches of a tall pine tree. The moonlight illuminated the tent and created a peaceful, calming atmosphere. With the tent in the foreground and tall trees in the background, they felt grateful to be able to experience the beauty of nature up close.\nScene 2: Camping a group of people sitting at a picnic table\nImage Caption Initiate Story\nAs the day drew to a close, they sat at a nearby picnic table, watching as the sun sank below the horizon. They sipped on cups of coffee and talked about everything they had seen and experienced so far. They were tired from a long day of exploring, but happy and content with all that they had accomplished.\na group of people sitting around a fire pit\nAs the sun began to set, they joined a group of campers at a nearby fire pit. The flames danced and crackled as everyone shared stories and toasted marshmallows. They made new friends and bonded over their love of the great outdoors.\na woman is making pancakes on a grill\nThe next morning, the group gathered around the grill for breakfast. One of the women took charge, expertly flipping pancakes and serving them up hot and fresh. The smell of maple syrup and bacon filled the air, and they all dug in with gusto.\ntwo women sitting at a table with cups of coffee Later that night, the two women retreated to their tent, exhausted but still buzzing with excitement. They sat on their sleeping bags, sipping on warm cups of coffee and reading books by the light of their lantern. It was peaceful and calming, a perfect end to a perfect day.\nan open book laying on top of a bed\nThe next morning, they woke up to another beautiful day in the wilderness. One of the women sat in bed, reading an open book and enjoying the morning breeze. They were happy and relaxed, ready to take on whatever adventures lay ahead." }, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "As night fell, the campers gathered around a large wooden table, eating and drinking under strings of lights. With a camp stove nearby, they talked about their experiences and marveled at the beauty of the outdoors. There was a tent in the background, amongst trees that added to the atmosphere.\nAs the sun began to set, a group of campers gathered around a roaring fire pit. 
They shared stories and jokes, toasted marshmallows over the flames, and talked about their experiences. The atmosphere was warm and friendly, and with stars in the clear night sky, they felt a sense of community as they enjoyed the beauty of the outdoors.\nOne of the women wearing a colorful jacket prepared a delicious breakfast of pancakes on the grill. With a bottle of milk on the table, and cookies sizzling on the pan, the aroma of the cooking food made everyone's mouths water. They sat down to enjoy the food and the company of their friends, grateful for the experience.\nTwo women posing for a picture with green mugs in their hands, whilst sitting outside their tent. Around the wooden table was neatly arranged camping gear ready for the next adventure with palm trees making a tropical background. They smiled at the camera and enjoyed their coffee, feeling grateful for the experience and looking forward to what lay ahead.\nOne of the women woke up the next morning to a gentle breeze and decided to spend time reading in bed. She picked up an open book and savored the stillness and tranquility of the moment. Around the bed, neatly arranged was the camping gear on the chairs, and a folding chair next to the bed added to the luxuries of camping outdoors. The sound of the pot boiling over caught our attention, and we all headed into the kitchen, where my aunt and grandmother were busy preparing dinner. The warm scent of roasted turkey wafted through the air, and the sound of knife chopping vegetables filled the room. The women chattered happily, swapping recipes and discussing their plans for the upcoming holiday season.\na man and a woman sitting at a table\nLastly, my uncle and aunt sat at the table, sipping a hot cup of tea, and enjoying the last moments of the night. They chatted and reminisced about their own adventures, discussing everything from travel to life's little surprises.\na person holding a picture of a polar bear My little nephew, however, wasn't ready to go to bed just yet, and he clung tightly to a picture of a polar bear he had acquired earlier in the day. He showed it to everyone, his eyes alight with excitement, exclaiming how he couldn't wait to see a real one someday.\na group of people sitting around a christmas tree\nWe ended the evening back in the living room, encircling the Christmas tree lit up with twinkling lights. We sang carols together and shared some of our fondest memories from the past year, smiling and laughing the whole time. The warmth and love in that room were palpable, and I felt incredibly thankful for my family.\na man and a woman sitting in front of a christmas tree\nAs we finished our conversations, my sister and her husband rose to their feet and moved to sit in front of the beautiful Christmas tree. The decorations sparkled, illuminating the room with hues of red and green. They exchanged a kiss and posed for a photo, and we all cheered in the background." }, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "The woman and girl were cooking together, preparing a delicious meal, and adding their own touch to the festive occasion. The Christmas tree in the kitchen enhanced the seasonal atmosphere and provided a backdrop for our cooking session.\nLastly, my uncle and aunt were sitting at the table, sipping a cup of tea and enjoying the last moments of the night. 
They reminisced about their own adventures, discussing everything from travel to life's little surprises, adding to the warmth of the moment. My aunt wearing a bright red shirt and a black jacket was waving at the camera, while my uncle in a black jacket smiled at his wife, creating a cozy and intimate atmosphere in the kitchen.\nMy little nephew wasn't ready to go to bed yet, and he clung tightly to a picture of a polar bear he had acquired earlier in the day, displaying it with excitement. The picture depicted a white polar bear with a heart on it, and everyone was eager to hear his story. The family listened to the little boy's story with joy and laughter, enjoying the time together during the joyful holiday season.\nWe ended the evening in front of the beautiful Christmas tree, filling the room with love and joy. Timmy was standing in front of the tree dressed in an orange shirt, black pants, and a red sweater, holding a yellow bowl in his hand, adding to the festive atmosphere. My sister was standing next to him, wearing a black sweater and gray pants, opening her Christmas present with a smile on her face. The presents around the tree provided an extramarital feel to the scene, and we felt grateful for the family reunion.\nMy sister and her husband were standing in front of the impressive Christmas tree, surrounded by beautiful decorations that sparkled and illuminated the room with hues of red and green. They exchanged a kiss and posed for a photo, while the rest of us cheered in the background. My sister was donning a white sweater and red plaid pants, holding a yellow bowl in her hands, creating the perfect scene for a family photo. The tree was breathtakingly beautiful and it complemented our festive mood perfectly.\nScene 3: Christmas a boy sitting on a couch with a christmas tree in the background I noticed my cousin, Timmy, curled up on the couch near the tree, his nose buried in a book. He had always been a bookworm and was lost deep within the pages. But his eyes would occasionally glance up, admiring the beautiful decorations, and he would smile to himself, lost in his thoughts.\na person cutting a large piece of meat on a cutting board\nThe sound of a knife slicing through meat brought our attention to the kitchen, where my brother had donned an apron and taken on the task of carving the turkey. It was juicy and tender, with the perfect amount of seasoning, and we all dug in, filling our plates with love and gratitude.\na group of people sitting around a living room\nIt was a frosty evening, and my family was gathered around in the living room, with the fireplace casting a warm glow over the room. We laughed and chatted, sharing stories of our childhood and our day-to-day lives. The sound of the flames crackling filled the background, creating a cozy atmosphere that made us all feel at ease. a man and woman sitting next to each other in front of a christmas tree As the night drew to a close, my parents sat beside each other in front of the Christmas tree, holding hands and sharing a quiet moment. The flickering of the flame and the soft twinkle of the tree lights danced upon their faces, and I could see the look of love and contentment in their eyes." }, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "After dinner, we all gathered around the table to play Uno, with five of us sitting and having a great time. The game was filled with friendly competitiveness, and each player was determined to win. 
Timmy was in high spirits, with a lot of cards in his hands, bringing vibrancy to the room. We laughed and enjoyed each other's company, living in the moment and savoring the memories that we created together.\nTimmy was lost deep within the pages of a book, curled up on the couch near the impressive Christmas tree. With occasional glances up, he admired the beautiful decorations and smiled to himself, lost in his thoughts. The young boy wore a Christmas shirt, and gifts were placed on the couch, radiating joy and excitement. The white mini blinds on the windows created a calming atmosphere in the living room, providing the perfect ambiance for a cozy winter evening.\nIn the kitchen, my brother had donned an apron and taken on the task of carving the turkey into thin slices. The meat was juicy and tender, with just the right amount of seasoning, and the cucumbers and pink bowl were placed on the cutting board. Everyone was in the kitchen, adding to the festive atmosphere, and savoring the delicious aroma of roasted turkey.\nThe man was wearing an apron, holding a knife and a fork in his right hand, while another person to the left of him was wearing a blue apron, preparing the festive meal together.\nMy family was gathered in the living room, with the fireplace in the corner of the room casting a warm glow over us. There were people sitting on the couch, chatting and enjoying the cozy atmosphere. Timothy, my little cousin, was playing with his toy, curled up near the Christmas tree in the corner. We laughed and shared stories, filling the room with joy and warmth. The sound of the flames crackling in the fireplace added a soothing background to our conversations. The excitement was building as we boarded our flight and took our seats. I couldn't help but admire the sleek design of the airplane, especially as we were taxied to the runway. Once we were up in the air, I gazed out the window and watched as the tail of the plane gradually disappeared into the distance. It was truly a thrilling experience.\na man standing on a bridge next to a river\nWe walked across the bridge, and I couldn't resist taking a photo of this moment. The man standing on the bridge next to the river seemed lost in thought, and I wondered what was on his mind. The view from the bridge was breathtaking, and it was hard not to feel a sense of peace and tranquility.\na small boat traveling down a canal next to a tall building\nThe canal ride was a highlight of our trip. As we floated along, we were captivated by the stunning architecture of the buildings that lined the waterway. We marveled at how the old and new seamlessly blended together, and the peacefulness of the ride allowed us to truly appreciate the beauty around us.\na man and a woman standing in front of a building\nAs we explored the city, we couldn't help but admire the stunning architecture. This particular building caught our eye, and we stopped to snap a picture in front of it. It was a vibrant and bustling city, but in that moment, it felt like we were the only two people in the world.\na woman is walking down a narrow street\nAs we wandered the charming streets of the historic town, I couldn't resist snapping a picture of this particular alleyway. Its narrow, cobblestone path was lined with charming boutiques and florists. A woman walking down the street with a basket of fresh flowers caught my eye, and I couldn't help but imagine the stories of those who had walked this path before us." 
}, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "The white passenger plane was sleek and modern, with a large window on the side and a cockpit at the back. As we took off from the airport, I watched the runway shrink away through my window, marveling at the powerful engines and technology of the plane. The sky outside was a clear blue, without a cloud in sight, and inside the cockpit was spotlessly clean. It was truly an exhilarating and unforgettable experience.\nStanding on the bridge next to the tranquil river, a man was lost in thought, admiring the peaceful views surrounding him. The river was a natural beauty and the atmosphere was thoroughly serene, allowing the man to appreciate the calming and relaxing moment. It was a time to unwind and clear the mind.\nOn the calm waters of the canal, we floated past a backdrop of stunning old and new buildings fusing together in harmony. Other people were also enjoying the ride in their boats, but the atmosphere was tranquil and serene. We appreciated the beauty of the city from a unique vantage point, relaxing and savoring the experience.\nIn front of a grand and stunning building, we posed for the camera with smiles and excitement. The city around us was alive and bustling, with a vibrant energy and personality that made us feel rejuvenated and not wanting to leave. The building was just one of the many sights that contributed to the frenzy and excitement of the city.\nWalking through the charming old city, I spotted a woman strolling down a narrow cobblestone street with a basket of fresh flowers. The street was lined with historic buildings displaying unique architecture, each with its own story and character. The scent of the surroundings was rich with hints of floral and boutiques, and every turn offered a rich history and culture to explore. After a long day of sightseeing, my friend and I decided to stop at a quaint little pizzeria we had stumbled upon. The aroma of freshly baked pizza wafted through the air as we eagerly chatted about all the places we had visited that day. We couldn't help but laugh as we attempted to eat our slices without making a mess. It was a perfect way to refuel and relax.\na crowd of people walking down a street next to tall buildings\nThe energy of the city was palpable as we walked down this bustling street. Tall buildings lined the way, and the sound of people chatting and laughing filled the air. It was a perfect day to explore the city and soak up all it had to offer.\na row of pizzas sitting on top of a wooden table\nWe stumbled upon a local pizza festival and were thrilled to find a seemingly endless selection of pies. We decided to try a few different varieties and lined them up on the wooden table in front of us. The aroma was simply heavenly, and we savored every single bite.\na group of people riding gondolas down a canal\nWe decided to splurge on a gondola ride, and it was worth every penny. The gondolier regaled us with stories and history of the area as we glided along the canal. It was truly a unique and romantic experience, and the sounds of the water and laughter from nearby gondolas made it all the more magical.\na reflection of a clock tower in a puddle of water I stopped in my tracks when I saw the clock tower and its reflection in a puddle of water. It was a moment of pure serenity and awe. 
I couldn't help but admire the intricate details of the clock tower and the peacefulness of the puddle it was reflected in.\nUltimate Story Seated at a cozy restaurant, my friend and I enjoyed a delicious pizza with pepperoni and cheese toppings. The slice was perfectly greasy and savory, and we tried our best not to make a mess while eating it. The menu on the wall behind us displayed many more options for our next visit. It was a much needed break after a long day of sightseeing, and a great way to refuel for more adventures.\nThe busy and bustling city street was a lively and energetic hub of excitement and wonder. Tall buildings flanked the street, each with its unique charm and history. The sound of chatting and laughter filled the air, as a diverse crowd of people walked and talked, enjoying the culture and ambiance of the city. It was an amazing day to explore and soak up all the city had to offer.\nAt the pizza festival, the endless selection of mouth-watering pies was a feast for the senses. Various types of pizzas were lined up on wooden boards, each with its own unique blend of fresh toppings and homemade touch. The aroma of the pizza was heavenly and we savored every single bite. It was a true artistic delight and culinary experience.\nWe indulged in a splurge of a gondola ride, slowly and leisurely floating down a picturesque canal.\nOur gondolier charmed us with tales and anecdotes about the beautiful surroundings, and the soothing sound of water lapping against the gondola added to the romance and tranquility of the moment. We saw other gondolas nearby, with people enjoying similar enchanting moments. " }, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "Before the ceremony, Sarah's bridesmaids helped her prepare for the big moment. In her childhood bedroom, her hairstylist worked to create the perfect updo, while Sarah wore a purple dress and chatted with her friends. Sarah felt grateful for their support and love, knowing that they would always be by her side.\nAs the wedding day drew to a close, Sarah James took a moment to snap a photo near a stunning tree. Sarah is wearing her white wedding dress, holding her wedding ring on her finger, and smiling at the camera. In the background are a couple of trees. James' love and support made the day unforgettable.\nAt the reception, Sarah and James sat at their own table, sharing sweet moments and whispered loving words to each other throughout the night. They felt blessed to have found each other and knew that they would face whatever challenges life brought them together. There is also a cake on the table, the couple is getting ready to eat it.\nAt the reception, guests were treated to a lavish feast of delicious appetizers, entrees, and desserts, set up on a table adorned with a stunning vase of flowers. There were also crackers on the table for everyone to sample. Everyone gathered around, eagerly trying the various dishes. The mood was joyful and celebratory, with music and dancing filling the air.\nDuring their honeymoon, Sarah and James snuggled up on the couch surrounded by plush pillows. They enjoyed the warmth of the fire and the beauty of the snow falling outside the window. It was a magical beginning to their new life together. Sarah is wearing a white tank top and a green skirt, smiling at the camera, while sitting on a leather couch with pillows. They stood by her side, supporting her through every step of the wedding planning process. 
Sarah felt grateful for their unwavering love and loyalty. As they all stood together, they knew that their bond would last a lifetime." }, { "figure_ref": [], "heading": "Ultimate Story", "publication_ref": [], "table_ref": [], "text": "On the way to the ceremony, James' groomsmen shared a limo and enjoyed the festive atmosphere. The driver played upbeat music, and the friends laughed and danced along. They were excited to celebrate the happy couple's special day. The image shows a man wearing a black baseball cap and black shirt, smiling at the camera while sitting in a camper.\nAs Sarah prepared for her wedding, her hairstylist worked to create the perfect updo in her childhood bedroom, surrounded by her bridesmaids as they sipped champagne and chatted nearby. She admired herself in the mirror, feeling beautiful in her stunning white gown and matching necklace.\nJames waited nervously at the altar, wearing a white shirt and a black tie, his heart racing with anticipation. He couldn't wait to marry the love of his life and start their new journey together. When Sarah appeared, in her white wedding dress with a long veil, he felt overwhelmed with emotion, knowing that he was the luckiest man in the world.\nDuring their honeymoon, Sarah and James took a romantic hike through the woods, where they marveled at the beauty around them and held hands as they walked along the wooden bridge. Sarah was wearing her beautiful white wedding dress while James was wearing a white suit and a black tie. As they posed for a picture, they felt grateful to be starting their new life together in such a beautiful place. The sun was shining through the trees.\nSarah's bridal party was made up of her closest friends and family members, who stood by her side every step of the way. They were a constant source of support and love, and during the reception, they gathered around Sarah, celebrating her happiness and joy.\nThere are a couple of young women and some men in the group, all looking at the camera, smiling and having fun. There is a woman in the middle of the group wearing a white tank top and a white dress. " } ]
This work studies how to transform an album into a vivid and coherent story, a task we refer to as "album storytelling". While this task can help preserve memories and facilitate experience sharing, it remains an underexplored area in the current literature. With recent advances in Large Language Models (LLMs), it is now possible to generate lengthy, coherent text, opening up the opportunity to develop an AI assistant for album storytelling. The key problem of this task is to extend LLMs to understand visual inputs. One natural approach is to use caption models to describe each photo in the album, projecting the visual inputs into discrete text words, and then use LLMs to summarize and rewrite the generated captions into an engaging story. However, we find that this often results in stories containing hallucinated information that contradicts the images, as each caption is generated independently of the story ("story-agnostic"): it is not always related to the whole story and may miss necessary information. To address these limitations, we propose a new iterative album storytelling pipeline, VIVID (Visual Iterative Verbalization with factualness-Improved Descriptions), which effectively identifies appropriate visual details and mitigates hallucination issues. Specifically, we start with the aforementioned initial story and build a story-aware caption model to refine the captions using the whole story as guidance. The enriched captions are then fed into the LLM to generate a new refined story. This process is repeated iteratively until the story contains minimal factual errors while maintaining coherence. To evaluate our proposed pipeline, we introduce a new dataset of image collections from vlogs and a set of systematic evaluation metrics. Our results demonstrate that our method generates more accurate and engaging stories for albums, with enhanced coherence and vividness.
Album Storytelling with Iterative Story-aware Captioning and Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: An example of album storytelling. The story contains detailed visual information (marked in green).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An overview of our proposed framework VIVID.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: This figure provides an example of the story-aware dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: This figure illustrates the components of our story-aware caption model, which comprises an image encoder, a text encoder, and a text decoder. Given an input image, the encoder converts it to image embedding. Then the text encoder grounds the initiate story to the image using cross-attention and generates a composite embedding. Finally, the text decoder generates a detailed caption from the composite image-text representation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "i} and detailed descriptions {C (U ) i } with the following prompt: Please use a score from 0-1 to measure how well the following story coverages the information from two different sets of captions. Note that a score closer to 1 indicates more information are covered in the story, while a score closer to 0 indicates poorer coverage. Please output the result in the format: \"Score of story coverage for Caption Group 1: xx. Score of story coverage for Caption Group 2: xx. Average score: xx.\" Caption group 1: <xxx> Caption group 2: <xxx> Story: <xxx>", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "As the night drew to a close, my parents sat beside each other in front of the Christmas tree, holding hands and sharing a quiet moment. The flickering of the flame and the soft twinkle of the tree lights danced upon their faces, signifying the joy of being together during the festive season. My dad was wearing a blue shirt and a white hat, while my mom held his hand, radiating love and warmth. Their love was a beautiful reminder of the reason for the season.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: This figure provides samples with the \"inconsistencies\" issue.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: This figure provides samples with the \"misleading detail\" issue.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: This figure provides samples with the \"lack of vividness\" issue.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "skis stood upright, lodged in the powdery snow. They had been used by a group of friends who had just finished carving up the slopes. As they took off their gear and chatted about their thrilling day, the skis remained behind, a testament to the fun they had just experienced. a man and a woman taking a selfie in the snow As the weather turned cold again, the group of friends went on a ski trip. While out in the snow, a couple snuggled up together and took a cute selfie to commemorate the occasion. 
a woman holding a stuffed animal in a store Before heading back home, one of the women stopped by a store to buy a souvenir for her young nephew. She settled on a cute stuffed animal and held it up for her friend to see before leaving the store. a woman holding a pink scarf in front of a mirror", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: This figure provides an example of scene \"birthday\".", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: This figure provides an example of scene \"camping\".", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "As the night drew to a close, my parents sat beside each other in front of the Christmas tree, holding hands and sharing a quiet moment. The flickering of the flame and the soft twinkle of the tree lights danced upon their faces, signifying the joy of being together during the festive season. My dad was wearing a blue shirt and a white hat, while my mom held his hand, radiating love and warmth. Their love was a beautiful reminder of the reason for the season.", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: This figure provides an example of scene \"christmas\".", "figure_data": "", "figure_id": "fig_13", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Stopping in my tracks at the sight, I was captivated by the reflection of the clock tower in the puddle of water. From my vantage point, the clock tower displayed intricate details and beautiful craftsmanship, surrounded by the peacefulness of the water. It was a beautiful moment as I saw people in the reflection walking in the distance, and felt the awe and inspiration of the surroundings.", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: This figure provides an example of scene \"travel\".", "figure_data": "", "figure_id": "fig_15", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "party, James' best friends were the groomsmen. They laughed and joked around as they rode to the ceremony in the back of a limo. The driver played upbeat music and they couldn't help but dance along. a woman in a wedding dress posing for a picture As the bride-to-be, Sarah was excited to capture every moment of her special day. She twirled in her gorgeous gown, admiring every angle in the mirror. Her friends and family watched in awe as she struck a pose and smiled for the perfect snapshot. a man in a white shirt and blue tie James looked dapper in his crisp white shirt and blue tie. He couldn't wait to marry the love of his life and start their new journey together. As he waited at the altar, his heart raced with anticipation. When Sarah appeared, he knew she was the most beautiful woman in the world. a man and a woman standing next to each other in a forest During their honeymoon, Sarah and James went hiking in the nearby woods. They walked hand in hand, taking in the breathtaking scenery around them. As they stood on the edge of a cliff, Sarah nestled into James' side and they marveled at the beauty of nature. 
a group of women standing next to each other Sarah's bridal party included her closest friends and family members.", "figure_data": "", "figure_id": "fig_16", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: This figure provides an example of scene \"wedding\".", "figure_data": "", "figure_id": "fig_17", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Comparison from multi-view.", "figure_data": "Method#SentenceEMD(↓)LLM based evaluationDetailCoverageCoherenceCaptions10.0012.3510.000.850.37Initial Story28.7017.9740.570.570.80Refined Story34.4716.9356.970.600.63Ultimate Story34.9716.2360.070.620.77", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Result of detailed caption evaluation.", "figure_data": "MethodBLEU_1BLEU_4METEORROUGE_LCIDErBLIP [23]0.410.085.4214.940.46BLIP-finetune24.417.0014.0130.8630.24Story-aware51.0816.4222.0937.9372.92", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "in above stories is Lack of Vividness. The underlying cause of this issue stems from LLM being excessively constrained by intricate details, thereby losing its creative capacity, or from the scenes being too mundane to inspire imagination. Various factors contribute to the generation of lackluster stories that closely resemble mere descriptions of the depicted scenes in the photos.", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" } ]
Munan Ning; Yujia Xie; Dongdong Chen; Zeyin Song; Lu Yuan; Yonghong Tian; Qixiang Ye; Li Yuan
[ { "authors": "Anthony Michael Ahn; Noah Brohan; Yevgen Brown; Omar Chebotar; Byron Cortes; Chelsea David; Keerthana Finn; Karol Gopalakrishnan; Alex Hausman; Herzog", "journal": "", "ref_id": "b0", "title": "Do as i can, not as i say: Grounding language in robotic affordances", "year": "2022" }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "", "ref_id": "b2", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Cheng-Han Chiang; Hung-Yi Lee", "journal": "", "ref_id": "b3", "title": "Can large language models be an alternative to human", "year": "2023" }, { "authors": "Jaemin Cho; Jie Lei; Hao Tan; Mohit Bansal", "journal": "PMLR", "ref_id": "b4", "title": "Unifying vision-and-language tasks via text generation", "year": "2021" }, { "authors": "Elizabeth Clark; Tal August; Sofia Serrano; Nikita Haduong; Suchin Gururangan; Noah A Smith", "journal": "", "ref_id": "b5", "title": "All that's' human'is not gold: Evaluating human evaluation of generated text", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Jeffrey Donahue; Lisa Anne Hendricks; Sergio Guadarrama; Marcus Rohrbach; Subhashini Venugopalan; Kate Saenko; Trevor Darrell", "journal": "", "ref_id": "b7", "title": "Long-term recurrent convolutional networks for visual recognition and description", "year": "2015" }, { "authors": "Xiaoyi Dong; Yinglin Zheng; Jianmin Bao; Ting Zhang; Dongdong Chen; Hao Yang; Ming Zeng; Weiming Zhang; Lu Yuan; Dong Chen", "journal": "", "ref_id": "b8", "title": "Maskclip: Masked self-distillation advances contrastive language-image pretraining", "year": "2023" }, { "authors": "Danny Driess; Fei Xia; S M Mehdi; Corey Sajjadi; Aakanksha Lynch; Brian Chowdhery; Ayzaan Ichter; Jonathan Wahid; Quan Tompson; Tianhe Vuong; Yu", "journal": "", "ref_id": "b9", "title": "Palm-e: An embodied multimodal language model", "year": "2023" }, { "authors": "Luyu Gao; Aman Madaan; Shuyan Zhou; Uri Alon; Pengfei Liu; Yiming Yang; Jamie Callan; Graham Neubig", "journal": "", "ref_id": "b10", "title": "Pal: Program-aided language models", "year": "2022" }, { "authors": "Dan Gillick; Yang Liu", "journal": "", "ref_id": "b11", "title": "Non-expert evaluation of summarization systems is risky", "year": "2010" }, { "authors": "Xiaowei Hu; Zhe Gan; Jianfeng Wang; Zhengyuan Yang; Zicheng Liu; Yumao Lu; Lijuan Wang", "journal": "", "ref_id": "b12", "title": "Scaling up vision-language pre-training for image captioning", "year": "2022" }, { "authors": "Yushi Hu; Hang Hua; Zhengyuan Yang; Weijia Shi; Noah A Smith; Jiebo Luo", "journal": "", "ref_id": "b13", "title": "Promptcap: Prompt-guided task-aware image captioning", "year": "2022" }, { "authors": "Shaohan Huang; Li Dong; Wenhui Wang; Yaru Hao; Saksham Singhal; Shuming Ma; Tengchao Lv; Lei Cui; Owais Khan Mohammed; Qiang Liu", "journal": "", "ref_id": "b14", "title": "Language is not all you need: Aligning perception with language models", "year": "2023" }, { 
"authors": "Ting-Hao Huang; Francis Ferraro; Nasrin Mostafazadeh; Ishan Misra; Aishwarya Agrawal; Jacob Devlin; Ross Girshick; Xiaodong He; Pushmeet Kohli; Dhruv Batra", "journal": "", "ref_id": "b15", "title": "Visual storytelling", "year": "2016" }, { "authors": "Wenlong Huang; Pieter Abbeel; Deepak Pathak; Igor Mordatch", "journal": "PMLR", "ref_id": "b16", "title": "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents", "year": "2022" }, { "authors": "Marzena Karpinska; Nader Akoury; Mohit Iyyer", "journal": "", "ref_id": "b17", "title": "The perils of using mechanical turk to evaluate open-ended text generation", "year": "2021" }, { "authors": "Lei Ke; Wenjie Pei; Ruiyu Li; Xiaoyong Shen; Yu-Wing Tai", "journal": "", "ref_id": "b18", "title": "Reflective decoding network for image captioning", "year": "2019" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b19", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Jonathan Krause; Justin Johnson; Ranjay Krishna; Li Fei-Fei", "journal": "", "ref_id": "b20", "title": "A hierarchical approach for generating descriptive image paragraphs", "year": "2017" }, { "authors": " Vladimir I Levenshtein", "journal": "Soviet Union", "ref_id": "b21", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "year": "1966" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b22", "title": "Blip: Bootstrapping languageimage pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Junnan Li; Ramprasaath Selvaraju; Akhilesh Gotmare; Shafiq Joty; Caiming Xiong; Steven Chu; Hong Hoi", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "Junnan Li; Yongkang Wong; Qi Zhao; Mohan S Kankanhalli", "journal": "IEEE Transactions on Multimedia", "ref_id": "b24", "title": "Video storytelling: Textual summaries for events", "year": "2019" }, { "authors": "Xiujun Li; Xi Yin; Chunyuan Li; Pengchuan Zhang; Xiaowei Hu; Lei Zhang; Lijuan Wang; Houdong Hu; Li Dong; Furu Wei", "journal": "Springer", "ref_id": "b25", "title": "Oscar: Object-semantics aligned pre-training for vision-language tasks", "year": "2020" }, { "authors": "Chin-Yew Lin", "journal": "", "ref_id": "b26", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b27", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Reiichiro Nakano; Jacob Hilton; Suchir Balaji; Jeff Wu; Long Ouyang; Christina Kim; Christopher Hesse; Shantanu Jain; Vineet Kosaraju; William Saunders", "journal": "", "ref_id": "b28", "title": "Webgpt: Browser-assisted question-answering with human feedback", "year": "2021" }, { "authors": " Openai", "journal": "", "ref_id": "b29", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Kishore Papineni; Salim Roukos; 
Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b31", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b32", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b33", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Yossi Rubner; Carlo Tomasi; Leonidas J Guibas", "journal": "International journal of computer vision", "ref_id": "b34", "title": "The earth mover's distance as a metric for image retrieval", "year": "2000" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b35", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b36", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Harsh Trivedi; Niranjan Balasubramanian; Tushar Khot; Ashish Sabharwal", "journal": "", "ref_id": "b37", "title": "Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions", "year": "2022" }, { "authors": "Maria Tsimpoukelli; Jacob L Menick; Serkan Cabi; Oriol Eslami; Felix Vinyals; Hill", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Multimodal few-shot learning with frozen language models", "year": "2021" }, { "authors": "Ramakrishna Vedantam; Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b39", "title": "Cider: Consensus-based image description evaluation", "year": "2015" }, { "authors": "Oriol Vinyals; Alexander Toshev; Samy Bengio; Dumitru Erhan", "journal": "", "ref_id": "b40", "title": "Show and tell: A neural image caption generator", "year": "2015" }, { "authors": "Junke Wang; Dongdong Chen; Zuxuan Wu; Chong Luo; Luowei Zhou; Yucheng Zhao; Yujia Xie; Ce Liu; Yu-Gang Jiang; Lu Yuan", "journal": "", "ref_id": "b41", "title": "Omnivl: One foundation model for image-language and video-language tasks", "year": "2022" }, { "authors": "Wenhui Wang; Hangbo Bao; Li Dong; Johan Bjorck; Zhiliang Peng; Qiang Liu; Kriti Aggarwal; Owais Khan Mohammed; Saksham Singhal; Subhojit Som", "journal": "", "ref_id": "b42", "title": "Image as a foreign language: Beit pretraining for all vision and vision-language tasks", "year": "2022" }, { "authors": "Xin Wang; Wenhu Chen; Yuan-Fang Wang; William Yang; Wang ", "journal": "", "ref_id": "b43", "title": "No metrics are perfect: Adversarial reward learning for visual storytelling", "year": "2018" }, { "authors": "Zhenhailong Wang; Manling Li; Ruochen Xu; Luowei Zhou; Jie Lei; Xudong Lin; Shuohang Wang; Ziyi Yang; Chenguang Zhu; Derek Hoiem", "journal": "", "ref_id": "b44", "title": "Language models with image descriptors are strong few-shot video-language learners", "year": "2022" }, { "authors": "Zirui Wang; Jiahui Yu; Adams Wei Yu; Zihang Dai; Yulia Tsvetkov; Yuan Cao", "journal": "", "ref_id": "b45", "title": "Simvlm: Simple visual language 
model pretraining with weak supervision", "year": "2021" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b46", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Chenfei Wu; Shengming Yin; Weizhen Qi; Xiaodong Wang; Zecheng Tang; Nan Duan", "journal": "", "ref_id": "b47", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "Yujia Xie; Luowei Zhou; Xiyang Dai; Lu Yuan; Nguyen Bach; Ce Liu; Michael Zeng", "journal": "", "ref_id": "b48", "title": "Visual clues: bridging vision and language foundations for image paragraph captioning", "year": "2022" }, { "authors": "Zhengyuan Yang; Zhe Gan; Jianfeng Wang; Xiaowei Hu; Yumao Lu; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b49", "title": "An empirical study of gpt-3 for few-shot knowledge-based vqa", "year": "2022" }, { "authors": "Zhengyuan Yang; Linjie Li; Jianfeng Wang; Kevin Lin; Ehsan Azarnasab; Faisal Ahmed; Zicheng Liu; Ce Liu; Michael Zeng; Lijuan Wang", "journal": "", "ref_id": "b50", "title": "Mm-react: Prompting chatgpt for multimodal reasoning and action", "year": "2023" }, { "authors": "Lewei Yao; Runhui Huang; Lu Hou; Guansong Lu; Minzhe Niu; Hang Xu; Xiaodan Liang; Zhenguo Li; Xin Jiang; Chunjing Xu", "journal": "", "ref_id": "b51", "title": "Filip: fine-grained interactive language-image pre-training", "year": "2021" }, { "authors": "Shunyu Yao; Jeffrey Zhao; Dian Yu; Nan Du; Izhak Shafran; Karthik Narasimhan; Yuan Cao", "journal": "", "ref_id": "b52", "title": "React: Synergizing reasoning and acting in language models", "year": "2022" }, { "authors": "Jiahui Yu; Zirui Wang; Vijay Vasudevan; Legg Yeung; Mojtaba Seyedhosseini; Yonghui Wu", "journal": "", "ref_id": "b53", "title": "Coca: Contrastive captioners are image-text foundation models", "year": "2022" }, { "authors": "Lu Yuan; Dongdong Chen; Yi-Ling Chen; Noel Codella; Xiyang Dai; Jianfeng Gao; Houdong Hu; Xuedong Huang; Boxin Li; Chunyuan Li", "journal": "", "ref_id": "b54", "title": "Florence: A new foundation model for computer vision", "year": "2021" }, { "authors": "Andy Zeng; Adrian Wong; Stefan Welker; Krzysztof Choromanski; Federico Tombari; Aveek Purohit; Michael Ryoo; Vikas Sindhwani; Johnny Lee; Vincent Vanhoucke", "journal": "", "ref_id": "b55", "title": "Socratic models: Composing zero-shot multimodal reasoning with language", "year": "2022" }, { "authors": "Zhuosheng Zhang; Aston Zhang; Mu Li; Hai Zhao; George Karypis; Alex Smola", "journal": "", "ref_id": "b56", "title": "Multimodal chain-of-thought reasoning in language models", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 285.95, 480.98, 47.22, 14.07 ], "formula_id": "formula_0", "formula_text": "i = c(I i ).(0)" }, { "formula_coordinates": [ 4, 232.81, 523.28, 135.36, 13.95 ], "formula_id": "formula_1", "formula_text": "S (0) = ℓ(p 0 (C (0) 1 , C (0) 2 , • • • , C(0)" }, { "formula_coordinates": [ 5, 269.09, 328.21, 73.82, 14.07 ], "formula_id": "formula_2", "formula_text": "C (t+1) i = f (S (t) i )." }, { "formula_coordinates": [ 6, 213.78, 457.26, 184.43, 30.32 ], "formula_id": "formula_3", "formula_text": "L(U) = N i=1 log P (u k | u 1 , u 2 , . . . , u k-1 , Θ) ," }, { "formula_coordinates": [ 7, 225.39, 597.75, 161.22, 20.53 ], "formula_id": "formula_4", "formula_text": "EMD(P, Q) = min γ∈Γ(P,Q) (x,y)∼γ d(x, y)," }, { "formula_coordinates": [ 7, 242, 681.48, 128.01, 23.38 ], "formula_id": "formula_5", "formula_text": "d(I i , T j ) = e i (I i ) • e t (T j ) ∥e i (I i )∥ • ∥e t (T j )∥ ." }, { "formula_coordinates": [ 8, 388.67, 334.65, 21.08, 14.07 ], "formula_id": "formula_7", "formula_text": "(U ) i }." } ]